Running GitLab runners locally

How to run GitLab runners locally

Summary

  1. Install Docker.
  2. Set up the GitLab repository and install gitlab-runner.
  3. Prepare or get the .gitlab-ci.yml to test.
  4. Execute a runner on the desired job.

Docker

Install Docker

https://docs.docker.com/install/linux/docker-ce/ubuntu/

Don’t forget the post-setup steps.

https://docs.docker.com/install/linux/linux-postinstall/

Setup repository and install gitlab-runner

https://docs.gitlab.com/runner/install/linux-repository.html#installing-the-runner

Prepare or get .gitlab-ci.yml

Suppose the following .gitlab-ci.yml:

image: "ruby:2.5"

before_script:

  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs

  - ruby -v

  - which ruby

  - gem install bundler --no-document

  - bundle install --jobs $(nproc)  "${FLAGS[@]}"

rspec:

  script:

    - bundle exec rspec

rubocop:

  script:

    - bundle exec rubocop

There are two jobs defined:

  • rspec
  • rubocop

Execute the runner on a job

In a terminal, execute:

gitlab-runner exec docker rspec

This will execute the script of the rspec job inside a clean Docker container.

To define a timeout, use:

--timeout TIME

To pass environment variables, use:

--env VAR=VALUE
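
For example, to run the rspec job with a timeout and an extra variable (the variable here is only an illustration):

gitlab-runner exec docker --timeout 3600 --env RAILS_ENV=test rspec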

Git creating features

Creating features from a fork

Important note

Don’t forget to FIRST create an ISSUE upstream describing the fix/feature.

Summary

These are the steps involved:

  1. Fork on GitHub.
  2. Clone fork locally.
  3. Set upstream.
  4. Create features/fixes in local branches.
  5. Sign commits.
  6. Push to your fork.
  7. Create PR upstream.

Fork to your own account/organization

Simple step, just press the Fork button in the top right of the GitHub repository, and select where to fork.

Clone your fork locally

Clone the default branch in the current directory:
git clone --recursive https://github.com/YOUR_ACCOUNT/YOUR_FORK.git

If you wish to clone a different branch:
git clone --recursive --branch BRANCH_NAME https://github.com/YOUR_ACCOUNT/YOUR_FORK.git

If you wish to clone to a directory with a different name:
git clone --recursive https://github.com/YOUR_ACCOUNT/YOUR_FORK.git DESIRED_DIR_NAME

Set upstream repository

Set upstream (original source):
git remote add upstream https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git

Confirm remotes:
git remote -v

> origin    https://github.com/YOUR_ACCOUNT/YOUR_FORK.git (fetch)
> origin    https://github.com/YOUR_ACCOUNT/YOUR_FORK.git (push)
> upstream  https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git (fetch)
> upstream  https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git (push)

In this example:
origin represents your fork.
upstream represents the original repository your fork was created from.

NOTE: Names other than origin or upstream can be used. Just be careful to follow the same naming when pulling/pushing commits.

Create features/fixes

In the Autoware case, please always create new features from the master branch.

Sign commits

GPG Sign

Set up your GPG keys following the GitHub article: https://help.github.com/en/articles/managing-commit-signature-verification

Signoff commits

Sign off your commits using git commit -s (https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s)
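
For example (the message is only an illustration), this records the commit with a Signed-off-by trailer:

git commit -s -m "Add awesome stuff"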

If you have an older git version, the -s flag might not be available. You can either update git from source or use a PPA (taken from https://unix.stackexchange.com/a/170831):

sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version

Update fork

git checkout master
git fetch upstream
git merge upstream/master

Create feature branch

git checkout -b feature/awesome_stuff

Push to your fork

git push origin feature/awesome_stuff

Once finished

Create a PR from the GitHub website, targeting the master branch.

Deeplab v3 Test

  1. Create Python Env
    $ mkdir ~/tf_ros && cd ~/tf_ros
    $ virtualenv --system-site-packages -p python2 tensor_ros_deeplab
  2. Activate Environment
    $ source tensor_ros_deeplab/bin/activate
  3. Install TensorFlow GPU (the wheel can be downloaded as shown in the TensorFlow/ROS Python 2.7 section below)
    $ pip install --upgrade tensorflow_gpu-1.12.0-cp27-none-linux_x86_64.whl
  4. Clone TensorFlow’s Models repo containing Deeplab
    $ git clone https://github.com/tensorflow/models
  5. Move to the Deeplab dir
    $ cd ~/tf_ros/models/research/deeplab
  6. Download a pretrained model from the model zoo list:
    https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
  7. Use the sample script for a single image:
    https://github.com/tensorflow/models/blob/master/research/deeplab/deeplab_demo.ipynb
  8. Change the MODEL var to the path of the previously downloaded model (note that Python does not expand ~, so use the full path):
    MODEL = DeepLabModel("~/tf_ros/models/research/deeplab/deeplabv3_mnv2_dm05_pascal_trainaug_2018_10_01.tar.gz")
  9. To use a local image instead of a URL, change the run_visualization function to:
def run_visualization(image_path):
  """Inferences DeepLab model and visualizes result."""
  try:
    original_im = Image.open(image_path)
  except IOError:
    print('Cannot read image. Please check path: ' + image_path)
    return

  print('running deeplab on image %s...' % image_path)
  resized_im, seg_map = MODEL.run(original_im)

  vis_segmentation(resized_im, seg_map)
  10. Finally save and run the script:
    $ python script_name.py
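
With this change, the demo can be run on a local file, e.g. (the path is just a placeholder):

run_visualization('/path/to/local/image.jpg')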

Yolo3 training notes

Yolo3 custom training notes

To train a model, the YOLO training code expects:
* Images
* Labels
* NAMES File
* CFG file
* train.txt file
* test.txt file
* DATA file
* Pretrained weights (optional)

Images and Labels

The images and labels should be located in the same directory. Each image and label is related to its counterpart by filename.
e.g., for image 001.jpg, the corresponding label should be named 001.txt
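
For example, a minimal train directory pairs each image with its label file (filenames illustrative):

train
├── 001.jpg
├── 001.txt
├── 002.jpg
└── 002.txt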

Label format

The file containing the labels is a plain text file. Each line contains the bounding box for one object. The columns are separated by spaces, in the following format:

classID x y width height

x, y, width and height should be expressed in normalized coordinates with values from 0 to 1, i.e., divided by the image width and height.
x and y correspond to the coordinates of the center of the bounding box.

YOLO includes the following Python helper function to easily perform this conversion:

def convert(size, box):
    # size: (image_width, image_height)
    # box: (xmin, xmax, ymin, ymax), in pixels
    dw = 1./(size[0])
    dh = 1./(size[1])
    # Box center; the -1 compensates for 1-based pixel coordinates (as in VOC annotations).
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    # Normalize everything to the 0-1 range.
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
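
Assuming box = (xmin, xmax, ymin, ymax) and size = (width, height) as commented above, a quick check for a 200x200-pixel box in a 640x480 image (output rounded for readability):

>>> convert((640, 480), (100, 300, 200, 400))
(0.3109, 0.6229, 0.3125, 0.4167)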

Names file

This file contains the label string for each class. The first line corresponds to class 0, the second line to class 1, and so on.
e.g., contents of classes.names:

classA
classB
classC

This would create the following relationship:

Class ID (labels)    Class identifier
0                    classA
1                    classB
2                    classC

CFG file

This file is a darknet configuration file. To simplify the explanation, only the modifications required for training are described, grouped by what they depend on.

GPU memory available:

[net]
# Training
batch=64   # Number of images to move to GPU memory on each batch.
...

Number of classes

The number of classes should be set on each of the [yolo] sections in the CFG file.

[yolo]
...
classes=NUM_CLASSES   # e.g. 1, 2, 3...; should match the number of entries in the names file.
...

Number of Filters

Before each [yolo] section, the number of filters in the [convolutional] layer should also be updated to match the following formula:

filters=(classes + 5) * 3

For instance, for 3 classes, filters = (3 + 5) * 3 = 24:

[convolutional]
size=1
stride=1
pad=1
filters=24
activation=linear

[yolo]
classes= 3

train.txt file

This plain text file lists the images that will be used for training. Each line should contain the absolute path to an image, e.g.:

/home/test/images/train/001.jpg
/home/test/images/train/002.jpg
/home/test/images/train/003.jpg
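
Such a file can be generated with a one-liner, e.g. (paths illustrative):

find /home/test/images/train -name '*.jpg' > train.txt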

test.txt file

In the same way as the train.txt file, this text file contains the paths to the images used for testing, one per line, e.g.:

/home/user/dataset/images/test/001.jpg
/home/user/dataset/images/test/002.jpg
/home/user/dataset/images/test/003.jpg

DATA file

This plain text file summarizes the dataset using the following format:

classes= 20
train  = /home/user/dataset/train.txt
valid  = /home/user/dataset/test.txt
names = /home/user/dataset/classes.names
backup = /home/user/dataset/backup
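
Note: backup points to the directory where darknet periodically saves the trained weights during training; make sure it exists before starting.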

Training

./darknet detector train file.data file.cfg darknet53.conv.74
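
Here file.data and file.cfg are the DATA and CFG files described above, and darknet53.conv.74 contains the pretrained convolutional weights (the optional item mentioned earlier) used to initialize the network.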

TensorFlow and ROS Kinetic with Python 3.5 Natively

Setup

Native setup, no venv.

  1. Create working directory
    $ mkdir ~/py3_tf_ros && cd ~/py3_tf_ros

  2. Install Python 3.5
    $ sudo apt-get install python3-dev python3-yaml python3-setuptools

  3. Install rospkg for Python 3
    $ git clone git://github.com/ros/rospkg.git
    $ cd rospkg && sudo python3 setup.py install && cd ..

  4. Install catkin_pkg for Python 3
    $ git clone git://github.com/ros-infrastructure/catkin_pkg.git
    $ cd catkin_pkg && sudo python3 setup.py install && cd ..

  5. Install catkin for Python 3
    $ git clone git://github.com/ros/catkin.git
    $ cd catkin && sudo python3 setup.py install && cd ..

  6. Install OpenCV for Python 3
    $ pip3 install opencv-python

  7. Download the desired TensorFlow version.

  8. Set up the NVIDIA drivers, CUDA and CUDNN according to the TensorFlow version.

  9. Install the downloaded TensorFlow package
    $ pip3 install --user --upgrade tensorflow-package.whl

  10. Check that the symbolic link /usr/local/cuda corresponds to the CUDA version required by TensorFlow (if there are several CUDA versions installed in the system).

  11. Test TensorFlow
    $ python3 -c "import tensorflow as tf; print(tf.__version__)"
    This should display the version 1.XX.YY you selected.

  12. It is now possible to import ros and import tensorflow in the same script.

  13. If the cv2 package is also required:

import sys

# ROS Kinetic prepends its Python 2.7 packages (including its own cv2) to sys.path;
# remove that entry temporarily so Python 3's cv2 is imported instead.
sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')

import cv2

# Restore the path so the ROS packages can be imported afterwards.
sys.path.append('/opt/ros/kinetic/lib/python2.7/dist-packages')

import ros

TensorFlow and ROS Kinetic with Python 2.7

  1. Create a working directory
    $ mkdir ~/tf_ros && cd ~/tf_ros

  2. Download and Install an NVIDIA Driver >= 384.x (https://www.nvidia.co.jp/Download/index.aspx). We recommend RUN files; please check our other post on this topic.

  3. Download and Install CUDA 9.0 (https://developer.nvidia.com/cuda-90-download-archive)

  4. Download and Install CUDNN for CUDA 9.0 (https://developer.nvidia.com/rdp/cudnn-download; https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux)

  5. Download TensorFlow for Python 2.7
    $ wget https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  6. Install Python VirtualEnv
    $ sudo apt-get install python-virtualenv

  7. Create Python 2.7 virtual environment.
    $ virtualenv --system-site-packages -p python2 p2tf_venv
    (p2tf_venv can be any name)

  8. Activate environment
    $ source p2tf_venv/bin/activate

  9. Install TensorFlow
    (p2tf_venv) $ pip install --upgrade tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  10. Make sure the CUDA 9 libraries and binaries are in LD_LIBRARY_PATH and PATH:

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && export PATH=/usr/local/cuda/bin:$PATH

  11. Check the ROS libraries are in the LD_LIBRARY_PATH as well.
    $ echo $LD_LIBRARY_PATH
    It should contain /opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu

  12. Check that the Python ROS libraries are also included in the $PYTHONPATH.
    $ echo $PYTHONPATH
    It should contain /opt/ros/kinetic/lib/python2.7/dist-packages.

  13. Test your installation is correct.
    (p2tf_venv) $ python -c "import tensorflow as tf; print(tf.__version__)"
    It should print the TensorFlow version installed, such as 1.10.1.

  14. Ready to go!
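
As a final sanity check, ROS and TensorFlow should import together in the same interpreter (assuming rospy is available from your ROS installation):

(p2tf_venv) $ python -c "import rospy; import tensorflow as tf; print(tf.__version__)"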

How to setup industrial_ci locally and test Autoware

industrial_ci will build, install and test every package in an isolated manner inside a clean Docker image. In this way, missing dependencies (system and other packages) can be easily spotted and fixed before publishing a package. This eases the deployment of Autoware (or any ROS package).

Running locally instead of on the cloud (travis-ci) speeds up the build time.

Autoware and industrial_ci require two different catkin workspaces.

Requirements:

  • Docker installed and working.
  • Direct connection to the Internet (No proxy). See below for proxy instructions.

Instructions:

  1. Install catkin tools: $ sudo apt-get install python-catkin-tools
  2. Clone Autoware (if you don’t have it already): $ git clone https://github.com/CPFL/Autoware (if you wish to test a specific branch, check it out or clone with -b).
  3. Create a directory to hold a new workspace at the same level as Autoware, with a subdirectory src (in this example catkin_ws, with the base dir being home ~).
    ~/$ mkdir -p catkin_ws/src && cd catkin_ws/src
  4. Initialize that workspace by running catkin_init_workspace inside the src dir of catkin_ws.
    ~/catkin_ws/src$ catkin_init_workspace
  5. Clone industrial_ci inside catkin_ws/src.
    ~/catkin_ws/src$ git clone https://github.com/ros-industrial/industrial_ci
  6. The directory structure should look as follows:
~
├── Autoware
│   ├── ros
│   │   └── src
│   └── .travis.yml
└── catkin_ws
    └── src
        └── industrial_ci
  7. Go to catkin_ws and build industrial_ci.
    ~/catkin_ws$ catkin config --install && catkin b industrial_ci && source install/setup.bash
  8. Once finished, move to the Autoware directory.
    ~/catkin_ws$ cd ~/Autoware
  9. Run industrial_ci in one of two ways:
    • ~/Autoware$ rosrun industrial_ci run_ci ROS_DISTRO=kinetic ROS_REPO=ros (or ROS_DISTRO=indigo ROS_REPO=ros). This method manually specifies the distribution and repository sources.
    • ~/Autoware$ rosrun industrial_ci run_travis . This will parse the .travis.yml and run in a similar fashion to travis-ci.

For more detailed info: https://github.com/ros-industrial/industrial_ci/blob/master/doc/index.rst#run-industrial-ci-on-local-host

How to run behind a proxy

Configure your docker to use proxy (from https://stackoverflow.com/questions/26550360/docker-ubuntu-behind-proxy and https://docs.docker.com/config/daemon/systemd/#httphttps-proxy):

Ubuntu 14.04

Edit the file /etc/default/docker, go to the proxy section, and change the values:

# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://proxy.address:port"
export https_proxy="https://proxy.address:port"

Then execute in a terminal: sudo service docker restart

Ubuntu 16.04

  1. Create config container directory:
    $ sudo mkdir -p /etc/systemd/system/docker.service.d

  2. Create the http-proxy.conf file inside it:
    $ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

  3. Paste the following text and edit with your proxy values:

[Service]
Environment="HTTP_PROXY=http://proxy.address:port"
Environment="HTTPS_PROXY=https://proxy.address:port"
  4. Save the file.
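  5. Reload systemd and restart Docker so the change takes effect:
    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker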

Modifications to industrial_ci

  1. Add the following lines in ~/catkin_ws/src/industrial_ci/industrial_ci/src/docker.sh at line 217:

Original:

FROM $DOCKER_BASE_IMAGE
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

Change to:

FROM $DOCKER_BASE_IMAGE
ENV http_proxy "http://proxy.address:port"
ENV https_proxy "https://proxy.address:port"
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
  2. Change line 80 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/source_test.sh:

Original:

# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    sudo rosdep init
fi

Change to:

# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    rosdep init
fi
  3. In ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/source_test.sh, line 56:

Original:

ici_time_start setup_apt

sudo apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    sudo apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
fi
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    sudo apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache
fi

ici_time_start setup_rosdep

Change to:

ici_time_start setup_apt

apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
fi
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache
fi

ici_time_start setup_rosdep
  4. In the same way, change line 70 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/abi_check.sh:

Original:

    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        sudo rosdep init
    #fi

Change to:

    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        rosdep init
    #fi

Compile again with catkin, then you can run industrial_ci as indicated above.

Setup of Allied Vision Camera with VimbaSDK and ROS node

How to setup and use avt_vimba_camera node

Assuming an x86_64 system and a GigE PoE camera.

http://wiki.ros.org/avt_vimba_camera

Driver Setup

  1. Download the Vimba SDK from Allied Vision website https://www.alliedvision.com/en/products/software.html
  2. Extract the contents of the file
  3. Go to Vimba_2_1/VimbaGigETL/
  4. Execute sudo ./Install.sh. This will add a couple of files inside /etc/profile.d
    • VimbaGigETL_64bit.sh
    • VimbaGigETL_32bit.sh
  5. The SDK will ask you to logout and login again. If you don’t wish to do so, execute source /etc/profile.d/VimbaGigETL_64bit.sh
  6. Connect the camera.
  7. You’re ready to use the camera.

Change Camera IP Address

Initially, the camera runs in DHCP mode. If you wish to change this:

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera and Press Open CONFIG
  4. Go to the Tree and Open the GiGE root and the configuration subnode
  5. Change IP Configuration Mode to Persistent
  6. Move to the Persistent tree and change the Persistent IP Address to the desired value.
  7. Finally Click on IP Configuration Apply and click the Execute button
  8. Close the CONFIG MODE window
  9. After a few moments, the camera will appear showing the selected IP.

View Image Stream from Camera

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera while in the viewer
  4. Select Open FULL ACCESS
  5. A new Window will appear, press the Blue Play button

ROS Node setup

  1. Once the driver is working, install the node using sudo apt-get install ros-xxxx-avt-vimba-camera, where xxxx represents your ROS distro.
  2. From Autoware, launch in a sourced terminal roslaunch runtime_manager avt_camera.launch guid:=SERIALNUMBER or roslaunch runtime_manager avt_camera.launch ip:=xxx.xxx.xxx.xxx

The serial number or IP address is the one previously obtained from the configuration tool.

Dynamic configuration

The avt_vimba_camera package supports dynamic configuration.

Once the node is running, execute rosrun dynamic_reconfigure dynparam get /avt_camera to list all the supported parameters.

To change one execute:
rosrun dynamic_reconfigure dynparam set /avt_camera exposure_auto_max 500000

This will update the maximum auto exposure time to 500 ms (the parameter value is given in microseconds).

For extra details on dynamic_reconfigure check: http://wiki.ros.org/dynamic_reconfigure

Continental ARS 308-21 SSAO Radar setup

*Originally written by Ekim Yurtsever.*

This is a short setup guide for the Continental ARS 308-21 SSAO radar node. This guide covers the following topics:

Contents

  1. Requirements and the hardware setup
  2. A brief introduction to CAN bus communication using Linux
  3. Radar configuration
  4. Receiving CAN messages on ROS
  5. Using the Autoware RADAR Node


1. Requirements

Hardware

  1. Continental ARS 308-21 SSAO Radar
  2. 12V DC power supply
  3. CAN interface device
  4. CAN adaptor*

Software

  • Ubuntu 14.04 or above
  • ROS
  • ROS Package: socketcan_interface (http://wiki.ros.org/socketcan_interface)

*: The adaptor is needed for termination of the CAN circuit. From the technical documentation of the Continental ARS 308-2C/-21:

“Since no termination resistors are included in the radar sensor ARS 308-2C and ARS 308-21, two 120 Ohm terminal resistors have to be connected to the network (separately or integrated in the CAN interface of the corresponding unit).”


Hardware setup

1. Connect the devices as shown below.

Fig 1. Hardware setup. 

2. Turn on the power supply. The device should start working; its operation is audible.

2. A brief introduction to CAN bus communication using Linux

There are various ways to communicate with a CAN bus. For Linux, SocketCAN is one of the most widely used CAN drivers: it is open source, ships with the kernel, and can be used with many devices. If you prefer a different vendor's driver, please refer to that driver's manual for communicating via the CAN bus.

First, load the drivers. The device sends messages at a specific bitrate; if it is not matched, the stream will not be synchronized, so the ip link must be configured with the bitrate of the device. The bitrate for this device is fixed at 500 kbit/s and cannot be changed. Below is an example snippet for setting up CAN device can1:

$ sudo modprobe can_dev
$ sudo modprobe can
$ sudo modprobe can_raw
$ sudo ip link set can1 type can bitrate 500000
$ sudo ifconfig can1 up

Now the connection between the computer and the sensor is established.

To check the information sent by the device, a user-friendly tool package called can-utils can be used with SocketCAN to access the messages via the driver.

Get can-utils:

$ sudo apt-get install can-utils

Display the CAN messages from can1:

$ candump can1

A stream of CAN messages should be received at this point.

The CAN messages sent from the device have to be converted into meaningful information. CAN messages carry an identifier (header) that indicates the content of the message. The headers and contents of the messages sent from the radar are described below:

0x300 and 0x301 are the input signals. The ego-vehicle speed and yaw rate can be sent to the device. If this information is provided, the radar will return detected objects’ positions and speeds relative to the ego-vehicle. If this information is not sent, the radar will assume that it is stationary.

0x600, 0x701 and 0x702 are the output messages of the radar. The structure of these messages is given below:

Figure 2. Message structure of 0x600
Figure 3. Message structure of 0x701. The physical meanings are given also.

3. Configuring the radar

The radar must first be configured to send tracked object information. This device does not transmit raw data from the radar scans. Instead, its microcontroller reads the raw sensing data and detects/tracks objects with its own algorithm (this algorithm is not accessible). The sensor sends the detected/tracked object information through the CAN bus.

The default behavior of the device is to send detected objects (not tracked). In order to receive the tracked object messages 0x60A, 0x60B and 0x60C, a configuration message has to be sent. The following command sends the configuration message using can-utils:

$ cansend can1 200#0832000200000000 

Now 0x60A, 0x60B and 0x60C messages can be received instead of 0x600, 0x701 and 0x702. We can check this by dumping the CAN stream on the terminal screen with the following command again:

$ candump can1 

The stream should include 0x60A, 0x60B and 0x60C messages now.

4. Receiving CAN messages on ROS

The ROS package socketcan_interface is needed to receive CAN messages in ROS. This package works on top of SocketCAN.

Install socketcan_interface:

$ sudo apt-get install ros-kinetic-socketcan-interface 

Test the communication in ROS:

$ rosrun socketcan_interface socketcan_dump can1

This should display the received messages in ROS.

With socketcan_interface, a driver for this device can be developed. However, there is already a ready-made CAN driver in ROS called ros_canopen. Install this package with the following command:

$ sudo apt-get install ros-kinetic-ros-canopen

This package will be used to publish the CAN messages received from the device in the ROS environment.

The socketcan_to_topic node in the socketcan_bridge package can be used to publish topics from the CAN stream. First, start a ROS core and then launch this node with the name of the CAN port as an argument (e.g., can1).

$ roscore
$ rosrun socketcan_bridge socketcan_to_topic_node _can_device:="can1"

This will publish a topic called “received_messages”. Check the messages with the following command:

$ rostopic echo /received_messages

This should show the received messages. We are interested in the “id” and “data” fields.

Rosbag Record from Rosbag Play, timestamps out of sync

When recording data from a previously recorded rosbag instead of live sensor data, the clock might become a problem.
rosbag record stamps the new bag with the time at which it is created, but the original message timestamps are not updated, causing the clock in the new rosbag and the topics’ timestamps to be out of sync.

To fix this, when recording the new rosbag, add /clock to the list of recorded topics. This will keep the clock of the original rosbag instead of creating a new one.

Example:

<launch>
  <param name="use_sim_time" value="true" />

  <node pkg="rosbag" type="play" name="rosbagplay" output="screen" required="true" args="--clock /PATH/TO/ROSBAGTOPLAY"/>

  <node pkg="rosbag" type="record" name="rosbagrecord" output="screen" required="true" args="/clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE"/>
</launch>
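
The same can be done manually from two terminals (paths and topic list are placeholders):

$ rosparam set use_sim_time true
$ rosbag play --clock /PATH/TO/ROSBAGTOPLAY          # terminal 1
$ rosbag record /clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE   # terminal 2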
