
Hybrid cross-compilation for ROS-based projects for Raspberry Pi*

*(and other linux-armhf platforms)

Table of contents

  1. Summary
  2. Intro and motivation
  3. Repository structure
  4. Requirements
  5. Preparing the host system
  6. Preparing the target system
  7. Importing data into the docker build server
  8. Cross-compiling ROS (comm & robot)
  9. Cross-compiling ROS projects
  10. Additional info
    1. QEMU emulation
  11. Things that don't work
  12. Contributions

Summary

This project outlines an approach to a docker-based cross-compiling environment, suitable for building ROS projects for the armhf architecture* on a system with a different architecture. The method proposed here can also be used to cross-compile the ROS comm and ROS robot variants.

* Other target architectures should work as long as they run linux, but they haven't been tested yet

Intro and motivation

I recently started a project which I believe could produce a large code repository that I don't want to compile (and write) on the raspberry pi. Instead, I wanted to cross compile my code on a PC and move the executables to the Pi afterwards.

I was surprised to realize that there are no (simple) readily available solutions to achieve this. There is a section on cross compiling on the ROS web page, but it mentions an abandoned Eros stack which I couldn't set up. There are also some attempts described on various web pages and forums, none of which offered a satisfying (simple, step-by-step, user-friendly) solution.

One of those articles sparked the idea of a "hybrid" environment for cross-compilation. All dependencies and libraries are resolved natively on the target system and afterwards imported into the build server, where all the muscle work (compilation & linking) takes place. It would go something like this:

  1. Install dependencies on the target system, the same as if you were to do the compilation there
  2. Create a docker container running a build server on the host system
  3. Import part of the target filesystem (e.g. raspbian) into the build server
  4. Create/import a catkin workspace in the build server
  5. Use a customized toolchain and gcc-linux-armhf (or the compiler for your target system) to compile the workspace
  6. Pack the compilation output and send it to the target system

Repository structure

Here's a brief overview of files in the repository and their significance:

ros_rpi_hybrid_cross_compilation
    |
    ├- buildroot/  --Content copied into a shared folder
    |   ├- bin/  --Useful bash scripts
    |   ├- img_processing/  --Scripts for importing target filesystem
    |   |   ├- process_img.bash  --Wrapper script which runs everything else inside this folder
    |   |   └...
    |   ├- build.env  --Definition of environment variables for cross-compilation
    |   └- toolchain.cmake  --Toolchain file for cross-compilation
    ├- profile/  --Customized bash profiles
    |   ├- profile  --/etc/profile patched to start ssh-agent
    |   └- .bashrc  --/root/.bashrc patched to source build.env file
    ├- ssh/  --SSH keys imported into the build server
    |   ├- buildserver_rsa
    |   ├- buildserver_rsa.pub
    |   └- known_hosts
    ├- Dockerfile  --Dockerfile for building a docker image of the build server
    ├- entrypoint.sh  --Entry point for docker container
    └- start-buildserver.sh  --Script that creates and starts a docker container

SSH keys are not important unless you plan to connect to a remote server from the container often (the ssh/ folder is copied to /root/.ssh).
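If you do want passwordless SSH from the container to the target, one option is to generate a key pair under the names shown in the tree above before building the image, then authorize it on the target. A minimal sketch; pi@raspberrypi is a placeholder for your target's user and hostname:

user@host:~/ros_rpi_hybrid_cross_compilation$ ssh-keygen -t rsa -f ssh/buildserver_rsa -N ""
user@host:~/ros_rpi_hybrid_cross_compilation$ ssh-copy-id -i ssh/buildserver_rsa.pub pi@raspberrypi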

Requirements

  1. Host computer with an architecture & operating system that supports docker, preferably linux
    • On windows, get a linux-like terminal (e.g. cygwin) or install the windows subsystem for linux
    • On Mac, I don't know :D
  2. Physical raspberry pi running raspbian, or a QEMU-emulated raspberry pi (check the Additional info section)

Preparing the host system

Preparing the host system comes down to installing docker, pulling the image of the build server and, finally, creating the container.

  1. Start by cloning this repository to your machine

    user@host:~$ git clone https://github.com/vedranMv/ros_rpi_hybrid_cross_compilation
  2. Then install docker: https://docs.docker.com/install/

  3. Get the docker image of the build server; here you have two options:

    • Pull the prebuilt image from Docker Hub

    user@host:~$ docker pull vedranmv/buildserver
    • [OR] Use the Dockerfile supplied in the root of this repo to build an image of the build server
      1. Open the cloned repo
      2. Run
    user@host:~/ros_rpi_hybrid_cross_compilation$ docker build -t vedranmv/buildserver:latest .
  4. Select a folder on the host system which you want to share with docker. This will be the build directory, containing the workspace to be compiled, the toolchain and the compilation output. I used the /usr/local/build folder on the host system, which was mapped to the same folder inside the docker container. If you prefer a different folder, change the $WS variable inside the buildroot/build.env file (see the sketch below).

  5. Start the docker container with the command below. If everything went okay, only the container ID and name should be printed on the terminal. (3 error messages can show up if you don't supply ssh keys in the ssh/ folder)

    user@host:~/ros_rpi_hybrid_cross_compilation$ ./start-buildserver.sh
  6. Open the container terminal with

    user@host:~$ docker exec -ti <container_name> bash
  7. Exit and stop the container in preparation for the next step

    root@buildserver:~$ exit
    user@host:~$ docker stop <container_name>

Later on, use docker start <container_name> followed by docker exec -ti <container_name> bash to start the container and open its terminal.
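For reference, the part of buildroot/build.env relevant to step 4 might look like the sketch below. This is illustrative only; the file shipped in the repo is authoritative:

# Shared build directory, mapped to the same path inside the container;
# change this if you prefer a different folder
export WS=/usr/local/build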

Preparing the target system

As the name hints, this is a hybrid cross-compilation, which means that part of the work needs to be done on the target system as well. More precisely, we collect all ROS dependencies and those of the project being compiled. It's tricky to do that directly on the host system; instead, we install them on the target system first, and then copy them from there into the toolchain. First, ensure you have raspbian running, either natively or emulated (check Additional info for how to use emulated raspbian).

Once on the target system, follow the instructions in the guide for installing ROS from source up until section 2.1.2 Resolving Dependencies. There we have to slightly modify the command:

pi@raspberry:~/ros_catkin_ws$ rosdep install --from-paths src --ignore-src --rosdistro kinetic -y --os=debian:stretch

This will run for a while and install all libraries required for compiling ROS from source. To compile ROS on the target system this would be enough, and we could proceed with the instructions from the link above. For cross-compilation, however, there seems to be an issue with newer boost libraries which, for some reason*, causes the cross-compilation later on to fail with the error message:

.../librosconsole.so: undefined reference to `boost::re_detail_106200::cpp_regex_traits_implementation<char>::transform(char const*, char const*) const'
.../librosconsole.so: undefined reference to `boost::re_detail_106200::cpp_regex_traits_implementation<char>::transform_primary(char const*, char const*) const'
collect2: error: ld returned 1 exit status

*It seems like a cmake-related problem. Ubuntu 16.04 on the build server comes with cmake-3.5, which is not compatible with boost-1.62. Raspbian stretch, on the other hand, uses cmake-3.7. (Installing cmake-3.7 on the build server doesn't resolve the issue.)

To fix it, swap Boost 1.62, which rosdep installs by default, for the older version 1.58. So, on the target system do the following:

pi@raspberry:~$ sudo apt-get remove libboost-*-dev
pi@raspberry:~$ sudo apt-get install libboost*1.58-dev libboost-mpi-python1.58*
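To confirm the swap worked, list the installed boost dev packages; only 1.58 versions should show up with the "installed" status (ii):

pi@raspberry:~$ dpkg -l 'libboost*-dev' | grep ^ii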

If you chose the ros_comm or ros_robot variant, you can stop here, as we can compile these versions of ROS on the build server. For other variants (desktop, desktop_full), the only way to go is to compile them directly on the target system. For native compilation, it's a good idea to use the standard install path, /opt/ros/<ros_version>.

At the end of this step, your target system should have all dependencies installed for the selected ROS variant and, if you compiled ROS on the target system, ROS installed in /opt/ros/<ros_version>.

Importing data into the docker build server

Now we need to import & process the target filesystem. What we're doing here is copying all system libraries and include directories (and ROS, if it exists). This is a bit tricky, as almost all libraries on linux are symlinks, which become invalid the moment you mount the SD card with the filesystem on your PC. So, as part of the processing, we set up a chroot at the root of the SD card and make a hard copy of all files. This converts all symlinks into their respective files. Step by step, the procedure goes:

  1. In the previous step, while setting up the host system, we copied the img_processing/ folder to our build directory ($WS) together with build.env. Now we navigate there on the host system (not docker) and run process_img.bash with root privileges. This has to be run on the host computer directly, because docker has no access to mounted drives.* After the script has finished, you should see a piroot/ folder appear in the $WS folder. The process can take a while.
user@host:~$ source /path/to/build.env
user@host:~$ cd $WS/img_processing
user@host:~$ sudo ./process_img.bash /media/user/rootfs
Copying data from mounted directory...done
Prepring environment for chroot...done
Executing the script in chroot..../cp: cannot stat '/lib_orig/./cpp': No such file or directory
<a lot of "No such file" warnings>
done
Housekeeping...done
Your environment is now ready for crosscompiling

*If you don't like running this script outside docker, you could try manually copying the folders into the shared build directory and then running the script from within docker
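For intuition, the symlink-handling step inside process_img.bash boils down to something like the sketch below. This is conceptual only (the /lib_orig name is inferred from the script output above), and running the image's own cp inside the chroot assumes the host can execute armhf binaries, e.g. via qemu-user-static and binfmt:

# Inside a chroot rooted at the card, absolute symlinks resolve against the
# image rather than the host, so a dereferencing copy (cp -rL) can replace
# each symlink with a regular copy of the file it points to
user@host:~$ sudo chroot /media/user/rootfs /bin/sh -c 'cp -rL /lib_orig /lib'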

Cross-compiling ROS (comm & robot)

Make sure the build server is running before continuing. Use docker start <container_name> followed by docker exec -ti <container_name> bash on the host system to start the container and open its terminal.

For now, only the ros_comm and ros_robot variants can be built with this method. In ros_robot, the collada_urdf package stubbornly fails to compile with errors about missing include files, and fixing the include files yields an error about pkg-config. As of this writing, the only solution is to skip building collada_urdf, which is why the ros_robot instructions below move it out of src/.

  • ros_comm

    root@buildserver:~$ cd $WS
    root@buildserver:~$ mkdir ros_cross_comm
    root@buildserver:~$ cd ros_cross_comm
    root@buildserver:~/ros_cross_comm$ rosinstall_generator ros_comm --rosdistro kinetic --deps --wet-only --tar > kinetic-ros_comm-wet.rosinstall
    root@buildserver:~/ros_cross_comm$ wstool init -j8 src kinetic-ros_comm-wet.rosinstall
  • ros_robot

    root@buildserver:~$ cd $WS
    root@buildserver:~$ mkdir ros_cross_robot
    root@buildserver:~$ cd ros_cross_robot
    root@buildserver:~/ros_cross_robot$ rosinstall_generator robot --rosdistro kinetic --deps --wet-only --tar > kinetic-robot-wet.rosinstall
    root@buildserver:~/ros_cross_robot$ wstool init -j8 src kinetic-robot-wet.rosinstall
    root@buildserver:~/ros_cross_robot$ mv src/collada_urdf .

And in either case finish with:

root@buildserver:~/ros_cross_xxx$ ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=$WS/toolchain.cmake -DCATKIN_SKIP_TESTING=ON

The command above is similar to the one used during a normal ROS installation, but with the addition of a custom toolchain file. This file tells cmake which compiler to use and where all the libraries and include files are located. In addition, building unit tests has to be disabled (-DCATKIN_SKIP_TESTING=ON) because gtest and other required packages are not installed, so the build would fail.

Having compiled ROS, symlink the install directory to the same folder where ROS lives on the target system, by default /opt/ros/<ros_version>, and source the ROS environment (check the next section for how to do that).

A quick sanity check at this point confirms that the cross-compiler is indeed working as intended:

root@buildserver:~$ file /opt/ros/kinetic/lib/libcpp_common.so 
libcpp_common.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, BuildID[sha1]=b69cc700533a051b6efa2e8d6b34cc5f370c04ff, not stripped

Cross-compiling ROS projects

Make sure the build server is running before continuing. Use docker start <container_name> followed by docker exec -ti <container_name> bash on the host system to start the container and open its terminal.

Before continuing, ensure that the ROS install directory on the build server is the same as the one on the target system. This is important due to the nature of catkin workspace overlaying:

  • If you've cross-compiled ROS in the previous step, symlink the install folder and source the environment:
    root@buildserver:~$ mkdir /opt/ros
    root@buildserver:~$ ln -s $WS/ros_cross_robot/install_isolated /opt/ros/kinetic
    root@buildserver:~$ source /opt/ros/kinetic/setup.bash
  • If you've imported precompiled ROS from the target system (usually installed in /opt/ros/...), create a symlink to it and source the environment:
    root@buildserver:~$ mkdir /opt/ros
    root@buildserver:~$ ln -s $WS/piroot/opt/ros/kinetic /opt/ros/kinetic
    root@buildserver:~$ source /opt/ros/kinetic/setup.bash

From this point on, we follow the usual procedure for creating a catkin workspace, putting the code in it and compiling the workspace. Again, the toolchain file needs to be specified in order to make cmake aware of the compiler and libraries we want to use:

root@buildserver:~$ cd $WS
root@buildserver:~$ mkdir catkin_project_ws
root@buildserver:~$ cd catkin_project_ws

root@buildserver:~/catkin_project_ws$ mkdir src
root@buildserver:~/catkin_project_ws$ cd src
root@buildserver:~/catkin_project_ws/src$ catkin_init_workspace
Creating symlink "/usr/local/build/catkin_project_ws/src/CMakeLists.txt" pointing to "/opt/ros/kinetic/share/catkin/cmake/toplevel.cmake"

root@buildserver:~/catkin_project_ws/src$ cd ..
root@buildserver:~/catkin_project_ws$ catkin_make_isolated --install -DCMAKE_TOOLCHAIN_FILE=$WS/toolchain.cmake

Once the compilation is done, zip the install_isolated folder and unpack it on the target system. If ROS has been cross-compiled as well, copy the install_isolated folder from the ros_cross* workspace to /opt/ros/<ros version> on the target system and source the environment there.
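One way to do the transfer, with pi@raspberrypi as a placeholder for your target's user and hostname:

root@buildserver:~/catkin_project_ws$ tar czf install_isolated.tar.gz install_isolated
root@buildserver:~/catkin_project_ws$ scp install_isolated.tar.gz pi@raspberrypi:~/

pi@raspberry:~$ tar xzf install_isolated.tar.gz
pi@raspberry:~$ source install_isolated/setup.bash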

When finished, exit and stop the container as shown in the last step of Preparing the host system.

Additional info

QEMU emulation

(TODO)

Things that don't work

  • compiling the collada_urdf package from the ros_robot variant
  • compiling the desktop variants - the build fails when linking some graphical libraries that require QT executables in the linking process
  • using dpkg multiarch support to install ros for armhf directly in docker with all of its dependencies

Contributions

Feel free to drop any suggestions, comments, ideas ^^
