The Robot Operating System (ROS) is a set of software libraries and tools that help you build robot applications. From drivers to state-of-the-art algorithms, and with powerful developer tools, ROS has what you need for your next robotics project. And it's all open source.
Create a Dockerfile in your ROS app project:
FROM ros:indigo
# place here your application's setup specifics
CMD [ "roslaunch", "my-ros-app my-ros-app.launch" ]
You can then build and run the Docker image:
docker build -t my-ros-app .
docker run -it --rm --name my-running-app my-ros-app
This dockerized image of ROS is intended to provide a simplified and consistent platform to build and deploy distributed robotic applications. Built from the official Ubuntu image and ROS's official Debian packages, it includes recent supported releases for quick access and download. This provides roboticists in research and industry with an easy way to develop, reuse and ship software for autonomous actions and task planning, control dynamics, localization and mapping, swarm behavior, as well as general system integration.
Developing such complex systems with cutting-edge implementations of newly published algorithms remains challenging, as repeatability and reproducibility of robotic software can fall by the wayside in the race to innovate. With the added difficulty of coding, tuning, and deploying multiple software components that span many engineering disciplines, a more collaborative approach becomes attractive. However, the technical difficulties of sharing and maintaining a collection of software across multiple robots and platforms have often demanded more time and effort than many smaller labs and businesses could afford.
With the advancements and standardization of software containers, roboticists are primed to acquire a host of improved developer tooling for building and shipping software. To help alleviate the growing pains and technical challenges of adopting new practices, we have focused on providing an official resource for using ROS with these new technologies.
The available tags include supported distros along with a hierarchy of tags based on the most common meta-package dependencies, designed to have a small footprint and simple configuration:
ros-core: barebones ROS install
ros-base: basic tools and libraries (also tagged with the distro name, with the LTS version as latest)
robot: basic install for robots
perception: basic install for perception tasks
The rest of the common meta-packages, such as desktop and desktop-full, are hosted on automatic build repos under OSRF's Docker Hub profile here. These meta-packages include graphical dependencies and pull in a host of other large packages such as X11, X server, etc. So in the interest of keeping the official images lean and secure, the desktop packages are only hosted with OSRF's profile.
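For example, to pull one of these tag variants instead of the default image, you might run something like the following (the distro and tag names here are only illustrative; check Docker Hub for the tags currently available):
docker pull ros:indigo-ros-base
docker pull osrf/ros:indigo-desktop-full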
ROS uses the ~/.ros/ directory for storing logs and debugging info. If you wish to persist these files beyond the lifecycle of the containers that produced them, the ~/.ros/ folder can be mounted to an external volume on the host, or a derived image can specify volumes to be managed by the Docker engine. By default, the container runs as the root user, so /root/.ros/ would be the full path to these files.
For example, if you wish to use the .ros folder that already resides in your local home directory (here with a username of ubuntu), you can simply launch the container with an additional volume argument:
docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
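Alternatively, here is a minimal sketch of a derived image that lets the Docker engine manage the log directory as a volume (the base tag is just an example):
FROM ros:indigo
# declare the default ROS home so logs persist in a Docker-managed volume
VOLUME /root/.ros/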
Some applications may require device access for acquiring images from connected cameras, control input from human interface devices, or GPUs for hardware acceleration. This can be done using the --device run argument to mount the device inside the container, giving processes inside the container hardware access.
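For example, to pass a host camera through to the container, something like the following could work (the device path /dev/video0 is only an assumption for illustration):
docker run -it --device=/dev/video0 ros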
The ROS runtime "graph" is a peer-to-peer network of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of data over topics, and storage of data on a Parameter Server. To abide by the best practice of one process per container, Docker networks can be used to string together several running ROS processes. For further details, see the ROS NetworkSetup wiki article, or see the deployment example below.
NOTE: This example requires the experimental version of Docker for its new networking features.
If we want all our ROS nodes to easily talk to each other, we can use a virtual network to connect the separate containers. In this short example, we'll create a virtual network, spin up a new container running roscore advertised as the master service on the new network, then spawn a message publisher and subscriber process as services on the same network.
Build a ROS image that includes the ROS tutorials using this Dockerfile:
FROM ros:indigo-ros-base
# install ros tutorials packages
RUN apt-get update && apt-get install -y \
    ros-indigo-ros-tutorials \
    ros-indigo-common-tutorials \
    && rm -rf /var/lib/apt/lists/*
Then to build the image from within the same directory:
docker build --tag ros:ros-tutorials .
To create a new network foo, we use the network command:
docker network create foo
Now that we have a network, we can create services. Services advertise their location on the network, making it easy to resolve the location/address of the service-specific container. We'll use this to make sure our ROS nodes can find and connect to our ROS master.
To create a container for the ROS master and advertise its service:
docker run -it --rm \
--publish-service=master.foo \
--name master \
ros:ros-tutorials \
roscore
Now you can see that master is running and is ready to manage our other ROS nodes. To add our talker node, we'll need to point the relevant environment variables to the master service:
docker run -it --rm \
--publish-service=talker.foo \
--env ROS_HOSTNAME=talker \
--env ROS_MASTER_URI=http://master:11311 \
--name talker \
ros:ros-tutorials \
rosrun roscpp_tutorials talker
Then in another terminal, run the listener node similarly:
docker run -it --rm \
--publish-service=listener.foo \
--env ROS_HOSTNAME=listener \
--env ROS_MASTER_URI=http://master:11311 \
--name listener \
ros:ros-tutorials \
rosrun roscpp_tutorials listener
Alright! You should see that the listener is now echoing each message the talker broadcasts. You can then list the services and see something like this:
$ docker service ls
SERVICE ID NAME NETWORK CONTAINER
67ce73355e67 listener foo a62019123321
917ee622d295 master foo f6ab9155fdbe
7f5a4748fb8d talker foo e0da2ee7570a
And for the containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a62019123321 ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp listener
e0da2ee7570a ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp talker
f6ab9155fdbe ros:ros-tutorials "/ros_entrypoint.sh About a minute ago Up About a minute 11311/tcp master
Ok, now that we see the two nodes are communicating, let's get inside one of the containers and do some introspection on what exactly the topics are:
docker exec -it master bash
source /ros_entrypoint.sh
If we then use rostopic to list published message topics, we should see something like this:
$ rostopic list
/chatter
/rosout
/rosout_agg
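From the same shell inside the container, you can also inspect the message traffic itself, for example by echoing the chatter topic (press Ctrl+C to stop):
rostopic echo /chatter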
To tear down the structure we've made, we just need to stop the containers and the services. We can stop and remove the containers using Ctrl+C in the terminals where we launched them, or using the stop command with the names we gave them:
docker stop master talker listener
docker rm master talker listener
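If the foo network is no longer needed, it can be removed as well (assuming no other containers are still attached to it):
docker network rm foo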
ROS.org: Main ROS website
Wiki: Find tutorials and learn more
ROS Answers: Ask questions. Get answers
Blog: Stay up-to-date
OSRF: Open Source Robotics Foundation