Merge pull request #1036 from infosiftr/image-references

Adjust a ton of image references (especially to use %%IMAGE%%)
yosifkit, 8 years ago
commit 94cdbe9042
100 changed files with 519 additions and 499 deletions
  1. adminer/content.md (+7 -7)
  2. aerospike/content.md (+4 -4)
  3. alpine/content.md (+1 -1)
  4. arangodb/content.md (+8 -8)
  5. backdrop/content.md (+2 -2)
  6. bonita/content.md (+9 -9)
  7. cassandra/content.md (+8 -8)
  8. centos/content.md (+6 -6)
  9. chronograf/content.md (+4 -4)
  10. clearlinux/content.md (+2 -2)
  11. clojure/content.md (+3 -3)
  12. composer/content.md (+8 -8)
  13. consul/content.md (+10 -10)
  14. convertigo/content.md (+6 -6)
  15. couchdb/content.md (+6 -6)
  16. crate/content.md (+1 -1)
  17. drupal/content.md (+8 -7)
  18. eclipse-mosquitto/content.md (+2 -2)
  19. eggdrop/content.md (+3 -3)
  20. elixir/content.md (+3 -3)
  21. erlang/content.md (+3 -3)
  22. fedora/content.md (+2 -2)
  23. flink/content.md (+4 -4)
  24. fsharp/content.md (+1 -1)
  25. gazebo/content.md (+4 -4)
  26. gcc/content.md (+3 -3)
  27. geonetwork/content.md (+4 -4)
  28. ghost/content.md (+5 -5)
  29. gradle/content.md (+1 -1)
  30. groovy/content.md (+2 -2)
  31. haskell/content.md (+4 -4)
  32. haxe/content.md (+3 -3)
  33. hello-seattle/content.md (+1 -1)
  34. hello-world/content.md (+2 -2)
  35. hello-world/update.sh (+2 -2)
  36. hola-mundo/content.md (+1 -1)
  37. httpd/content.md (+3 -3)
  38. hylang/content.md (+2 -2)
  39. ibmjava/content.md (+5 -5)
  40. influxdb/content.md (+8 -8)
  41. irssi/content.md (+2 -2)
  42. jenkins/content.md (+13 -13)
  43. jetty/content.md (+10 -10)
  44. joomla/content.md (+2 -2)
  45. jruby/content.md (+3 -3)
  46. julia/content.md (+2 -2)
  47. kaazing-gateway/content.md (+3 -3)
  48. kapacitor/content.md (+9 -9)
  49. known/content.md (+1 -1)
  50. kong/content.md (+3 -3)
  51. lightstreamer/content.md (+9 -9)
  52. mageia/content.md (+1 -1)
  53. mariadb/content.md (+19 -19)
  54. maven/content.md (+2 -2)
  55. mediawiki/content.md (+4 -4)
  56. memcached/content.md (+2 -2)
  57. mongo-express/content.md (+2 -2)
  58. mongo/content.md (+5 -5)
  59. mono/content.md (+1 -1)
  60. mysql/content.md (+19 -19)
  61. nats-streaming/content.md (+3 -3)
  62. nats/content.md (+3 -3)
  63. neo4j/content.md (+2 -2)
  64. neurodebian/content.md (+2 -2)
  65. nextcloud/content.md (+57 -38)
  66. nginx/content.md (+10 -10)
  67. nuxeo/content.md (+4 -4)
  68. odoo/content.md (+10 -10)
  69. openjdk/content.md (+1 -1)
  70. oraclelinux/content.md (+1 -1)
  71. orientdb/content.md (+6 -6)
  72. owncloud/content.md (+1 -1)
  73. percona/content.md (+19 -19)
  74. perl/content.md (+6 -6)
  75. photon/content.md (+1 -1)
  76. php-zendserver/content.md (+5 -5)
  77. php/content.md (+12 -12)
  78. piwik/content.md (+1 -1)
  79. plone/content.md (+6 -6)
  80. postgres/content.md (+9 -9)
  81. pypy/content.md (+4 -4)
  82. r-base/content.md (+4 -4)
  83. rabbitmq/content.md (+8 -8)
  84. rakudo-star/content.md (+2 -2)
  85. rapidoid/content.md (+1 -1)
  86. redis/content.md (+5 -5)
  87. redmine/content.md (+4 -4)
  88. registry/content.md (+1 -1)
  89. rethinkdb/content.md (+1 -1)
  90. rocket.chat/content.md (+3 -3)
  91. ros/content.md (+7 -7)
  92. ruby/content.md (+3 -3)
  93. sentry/content.md (+7 -7)
  94. silverpeas/content.md (+10 -10)
  95. solr/content.md (+10 -10)
  96. sonarqube/content.md (+1 -1)
  97. sourcemage/content.md (+2 -2)
  98. spiped/content.md (+5 -5)
  99. storm/content.md (+8 -8)
  100. swarm/content.md (+4 -4)
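
In these `content.md` files, `%%REPO%%` and `%%IMAGE%%` are placeholders expanded by the docs tooling. Judging from the hunks below, `%%REPO%%` is kept where the repository name itself is meant (container and volume names), while `%%IMAGE%%` marks the image reference to actually run, which can differ from the repository name (in the `aerospike` hunk, `aerospike/aerospike-server` becomes `%%IMAGE%%`). As a rough, purely illustrative sketch of the substitution these placeholders imply (the repository's real rendering script is not part of this commit), expanding one file could look like:

```console
$ # hypothetical stand-in for the docs-generation step:
$ # render adminer/content.md with both placeholders expanded to "adminer"
$ sed -e 's/%%IMAGE%%/adminer/g' -e 's/%%REPO%%/adminer/g' adminer/content.md > adminer/README-rendered.md
```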

+ 7 - 7
adminer/content.md

@@ -13,17 +13,17 @@ Adminer (formerly phpMinAdmin) is a full-featured database management tool writt
 ### Standalone
 
 ```console
-$ docker run --link some_database:db -p 8080:8080 adminer
+$ docker run --link some_database:db -p 8080:8080 %%IMAGE%%
 ```
 
 Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browser.
 
 ### FastCGI
 
-If you are already running a FastCGI capable web server you might prefer running adminer via FastCGI:
+If you are already running a FastCGI capable web server you might prefer running Adminer via FastCGI:
 
 ```console
-$ docker run --link some_database:db -p 9000:9000 adminer:fastcgi
+$ docker run --link some_database:db -p 9000:9000 %%IMAGE%%:fastcgi
 ```
 
 Then point your web server to port 9000 of the container.
@@ -36,18 +36,18 @@ Run `docker stack deploy -c stack.yml %%REPO%%` (or `docker-compose -f stack.yml
 
 ### Loading plugins
 
-This image bundles all official adminer plugins. You can find the list of plugins on GitHub: https://github.com/vrana/adminer/tree/master/plugins.
+This image bundles all official Adminer plugins. You can find the list of plugins on GitHub: https://github.com/vrana/adminer/tree/master/plugins.
 
 To load plugins you can pass a list of filenames in `ADMINER_PLUGINS`:
 
 ```console
-$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='tables-filter tinymce' adminer
+$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='tables-filter tinymce' %%IMAGE%%
 ```
 
 If a plugin *requires* parameters to work correctly you will need to add a custom file to the container:
 
 ```console
-$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='login-servers' adminer
+$ docker run --link some_database:db -p 8080:8080 -e ADMINER_PLUGINS='login-servers' %%IMAGE%%
 Unable to load plugin file "login-servers", because it has required parameters: servers
 Create a file "/var/www/html/plugins-enabled/login-servers.php" with the following contents to load the plugin:
 
@@ -73,7 +73,7 @@ The image bundles all the designs that are available in the source package of ad
 To use a bundled design you can pass its name in `ADMINER_DESIGN`:
 
 ```console
-$ docker run --link some_database:db -p 8080:8080 -e ADMINER_DESIGN='nette' adminer
+$ docker run --link some_database:db -p 8080:8080 -e ADMINER_DESIGN='nette' %%IMAGE%%
 ```
 
 To use a custom design you can add a file called `/var/www/html/adminer.css`.

+ 4 - 4
aerospike/content.md

@@ -11,7 +11,7 @@ Documentation for Aerospike is available at [http://aerospike.com/docs](https://
 The following will run `asd` with all the exposed ports forwarded to the host machine.
 
 ```console
-$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
+$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%%
 ```
 
 **NOTE** Although this is the simplest method of getting Aerospike up and running, it is not the preferred method. To properly run the container, please specify a **custom configuration** with the **access-address** defined.
@@ -22,7 +22,7 @@ By default, `asd` will use the configuration file at `/etc/aerospike/aerospike.c
 
 	-v <DIRECTORY>:/opt/aerospike/etc
 
-Where `<DIRECTORY>` is the path to a directory containing your custom aerospike.conf file. Next, you will want to tell `asd` to use the configuration file that was just mounted by using the `--config-file` option for `aerospike/aerospike-server`:
+Where `<DIRECTORY>` is the path to a directory containing your custom aerospike.conf file. Next, you will want to tell `asd` to use the configuration file that was just mounted by using the `--config-file` option for `%%IMAGE%%`:
 
 	--config-file /opt/aerospike/etc/aerospike.conf
 
@@ -31,7 +31,7 @@ This will tell `asd` to use the config file at `/opt/aerospike/etc/aerospike.con
 A full example:
 
 ```console
-$ docker run -d -v <DIRECTORY>:/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server asd --foreground --config-file /opt/aerospike/etc/aerospike.conf
+$ docker run -d -v <DIRECTORY>:/opt/aerospike/etc --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%% asd --foreground --config-file /opt/aerospike/etc/aerospike.conf
 ```
 
 ### access-address Configuration
@@ -59,7 +59,7 @@ Where `<DIRECTORY>` is the path to a directory containing your data files.
 A full example:
 
 ```console
-$ docker run -d -v <DIRECTORY>:/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike/aerospike-server
+$ docker run -d -v <DIRECTORY>:/opt/aerospike/data --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 %%IMAGE%%
 ```
 
 ## Clustering

+ 1 - 1
alpine/content.md

@@ -11,7 +11,7 @@
 Use like you would any other base image:
 
 ```dockerfile
-FROM alpine:3.5
+FROM %%IMAGE%%:3.5
 RUN apk add --no-cache mysql-client
 ENTRYPOINT ["mysql"]
 ```
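
For context, a Dockerfile like the one in the alpine hunk above is built and run in the usual way; the image tag and MySQL connection values below are arbitrary examples, not part of this commit:

```console
$ docker build -t my-mysql-client .
$ docker run -it --rm my-mysql-client -h some.mysql.host -u someuser -p
```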

+ 8 - 8
arangodb/content.md

@@ -32,10 +32,10 @@ Furthermore, ArangoDB offers a microservice framework called [Foxx](https://www.
 In order to start an ArangoDB instance run
 
 ```console
-unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -d --name arangodb-instance arangodb
+unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -d --name arangodb-instance %%IMAGE%%
 ```
 
-This will create and launch the arangodb docker instance as a background process. The identifier of the process is printed. By default, ArangoDB listens on port 8529 for requests and the image includes `EXPOSE 8529`. If you link an application container, it is automatically available in the linked container. See the following examples.
+This will create and launch the %%IMAGE%% docker instance as a background process. The identifier of the process is printed. By default, ArangoDB listens on port 8529 for requests and the image includes `EXPOSE 8529`. If you link an application container, it is automatically available in the linked container. See the following examples.
 
 In order to get the IP arango listens on run:
 
@@ -48,7 +48,7 @@ unix> docker inspect --format '{{ .NetworkSettings.IPAddress }}' arangodb-instan
 In order to use the running instance from an application, link the container
 
 ```console
-unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name my-app --link arangodb-instance:db-link arangodb
+unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --name my-app --link arangodb-instance:db-link %%IMAGE%%
 ```
 
 This will use the instance with the name `arangodb-instance` and link it into the application container. The application container will contain environment variables
@@ -66,7 +66,7 @@ These can be used to access the database.
 If you want to expose the port to the outside world, run
 
 ```console
-unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d arangodb
+unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d %%IMAGE%%
 ```
 
 ArangoDB listens on port 8529 for requests and the image includes `EXPOSE
@@ -95,7 +95,7 @@ The ArangoDB image provides several authentication methods which can be specifie
 In order to get a list of supported options, run
 
 ```console
-unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 arangodb arangod --help
+unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 %%IMAGE%% arangod --help
 ```
 
 ## Persistent Data
@@ -116,7 +116,7 @@ You can map the container's volumes to a directory on the host, so that the data
 unix> mkdir /tmp/arangodb
 unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 -p 8529:8529 -d \
           -v /tmp/arangodb:/var/lib/arangodb3 \
-          arangodb
+          %%IMAGE%%
 ```
 
 This will use the `/tmp/arangodb` directory of the host as database directory for ArangoDB inside the container.
@@ -126,13 +126,13 @@ This will use the `/tmp/arangodb` directory of the host as database directory fo
 Alternatively you can create a container holding the data.
 
 ```console
-unix> docker create --name arangodb-persist arangodb true
+unix> docker create --name arangodb-persist %%IMAGE%% true
 ```
 
 And use this data container in your ArangoDB container.
 
 ```console
-unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --volumes-from arangodb-persist -p 8529:8529 arangodb
+unix> docker run -e ARANGO_RANDOM_ROOT_PASSWORD=1 --volumes-from arangodb-persist -p 8529:8529 %%IMAGE%%
 ```
 
 If you want to save a few bytes you can alternatively use [busybox](https://registry.hub.docker.com/_/busybox) or [alpine](https://registry.hub.docker.com/_/alpine) for creating the volume-only containers. Please note that you need to provide the used volumes in this case. For example

+ 2 - 2
backdrop/content.md

@@ -11,7 +11,7 @@ Backdrop CMS enables people to build highly customized websites, affordably, thr
 The basic pattern for starting a `%%REPO%%` instance is:
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%%
 ```
 
 The following environment variables are also honored for configuring your Backdrop CMS instance:
@@ -28,7 +28,7 @@ The `BACKDROP_DB_NAME` **must already exist** on the given MySQL server. Check o
 If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%IMAGE%%
 ```
 
 Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.

+ 9 - 9
bonita/content.md

@@ -11,7 +11,7 @@ Bonita BPM is an open-source business process management and workflow suite crea
 ## Quick start
 
 ```console
-$ docker run --name bonita -d -p 8080:8080 bonita
+$ docker run --name bonita -d -p 8080:8080 %%IMAGE%%
 ```
 
 This will start a container running the [Tomcat Bundle](http://documentation.bonitasoft.com/?page=tomcat-bundle) with Bonita BPM Engine + Bonita BPM Portal. With no environment variables specified, it is as if you had launched the bundle on your host using startup.{sh|bat} (with security hardening on REST and HTTP APIs, cf. the Security section). Bonita BPM uses an H2 database here.
@@ -40,7 +40,7 @@ $ docker run --name mydbpostgres -v "$PWD"/custom_postgres/:/docker-entrypoint-i
 See the [official PostgreSQL documentation](https://registry.hub.docker.com/_/postgres/) for more details.
 
 ```console
-$ docker run --name bonita_postgres --link mydbpostgres:postgres -d -p 8080:8080 bonita
+$ docker run --name bonita_postgres --link mydbpostgres:postgres -d -p 8080:8080 %%IMAGE%%
 ```
 
 ### MySQL
@@ -64,13 +64,13 @@ See the [official MySQL documentation](https://registry.hub.docker.com/_/mysql/)
 Start your application container to link it to the MySQL container:
 
 ```console
-$ docker run --name bonita_mysql --link mydbmysql:mysql -d -p 8080:8080 bonita
+$ docker run --name bonita_mysql --link mydbmysql:mysql -d -p 8080:8080 %%IMAGE%%
 ```
 
 ## Modify default credentials
 
 ```console
-$ docker run --name=bonita -e "TENANT_LOGIN=tech_user" -e "TENANT_PASSWORD=secret" -e "PLATFORM_LOGIN=pfadmin" -e "PLATFORM_PASSWORD=pfsecret" -d -p 8080:8080 bonita
+$ docker run --name=bonita -e "TENANT_LOGIN=tech_user" -e "TENANT_PASSWORD=secret" -e "PLATFORM_LOGIN=pfadmin" -e "PLATFORM_PASSWORD=pfsecret" -d -p 8080:8080 %%IMAGE%%
 ```
 
 Now you can access the Bonita BPM Portal on localhost:8080/bonita and log in using: tech_user / secret
@@ -89,7 +89,7 @@ The Docker documentation is a good starting point for understanding the differen
 1.	Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
 2.	Start your `%%REPO%%` container like this:
 
-	docker run --name some-%%REPO%% -v /my/own/datadir:/opt/bonita -d %%REPO%%:tag
+	docker run --name some-%%REPO%% -v /my/own/datadir:/opt/bonita -d %%IMAGE%%:tag
 
 The `-v /my/own/datadir:/opt/bonita` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/opt/bonita` inside the container, where Bonita will deploy the bundle and write data files by default.
 
@@ -208,13 +208,13 @@ $ chcon -Rt svirt_sandbox_file_t /my/own/datadir
 	-	If < 7.3.0
 
 	```console
-	$ docker run --name=bonita_7.2.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -v "$PWD"/bonita_migration:/opt/bonita/ -d -p 8081:8080 bonita:7.2.4
+	$ docker run --name=bonita_7.2.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -v "$PWD"/bonita_migration:/opt/bonita/ -d -p 8081:8080 %%IMAGE%%:7.2.4
 	```
 
 	-	If >= 7.3.0
 
 	```console
-	$ docker run --name=bonita_7.5.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -d -p 8081:8080 bonita:7.5.4
+	$ docker run --name=bonita_7.5.4_postgres --link mydbpostgres:postgres -e "DB_NAME=newbonitadb" -e "DB_USER=newbonitauser" -e "DB_PASS=newbonitapass" -d -p 8081:8080 %%IMAGE%%:7.5.4
 	```
 
 -	Reapply specific configuration if needed, for example with a version >= 7.3.0 :
@@ -264,7 +264,7 @@ This Docker image activates both static and dynamic authorization checks by defa
 For specific needs you can override this behavior by setting HTTP_API to true and REST_API_DYN_AUTH_CHECKS to false:
 
 ```console
-$ docker run  -e HTTP_API=true -e REST_API_DYN_AUTH_CHECKS=false --name bonita -d -p 8080:8080 bonita
+$ docker run  -e HTTP_API=true -e REST_API_DYN_AUTH_CHECKS=false --name bonita -d -p 8080:8080 %%IMAGE%%
 ```
 
 ## Environment variables
@@ -358,7 +358,7 @@ For example, you can increase the log level :
 	echo 'sed -i "s/^org.bonitasoft.level = WARNING$/org.bonitasoft.level = FINEST/" /opt/bonita/BonitaBPMCommunity-7.5.4-Tomcat-7.0.76/server/conf/logging.properties' >> custom_bonita/bonita.sh
 	chmod +x custom_bonita/bonita.sh
 	
-	docker run --name bonita_custom -v "$PWD"/custom_bonita/:/opt/custom-init.d -d -p 8080:8080 bonita
+	docker run --name bonita_custom -v "$PWD"/custom_bonita/:/opt/custom-init.d -d -p 8080:8080 %%IMAGE%%
 
 Note: There are several ways to check the `bonita` logs. One of them is
 

+ 8 - 8
cassandra/content.md

@@ -13,7 +13,7 @@ Apache Cassandra is an open source distributed database management system design
 Starting a Cassandra instance is simple:
 
 ```console
-$ docker run --name some-%%REPO%% -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -d %%IMAGE%%:tag
 ```
 
 ... where `some-%%REPO%%` is the name you want to assign to your container and `tag` is the tag specifying the Cassandra version you want. See the list above for relevant tags.
@@ -31,7 +31,7 @@ $ docker run --name some-app --link some-%%REPO%%:%%REPO%% -d app-that-uses-cass
 Using the environment variables documented below, there are two cluster scenarios: instances on the same machine and instances on separate machines. For the same machine, start the instance as described above. To start other instances, just tell each new node where the first is.
 
 ```console
-$ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-%%REPO%%)" %%REPO%%:tag
+$ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-%%REPO%%)" %%IMAGE%%:tag
 ```
 
 ... where `some-%%REPO%%` is the name of your original Cassandra Server container, taking advantage of `docker inspect` to get the IP address of the other container.
@@ -39,7 +39,7 @@ $ docker run --name some-%%REPO%%2 -d -e CASSANDRA_SEEDS="$(docker inspect --for
 Or you may use the docker run --link option to tell the new node where the first is:
 
 ```console
-$ docker run --name some-cassandra2 -d --link some-cassandra:cassandra cassandra:tag
+$ docker run --name some-cassandra2 -d --link some-cassandra:cassandra %%IMAGE%%:tag
 ```
 
 For separate machines (ie, two VMs on a cloud provider), you need to tell Cassandra what IP address to advertise to the other nodes (since the address of the container is behind the docker bridge).
@@ -47,13 +47,13 @@ For separate machines (ie, two VMs on a cloud provider), you need to tell Cassan
 Assuming the first machine's IP address is `10.42.42.42` and the second's is `10.43.43.43`, start the first with exposed gossip port:
 
 ```console
-$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%REPO%%:tag
+$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.42.42.42 -p 7000:7000 %%IMAGE%%:tag
 ```
 
 Then start a Cassandra container on the second machine, with the exposed gossip port and seed pointing to the first machine:
 
 ```console
-$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%REPO%%:tag
+$ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43 -p 7000:7000 -e CASSANDRA_SEEDS=10.42.42.42 %%IMAGE%%:tag
 ```
 
 ## Connect to Cassandra from `cqlsh`
@@ -61,13 +61,13 @@ $ docker run --name some-%%REPO%% -d -e CASSANDRA_BROADCAST_ADDRESS=10.43.43.43
 The following command starts another Cassandra container instance and runs `cqlsh` (Cassandra Query Language Shell) against your original Cassandra container, allowing you to execute CQL statements against your database instance:
 
 ```console
-$ docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
+$ docker run -it --link some-%%REPO%%:cassandra --rm %%IMAGE%% sh -c 'exec cqlsh "$CASSANDRA_PORT_9042_TCP_ADDR"'
 ```
 
 ... or (simplified to take advantage of the `/etc/hosts` entry Docker adds for linked containers):
 
 ```console
-$ docker run -it --link some-%%REPO%%:cassandra --rm %%REPO%% cqlsh cassandra
+$ docker run -it --link some-%%REPO%%:cassandra --rm %%IMAGE%% cqlsh cassandra
 ```
 
 ... where `some-%%REPO%%` is the name of your original Cassandra Server container.
@@ -147,7 +147,7 @@ The Docker documentation is a good starting point for understanding the differen
 2.	Start your `%%REPO%%` container like this:
 
 	```console
-	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra -d %%REPO%%:tag
+	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/cassandra -d %%IMAGE%%:tag
 	```
 
 The `-v /my/own/datadir:/var/lib/cassandra` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/cassandra` inside the container, where Cassandra by default will write its data files.

+ 6 - 6
centos/content.md

@@ -8,21 +8,21 @@ CentOS Linux is a community-supported distribution derived from sources freely p
 
 # CentOS image documentation
 
-The `centos:latest` tag is always the most recent version currently available.
+The `%%IMAGE%%:latest` tag is always the most recent version currently available.
 
 ## Rolling builds
 
-The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull centos:6` or `docker pull centos:7`
+The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull %%IMAGE%%:6` or `docker pull %%IMAGE%%:7`
 
 ## Minor tags
 
 Additionally, images with minor version tags that correspond to install media are also offered. **These images DO NOT receive updates** as they are intended to match installation ISO contents. If you choose to use these images it is highly recommended that you include `RUN yum -y update && yum clean all` in your Dockerfile, or otherwise address any potential security concerns. To use these images, please specify the minor version tag:
 
-For example: `docker pull centos:5.11` or `docker pull centos:6.6`
+For example: `docker pull %%IMAGE%%:5.11` or `docker pull %%IMAGE%%:6.6`
 
 ## Overlayfs and yum
 
-Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On CentOS 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent centos images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failure - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details.
+Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On CentOS 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent %%IMAGE%% images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failure - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details.
 
 # Package documentation
 
@@ -30,12 +30,12 @@ By default, the CentOS containers are built using yum's `nodocs` option, which h
 
 # Systemd integration
 
-Systemd is now included in both the centos:7 and centos:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below:
+Systemd is now included in both the %%IMAGE%%:7 and %%IMAGE%%:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below:
 
 ## Dockerfile for systemd base image
 
 ```dockerfile
-FROM centos:7
+FROM %%IMAGE%%:7
 ENV container docker
 RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
 systemd-tmpfiles-setup.service ] || rm -f $i; done); \

+ 4 - 4
chronograf/content.md

@@ -11,7 +11,7 @@ Chronograf is InfluxData’s open source web application. Use Chronograf with th
 Chronograf runs on port 8888. It can be run and accessed by exposing that port:
 
 ```console
-$ docker run -p 8888:8888 chronograf
+$ docker run -p 8888:8888 %%IMAGE%%
 ```
 
 ### Mounting a volume
@@ -21,7 +21,7 @@ The Chronograf image exposes a shared volume under `/var/lib/chronograf`, so you
 ```console
 $ docker run -p 8888:8888 \
       -v $PWD:/var/lib/chronograf \
-      chronograf
+      %%IMAGE%%
 ```
 
 Modify `$PWD` to the directory where you want to store data associated with the InfluxDB container.
@@ -31,7 +31,7 @@ You can also have Docker control the volume mountpoint by using a named volume.
 ```console
 $ docker run -p 8888:8888 \
       -v chronograf:/var/lib/chronograf \
-      chronograf
+      %%IMAGE%%
 ```
 
 ### Using the container with InfluxDB
@@ -55,7 +55,7 @@ We can now start a Chronograf container that references this database.
 ```console
 $ docker run -p 8888:8888 \
      --net=influxdb \
-      chronograf --influxdb-url=http://influxdb:8086
+      %%IMAGE%% --influxdb-url=http://influxdb:8086
 ```
 
 Try combining this with Telegraf to get dashboards for your infrastructure within minutes!

+ 2 - 2
clearlinux/content.md

@@ -4,14 +4,14 @@ This serves as the official [Clear Linux OS](https://clearlinux.org) image.
 
 %%LOGO%%
 
-The `clearlinux:latest` tag will point to `clearlinux:base` which will track toward the latest release version of the distribution.
+The `%%IMAGE%%:latest` tag will point to `%%IMAGE%%:base` which will track toward the latest release version of the distribution.
 
 This image contains the os-core and os-core-update bundles; the latter can be used to add additional Clear Linux OS components (see [here](https://clearlinux.org/documentation/swupdate_about_sw_update.html) for more details about swupd and [here](https://clearlinux.org/documentation/bundles_overview.html) for more information on bundles).
 
 The following Dockerfile will install the editors and dev-utils bundles on top of the base image
 
 ```sh
-FROM clearlinux:base
+FROM %%IMAGE%%:base
 RUN swupd bundle-add editors dev-utils
 ```
 

+ 3 - 3
clojure/content.md

@@ -13,7 +13,7 @@ Clojure is a dialect of the Lisp programming language. It is a general-purpose p
 Since the most common way to use Clojure is in conjunction with [Leiningen (`lein`)](http://leiningen.org/), this image assumes that's how you'll be working. The most straightforward way to use this image is to add a `Dockerfile` to an existing Leiningen/Clojure project:
 
 ```dockerfile
-FROM clojure
+FROM %%IMAGE%%
 COPY . /usr/src/app
 WORKDIR /usr/src/app
 CMD ["lein", "run"]
@@ -29,7 +29,7 @@ $ docker run -it --rm --name my-running-app my-clojure-app
 While the above is the most straightforward example of a `Dockerfile`, it does have some drawbacks. The `lein run` command will download your dependencies, compile the project, and then run it. That's a lot of work, all of which you may not want done every time you run the image. To get around this, you can download the dependencies and compile the project ahead of time. This will significantly reduce startup time when you run your image.
 
 ```dockerfile
-FROM clojure
+FROM %%IMAGE%%
 RUN mkdir -p /usr/src/app
 WORKDIR /usr/src/app
 COPY project.clj /usr/src/app/
@@ -48,7 +48,7 @@ You can then build and run the image as above.
 If you have an existing Lein/Clojure project, it's fairly straightforward to compile your project into a jar from a container:
 
 ```console
-$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure lein uberjar
+$ docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app %%IMAGE%% lein uberjar
 ```
 
 This will build your project into a jar file located in your project's `target/uberjar` directory.

+ 8 - 8
composer/content.md

@@ -13,7 +13,7 @@ Run the `composer` image:
 ```sh
 docker run --rm --interactive --tty \
     --volume $PWD:/app \
-    composer install
+    %%IMAGE%% install
 ```
 
 You can mount the Composer home directory from your host inside the Container to share caching and configuration files:
@@ -22,7 +22,7 @@ You can mount the Composer home directory from your host inside the Container to
 docker run --rm --interactive --tty \
     --volume $PWD:/app \
     --volume $COMPOSER_HOME:/tmp \
-    composer install
+    %%IMAGE%% install
 ```
 
 By default, Composer runs as root inside the container. This can lead to permission issues on your host filesystem. You can run Composer as your local user:
@@ -31,7 +31,7 @@ By default, Composer runs as root inside the container. This can lead to permiss
 docker run --rm --interactive --tty \
     --volume $PWD:/app \
     --user $(id -u):$(id -g) \
-    composer install
+    %%IMAGE%% install
 ```
 
 When you need to access private repositories, you will either need to share your configured credentials, or mount your `ssh-agent` socket inside the running container:
@@ -43,7 +43,7 @@ docker run --rm --interactive --tty \
     --volume $PWD:/app \
     --volume $SSH_AUTH_SOCK:/ssh-auth.sock \
     --env SSH_AUTH_SOCK=/ssh-auth.sock \
-    composer install
+    %%IMAGE%% install
 ```
 
 When combining the use of private repositories with running Composer as another (local) user, you might run into non-existent user errors (thrown by ssh). To work around this, simply mount the host passwd and group files (read-only) into the container:
@@ -56,7 +56,7 @@ docker run --rm --interactive --tty \
     --volume /etc/group:/etc/group:ro \
     --user $(id -u):$(id -g) \
     --env SSH_AUTH_SOCK=/ssh-auth.sock \
-    composer install
+    %%IMAGE%% install
 ```
 
 ## Suggestions
@@ -72,7 +72,7 @@ Sometimes dependencies or Composer [scripts](https://getcomposer.org/doc/article
 	```sh
 	docker run --rm --interactive --tty \
 	    --volume $PWD:/app \
-	    composer install --ignore-platform-reqs --no-scripts
+	    %%IMAGE%% install --ignore-platform-reqs --no-scripts
 	```
 
 -	Create your own image (possibly by extending `FROM composer`).
@@ -82,7 +82,7 @@ Sometimes dependencies or Composer [scripts](https://getcomposer.org/doc/article
 -	Create your own image, and copy Composer from the official image into it:
 
 	```dockerfile
-	COPY --from=composer:1.5 /usr/bin/composer /usr/bin/composer
+	COPY --from=%%IMAGE%%:1.5 /usr/bin/composer /usr/bin/composer
 	```
 
 It is highly recommended that you create a "build" image that extends from your baseline production image. Binaries such as Composer should not end up in your production environment.
@@ -103,6 +103,6 @@ composer () {
         --volume /etc/passwd:/etc/passwd:ro \
         --volume /etc/group:/etc/group:ro \
         --volume $(pwd):/app \
-        composer "$@"
+        %%IMAGE%% "$@"
 }
 ```
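
Once the placeholder is rendered (for this repository that would be `composer`), the shell function defined at the end of the hunk above behaves like the regular Composer CLI, for example:

```console
$ composer install
$ composer require monolog/monolog
```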

+ 10 - 10
consul/content.md

@@ -25,7 +25,7 @@ We chose Alpine as a lightweight base with a reasonably small surface area for s
 
 Consul always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. We also use [gosu](https://github.com/tianon/gosu) to run Consul as a non-root "consul" user for better security. These binaries are all built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image.
 
-Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run consul members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
+Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run %%IMAGE%% members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
 
 The container exposes `VOLUME /consul/data`, which is a path where Consul will place its persisted state. This isn't used in any way when running in development mode. For client agents, this stores some information about the cluster and the client's health checks in case the container is restarted. For server agents, this stores the client information plus snapshots and data related to the consensus algorithm and other state like Consul's key/value store and catalog. For servers it is highly desirable to keep this volume's data around when restarting containers to recover from outage scenarios. If this is bind mounted then ownership will be changed to the consul user when the container starts.
 
@@ -38,22 +38,22 @@ The entry point also includes a small utility to look up a client or bind addres
 ## Running Consul for Development
 
 ```console
-$ docker run -d --name=dev-consul consul
+$ docker run -d --name=dev-consul %%IMAGE%%
 ```
 
 This runs a completely in-memory Consul server agent with default bridge networking and no services exposed on the host, which is useful for development but should not be used in production. For example, if that server is running at internal address 172.17.0.2, you can run a three node cluster for development by starting up two more instances and telling them to join the first node.
 
 ```console
-$ docker run -d consul agent -dev -join=172.17.0.2
+$ docker run -d %%IMAGE%% agent -dev -join=172.17.0.2
 ... server 2 starts
-$ docker run -d consul agent -dev -join=172.17.0.2
+$ docker run -d %%IMAGE%% agent -dev -join=172.17.0.2
 ... server 3 starts
 ```
 
 Then we can query for all the members in the cluster by running a Consul CLI command in the first container:
 
 ```console
-$ docker exec -t dev-consul consul members
+$ docker exec -t dev-consul %%IMAGE%% members
 Node          Address          Status  Type    Build  Protocol  DC
 579db72c1ae1  172.17.0.3:8301  alive   server  0.6.3  2         dc1
 93fe2309ef19  172.17.0.4:8301  alive   server  0.6.3  2         dc1
@@ -67,7 +67,7 @@ Development mode also starts a version of Consul's web UI on port 8500. This can
 ## Running Consul Agent in Client Mode
 
 ```console
-$  docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind=<external ip> -retry-join=<root agent ip>
+$  docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' %%IMAGE%% agent -bind=<external ip> -retry-join=<root agent ip>
 ==> Starting Consul agent...
 ==> Starting Consul agent RPC...
 ==> Consul agent running!
@@ -122,7 +122,7 @@ consul.service.consul.  0       IN      A       66.175.220.234
 If you want to expose the Consul interfaces to other containers via a different network, such as the bridge network, use the `-client` option for Consul:
 
 ```console
-docker run -d --net=host consul agent -bind=<external ip> -client=<bridge ip> -retry-join=<root agent ip>
+docker run -d --net=host %%IMAGE%% agent -bind=<external ip> -client=<bridge ip> -retry-join=<root agent ip>
 ==> Starting Consul agent...
 ==> Starting Consul agent RPC...
 ==> Consul agent running!
@@ -141,7 +141,7 @@ With this configuration, Consul's client interfaces will be bound to the bridge
 ## Running Consul Agent in Server Mode
 
 ```console
-$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' consul agent -server -bind=<external ip> -retry-join=<root agent ip> -bootstrap-expect=<number of server agents>
+$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' %%IMAGE%% agent -server -bind=<external ip> -retry-join=<root agent ip> -bootstrap-expect=<number of server agents>
 ```
 
 This runs a Consul server agent sharing the host's network. All of the network considerations and behavior we covered above for the client agent also apply to the server agent. A single server on its own won't be able to form a quorum and will be waiting for other servers to join.
@@ -161,7 +161,7 @@ By default, Consul's DNS server is exposed on port 8600. Because this is cumbers
 Here's an example:
 
 ```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul -dns-port=53 -recursor=8.8.8.8
+$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% -dns-port=53 -recursor=8.8.8.8
 ```
 
 This example also includes a recursor configuration that uses Google's DNS servers for non-Consul lookups. You may want to adjust this based on your particular DNS configuration. If you are binding Consul's client interfaces to the host's loopback address, then you should be able to configure your host's `resolv.conf` to route DNS requests to Consul by including "127.0.0.1" as the primary DNS server. This would expose Consul's DNS to all applications running on the host, but due to Docker's built-in DNS server, you can't point to this directly from inside your containers; Docker will issue an error message if you attempt to do this. You must configure Consul to listen on a non-localhost address that is reachable from within other containers.
@@ -169,7 +169,7 @@ This example also includes a recursor configuration that uses Google's DNS serve
 Once you bind Consul's client interfaces to the bridge or other network, you can use the `--dns` option in your *other containers* in order for them to use Consul's DNS server, mapped to port 53. Here's an example:
 
 ```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul agent -dns-port=53 -recursor=8.8.8.8 -bind=<bridge ip>
+$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% agent -dns-port=53 -recursor=8.8.8.8 -bind=<bridge ip>
 ```
 
 Now start another container and point it at Consul's DNS, using the bridge address of the host:
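
The hunk above ends just before the documentation's own example of that command; a minimal sketch with purely hypothetical values (not part of this diff) might look like:

```console
$ # assumes Consul's DNS is bound to the bridge address 172.17.0.1 and mapped to port 53
$ docker run -d --dns=172.17.0.1 --dns-search=service.consul my-app-image
```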

+ 6 - 6
convertigo/content.md

@@ -18,7 +18,7 @@ Convertigo Community edition brought to you by Convertigo SA (Paris & San Franci
 ## Quick start
 
 ```console
-$ docker run --name C8O -d -p 28080:28080 convertigo
+$ docker run --name C8O -d -p 28080:28080 %%IMAGE%%
 ```
 
 This will start a container running the minimal Convertigo MBaaS server. Convertigo MBaaS uses the image's **/workspace** directory to store configuration files and deployed projects as a Docker volume.
@@ -38,7 +38,7 @@ $ docker run -d --name fullsync couchdb:1.6.1
 Then launch Convertigo and link it to the running 'fullsync' container. The Convertigo MBaaS server will automatically use it as its fullsync repository.
 
 ```console
-$ docker run -d --name C8O-MBAAS --link fullsync:couchdb -p 28080:28080 convertigo
+$ docker run -d --name C8O-MBAAS --link fullsync:couchdb -p 28080:28080 %%IMAGE%%
 ```
 
 ## Link Convertigo to a Billing & Analytics database
@@ -61,7 +61,7 @@ convertigo
 Projects are deployed in the Convertigo workspace, a simple file system directory. You can map the docker container's **/workspace** to your physical system by using:
 
 ```console
-$ docker run --name C8O-MBAAS -v $(pwd):/workspace -d -p 28080:28080 convertigo
+$ docker run --name C8O-MBAAS -v $(pwd):/workspace -d -p 28080:28080 %%IMAGE%%
 ```
 
 You can share the same workspace across all Convertigo containers. In this case, when you deploy a project on one Convertigo container, it will be seen by the others. This is the best way to build multi-instance, load-balanced Convertigo server farms.
@@ -83,7 +83,7 @@ These accounts can be configured through the *administration console* and saved
 You can change the default administration account :
 
 ```console
-$ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVERTIGO_ADMIN_PASSWORD=s3cret -p 28080:28080 convertigo
+$ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVERTIGO_ADMIN_PASSWORD=s3cret -p 28080:28080 %%IMAGE%%
 ```
 
 ### `CONVERTIGO_TESTPLATFORM_USER` and `CONVERTIGO_TESTPLATFORM_PASSWORD` variables
@@ -91,7 +91,7 @@ $ docker run -d --name C8O-MBAAS -e CONVERTIGO_ADMIN_USER=administrator -e CONVE
 You can lock the **testplatform** by setting the account:
 
 ```console
-$ docker run -d --name C8O-MBAAS -e CONVERTIGO_TESTPLATFORM_USER=tp_user -e CONVERTIGO_TESTPLATFORM_PASSWORD=s3cret -p 28080:28080 convertigo
+$ docker run -d --name C8O-MBAAS -e CONVERTIGO_TESTPLATFORM_USER=tp_user -e CONVERTIGO_TESTPLATFORM_PASSWORD=s3cret -p 28080:28080 %%IMAGE%%
 ```
 
 ## `JAVA_OPTS` Environment variable
@@ -101,7 +101,7 @@ Convertigo is based on a *Java* process with some defaults *JVM* options. You ca
 Add any *Java JVM* options such as -Xmx or -D[something]
 
 ```console
-$ docker run -d --name C8O-MBAAS -e JAVA_OPTS="-Xmx4096m -DjvmRoute=server1" -p 28080:28080 convertigo
+$ docker run -d --name C8O-MBAAS -e JAVA_OPTS="-Xmx4096m -DjvmRoute=server1" -p 28080:28080 %%IMAGE%%
 ```
 
 ## Preconfigured Docker Compose stack

+ 6 - 6
couchdb/content.md

@@ -13,7 +13,7 @@ CouchDB comes with a suite of features, such as on-the-fly document transformati
 ### Start a CouchDB instance
 
 ```console
-$ docker run -d --name my-couchdb %%REPO%%
+$ docker run -d --name my-couchdb %%IMAGE%%
 ```
 
 This image includes `EXPOSE 5984` (the CouchDB port), so standard container linking will make it automatically available to the linked containers.
@@ -23,7 +23,7 @@ This image includes `EXPOSE 5984` (the CouchDB port), so standard container link
 In order to use the running instance from an application, link the container
 
 ```console
-$ docker run --name my-couchdb-app --link my-couchdb:couch %%REPO%%
+$ docker run --name my-couchdb-app --link my-couchdb:couch %%IMAGE%%
 ```
 
 See the [official docs](http://docs.couchdb.org/en/1.6.1/) for information on using and configuring CouchDB.
@@ -33,7 +33,7 @@ See the [official docs](http://docs.couchdb.org/en/1.6.1/) for infomation on usi
 If you want to expose the port to the outside world, run
 
 ```console
-$ docker run -p 5984:5984 -d %%REPO%%
+$ docker run -p 5984:5984 -d %%IMAGE%%
 ```
 
 CouchDB listens on port 5984 for requests and the image includes `EXPOSE 5984`. The flag `-p 5984:5984` exposes this port on the host.
@@ -52,7 +52,7 @@ CouchDB uses `/usr/local/var/lib/couchdb` to store its data. This directory is m
 You can map the container's volumes to a directory on the host, so that the data is kept between runs of the container. This example uses your current directory, but that is in general not the correct place to store your persistent data!
 
 ```console
-$ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%REPO%%
+$ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%IMAGE%%
 ```
 
 ## Specifying the admin user in the environment
@@ -60,7 +60,7 @@ $ docker run -d -v $(pwd):/usr/local/var/lib/couchdb --name my-couchdb %%REPO%%
 You can use the two environment variables `COUCHDB_USER` and `COUCHDB_PASSWORD` to set up the admin user.
 
 ```console
-$ docker run -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -d %%REPO%%
+$ docker run -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -d %%IMAGE%%
 ```
 
 ## Using your own CouchDB configuration file
@@ -70,7 +70,7 @@ The CouchDB configuration is specified in `.ini` files in `/usr/local/etc/couchd
 If you want to use a customized CouchDB configuration, you can create your configuration file in a directory on the host machine and then mount that directory as `/usr/local/etc/couchdb/local.d` inside the `%%REPO%%` container.
 
 ```console
-$ docker run --name my-couchdb -v /my/custom-config-dir:/usr/local/etc/couchdb/local.d -d %%REPO%%
+$ docker run --name my-couchdb -v /my/custom-config-dir:/usr/local/etc/couchdb/local.d -d %%IMAGE%%
 ```
 
 You can also use `couchdb` as the base image for your own couchdb instance and provide your own version of the `local.ini` config file:

+ 1 - 1
crate/content.md

@@ -19,7 +19,7 @@ The smallest CrateDB clusters can easily ingest tens of thousands of records per
 
 Spin up this Docker image like so:
 
-	$ docker run -p 4200:4200 crate
+	$ docker run -p 4200:4200 %%IMAGE%%
 
 Once you're up and running, head on over to [the introductory docs](https://crate.io/docs/stable/hello.html).
 

+ 8 - 7
drupal/content.md

@@ -11,13 +11,13 @@ Drupal is a free and open-source content-management framework written in PHP and
 The basic pattern for starting a `%%REPO%%` instance is:
 
 ```console
-$ docker run --name some-%%REPO%% -d %%REPO%%
+$ docker run --name some-%%REPO%% -d %%IMAGE%%
 ```
 
 If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 
 ```console
-$ docker run --name some-%%REPO%% -p 8080:80 -d %%REPO%%
+$ docker run --name some-%%REPO%% -p 8080:80 -d %%IMAGE%%
 ```
 
 Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
@@ -29,7 +29,7 @@ When first accessing the webserver provided by this image, it will go through a
 ## MySQL
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%%
 ```
 
 -	Database type: `MySQL, MariaDB, or equivalent`
@@ -39,7 +39,7 @@ $ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
 ## PostgreSQL
 
 ```console
-$ docker run --name some-%%REPO%% --link some-postgres:postgres -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-postgres:postgres -d %%IMAGE%%
 ```
 
 -	Database type: `PostgreSQL`
@@ -55,7 +55,7 @@ There is consensus that `/var/www/html/modules`, `/var/www/html/profiles`, and `
 If using bind-mounts, one way to accomplish pre-seeding your local `sites` directory would be something like the following:
 
 ```console
-$ docker run --rm %%REPO%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites
+$ docker run --rm %%IMAGE%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites
 ```
 
 This can then be bind-mounted into a new container:
@@ -66,19 +66,20 @@ $ docker run --name some-%%REPO%% --link some-postgres:postgres -d \
 	-v /path/on/host/profiles:/var/www/html/profiles \
 	-v /path/on/host/sites:/var/www/html/sites \
 	-v /path/on/host/themes:/var/www/html/themes \
-	%%REPO%%
+	%%IMAGE%%
 ```
 
 Another solution using Docker Volumes:
 
 ```console
 $ docker volume create %%REPO%%-sites
-$ docker run --rm -v %%REPO%%-sites:/temporary/sites %%REPO%% cp -aRT /var/www/html/sites /temporary/sites
+$ docker run --rm -v %%REPO%%-sites:/temporary/sites %%IMAGE%% cp -aRT /var/www/html/sites /temporary/sites
 $ docker run --name some-%%REPO%% --link some-postgres:postgres -d \
 	-v %%REPO%%-modules:/var/www/html/modules \
 	-v %%REPO%%-profiles:/var/www/html/profiles \
 	-v %%REPO%%-sites:/var/www/html/sites \
 	-v %%REPO%%-themes:/var/www/html/themes \
+	%%IMAGE%%
 ```
 
 ## %%STACK%%

+ 2 - 2
eclipse-mosquitto/content.md

@@ -19,7 +19,7 @@ Three directories have been created in the image to be used for configuration, p
 When running the image, the default configuration values are used. To use a custom configuration file, mount a **local** configuration file to `/mosquitto/config/mosquitto.conf`
 
 ```console
-$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf eclipse-mosquitto
+$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf %%IMAGE%%
 ```
 
 Configuration can be changed to:
@@ -40,7 +40,7 @@ i.e. add the following to `mosquitto.conf`:
 Run a container using the new image:
 
 ```console
-$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf -v /mosquitto/data -v /mosquitto/log eclipse-mosquitto
+$ docker run -it -p 1883:1883 -p 9001:9001 -v mosquitto.conf:/mosquitto/config/mosquitto.conf -v /mosquitto/data -v /mosquitto/log %%IMAGE%%
 ```
 
 **Note**: if the mosquitto configuration (mosquitto.conf) was modified to use non-default ports, the docker run command will need to be updated to expose the ports that have been configured.

+ 3 - 3
eggdrop/content.md

@@ -11,7 +11,7 @@ Eggdrop is the world's most popular Open Source IRC bot, designed for flexibilit
 To run this container for the first time, you'll need to pass in, at minimum, a nickname and server via environment variables. A docker run command similar to
 
 ```console
-$ docker run -ti -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/for/host/data:/home/eggdrop/eggdrop/data eggdrop
+$ docker run -ti -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/for/host/data:/home/eggdrop/eggdrop/data %%IMAGE%%
 ```
 
 should be used. This will modify the appropriate values within the config file, then start your bot with the nickname FooBot and connect it to irc.freenode.net. These variables are only needed for your first run; after the first use, you can edit the config file directly. Additional configuration options are listed in the following sections.
@@ -43,13 +43,13 @@ This variable sets the nickname used by eggdrop. After the first use, you should
 After running the eggdrop container for the first time, the configuration file, user file and channel file will all be available inside the container at /home/eggdrop/eggdrop/data/. NOTE! These files are only as persistent as the container they exist in. If you expect to use a different container over the course of using the Eggdrop docker image (intentionally or not) you will want to create a persistent data store. The easiest way to do this is to mount a directory on your host machine to /home/eggdrop/eggdrop/data. If you do this prior to your first run, you can easily edit the eggdrop configuration file on the host. Otherwise, you can also drop in existing config, user, or channel files into the mounted directory for use in the eggdrop container. You'll also likely want to daemonize eggdrop (i.e., run it in the background). To do this, start your container with something similar to
 
 ```console
-$ docker run -i -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop
+$ docker run -i -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d %%IMAGE%%
 ```
 
 If you provide your own config file, specify it as the argument to the docker container:
 
 ```console
-$ docker run -i -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop mybot.conf
+$ docker run -i -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d %%IMAGE%% mybot.conf
 ```
 
 Any config file used with docker MUST end in .conf, such as eggdrop.conf or mybot.conf

+ 3 - 3
elixir/content.md

@@ -13,14 +13,14 @@ Elixir leverages the Erlang VM, known for running low-latency, distributed and f
 ## Run it as the REPL
 
 ```console
-➸ docker run -it --rm elixir
+➸ docker run -it --rm %%IMAGE%%
 Erlang/OTP 18 [erts-7.2.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
 
 Interactive Elixir (1.2.1) - press Ctrl+C to exit (type h() ENTER for help)
 iex(1)> System.version
 "1.2.1"
 iex(2)>
-➸ docker run -it --rm -h elixir.local elixir iex --sname snode
+➸ docker run -it --rm -h elixir.local %%IMAGE%% iex --sname snode
 Erlang/OTP 18 [erts-7.2.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
 
 Interactive Elixir (1.2.1) - press Ctrl+C to exit (type h() ENTER for help)
@@ -34,5 +34,5 @@ iex(snode@elixir)2> :c.uptime
 ## Run a single Elixir exs script
 
 ```console
-$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%REPO%% elixir your-escript.exs
+$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%% elixir your-escript.exs
 ```

+ 3 - 3
erlang/content.md

@@ -11,7 +11,7 @@ Erlang is a programming language used to build massively scalable soft real-time
 ## Run it as the REPL
 
 ```console
-➸ docker run -it --rm erlang
+➸ docker run -it --rm %%IMAGE%%
 Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:10] [hipe] [kernel-poll:false]
 
 Eshell V9.0  (abort with ^G)
@@ -30,7 +30,7 @@ User switch command
   q                 - quit erlang
   ? | h             - this message
  --> q
-➸ docker run -it --rm -h erlang.local erlang erl -name [email protected]
+➸ docker run -it --rm -h erlang.local %%IMAGE%% erl -name [email protected]
 Erlang/OTP 20 [erts-9.0] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:10] [hipe] [kernel-poll:false]
 
 Eshell V9.0  (abort with ^G)
@@ -44,5 +44,5 @@ User switch command
 ## Run a single Erlang escript
 
 ```console
-$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%REPO%% escript your-escript.erl
+$ docker run -it --rm --name %%REPO%%-inst1 -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%% escript your-escript.erl
 ```

+ 2 - 2
fedora/content.md

@@ -4,8 +4,8 @@ This image serves as the `official Fedora image` for the [Fedora Distribution](h
 
 %%LOGO%%
 
-The `fedora:latest` tag will always point to the latest stable release.
+The `%%IMAGE%%:latest` tag will always point to the latest stable release.
 
 This image has a relatively small footprint in comparison to a standard Fedora installation. This image is generated in the [Fedora Build System](http://koji.fedoraproject.org/koji/) and is built from [this kickstart file](https://git.fedorahosted.org/cgit/spin-kickstarts.git/tree/fedora-docker-base.ks).
 
-[Fedora Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) is available via `fedora:rawhide` and any specific version of Fedora as `fedora:$version` (example: `fedora:23`).
+[Fedora Rawhide](https://fedoraproject.org/wiki/Releases/Rawhide) is available via `%%IMAGE%%:rawhide` and any specific version of Fedora as `%%IMAGE%%:$version` (example: `%%IMAGE%%:23`).

+ 4 - 4
flink/content.md

@@ -15,7 +15,7 @@ Learn more about Flink at [https://flink.apache.org/](https://flink.apache.org/)
 To run a single Flink local cluster:
 
 ```console
-$ docker run --name flink_local -p 8081:8081 -t flink local
+$ docker run --name flink_local -p 8081:8081 -t %%IMAGE%% local
 ```
 
 Then with a web browser go to `http://localhost:8081/` to see the Flink Web Dashboard (adjust the hostname for your Docker host).
@@ -23,7 +23,7 @@ Then with a web browser go to `http://localhost:8081/` to see the Flink Web Dash
 To use Flink, you can submit a job to the cluster using the Web UI or you can also do it from a different Flink container, for example:
 
 ```console
-$ docker run --rm -t flink flink run -m <jobmanager:port> -c <your_class> <your_jar> <your_params>
+$ docker run --rm -t %%IMAGE%% flink run -m <jobmanager:port> -c <your_class> <your_jar> <your_params>
 ```
 
 ## Running a JobManager or a TaskManager
@@ -31,13 +31,13 @@ $ docker run --rm -t flink flink run -m <jobmanager:port> -c <your_class> <your_
 You can run a JobManager (master).
 
 ```console
-$ docker run --name flink_jobmanager -d -t flink jobmanager
+$ docker run --name flink_jobmanager -d -t %%IMAGE%% jobmanager
 ```
 
 You can also run a TaskManager (worker). Notice that workers need to register with the JobManager directly or via ZooKeeper so the master starts to send them tasks to execute.
 
 ```console
-$ docker run --name flink_taskmanager -d -t flink taskmanager
+$ docker run --name flink_taskmanager -d -t %%IMAGE%% taskmanager
 ```
 
 ## Running a cluster using Docker Compose

+ 1 - 1
fsharp/content.md

@@ -13,7 +13,7 @@ F# (pronounced F sharp) is a strongly typed, multi-paradigm programming language
 The most straightforward way to use this image is to use it both as the build and runtime environment. In your `Dockerfile`, you can write something similar to the following:
 
 ```dockerfile
-FROM fsharp
+FROM %%IMAGE%%
 COPY . /app
 RUN xbuild /app/myproject.sln
 ```

+ 4 - 4
gazebo/content.md

@@ -11,7 +11,7 @@ Robot simulation is an essential tool in every roboticist's toolbox. A well-desi
 ## Create a `Dockerfile` in your Gazebo project
 
 ```dockerfile
-FROM gazebo:gzserver8
+FROM %%IMAGE%%:gzserver8
 # place here your application's setup specifics
 CMD [ "gzserver", "my-gazebo-app-args" ]
 ```
@@ -42,7 +42,7 @@ Gazebo uses the `~/.gazebo/` directory for storing logs, models and scene info.
 For example, if one wishes to use their own `.gazebo` folder that already resides in their local home directory, with a username of `ubuntu`, we can simply launch the container with an additional volume argument:
 
 ```console
-$ docker run -v "/home/ubuntu/.gazebo/:/root/.gazebo/" gazebo
+$ docker run -v "/home/ubuntu/.gazebo/:/root/.gazebo/" %%IMAGE%%
 ```
 
 One thing to be careful about is that gzserver logs to files named `/root/.gazebo/server-<port>/*.log`, where `<port>` is the port number that server is listening on (11345 by default). If you run and mount multiple containers using the same default port and same host side directory, then they will collide and attempt writing to the same file. If you want to run multiple gzservers on the same docker host, then a bit more clever volume mounting of `~/.gazebo/` subfolders would be required.
@@ -62,13 +62,13 @@ In this short example, we'll spin up a new container running gazebo server, conn
 > First launch a gazebo server with a mounted volume for logging and name the container gazebo:
 
 ```console
-$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo gazebo
+$ docker run -d -v="/tmp/.gazebo/:/root/.gazebo/" --name=gazebo %%IMAGE%%
 ```
 
 > Now open a new bash session in the container using the same entrypoint to configure the environment. Then download the double_pendulum model and load it into the simulation.
 
 ```console
-$ docker exec -it gazebo bash
+$ docker exec -it %%IMAGE%% bash
 $ apt-get update && apt-get install -y curl
 $ curl -o double_pendulum.sdf http://models.gazebosim.org/double_pendulum_with_base/model-1_4.sdf
 $ gz model --model-name double_pendulum --spawn-file double_pendulum.sdf

+ 3 - 3
gcc/content.md

@@ -13,7 +13,7 @@ The GNU Compiler Collection (GCC) is a compiler system produced by the GNU Proje
 The most straightforward way to use this image is to use a gcc container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
 
 ```dockerfile
-FROM gcc:4.9
+FROM %%IMAGE%%:4.9
 COPY . /usr/src/myapp
 WORKDIR /usr/src/myapp
 RUN gcc -o myapp main.c
@@ -32,11 +32,11 @@ $ docker run -it --rm --name my-running-app my-gcc-app
 There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
 
 ```console
-$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 gcc -o myapp myapp.c
+$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:4.9 gcc -o myapp myapp.c
 ```
 
 This will add your current directory, as a volume, to the container, set the working directory to the volume, and run the command `gcc -o myapp myapp.c`. This tells gcc to compile the code in `myapp.c` and output the executable to `myapp`. Alternatively, if you have a `Makefile`, you can instead run the `make` command inside your container:
 
 ```console
-$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp gcc:4.9 make
+$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:4.9 make
 ```

+ 4 - 4
geonetwork/content.md

@@ -19,7 +19,7 @@ The project is part of the Open Source Geospatial Foundation ( [OSGeo](http://ww
 This command will start a debian-based container, running a Tomcat web server, with a geonetwork war deployed on the server:
 
 ```console
-$ docker run --name some-%%REPO%% -d %%REPO%%
+$ docker run --name some-%%REPO%% -d %%IMAGE%%
 ```
 
 ## Publish port
@@ -27,7 +27,7 @@ $ docker run --name some-%%REPO%% -d %%REPO%%
 Geonetwork listens on port `8080`. If you want to access the container from the host, **you must publish this port**. For instance, this will redirect all the container traffic on port 8080 to the same port on the host:
 
 ```console
-$ docker run --name some-%%REPO%% -d -p 8080:8080 %%REPO%%
+$ docker run --name some-%%REPO%% -d -p 8080:8080 %%IMAGE%%
 ```
 
 Then, if you are running docker on Linux, you may access geonetwork at http://localhost:8080/geonetwork. Otherwise, replace `localhost` with the address of your docker machine.
@@ -41,7 +41,7 @@ By default, geonetwork sets the data directory on `/usr/local/tomcat/webapps/geo
 For instance, to set the data directory to `/var/lib/geonetwork_data`:
 
 ```console
-$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data %%REPO%%
+$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data %%IMAGE%%
 ```
 
 ## Persist data
@@ -49,7 +49,7 @@ $ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwor
 If you want the data directory to live beyond restarts, or even destruction of the container, you can mount a directory from the docker engine's host into the container with `-v <host path>:<data directory>`. For instance, this will mount the host directory `/host/geonetwork-docker` into `/var/lib/geonetwork_data` on the container:
 
 ```console
-$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data -v /host/geonetwork-docker:/var/lib/geonetwork_data %%REPO%%
+$ docker run --name some-%%REPO%% -d -p 8080:8080 -e DATA_DIR=/var/lib/geonetwork_data -v /host/geonetwork-docker:/var/lib/geonetwork_data %%IMAGE%%
 ```
 
 ## %%STACK%%

+ 5 - 5
ghost/content.md

@@ -11,7 +11,7 @@ Ghost is a free and open source blogging platform written in JavaScript and dist
 This will start a Ghost instance listening on the default Ghost port of 2368.
 
 ```console
-$ docker run -d --name some-ghost ghost
+$ docker run -d --name some-ghost %%IMAGE%%
 ```
 
 ## Custom port
@@ -19,7 +19,7 @@ $ docker run -d --name some-ghost ghost
 If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 
 ```console
-$ docker run -d --name some-ghost -p 3001:2368 ghost
+$ docker run -d --name some-ghost -p 3001:2368 %%IMAGE%%
 ```
 
 Then, access it via `http://localhost:3001` or `http://host-ip:3001` in a browser.
@@ -31,13 +31,13 @@ Mount your existing content. In this example we also use the Alpine base image.
 ### Ghost 1.x.x
 
 ```console
-$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
+$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content %%IMAGE%%:1-alpine
 ```
 
 ### Ghost 0.11.xx
 
 ```console
-$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost ghost:0.11-alpine
+$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost %%IMAGE%%:0.11-alpine
 ```
 
 ### Breaking change
@@ -56,7 +56,7 @@ This Docker image for Ghost uses SQLite. There is nothing special to configure.
 Alternatively you can use a [data container](http://docs.docker.com/engine/tutorials/dockervolumes/) that has a volume that points to `/var/lib/ghost/content` (or /var/lib/ghost for 0.11.x) and then reference it:
 
 ```console
-$ docker run -d --name some-ghost --volumes-from some-ghost-data ghost
+$ docker run -d --name some-ghost --volumes-from some-ghost-data %%IMAGE%%
 ```
 
 ## What is the Node.js version?

+ 1 - 1
gradle/content.md

@@ -12,6 +12,6 @@ Note that if you are mounting a volume and the uid running Docker is not `1000`,
 
 Run this from the directory of the Gradle project you want to build.
 
-`docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project gradle:latest gradle <gradle-task>`
+`docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project %%IMAGE%% gradle <gradle-task>`
 
 **Note: Java 9 support is experimental**

+ 2 - 2
groovy/content.md

@@ -14,7 +14,7 @@ Note that if you are mounting a volume and the uid running Docker is not `1000`,
 
 ## Running a Groovy script
 
-`docker run --rm -v "$PWD":/home/groovy/scripts -w /home/groovy/scripts groovy groovy <script> <script-args>`
+`docker run --rm -v "$PWD":/home/groovy/scripts -w /home/groovy/scripts %%IMAGE%% groovy <script> <script-args>`
 
 ## Reusing the Grapes cache
 
@@ -22,7 +22,7 @@ The local Grapes cache can be reused across containers by creating a volume and
 
 ```console
 docker volume create --name grapes-cache
-docker run --rm -it -v grapes-cache:/home/groovy/.groovy/grapes groovy
+docker run --rm -it -v grapes-cache:/home/groovy/.groovy/grapes %%IMAGE%%
 ```
 
 **Note: Java 9 support is experimental**

+ 4 - 4
haskell/content.md

@@ -26,7 +26,7 @@ Note: The GHC developers do not support legacy release branches (i.e. `7.8.x`).
 Start an interactive interpreter session with `ghci`:
 
 ```console
-$ docker run -it --rm haskell:8
+$ docker run -it --rm %%IMAGE%%:8
 GHCi, version 8.0.2: http://www.haskell.org/ghc/  :? for help
 Prelude>
 ```
@@ -34,7 +34,7 @@ Prelude>
 Dockerize an application from Hackage with a `Dockerfile`:
 
 ```dockerfile
-FROM haskell:8
+FROM %%IMAGE%%:8
 RUN stack install pandoc pandoc-citeproc
 ENTRYPOINT ["pandoc"]
 ```
@@ -42,7 +42,7 @@ ENTRYPOINT ["pandoc"]
 Alternatively, using `cabal`:
 
 ```dockerfile
-FROM haskell:8
+FROM %%IMAGE%%:8
 RUN cabal update && cabal install pandoc pandoc-citeproc
 ENTRYPOINT ["pandoc"]
 ```
@@ -50,7 +50,7 @@ ENTRYPOINT ["pandoc"]
 Iteratively develop a Haskell application with a `Dockerfile` utilizing the build cache:
 
 ```dockerfile
-FROM haskell:7.10
+FROM %%IMAGE%%:7.10
 
 WORKDIR /opt/server
 

+ 3 - 3
haxe/content.md

@@ -19,7 +19,7 @@ This image ships a minimal Haxe toolkit:
 The most straightforward way to use this image is to use a Haxe container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
 
 ```dockerfile
-FROM haxe:3.4
+FROM %%IMAGE%%:3.4
 
 RUN mkdir -p /usr/src/app
 WORKDIR /usr/src/app
@@ -47,10 +47,10 @@ $ docker run -it --rm --name my-running-app my-haxe-app
 
 There are `onbuild` variants that include multiple `ONBUILD` triggers to perform all of the steps in the above Dockerfile, except there is no `CMD` instruction for running the compilation output.
 
-Rewriting the above Dockerfile with `haxe:3.4-onbuild`, we will get:
+Rewriting the above Dockerfile with `%%IMAGE%%:3.4-onbuild`, we will get:
 
 ```dockerfile
-FROM haxe:3.4-onbuild
+FROM %%IMAGE%%:3.4-onbuild
 
 # run the output when the container starts
 CMD ["neko", "Main.n"]

+ 1 - 1
hello-seattle/content.md

@@ -3,7 +3,7 @@
 This image is a vanity variant of [the `hello-world` image](https://hub.docker.com/_/hello-world/) created specifically for [DockerCon 2016](http://2016.dockercon.com/). Its use is discouraged.
 
 ```console
-$ docker run hello-seattle
+$ docker run %%IMAGE%%
 
 Hello from DockerCon 2016 (Seattle)!
 This message shows that your installation appears to be working correctly.

+ 2 - 2
hello-world/content.md

@@ -1,7 +1,7 @@
 # Example output
 
 ```console
-$ docker run hello-world
+$ docker run %%IMAGE%%
 
 Hello from Docker!
 This message shows that your installation appears to be working correctly.
@@ -24,7 +24,7 @@ For more examples and ideas, visit:
  https://docs.docker.com/engine/userguide/
 
 
-$ docker images hello-world
+$ docker images %%IMAGE%%
 REPOSITORY   TAG     IMAGE ID      SIZE
 hello-world  latest  05a3bd381fc2  1.84kB
 ```

+ 2 - 2
hello-world/update.sh

@@ -12,10 +12,10 @@ echo '# Example output'
 echo
 
 echo '```console'
-echo '$ docker run' "$image"
+echo '$ docker run %%IMAGE%%'
 docker run --rm hello-world
 echo
-echo '$ docker images' "$image"
+echo '$ docker images %%IMAGE%%'
 docker images "$image" | awk -F'  +' 'NR == 1 || $2 == "latest" { print $1"\t"$2"\t"$3"\t"$5 }' | column -t -s$'\t'
 echo '```'
 

+ 1 - 1
hola-mundo/content.md

@@ -3,7 +3,7 @@
 This image is a vanity variant of [the `hello-world` image](https://hub.docker.com/_/hello-world/) created specifically for [DockerCon EU 2015](http://europe-2015.dockercon.com/). Its use is discouraged.
 
 ```console
-$ docker run hola-mundo
+$ docker run %%IMAGE%%
 
 ¡Hola de DockerCon EU 2015 (Barcelona)!
 This message shows that your installation appears to be working correctly.

+ 3 - 3
httpd/content.md

@@ -13,7 +13,7 @@ This image only contains Apache httpd with the defaults from upstream. There is
 ### Create a `Dockerfile` in your project
 
 ```dockerfile
-FROM httpd:2.4
+FROM %%IMAGE%%:2.4
 COPY ./public-html/ /usr/local/apache2/htdocs/
 ```
 
@@ -29,7 +29,7 @@ $ docker run -dit --name my-running-app my-apache2
 If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
 
 ```console
-$ docker run -dit --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
+$ docker run -dit --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ %%IMAGE%%:2.4
 ```
 
 ### Configuration
@@ -37,7 +37,7 @@ $ docker run -dit --name my-apache-app -p 8080:80 -v "$PWD":/usr/local/apache2/h
 To customize the configuration of the httpd server, just `COPY` your custom configuration in as `/usr/local/apache2/conf/httpd.conf`.
 
 ```dockerfile
-FROM httpd:2.4
+FROM %%IMAGE%%:2.4
 COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
 ```
 

+ 2 - 2
hylang/content.md

@@ -11,7 +11,7 @@ Hy (a.k.a., Hylang) is a dialect of the Lisp programming language designed to in
 ## Create a `Dockerfile` in your Hy project
 
 ```dockerfile
-FROM hylang:0.10
+FROM %%IMAGE%%:0.10
 COPY . /usr/src/myapp
 WORKDIR /usr/src/myapp
 CMD [ "hy", "./your-daemon-or-script.hy" ]
@@ -29,5 +29,5 @@ $ docker run -it --rm --name my-running-app my-hylang-app
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Hy script by using the Hy Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp hylang:0.10 hy your-daemon-or-script.hy
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:0.10 hy your-daemon-or-script.hy
 ```

+ 5 - 5
ibmjava/content.md

@@ -43,7 +43,7 @@ ibmjava now has multi-arch support and so the exact same commands as below works
 To run a pre-built jar file with the JRE image, use the following commands:
 
 ```dockerfile
-FROM ibmjava:jre
+FROM %%IMAGE%%:jre
 RUN mkdir /opt/app
 COPY japp.jar /opt/app
 CMD ["java", "-jar", "/opt/app/japp.jar"]
@@ -52,11 +52,11 @@ CMD ["java", "-jar", "/opt/app/japp.jar"]
 To download the latest Java 9 Beta (Early Access) Image:
 
 ```console
-docker pull ibmjava:9-ea2-sdk
+docker pull %%IMAGE%%:9-ea2-sdk
 ```
 
 ```dockerfile
-FROM ibmjava:jre
+FROM %%IMAGE%%:jre
 RUN mkdir /opt/app
 COPY japp.jar /opt/app
 CMD ["java", "-jar", "/opt/app/japp.jar"]
@@ -72,7 +72,7 @@ docker run -it --rm japp
 If you want to place the jar file on the host file system instead of inside the container, you can mount the host path onto the container by using the following commands:
 
 ```dockerfile
-FROM ibmjava:jre
+FROM %%IMAGE%%:jre
 CMD ["java", "-jar", "/opt/app/japp.jar"]
 ```
 
@@ -88,7 +88,7 @@ IBM SDK, Java Technology Edition provides a feature called [Class data sharing](
 To enable class data sharing between JVMs that are running in different containers on the same host, a common location must be shared between containers. This requirement can be satisfied through the host or a data volume container. When enabled, class data sharing creates a named "class cache", which is a memory-mapped file, at the common location. This feature is enabled by passing the `-Xshareclasses` option to the JVM as shown in the following Dockerfile example:
 
 ```dockerfile
-FROM ibmjava:jre
+FROM %%IMAGE%%:jre
 RUN mkdir /opt/shareclasses
 RUN mkdir /opt/app
 COPY japp.jar /opt/app

+ 8 - 8
influxdb/content.md

@@ -15,7 +15,7 @@ The InfluxDB image exposes a shared volume under `/var/lib/influxdb`, so you can
 ```console
 $ docker run -p 8086:8086 \
       -v $PWD:/var/lib/influxdb \
-      influxdb
+      %%IMAGE%%
 ```
 
 Replace `$PWD` with the directory where you want to store data associated with the InfluxDB container.
@@ -25,7 +25,7 @@ You can also have Docker control the volume mountpoint by using a named volume.
 ```console
 $ docker run -p 8086:8086 \
       -v influxdb:/var/lib/influxdb \
-      influxdb
+      %%IMAGE%%
 ```
 
 ### Exposed Ports
@@ -51,7 +51,7 @@ InfluxDB can be either configured from a config file or using environment variab
 Generate the default configuration file:
 
 ```console
-$ docker run --rm influxdb influxd config > influxdb.conf
+$ docker run --rm %%IMAGE%% influxd config > influxdb.conf
 ```
 
 Modify the default configuration, which will now be available under `$PWD`. Then start the InfluxDB container.
@@ -59,7 +59,7 @@ Modify the default configuration, which will now be available under `$PWD`. Then
 ```console
 $ docker run -p 8086:8086 \
       -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
-      influxdb -config /etc/influxdb/influxdb.conf
+      %%IMAGE%% -config /etc/influxdb/influxdb.conf
 ```
 
 Replace `$PWD` with the directory where you want to store the configuration file.
@@ -83,7 +83,7 @@ InfluxDB supports the Graphite line protocol, but the service and ports are not
 ```console
 docker run -p 8086:8086 -p 2003:2003 \
     -e INFLUXDB_GRAPHITE_ENABLED=true \
-    influxdb
+    %%IMAGE%%
 ```
 
 See the [README on GitHub](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md) for more detailed documentation to set up the Graphite service. In order to take advantage of graphite templates, you should use a configuration file by outputting a default configuration file using the steps above and modifying the `[[graphite]]` section.
@@ -95,7 +95,7 @@ The administrator interface is deprecated as of 1.1.0 and will be removed in 1.3
 ```console
 docker run -p 8086:8086 -p 8083:8083 \
     -e INFLUXDB_ADMIN_ENABLED=true \
-    influxdb
+    %%IMAGE%%
 ```
 
 To use the administrator interface, both the HTTP API and the administrator interface APIs must be forwarded to the same port.
@@ -121,13 +121,13 @@ Read more about this in the [official documentation](https://docs.influxdata.com
 Start the container:
 
 ```console
-$ docker run --name=influxdb -d -p 8086:8086 influxdb
+$ docker run --name=influxdb -d -p 8086:8086 %%IMAGE%%
 ```
 
 Run the influx client in another container:
 
 ```console
-$ docker run --rm --link=influxdb -it influxdb influx -host influxdb
+$ docker run --rm --link=influxdb -it %%IMAGE%% influx -host influxdb
 ```
 
 At the moment, you cannot use `docker exec` to run the influx client since `docker exec` will not properly allocate a TTY. This is due to a current bug in Docker that is detailed in [docker/docker#8755](https://github.com/docker/docker/issues/8755).

+ 2 - 2
irssi/content.md

@@ -21,7 +21,7 @@ $ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
     --log-driver=none \
     -v $HOME/.irssi:/home/user/.irssi:ro \
     -v /etc/localtime:/etc/localtime:ro \
-    irssi
+    %%IMAGE%%
 ```
 
 We specify `--log-driver=none` to avoid storing useless interactive terminal data.
@@ -32,7 +32,7 @@ On a Mac OS X system, run the same image using:
 $ docker run -it --name my-running-irssi -e TERM -u $(id -u):$(id -g) \
     --log-driver=none \
     -v $HOME/.irssi:/home/user/.irssi:ro \
-    irssi
+    %%IMAGE%%
 ```
 
 You omit `/etc/localtime` on Mac OS X because `boot2docker` doesn't use this file.

+ 13 - 13
jenkins/content.md

@@ -11,13 +11,13 @@ For weekly releases check out [`jenkinsci/jenkins`](https://hub.docker.com/r/jen
 # How to use this image
 
 ```console
-docker run -p 8080:8080 -p 50000:50000 jenkins
+docker run -p 8080:8080 -p 50000:50000 %%IMAGE%%
 ```
 
 This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume (recommended):
 
 ```console
-docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins
+docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home %%IMAGE%%
 ```
 
 This will store the jenkins data in `/your/home` on the host. Ensure that `/your/home` is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use the `-u some_other_user` parameter with `docker run`.
@@ -25,7 +25,7 @@ This will store the jenkins data in `/your/home` on the host. Ensure that `/your
 You can also use a volume container:
 
 ```console
-docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
+docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home %%IMAGE%%
 ```
 
 Then the `myjenkins` container has the volume (please do read about docker volume handling to find out more).
@@ -52,7 +52,7 @@ You can specify and set the number of executors of your Jenkins master instance
 and `Dockerfile`
 
 ```console
-FROM jenkins
+FROM %%IMAGE%%
 COPY executors.groovy /usr/share/jenkins/ref/init.groovy.d/executors.groovy
 ```
 
@@ -65,7 +65,7 @@ You can run builds on the master (out of the box) but if you want to attach buil
 You might need to customize the JVM running Jenkins, typically to pass system properties or tweak heap memory settings. Use the JAVA_OPTS environment variable for this purpose:
 
 ```console
-docker run --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS=-Dhudson.footerURL=http://mycompany.com jenkins
+docker run --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS=-Dhudson.footerURL=http://mycompany.com %%IMAGE%%
 ```
 
 # Configuring logging
@@ -79,7 +79,7 @@ handlers=java.util.logging.ConsoleHandler
 jenkins.level=FINEST
 java.util.logging.ConsoleHandler.level=FINEST
 EOF
-docker run --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Djava.util.logging.config.file=/var/jenkins_home/log.properties" -v `pwd`/data:/var/jenkins_home jenkins
+docker run --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Djava.util.logging.config.file=/var/jenkins_home/log.properties" -v `pwd`/data:/var/jenkins_home %%IMAGE%%
 ```
 
 # Passing Jenkins launcher parameters
@@ -87,7 +87,7 @@ docker run --name myjenkins -p 8080:8080 -p 50000:50000 --env JAVA_OPTS="-Djava.
 Arguments you pass to docker when running the jenkins image are passed to the jenkins launcher, so you can run, for example:
 
 ```console
-$ docker run jenkins --version
+$ docker run %%IMAGE%% --version
 ```
 
 This will dump the Jenkins version, just like when you run jenkins as an executable war.
@@ -95,7 +95,7 @@ This will dump Jenkins version, just like when you run jenkins as an executable
 You can also define jenkins arguments via `JENKINS_OPTS`. This is useful for defining a set of arguments to pass to the jenkins launcher when you define a derived jenkins image based on the official one with some customized settings. The following sample Dockerfile uses this option to force the use of HTTPS with a certificate included in the image:
 
 ```console
-FROM jenkins:1.565.3
+FROM %%IMAGE%%:1.565.3
 
 COPY https.pem /var/lib/jenkins/cert
 COPY https.key /var/lib/jenkins/pk
@@ -106,14 +106,14 @@ EXPOSE 8083
 You can also change the default slave agent port for jenkins by defining `JENKINS_SLAVE_AGENT_PORT` in a sample Dockerfile.
 
 ```console
-FROM jenkins:1.565.3
+FROM %%IMAGE%%:1.565.3
 ENV JENKINS_SLAVE_AGENT_PORT 50001
 ```
 
 or as a parameter to docker,
 
 ```console
-$ docker run --name myjenkins -p 8080:8080 -p 50001:50001 --env JENKINS_SLAVE_AGENT_PORT=50001 jenkins
+$ docker run --name myjenkins -p 8080:8080 -p 50001:50001 --env JENKINS_SLAVE_AGENT_PORT=50001 %%IMAGE%%
 ```
 
 # Installing more tools
@@ -121,7 +121,7 @@ $ docker run --name myjenkins -p 8080:8080 -p 50001:50001 --env JENKINS_SLAVE_AG
 You can run your container as root and install tools via apt-get, install them as part of build steps via jenkins tool installers, or create your own Dockerfile to customise the image, for example:
 
 ```console
-FROM jenkins
+FROM %%IMAGE%%
 # if we want to install via apt
 USER root
 RUN apt-get update && apt-get install -y ruby make more-thing-here
@@ -131,7 +131,7 @@ USER jenkins # drop back to the regular jenkins user - good practice
 In such a derived image, you can customize your jenkins instance with hook scripts or additional plugins. For this purpose, use `/usr/share/jenkins/ref` as a place to define the default JENKINS_HOME content you wish the target installation to look like:
 
 ```console
-FROM jenkins
+FROM %%IMAGE%%
 COPY plugins.txt /usr/share/jenkins/ref/
 COPY custom.groovy /usr/share/jenkins/ref/init.groovy.d/custom.groovy
 RUN /usr/local/bin/plugins.sh /usr/share/jenkins/ref/plugins.txt
@@ -153,7 +153,7 @@ maven-plugin:2.7.1
 And in the derived Dockerfile just invoke the utility plugins.sh script:
 
 ```console
-FROM jenkins
+FROM %%IMAGE%%
 COPY plugins.txt /usr/share/jenkins/plugins.txt
 RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
 ```

+ 10 - 10
jetty/content.md

@@ -11,13 +11,13 @@ Jetty is a pure Java-based HTTP (Web) server and Java Servlet container. While W
 To run the default Jetty server in the background, use the following command:
 
 ```console
-$ docker run -d %%REPO%%
+$ docker run -d %%IMAGE%%
 ```
 
 You can test it by visiting `http://container-ip:8080` or `https://container-ip:8443/` in a browser. To expose your Jetty server to outside requests, use a port mapping as follows:
 
 ```console
-$ docker run -d -p 80:8080 -p 443:8443 %%REPO%%
+$ docker run -d -p 80:8080 -p 443:8443 %%IMAGE%%
 ```
 
 This will map port 8080 inside the container to port 80 on the host and container port 8443 to host port 443. You can then go to `http://host-ip` or `https://host-ip` in a browser.
@@ -41,19 +41,19 @@ For older EOL'd images based on Jetty 7 or Jetty 8, please follow the [legacy in
 The configuration of the Jetty server can be reported by running with the `--list-config` option:
 
 ```console
-$ docker run -d %%REPO%% --list-config
+$ docker run -d %%IMAGE%% --list-config
 ```
 
 Configuration such as parameters and additional modules may also be passed in via the command line. For example:
 
 ```console
-$ docker run -d %%REPO%% --modules=jmx jetty.threadPool.maxThreads=500
+$ docker run -d %%IMAGE%% --modules=jmx jetty.threadPool.maxThreads=500
 ```
 
 To update the server configuration in a derived Docker image, the `Dockerfile` may enable additional modules with `RUN` commands like:
 
 ```Dockerfile
-FROM jetty
+FROM %%IMAGE%%
 
 RUN java -jar "$JETTY_HOME/start.jar" --add-to-startd=jmx,stats
 ```
@@ -65,15 +65,15 @@ Modules may be configured in a `Dockerfile` by editing the properties in the cor
 JVM options can be set by passing the `JAVA_OPTIONS` environment variable to the container. For example, to set the maximum heap size to 1 gigabyte, you can run the container as follows:
 
 ```console
-$ docker run -e JAVA_OPTIONS="-Xmx1g" -d %%REPO%%
+$ docker run -e JAVA_OPTIONS="-Xmx1g" -d %%IMAGE%%
 ```
 
 ## Read-only container
 
-To run `%%REPO%%` as a read-only container, have Docker create the `/tmp/jetty` and `/run/jetty` directories as volumes:
+To run `%%IMAGE%%` as a read-only container, have Docker create the `/tmp/jetty` and `/run/jetty` directories as volumes:
 
 ```console
-$ docker run -d --read-only -v /tmp/jetty -v /run/jetty %%REPO%%
+$ docker run -d --read-only -v /tmp/jetty -v /run/jetty %%IMAGE%%
 ```
 
 Since the container is read-only, you'll need to either mount in your webapps directory with `-v /path/to/my/webapps:/var/lib/jetty/webapps` or populate `/var/lib/jetty/webapps` in a derived image.
@@ -83,7 +83,7 @@ Since the container is read-only, you'll need to either mount in your webapps di
 Starting with version 9.3, Jetty comes with built-in support for HTTP/2. However, due to potential license compatibility issues with the ALPN library used to implement HTTP/2, the module is not enabled by default. In order to enable HTTP/2 support in a derived `Dockerfile` for private use, you can add a `RUN` command that enables the `http2` module and approves its license as follows:
 
 ```Dockerfile
-FROM jetty
+FROM %%IMAGE%%
 
 RUN java -jar $JETTY_HOME/start.jar --add-to-startd=http2 --approve-all-licenses
 ```
@@ -99,5 +99,5 @@ By default, this image starts as user `root` and uses Jetty's `setuid` module to
 If you would like the image to start immediately as user `jetty` instead of starting as `root`, you can start the container with `-u jetty`:
 
 ```console
-$ docker run -d -u jetty %%REPO%%
+$ docker run -d -u jetty %%IMAGE%%
 ```

+ 2 - 2
joomla/content.md

@@ -9,7 +9,7 @@ Joomla is a free and open-source content management system (CMS) for publishing
 # How to use this image
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%%
 ```
 
 The following environment variables are also honored for configuring your Joomla instance:
@@ -24,7 +24,7 @@ If the `JOOMLA_DB_NAME` specified does not already exist on the given MySQL serv
 If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -p 8080:80 -d %%IMAGE%%
 ```
 
 Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.

+ 3 - 3
jruby/content.md

@@ -15,7 +15,7 @@ JRuby leverages the robustness and speed of the JVM while providing the same Rub
 ## Create a `Dockerfile` in your Ruby app project
 
 ```dockerfile
-FROM jruby:1.7-onbuild
+FROM %%IMAGE%%:1.7-onbuild
 CMD ["./your-daemon-or-script.rb"]
 ```
 
@@ -35,7 +35,7 @@ $ docker run -it --name my-running-script my-ruby-app
 The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
 
 ```console
-$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app jruby:1.7 bundle install --system
+$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app %%IMAGE%%:1.7 bundle install --system
 ```
 
 ## Run a single Ruby script
@@ -43,5 +43,5 @@ $ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app jruby:1.7 bundle instal
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp jruby:1.7 jruby your-daemon-or-script.rb
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:1.7 jruby your-daemon-or-script.rb
 ```

+ 2 - 2
julia/content.md

@@ -13,11 +13,11 @@ Julia is a high-level, high-performance dynamic programming language for technic
 Starting the Julia REPL is as easy as the following:
 
 ```console
-$ docker run -it --rm julia
+$ docker run -it --rm %%IMAGE%%
 ```
 
 ## Run Julia script from your local directory inside container
 
 ```console
-$ docker run -it --rm -v "$PWD":/usr/myapp -w /usr/myapp julia julia script.jl arg1 arg2
+$ docker run -it --rm -v "$PWD":/usr/myapp -w /usr/myapp %%IMAGE%% julia script.jl arg1 arg2
 ```

+ 3 - 3
kaazing-gateway/content.md

@@ -13,7 +13,7 @@ By default the gateway runs a WebSocket echo service similar to [websocket.org](
 You must give your gateway container a hostname. To do this, use the `docker run -h somehostname` option, along with the -e option to define an environment variable, GATEWAY_OPTS, to pass this hostname to the gateway configuration (your hostname may vary):
 
 ```console
-$ docker run --name some-kaazing-gateway -h somehostname -e GATEWAY_OPTS="-Dgateway.hostname=somehostname -Xmx512m -Djava.security.egd=file:/dev/urandom"-d -p 8000:8000 kaazing-gateway
+$ docker run --name some-kaazing-gateway -h somehostname -e GATEWAY_OPTS="-Dgateway.hostname=somehostname -Xmx512m -Djava.security.egd=file:/dev/urandom" -d -p 8000:8000 %%IMAGE%%
 ```
 
 Note: the additional GATEWAY_OPTS options, `-Xmx512m -Djava.security.egd=file:/dev/urandom`, are added in order to preserve these values from the original Dockerfile for the gateway. The `-Xmx512m` value specifies a maximum Java heap size of 512 MB, and `-Djava.security.egd=file:/dev/urandom` is to facilitate faster startup on VMs. See the `Dockerfile` link referenced above for details.
@@ -27,7 +27,7 @@ Note: all of the above assumes that `somehostname` is resolvable from your brows
 To launch a container with a specific configuration you can do the following:
 
 ```console
-$ docker run --name some-kaazing-gateway -h somehostname -e GATEWAY_OPTS="-Dgateway.hostname=somehostname -Xmx512m -Djava.security.egd=file:/dev/urandom" -v /some/gateway-config.xml:/kaazing-gateway/conf/gateway-config.xml:ro -d kaazing-gateway
+$ docker run --name some-kaazing-gateway -h somehostname -e GATEWAY_OPTS="-Dgateway.hostname=somehostname -Xmx512m -Djava.security.egd=file:/dev/urandom" -v /some/gateway-config.xml:/kaazing-gateway/conf/gateway-config.xml:ro -d %%IMAGE%%
 ```
 
 For information on the syntax of the Kaazing Gateway configuration files, see [the official documentation](https://kaazing.com/doc/5.0/index.html) (specifically the *For Administrators* section).
@@ -41,7 +41,7 @@ $ docker cp some-kaazing:/kaazing-gateway/conf/gateway-config-minimal.xml /some/
 As above, this can also be accomplished more cleanly using a simple `Dockerfile`:
 
 ```dockerfile
-FROM kaazing-gateway
+FROM %%IMAGE%%
 COPY gateway-config.xml conf/gateway-config.xml
 ```
 

+ 9 - 9
kapacitor/content.md

@@ -13,7 +13,7 @@ Kapacitor is an open source data processing engine written in Go. It can process
 Start the Kapacitor container with default options:
 
 ```console
-$ docker run -p 9092:9092 kapacitor
+$ docker run -p 9092:9092 %%IMAGE%%
 ```
 
 Start the Kapacitor container sharing the data directory with the host:
@@ -21,7 +21,7 @@ Start the Kapacitor container sharing the data directory with the host:
 ```console
 $ docker run -p 9092:9092 \
       -v $PWD:/var/lib/kapacitor \
-      kapacitor
+      %%IMAGE%%
 ```
 
 Replace `$PWD` with the directory where you want to store data associated with the Kapacitor container.
@@ -31,7 +31,7 @@ You can also have Docker control the volume mountpoint by using a named volume.
 ```console
 $ docker run -p 9092:9092 \
       -v kapacitor:/var/lib/kapacitor \
-      kapacitor
+      %%IMAGE%%
 ```
 
 ### Configuration
@@ -41,7 +41,7 @@ Kapacitor can be either configured from a config file or using environment varia
 Generate the default configuration file:
 
 ```console
-$ docker run --rm kapacitor kapacitord config > kapacitor.conf
+$ docker run --rm %%IMAGE%% kapacitord config > kapacitor.conf
 ```
 
 Modify the default configuration, which will now be available under `$PWD`. Then start the Kapacitor container.
@@ -49,7 +49,7 @@ Modify the default configuration, which will now be available under `$PWD`. Then
 ```console
 $ docker run -p 9092:9092 \
       -v $PWD/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro \
-      kapacitor
+      %%IMAGE%%
 ```
 
 Replace `$PWD` with the directory where you want to store the configuration file.
@@ -97,7 +97,7 @@ $ docker run -p 9092:9092 \
     -h kapacitor \
     --net=influxdb \
     -e KAPACITOR_INFLUXDB_0_URLS_0=http://influxdb:8086 \
-    kapacitor
+    %%IMAGE%%
 ```
 
 You can also start Kapacitor sharing the same network interface of the InfluxDB container. If you do this, Docker will act as if both processes were being run on the same machine.
@@ -106,7 +106,7 @@ You can also start Kapacitor sharing the same network interface of the InfluxDB
 $ docker run -p 9092:9092 \
       --name=kapacitor \
       --net=container:influxdb \
-      kapacitor
+      %%IMAGE%%
 ```
 
 When run like this, InfluxDB can be communicated with over `localhost`.
@@ -116,7 +116,7 @@ When run like this, InfluxDB can be communicated with over `localhost`.
 Start the container:
 
 ```console
-$ docker run --name=kapacitor -d -p 9092:9092 kapacitor
+$ docker run --name=kapacitor -d -p 9092:9092 %%IMAGE%%
 ```
 
 Run another container linked to the `kapacitor` container to use the client. Set the `KAPACITOR_URL` environment variable so the client knows how to connect to Kapacitor. Mount in your current directory for accessing TICKscript files.
@@ -124,7 +124,7 @@ Run another container linked to the `kapacitor` container for using the client.
 ```console
 $ docker run --rm --net=container:kapacitor \
       -v $PWD:/root -w=/root -it \
-      kapacitor bash -l
+      %%IMAGE%% bash -l
 ```
 
 Then, from within the container, you can use the `kapacitor` command to interact with the daemon.

+ 1 - 1
known/content.md

@@ -9,7 +9,7 @@ Known is a social publishing platform. Publish on your own site, reach your audi
 # How to use this image
 
 ```bash
-docker run --link some-mysql:db -d known
+docker run --link some-mysql:db -d %%IMAGE%%
 ```
 
 Now you can access fpm running on port 9000 inside the container. If you want to access it from the Internet, we recommend using a reverse proxy in front. You can find more information on that in the [docker-compose](#docker-compose) section.

+ 3 - 3
kong/content.md

@@ -48,7 +48,7 @@ docker run --rm \
     -e "KONG_DATABASE=postgres" \
     -e "KONG_PG_HOST=kong-database" \
     -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-    kong:latest kong migrations up
+    %%IMAGE%% kong migrations up
 ```
 
 In the above example, both Cassandra and PostgreSQL are configured, but you should update the `KONG_DATABASE` environment variable with either `cassandra` or `postgres`.
@@ -69,7 +69,7 @@ $ docker run -d --name kong \
     -p 8443:8443 \
     -p 8001:8001 \
     -p 8444:8444 \
-    kong
+    %%IMAGE%%
 ```
 
 If everything went well, and if you created your container with the default ports, Kong should be listening on your host's `8000` ([Proxy](http://getkong.org/docs/latest/configuration/#proxy_port)), `8443` ([Proxy SSL](http://getkong.org/docs/latest/configuration/#proxy_listen_ssl)), `8001` ([Admin API](http://getkong.org/docs/latest/configuration/#admin_listen)) and `8444` ([Admin API SSL](http://getkong.org/docs/latest/configuration/#admin_listen_ssl)) ports.
@@ -90,7 +90,7 @@ $ docker run -d --name kong \
     -p 8001:8001 \
     -p 7946:7946 \
     -p 7946:7946/udp \
-    kong
+    %%IMAGE%%
 ```
 
 ## Reload Kong in a running container

+ 9 - 9
lightstreamer/content.md

@@ -13,7 +13,7 @@ For more information and related downloads for Lightstreamer Server and other Li
 Launch the container with the default configuration:
 
 ```console
-$ docker run --name ls-server -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -d -p 80:8080 %%IMAGE%%
 ```
 
 This will map port 8080 inside the container to port 80 on the local host. Then point your browser to `http://localhost` and watch the Welcome page showing real-time data flowing in from the locally deployed demo application, which gives a first overview of the unique features offered by the Lightstreamer technology. More examples are available online at the [demo site](http://demos.lightstreamer.com).
@@ -23,25 +23,25 @@ This will map port 8080 inside the container to port 80 on local host. Then poin
 It is possible to customize each aspect of the Lightstreamer instance running into the container. For example, a specific configuration file may be supplied as follows:
 
 ```console
-$ docker run --name ls-server -v /path/to/my-lightstreamer_conf.xml:/lightstreamer/conf/lightstreamer_conf.xml -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/my-lightstreamer_conf.xml:/lightstreamer/conf/lightstreamer_conf.xml -d -p 80:8080 %%IMAGE%%
 ```
 
 In the same way, you could provide a custom logging configuration, maybe in this case also specifying a dedicated volume to ensure both the persistence of log files and better performance of the container:
 
 ```console
-$ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/logs:/lightstreamer/logs -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/logs:/lightstreamer/logs -d -p 80:8080 %%IMAGE%%
 ```
 
 If you also change in your `my-lightstreamer_log_conf.xml` file the default logging path from `../logs` to `/path/to/dest/logs`:
 
 ```console
-$ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/hosted/logs:/path/to/dest/logs -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/my-lightstreamer_log_conf.xml:/lightstreamer/conf/lightstreamer_log_conf.xml -v /path/to/hosted/logs:/path/to/dest/logs -d -p 80:8080 %%IMAGE%%
 ```
 
 Alternatively, the above tasks can be executed by deriving a new image through a `Dockerfile` as the following:
 
 ```dockerfile
-FROM lightstreamer
+FROM %%IMAGE%%
 
 # Please specify a COPY command only for the required custom configuration file
 COPY my-lightstreamer_conf.xml /lightstreamer/conf/lightstreamer_conf.xml
@@ -73,7 +73,7 @@ To accomplish such goal, you may use similar strategies to those illustrated abo
 To deploy a single custom Adapter Set, the simplest way is to attach its files into the factory adapters folder, as follows:
 
 ```console
-$ docker run --name ls-server -v /path/to/my-adapter-set:/lightstreamer/adapters/my-adapter-set -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/my-adapter-set:/lightstreamer/adapters/my-adapter-set -d -p 80:8080 %%IMAGE%%
 ```
 
 ### Full replacement of the "adapters" folder
@@ -81,7 +81,7 @@ $ docker run --name ls-server -v /path/to/my-adapter-set:/lightstreamer/adapters
 In the case you have many custom Adapter Sets to deploy, a more appropriate strategy is to replace the factory adapters folder with the one located in your host machine:
 
 ```console
-$ docker run --name ls-server -v /path/to/my-adapters:/lightstreamer/adapters -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/my-adapters:/lightstreamer/adapters -d -p 80:8080 %%IMAGE%%
 ```
 
 In this case, the `/path/to/my-adapters` folder has to be structured with the required layout for an adapters folder:
@@ -102,7 +102,7 @@ Once again, a linear and clean approach is to make a new image including all nee
 In this case, you could write a simple Dockerfile that lists all of your Adapter Set configuration files:
 
 ```dockerfile
-FROM lightstreamer
+FROM %%IMAGE%%
 
 # Will copy the contents of N Adapter Sets into the factory adapters folder
 COPY my-adapter-set-1 /lightstreamer/adapters/my-adapter-set-1
@@ -119,7 +119,7 @@ There might be some circumstances where you would like to provide custom pages f
 For example, with the following command you will be able to fully replace the factory `pages` folder:
 
 ```console
-$ docker run --name ls-server -v /path/to/custom/pages:/lightstreamer/pages -d -p 80:8080 lightstreamer
+$ docker run --name ls-server -v /path/to/custom/pages:/lightstreamer/pages -d -p 80:8080 %%IMAGE%%
 ```
 
 where `/path/to/custom/pages` is the path on your host machine containing the replacement web content files.

+ 1 - 1
mageia/content.md

@@ -19,7 +19,7 @@ To date, Mageia:
 ## Create a Dockerfile for your container
 
 ```dockerfile
-FROM mageia:6
+FROM %%IMAGE%%:6
 MAINTAINER  "Foo Bar" <[email protected]>
 CMD [ "bash" ]
 ```

+ 19 - 19
mariadb/content.md

@@ -10,12 +10,12 @@ The intent is also to maintain high compatibility with MySQL, ensuring a "drop-i
 
 # How to use this image
 
-## Start a `%%REPO%%` server instance
+## Start a `%%IMAGE%%` server instance
 
 Starting a MariaDB instance is simple:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 ... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
@@ -32,18 +32,18 @@ $ docker run --name some-app --link some-%%REPO%%:mysql -d application-that-uses
 
 ## Connect to MariaDB from the MySQL command line client
 
-The following command starts another %%REPO%% container instance and runs the `mysql` command line client against your original %%REPO%% container, allowing you to execute SQL statements against your database instance:
+The following command starts another `%%IMAGE%%` container instance and runs the `mysql` command line client against your original `%%IMAGE%%` container, allowing you to execute SQL statements against your database instance:
 
 ```console
-$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+$ docker run -it --link some-%%REPO%%:mysql --rm %%IMAGE%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
 ```
 
-... where `some-%%REPO%%` is the name of your original %%REPO%% container.
+... where `some-%%REPO%%` is the name of your original `%%IMAGE%%` container.
 
 This image can also be used as a client for non-Docker or remote MariaDB instances:
 
 ```console
-$ docker run -it --rm %%REPO%% mysql -hsome.mysql.host -usome-mysql-user -p
+$ docker run -it --rm %%IMAGE%% mysql -hsome.mysql.host -usome-mysql-user -p
 ```
 
 More information about the MySQL command line client can be found in the [MySQL documentation](http://dev.mysql.com/doc/en/mysql.html).
@@ -54,7 +54,7 @@ Run `docker stack deploy -c stack.yml %%REPO%%` (or `docker-compose -f stack.yml
 
 ## Container shell access and viewing MySQL logs
 
-The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:
+The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%IMAGE%%` container:
 
 ```console
 $ docker exec -it some-%%REPO%% bash
@@ -68,12 +68,12 @@ $ docker logs some-%%REPO%%
 
 ## Using a custom MySQL configuration file
 
-The MariaDB startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%REPO%%` container.
+The MariaDB startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%IMAGE%%` container.
 
-If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%REPO%%` container like this (note that only the directory path of the custom config file is used in this command):
+If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%IMAGE%%` container like this (note that only the directory path of the custom config file is used in this command):
 
 ```console
-$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 This will start a new container `some-%%REPO%%` where the MariaDB instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
@@ -89,18 +89,18 @@ $ chcon -Rt svirt_sandbox_file_t /my/custom
 Many configuration options can be passed as flags to `mysqld`. This will give you the flexibility to customize the container without needing a `cnf` file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (`utf8mb4`) just run the following:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
 ```
 
 If you would like to see a complete list of available options, just run:
 
 ```console
-$ docker run -it --rm %%REPO%%:tag --verbose --help
+$ docker run -it --rm %%IMAGE%%:tag --verbose --help
 ```
 
 ## Environment Variables
 
-When you start the `%%REPO%%` image, you can adjust the configuration of the MariaDB instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
+When you start the `%%IMAGE%%` image, you can adjust the configuration of the MariaDB instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
 
 ### `MYSQL_ROOT_PASSWORD`
 
@@ -129,20 +129,20 @@ This is an optional variable. Set to `yes` to generate a random initial password
 As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 
 ```console
-$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%REPO%%:tag
+$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
 ```
 
 Currently, this is only supported for `MYSQL_ROOT_PASSWORD`, `MYSQL_ROOT_HOST`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD`.
 
 # Initializing a fresh instance
 
-When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your %%REPO%% services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and provide [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
+When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your `%%IMAGE%%` services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and provide [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
 
 # Caveats
 
 ## Where to Store Data
 
-Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
+Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%IMAGE%%` images to familiarize themselves with the options available, including:
 
 -	Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
 -	Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
@@ -150,10 +150,10 @@ Important note: There are several ways to store data used by applications that r
 The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
 
 1.	Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
-2.	Start your `%%REPO%%` container like this:
+2.	Start your `%%IMAGE%%` container like this:
 
 	```console
-	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 	```
 
 The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
@@ -170,7 +170,7 @@ If there is no database initialized when the container starts, then a default da
 
 ## Usage against an existing database
 
-If you start your `%%REPO%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
+If you start your `%%IMAGE%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
 
 ## Creating database dumps
 

+ 2 - 2
maven/content.md

@@ -9,7 +9,7 @@
 ## Create a Dockerfile in your Maven project
 
 ```dockerfile
-FROM maven:3.2-jdk-7-onbuild
+FROM %%IMAGE%%:3.2-jdk-7-onbuild
 CMD ["do-something-with-built-packages"]
 ```
 
@@ -29,5 +29,5 @@ $ docker run -it --name my-maven-script my-maven
 For many simple projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Maven project by using the Maven Docker image directly, passing a Maven command to `docker run`:
 
 ```console
-$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven maven:3.2-jdk-7 mvn clean install
+$ docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/mymaven -w /usr/src/mymaven %%IMAGE%%:3.2-jdk-7 mvn clean install
 ```

+ 4 - 4
mediawiki/content.md

@@ -11,13 +11,13 @@ MediaWiki is free and open-source wiki software. Originally developed by Magnus
 The basic pattern for starting a `%%REPO%%` instance is:
 
 ```console
-$ docker run --name some-%%REPO%% -d %%REPO%%
+$ docker run --name some-%%REPO%% -d %%IMAGE%%
 ```
 
 If you'd like to be able to access the instance from the host without the container's IP, standard port mappings can be used:
 
 ```console
-$ docker run --name some-%%REPO%% -p 8080:80 -d %%REPO%%
+$ docker run --name some-%%REPO%% -p 8080:80 -d %%IMAGE%%
 ```
 
 Then, access it via `http://localhost:8080` or `http://host-ip:8080` in a browser.
@@ -29,7 +29,7 @@ When first accessing the webserver provided by this image, it will go through a
 ## MySQL
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:mysql -d %%IMAGE%%
 ```
 
 -	Database type: `MySQL, MariaDB, or equivalent`
@@ -43,7 +43,7 @@ By default, this image does not include any volumes.
 The paths `/var/www/html/images` and `/var/www/html/LocalSettings.php` are things that generally ought to be volumes, but do not explicitly have a `VOLUME` declaration in this image because volumes cannot be removed.
 
 ```console
-$ docker run --rm %%REPO%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites
+$ docker run --rm %%IMAGE%% tar -cC /var/www/html/sites . | tar -xC /path/on/host/sites
 ```
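 
 If you do decide to manage those paths yourself with bind mounts, a hedged sketch (the host paths are placeholders, and `LocalSettings.php` only exists after the initial setup wizard has been completed):
 
 ```console
 $ docker run --name some-%%REPO%% -v /path/on/host/images:/var/www/html/images -v /path/on/host/LocalSettings.php:/var/www/html/LocalSettings.php -p 8080:80 -d %%IMAGE%%
 ```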
 
 ## %%STACK%%

+ 2 - 2
memcached/content.md

@@ -9,7 +9,7 @@ Memcached's APIs provide a very large hash table distributed across multiple mac
 # How to use this image
 
 ```console
-$ docker run --name my-memcache -d memcached
+$ docker run --name my-memcache -d %%IMAGE%%
 ```
 
 Start your memcached container with the above command and then you can connect your app to it with standard linking:
@@ -23,7 +23,7 @@ The memcached server information would then be available through the ENV variabl
 How to set the memory usage for memcached
 
 ```console
-$ docker run --name my-memcache -d memcached memcached -m 64
+$ docker run --name my-memcache -d %%IMAGE%% memcached -m 64
 ```
 
 This would set the memcache server to use 64 megabytes for storage.
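 
 If you need more than one runtime flag, they can simply be appended to the command. A sketch (not from the original docs) combining the memory limit with memcached's standard verbose-logging flag:
 
 ```console
 $ docker run --name my-memcache -d %%IMAGE%% memcached -m 64 -vv
 ```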

+ 2 - 2
mongo-express/content.md

@@ -9,7 +9,7 @@ mongo-express is a web-based MongoDB admin interface written in Node.js, Express
 # How to use this image
 
 ```console
-$ docker run --link some_mongo_container:mongo -p 8081:8081 mongo-express
+$ docker run --link some_mongo_container:mongo -p 8081:8081 %%IMAGE%%
 ```
 
 Then you can hit `http://localhost:8081` or `http://host-ip:8081` in your browser.
@@ -60,7 +60,7 @@ $ docker run -it --rm \
     -e ME_CONFIG_OPTIONS_EDITORTHEME="ambiance" \
     -e ME_CONFIG_BASICAUTH_USERNAME="user" \
     -e ME_CONFIG_BASICAUTH_PASSWORD="fairly long password" \
-    mongo-express
+    %%IMAGE%%
 ```
 
 This example links to a container name typical of `docker-compose`, changes the editor's color theme, and enables basic authentication.

+ 5 - 5
mongo/content.md

@@ -13,7 +13,7 @@ First developed by the software company 10gen (now MongoDB Inc.) in October 2007
 ## start a mongo instance
 
 ```console
-$ docker run --name some-mongo -d mongo
+$ docker run --name some-mongo -d %%IMAGE%%
 ```
 
 This image includes `EXPOSE 27017` (the mongo port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
@@ -27,7 +27,7 @@ $ docker run --name some-app --link some-mongo:mongo -d application-that-uses-mo
 ## ... or via `mongo`
 
 ```console
-$ docker run -it --link some-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
+$ docker run -it --link some-mongo:mongo --rm %%IMAGE%% sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
 ```
 
 ## Configuration
@@ -37,7 +37,7 @@ See the [official docs](https://docs.mongodb.com/manual/) for infomation on usin
 Just add the `--storageEngine` argument if you want to use the WiredTiger storage engine in MongoDB 3.0 and above without making a config file. WiredTiger is the default storage engine in MongoDB 3.2 and above. Be sure to check the [docs](https://docs.mongodb.com/manual/release-notes/3.0-upgrade/#change-storage-engine-for-standalone-to-wiredtiger) on how to upgrade from older versions.
 
 ```console
-$ docker run --name some-mongo -d mongo --storageEngine wiredTiger
+$ docker run --name some-mongo -d %%IMAGE%% --storageEngine wiredTiger
 ```
 
 ### Authentication and Authorization
@@ -70,7 +70,7 @@ Successfully added user: {
 #### Connect Externally
 
 ```console
-$ docker run -it --rm --link some-mongo:mongo mongo mongo -u jsmith -p some-initial-password --authenticationDatabase admin some-mongo/some-db
+$ docker run -it --rm --link some-mongo:mongo %%IMAGE%% mongo -u jsmith -p some-initial-password --authenticationDatabase admin some-mongo/some-db
 > db.getName();
 some-db
 ```
@@ -90,7 +90,7 @@ The Docker documentation is a good starting point for understanding the differen
 2.	Start your `%%REPO%%` container like this:
 
 	```console
-	$ docker run --name some-%%REPO%% -v /my/own/datadir:/data/db -d %%REPO%%:tag
+	$ docker run --name some-%%REPO%% -v /my/own/datadir:/data/db -d %%IMAGE%%:tag
 	```
 
 The `-v /my/own/datadir:/data/db` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/data/db` inside the container, where MongoDB by default will write its data files.

+ 1 - 1
mono/content.md

@@ -16,7 +16,7 @@ This image will run stand-alone Mono console apps.
 This example Dockerfile will run an executable called `TestingConsoleApp.exe`.
 
 ```dockerfile
-FROM mono:3.10-onbuild
+FROM %%IMAGE%%:3.10-onbuild
 CMD [ "mono",  "./TestingConsoleApp.exe" ]
 ```
 

+ 19 - 19
mysql/content.md

@@ -8,12 +8,12 @@ For more information and related downloads for MySQL Server and other MySQL prod
 
 # How to use this image
 
-## Start a `%%REPO%%` server instance
+## Start a `%%IMAGE%%` server instance
 
 Starting a MySQL instance is simple:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 ... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
@@ -28,18 +28,18 @@ $ docker run --name some-app --link some-%%REPO%%:mysql -d application-that-uses
 
 ## Connect to MySQL from the MySQL command line client
 
-The following command starts another %%REPO%% container instance and runs the `mysql` command line client against your original %%REPO%% container, allowing you to execute SQL statements against your database instance:
+The following command starts another `%%IMAGE%%` container instance and runs the `mysql` command line client against your original `%%IMAGE%%` container, allowing you to execute SQL statements against your database instance:
 
 ```console
-$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+$ docker run -it --link some-%%REPO%%:mysql --rm %%IMAGE%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
 ```
 
-... where `some-%%REPO%%` is the name of your original %%REPO%% container.
+... where `some-%%REPO%%` is the name of your original `%%IMAGE%%` container.
 
 This image can also be used as a client for non-Docker or remote MySQL instances:
 
 ```console
-$ docker run -it --rm %%REPO%% mysql -hsome.mysql.host -usome-mysql-user -p
+$ docker run -it --rm %%IMAGE%% mysql -hsome.mysql.host -usome-mysql-user -p
 ```
 
 More information about the MySQL command line client can be found in the [MySQL documentation](http://dev.mysql.com/doc/en/mysql.html).
@@ -50,7 +50,7 @@ Run `docker stack deploy -c stack.yml %%REPO%%` (or `docker-compose -f stack.yml
 
 ## Container shell access and viewing MySQL logs
 
-The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:
+The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%IMAGE%%` container:
 
 ```console
 $ docker exec -it some-%%REPO%% bash
@@ -64,12 +64,12 @@ $ docker logs some-%%REPO%%
 
 ## Using a custom MySQL configuration file
 
-The MySQL startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%REPO%%` container.
+The MySQL startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%IMAGE%%` container.
 
-If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%REPO%%` container like this (note that only the directory path of the custom config file is used in this command):
+If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%IMAGE%%` container like this (note that only the directory path of the custom config file is used in this command):
 
 ```console
-$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 This will start a new container `some-%%REPO%%` where the MySQL instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
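 
 For reference, such an override file can be very small; a sketch (the `max_connections` value is just an example setting):
 
 ```console
 $ cat /my/custom/config-file.cnf
 [mysqld]
 max_connections=250
 ```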
@@ -85,18 +85,18 @@ $ chcon -Rt svirt_sandbox_file_t /my/custom
 Many configuration options can be passed as flags to `mysqld`. This will give you the flexibility to customize the container without needing a `cnf` file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (`utf8mb4`) just run the following:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
 ```
 
 If you would like to see a complete list of available options, just run:
 
 ```console
-$ docker run -it --rm %%REPO%%:tag --verbose --help
+$ docker run -it --rm %%IMAGE%%:tag --verbose --help
 ```
 
 ## Environment Variables
 
-When you start the `%%REPO%%` image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
+When you start the `%%IMAGE%%` image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
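 
 For example, a sketch that pre-creates a database and a non-root user on first start (all names and passwords here are placeholders; the individual variables are documented below):
 
 ```console
 $ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -e MYSQL_USER=myuser -e MYSQL_PASSWORD=my-user-pw -d %%IMAGE%%:tag
 ```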
 
 ### `MYSQL_ROOT_PASSWORD`
 
@@ -129,20 +129,20 @@ Sets root (*not* the user specified in `MYSQL_USER`!) user as expired once init
 As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 
 ```console
-$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%REPO%%:tag
+$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
 ```
 
 Currently, this is only supported for `MYSQL_ROOT_PASSWORD`, `MYSQL_ROOT_HOST`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD`.
 
 # Initializing a fresh instance
 
-When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your %%REPO%% services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and provide [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
+When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your `%%IMAGE%%` services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and providing [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default into the database specified by the `MYSQL_DATABASE` variable.
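 
 A minimal sketch of that approach, mounting a host directory of `.sql`/`.sh` files so they are applied on first start (the host path is only an illustration):
 
 ```console
 $ docker run --name some-%%REPO%% -v /my/own/initdb.d:/docker-entrypoint-initdb.d:ro -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=mydb -d %%IMAGE%%:tag
 ```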
 
 # Caveats
 
 ## Where to Store Data
 
-Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
+Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%IMAGE%%` images to familiarize themselves with the options available, including:
 
 -	Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
 -	Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
@@ -150,10 +150,10 @@ Important note: There are several ways to store data used by applications that r
 The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
 
 1.	Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
-2.	Start your `%%REPO%%` container like this:
+2.	Start your `%%IMAGE%%` container like this:
 
 	```console
-	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 	```
 
 The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
@@ -170,7 +170,7 @@ If there is no database initialized when the container starts, then a default da
 
 ## Usage against an existing database
 
-If you start your `%%REPO%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
+If you start your `%%IMAGE%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
 
 ## Creating database dumps
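 
 A typical approach here is to run `mysqldump` through `docker exec` against the running container; a sketch (the container name and host path are placeholders, and it assumes `MYSQL_ROOT_PASSWORD` is still set in the container's environment):
 
 ```console
 $ docker exec some-%%REPO%% sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
 ```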
 

+ 3 - 3
nats-streaming/content.md

@@ -13,7 +13,7 @@
 # 8222 is an HTTP management port for information reporting.
 # use -p or -P as needed.
 
-$ docker run -d nats-streaming
+$ docker run -d %%IMAGE%%
 ```
 
 Output that you would get if you had started with `-ti` instead of `-d` (for daemon):
@@ -40,7 +40,7 @@ Output that you would get if you had started with `-ti` instead of `d` (for daem
 To use a file-based store instead, you would run:
 
 ```bash
-$ docker run -d nats-streaming -store file -dir datastore
+$ docker run -d %%IMAGE%% -store file -dir datastore
 
 [1] 2017/06/27 19:14:06.643200 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.5.0
 [1] 2017/06/27 19:14:06.643242 [INF] STREAM: ServerID: aaAI5uJPRimoNwl6TIznom
@@ -69,7 +69,7 @@ $ docker run -d --name=nats-main nats
 Now, start the Streaming server and link it to the NATS container started above:
 
 ```bash
-$ docker run -d --link nats-main nats-streaming -store file -dir datastore -ns nats://nats-main:4222
+$ docker run -d --link nats-main %%IMAGE%% -store file -dir datastore -ns nats://nats-main:4222
 
 [1] 2017/06/27 19:16:53.628397 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.5.0
 [1] 2017/06/27 19:16:53.628426 [INF] STREAM: ServerID: PNXiWzcYitFesmdKyOwBIE

+ 3 - 3
nats/content.md

@@ -14,7 +14,7 @@
 # 6222 is a routing port for clustering.
 # use -p or -P as needed.
 
-$ docker run -d --name nats-main nats
+$ docker run -d --name nats-main %%IMAGE%%
 [INF] Starting nats-server version 1.0.4
 [INF] Starting http monitor on 0.0.0.0:8222
 [INF] Listening for client connections on 0.0.0.0:4222
@@ -27,10 +27,10 @@ $ docker run -d --name nats-main nats
 # Note that since you are passing arguments, this overrides the CMD section
 # of the Dockerfile, so you need to pass all arguments, including the
 # config file.
-$ docker run -d --name=nats-2 --link nats-main nats -c gnatsd.conf --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222
+$ docker run -d --name=nats-2 --link nats-main %%IMAGE%% -c gnatsd.conf --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222
 
 # If you want to verify the routes are connected, try this instead:
-$ docker run -d --name=nats-2 --link nats-main nats -c gnatsd.conf --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222 -DV
+$ docker run -d --name=nats-2 --link nats-main %%IMAGE%% -c gnatsd.conf --routes=nats-route://ruser:T0pS3cr3t@nats-main:6222 -DV
 [INF] Starting nats-server version 1.0.4
 [DBG] Go build version go1.8.3
 [INF] Starting http monitor on 0.0.0.0:8222

+ 2 - 2
neo4j/content.md

@@ -14,7 +14,7 @@ You can start a Neo4j container like this:
 docker run \
     --publish=7474:7474 --publish=7687:7687 \
     --volume=$HOME/neo4j/data:/data \
-    neo4j
+    %%IMAGE%%
 ```
 
 which allows you to access neo4j through your browser at [http://localhost:7474](http://localhost:7474).
@@ -33,7 +33,7 @@ You can start an instance of Neo4j 2.3 like this:
 docker run \
     --publish=7474:7474 \
     --volume=$HOME/neo4j/data:/data \
-    neo4j:2.3
+    %%IMAGE%%:2.3
 ```
 
 # Documentation

+ 2 - 2
neurodebian/content.md

@@ -12,14 +12,14 @@ NeuroDebian images only add NeuroDebian repository and repository's GPG key. No
 
 `nd` tags are used to reflect suffixes used in versions of packages available from NeuroDebian.
 
-The `neurodebian:latest` tag will always point the Neurodebian-enabled latest stable release of Debian (which is, at the time of this writing, `debian:wheezy`).
+The `%%IMAGE%%:latest` tag will always point to the NeuroDebian-enabled latest stable release of Debian (which is, at the time of this writing, `debian:wheezy`).
 
 ## sources.list
 
 The NeuroDebian APT file is installed under `/etc/apt/sources.list.d/neurodebian.sources.list` and currently enables only the `main` (DFSG-compliant) area of the archive:
 
 ```console
-$ docker run neurodebian:latest cat /etc/apt/sources.list.d/neurodebian.sources.list
+$ docker run %%IMAGE%% cat /etc/apt/sources.list.d/neurodebian.sources.list
 deb http://neuro.debian.net/debian wheezy main
 deb http://neuro.debian.net/debian data main
 #deb-src http://neuro.debian.net/debian-devel wheezy main

+ 57 - 38
nextcloud/content.md

@@ -19,7 +19,7 @@ The second option is a `fpm` container. It is based on the [php-fpm](https://hub
 The apache image contains a webserver and exposes port 80. To start the container type:
 
 ```console
-$ docker run -d -p 8080:80 nextcloud
+$ docker run -d -p 8080:80 %%IMAGE%%
 ```
 
 Now you can access Nextcloud at http://localhost:8080/ from your host system.
@@ -29,7 +29,7 @@ Now you can access Nextcloud at http://localhost:8080/ from your host system.
 To use the fpm image, you need an additional web server that can proxy HTTP requests to the fpm port of the container. For fpm connections, this container exposes port 9000. In most cases you might want to use another container or your host as the proxy. If you use your host, you can address your Nextcloud container directly on port 9000. If you use another container, make sure that you add them to the same docker network (via `docker run --network <NAME> ...` or a `docker-compose` file). In both cases you don't want to map the fpm port to your host.
 
 ```console
-$ docker run -d nextcloud:fpm
+$ docker run -d %%IMAGE%%:fpm
 ```
 
 As the FastCGI process is not capable of serving static files (style sheets, images, ...), the web server needs access to these files. This can be achieved with the `--volumes-from` option. You can find more information in the docker-compose section.
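 
 A hedged sketch of that setup outside of docker-compose (the nginx configuration file here is an assumption and must proxy PHP requests to the `app` host on port 9000):
 
 ```console
 $ docker run -d --name some-nextcloud-fpm %%IMAGE%%:fpm
 $ docker run -d -p 8080:80 --volumes-from some-nextcloud-fpm --link some-nextcloud-fpm:app -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx
 ```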
@@ -46,18 +46,24 @@ To make your data persistent to upgrading and get access for backups is using na
 
 Nextcloud:
 
--	`/var/www/html/` folder where all nextcloud data lives`console
-	$ docker run -d nextcloud \
-	-v nextcloud:/var/www/html
-	`
+-	`/var/www/html/` folder where all Nextcloud data lives
+
+	```console
+	$ docker run -d \
+	-v nextcloud:/var/www/html \
+	%%IMAGE%%
+	```
 
 Database:
 
 -	`/var/lib/mysql` MySQL / MariaDB Data
--	`/var/lib/postresql/data` PostegreSQL Data`console
-	$ docker run -d mariadb \
-	-v db:/var/lib/mysql
-	`
+-	`/var/lib/postgresql/data` PostgreSQL Data
+
+	```console
+	$ docker run -d \
+	-v db:/var/lib/mysql \
+	mariadb
+	```
 
 If you want fine-grained access to your individual files, you can mount additional volumes for data, config, your theme and custom apps. The `data` and `config` folders are stored in respective subfolders inside `/var/www/html/`. The apps are split into core `apps` (which are shipped with Nextcloud and which you don't need to take care of) and a `custom_apps` folder. If you use a custom theme, it would go into the `themes` subfolder.
 
@@ -72,12 +78,13 @@ Overview of the folders that can be mounted as volumes:
 If you want to use named volumes for all of these, it would look like this:
 
 ```console
-$ docker run -d nextcloud \
--v nextcloud:/var/www/html \
--v apps:/var/www/html/custom_apps \
--v config:/var/www/html/config \
--v data:/var/www/html/data \
--v theme:/var/www/html/themes/<YOUR_CUSTOM_THEME>
+$ docker run -d \
+	-v nextcloud:/var/www/html \
+	-v apps:/var/www/html/custom_apps \
+	-v config:/var/www/html/config \
+	-v data:/var/www/html/data \
+	-v theme:/var/www/html/themes/<YOUR_CUSTOM_THEME> \
+	%%IMAGE%%
 ```
 
 ## Using the Nextcloud command-line interface
@@ -96,7 +103,7 @@ $ docker-compose exec --user www-data app php occ
 
 ## Auto configuration via environment variables
 
-The nextcloud image supports auto configuration via environment variables. You can preconfigure everything that is asked on the install page on first run. To enable auto configuration, set your database connection via the following environment variables. ONLY use one database type!
+The %%IMAGE%% image supports auto configuration via environment variables. You can preconfigure everything that is asked on the install page on first run. To enable auto configuration, set your database connection via the following environment variables. ONLY use one database type!
 
 **SQLITE_DATABASE**:
 
@@ -157,8 +164,8 @@ services:
       - MYSQL_DATABASE=nextcloud
       - MYSQL_USER=nextcloud
 
-  app:  
-    image: nextcloud
+  app:
+    image: %%IMAGE%%
     ports:
       - 8080:80
     links:
@@ -166,7 +173,6 @@ services:
     volumes:
       - nextcloud:/var/www/html
     restart: always
-
 ```
 
 Then run `docker-compose up -d`; now you can access Nextcloud at http://localhost:8080/ from your host system.
@@ -199,7 +205,7 @@ services:
       - MYSQL_USER=nextcloud
 
   app:
-    image: nextcloud:fpm
+    image: %%IMAGE%%:fpm
     links:
       - db
     volumes:
@@ -242,10 +248,10 @@ When you first access your Nextcloud, the setup wizard will appear and ask you t
 Updating the Nextcloud container is done by pulling the new image, throwing away the old container and starting the new one. Since all data is stored in volumes, nothing gets lost. The startup script will check for the version in your volume and the installed docker version. If it finds a mismatch, it automatically starts the upgrade process. Don't forget to add all the volumes to your new container, so it works as expected.
 
 ```console
-$ docker pull nextcloud
+$ docker pull %%IMAGE%%
 $ docker stop <your_nextcloud_container>
 $ docker rm <your_nextcloud_container>
-$ docker run <OPTIONS> -d nextcloud
+$ docker run <OPTIONS> -d %%IMAGE%%
 ```
 
 Beware that you have to run the same command with the options that you used to initially start your Nextcloud. That includes volumes and port mappings.
@@ -262,7 +268,7 @@ $ docker-compose up -d
 A lot of people want to use additional functionality inside their Nextcloud installation. If the image does not include the packages you need, you can easily build your own image on top of it. Start your derived image with the `FROM` statement and add whatever you like.
 
 ```dockerfile
-FROM nextcloud:apache
+FROM %%IMAGE%%:apache
 
 RUN ...
 
@@ -287,7 +293,7 @@ If you use your own Dockerfile you need to configure your docker-compose file ac
 **Updating** your own derived image is also very simple. When a new version of the Nextcloud image is available run:
 
 ```console
-docker build -t your-name --pull . 
+docker build -t your-name --pull .
 docker run -d your-name
 ```
 
@@ -305,26 +311,39 @@ The `--pull` option tells docker to look for new versions of the base image. The
 You're already using Nextcloud and want to switch to docker? Great! Here are some things to look out for:
 
 1.	Define your whole Nextcloud infrastructure in a `docker-compose` file and run it with `docker-compose up -d` to get the base installation, volumes and database. Work from there.
-2.	Restore your database from a mysqldump (nextcloud\_db\_1 is the name of your db container)`console
+2.	Restore your database from a mysqldump (nextcloud\_db\_1 is the name of your db container)
+
+	```console
 	docker cp ./database.dmp nextcloud_db_1:/dmp
 	docker-compose exec db sh -c "mysql -u USER -pPASSWORD nextcloud < /dmp"
 	docker-compose exec db rm /dmp
-	`
+	```
+
 3.	Edit your config.php
 
-	1.	Set database connection`php
+	1.	Set database connection
+
+		```php
 		'dbhost' => 'db:3306',
-		`
-	2.	Make sure you have no configuration for the `apps_paths`. Delete lines like these\`\``diff
-	3.	"apps_paths" => array (
-	4.	0 => array (
-	5.	"path" => OC::$SERVERROOT."/apps",
-	6.	"url" => "/apps",
-	7.	"writable" => true,
-	8.	),\`\`\`
-	9.	Make sure your data directory is set to /var/www/html/data`php
+		```
+
+	2.	Make sure you have no configuration for the `apps_paths`. Delete lines like these
+
+		```php
+		"apps_paths" => array (
+		    0 => array (
+		        "path" => OC::$SERVERROOT."/apps",
+		        "url" => "/apps",
+		        "writable" => true,
+		    ),
+		),
+		```
+
+	3.	Make sure your data directory is set to /var/www/html/data
+
+		```php
 		'datadirectory' => '/var/www/html/data',
-		`
+		```
 
 4.	Copy your data (nextcloud_app_1 is the name of your Nextcloud container):
 

+ 10 - 10
nginx/content.md

@@ -11,13 +11,13 @@ Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, H
 ## Hosting some simple static content
 
 ```console
-$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
+$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d %%IMAGE%%
 ```
 
 Alternatively, a simple `Dockerfile` can be used to generate a new image that includes the necessary content (which is a much cleaner solution than the bind mount above):
 
 ```dockerfile
-FROM nginx
+FROM %%IMAGE%%
 COPY static-html-directory /usr/share/nginx/html
 ```
 
@@ -38,7 +38,7 @@ Then you can hit `http://localhost:8080` or `http://host-ip:8080` in your browse
 ## Complex configuration
 
 ```console
-$ docker run --name my-custom-nginx-container -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
+$ docker run --name my-custom-nginx-container -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d %%IMAGE%%
 ```
 
 For information on the syntax of the nginx configuration files, see [the official documentation](http://nginx.org/en/docs/) (specifically the [Beginner's Guide](http://nginx.org/en/docs/beginners_guide.html#conf_structure)).
@@ -46,7 +46,7 @@ For information on the syntax of the nginx configuration files, see [the officia
 If you wish to adapt the default configuration, use something like the following to copy it from a running nginx container:
 
 ```console
-$ docker run --name tmp-nginx-container -d nginx
+$ docker run --name tmp-nginx-container -d %%IMAGE%%
 $ docker cp tmp-nginx-container:/etc/nginx/nginx.conf /host/path/nginx.conf
 $ docker rm -f tmp-nginx-container
 ```
@@ -54,7 +54,7 @@ $ docker rm -f tmp-nginx-container
 This can also be accomplished more cleanly using a simple `Dockerfile` (in `/host/path/`):
 
 ```dockerfile
-FROM nginx
+FROM %%IMAGE%%
 COPY nginx.conf /etc/nginx/nginx.conf
 ```
 
@@ -66,15 +66,15 @@ Then build the image with `docker build -t custom-nginx .` and run it as follows
 $ docker run --name my-custom-nginx-container -d custom-nginx
 ```
 
-### Using environment variables in nginx configuration
+### Using environment variables in %%IMAGE%% configuration
 
-Out-of-the-box, nginx doesn't support environment variables inside most configuration blocks. But `envsubst` may be used as a workaround if you need to generate your nginx configuration dynamically before nginx starts.
+Out-of-the-box, %%IMAGE%% doesn't support environment variables inside most configuration blocks. But `envsubst` may be used as a workaround if you need to generate your %%IMAGE%% configuration dynamically before %%IMAGE%% starts.
 
 Here is an example using docker-compose.yml:
 
 ```yaml
 web:
-  image: nginx
+  image: %%IMAGE%%
   volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
   ports:
@@ -95,14 +95,14 @@ The `mysite.template` file may then contain variable references like this:
 Images since version 1.9.8 come with the `nginx-debug` binary, which produces verbose output when using higher log levels. It can be used with a simple CMD substitution:
 
 ```console
-$ docker run --name my-nginx -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx nginx-debug -g 'daemon off;'
+$ docker run --name my-nginx -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -d %%IMAGE%% nginx-debug -g 'daemon off;'
 ```
 
 Similar configuration in docker-compose.yml may look like this:
 
 ```yaml
 web:
-  image: nginx
+  image: %%IMAGE%%
   volumes:
     - ./nginx.conf:/etc/nginx/nginx.conf:ro
   command: [nginx-debug, '-g', 'daemon off;']

+ 4 - 4
nuxeo/content.md

@@ -9,7 +9,7 @@ The Nuxeo Platform is a highly customizable and extensible content management pl
 ## Start a bare nuxeo instance
 
 ```console
-$ docker run --name mynuxeo -p 8080:8080 -d nuxeo
+$ docker run --name mynuxeo -p 8080:8080 -d %%IMAGE%%
 ```
 
 This image includes `EXPOSE 8080` (the nuxeo port). The default Nuxeo configuration is applied, which features an embedded database (H2) and an embedded Elasticsearch instance. This setup is not suitable for production. See below for how to set up a production-ready container by specifying environment variables.
@@ -128,14 +128,14 @@ Allows to add custom parameters to `nuxeo.conf`. Multiple parameters can be spli
 If you would like to do additional setup in an image derived from this one, you can add a `/nuxeo.conf` file that will be appended to the end of the regular `nuxeo.conf` file.
 
 ```dockerfile
-FROM nuxeo:7.10
+FROM %%IMAGE%%:7.10
 ADD nuxeo.conf /nuxeo.conf
 ```
 
 If you need a root account to run some installation steps in your `Dockerfile`, then you need to put those steps between two `USER` commands, as the image is run as user `1000` (nuxeo). For instance:
 
 ```dockerfile
-FROM nuxeo:LTS
+FROM %%IMAGE%%:LTS
 USER root
 RUN apt-get update && apt-get install -y --no-install-recommends vim
 USER 1000
@@ -150,7 +150,7 @@ You can add your own shell scripts in a special `/docker-entrypoint-initnuxeo.d`
 As it contains some non-free codecs, we don't ship a binary version of `ffmpeg` as part of this image. However, you can simply add the compilation steps in a derived image by adding these lines to your Dockerfile.
 
 ```dockerfile
-FROM nuxeo:7.10
+FROM %%IMAGE%%:7.10
 
 USER root
 

+ 10 - 10
odoo/content.md

@@ -19,7 +19,7 @@ $ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgr
 ## Start an Odoo instance
 
 ```console
-$ docker run -p 8069:8069 --name odoo --link db:db -t odoo
+$ docker run -p 8069:8069 --name odoo --link db:db -t %%IMAGE%%
 ```
 
 The alias of the container running Postgres must be `db` for Odoo to be able to connect to the Postgres server.
@@ -42,7 +42,7 @@ Restarting a PostgreSQL server does not affect the created databases.
 The default configuration file for the server (located at `/etc/odoo/openerp-server.conf`) can be overridden at startup using volumes. Suppose you have a custom configuration at `/path/to/config/openerp-server.conf`, then
 
 ```console
-$ docker run -v /path/to/config:/etc/odoo -p 8069:8069 --name odoo --link db:db -t odoo
+$ docker run -v /path/to/config:/etc/odoo -p 8069:8069 --name odoo --link db:db -t %%IMAGE%%
 ```
 
 Please use [this configuration template](https://github.com/odoo/docker/blob/master/8.0/openerp-server.conf) to write your custom configuration as we already set some arguments for running Odoo inside a Docker container.
@@ -50,7 +50,7 @@ Please use [this configuration template](https://github.com/odoo/docker/blob/mas
 You can also directly specify Odoo arguments inline. Those arguments must be given after the `--` keyword on the command line, as follows:
 
 ```console
-$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- --db-filter=odoo_db_.*
+$ docker run -p 8069:8069 --name odoo --link db:db -t %%IMAGE%% -- --db-filter=odoo_db_.*
 ```
 
 ## Mount custom addons
@@ -58,14 +58,14 @@ $ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- --db-filter=odoo_d
 You can mount your own Odoo addons within the Odoo container, at `/mnt/extra-addons`
 
 ```console
-$ docker run -v /path/to/addons:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t odoo
+$ docker run -v /path/to/addons:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t %%IMAGE%%
 ```
 
 ## Run multiple Odoo instances
 
 ```console
-$ docker run -p 8070:8069 --name odoo2 --link db:db -t odoo
-$ docker run -p 8071:8069 --name odoo3 --link db:db -t odoo
+$ docker run -p 8070:8069 --name odoo2 --link db:db -t %%IMAGE%%
+$ docker run -p 8071:8069 --name odoo3 --link db:db -t %%IMAGE%%
 ```
 
 Please note that for plain use of the mail and report functionalities, when the host and container ports differ (e.g. 8070 and 8069), you have to set `web.base.url` in Odoo (Settings->Parameters->System Parameters, requires technical features) to the container port (e.g. 127.0.0.1:8069).
@@ -87,7 +87,7 @@ The simplest `docker-compose.yml` file would be:
 version: '2'
 services:
   web:
-    image: odoo:10.0
+    image: %%IMAGE%%:10.0
     depends_on:
       - db
     ports:
@@ -105,7 +105,7 @@ If the default postgres credentials does not suit you, tweak the environment var
 version: '2'
 services:
   web:
-    image: odoo:10.0
+    image: %%IMAGE%%:10.0
     depends_on:
       - mydb
     ports:
@@ -127,7 +127,7 @@ Here's a last example showing you how to mount custom addons, how to use a custo
 version: '2'
 services:
   web:
-    image: odoo:10.0
+    image: %%IMAGE%%:10.0
     depends_on:
       - db
     ports:
@@ -164,7 +164,7 @@ Suppose you created a database from an Odoo instance named old-odoo, and you wan
 By default, Odoo 8.0 uses a filestore (located at `/var/lib/odoo/filestore/`) for attachments. You should restore this filestore in your new Odoo instance by running:
 
 ```console
-$ docker run --volumes-from old-odoo -p 8070:8069 --name new-odoo --link db:db -t odoo
+$ docker run --volumes-from old-odoo -p 8070:8069 --name new-odoo --link db:db -t %%IMAGE%%
 ```
 
 You can also simply prevent Odoo from using the filestore by setting the system parameter `ir_attachment.location` to `db-storage` in Settings->Parameters->System Parameters (requires technical features).

+ 1 - 1
openjdk/content.md

@@ -34,7 +34,7 @@ $ docker run -it --rm --name my-running-app my-java-app
 There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
 
 ```console
-$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%REPO%%:7 javac Main.java
+$ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:7 javac Main.java
 ```
 
 This will add your current directory as a volume to the container, set the working directory to the volume, and run the command `javac Main.java` which will tell Java to compile the code in `Main.java` and output the Java class file to `Main.class`.
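 
 To then run the compiled class with the same throw-away-container pattern (assuming `Main` has a standard `main` method), something like:
 
 ```console
 $ docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:7 java Main
 ```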

+ 1 - 1
oraclelinux/content.md

@@ -6,7 +6,7 @@ Oracle Linux is an open-source operating system available under the GNU General
 
 ## How to use these images
 
-The Oracle Linux images are intended for use in the **FROM** field of an application's `Dockerfile`. For example, to use Oracle Linux 6 as the base of an image, specify `FROM oraclelinux:6`.
+The Oracle Linux images are intended for use in the **FROM** field of an application's `Dockerfile`. For example, to use Oracle Linux 6 as the base of an image, specify `FROM %%IMAGE%%:6`.
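 
 If you just want to poke around the base image interactively first, a quick sketch (the `/etc/oracle-release` check is an assumption about what the image ships):
 
 ```console
 $ docker run -it --rm %%IMAGE%%:6 cat /etc/oracle-release
 ```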
 
 ## Official Resources
 

+ 6 - 6
orientdb/content.md

@@ -9,7 +9,7 @@
 When OrientDB starts it asks for the root password. The root user is able to manage the OrientDB server: create new databases, manage users and roles. The root password can be passed to the container using an environment property:
 
 ```console
-$ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -e ORIENTDB_ROOT_PASSWORD=rootpwd orientdb
+$ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -e ORIENTDB_ROOT_PASSWORD=rootpwd %%IMAGE%%
 ```
 
 The [Studio](http://orientdb.com/docs/last/Studio-Home-page.html) is accessible at http://<docker-host>:2480 (e.g. http://localhost:2480).
@@ -26,7 +26,7 @@ $ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 \
     -v <databases_path>:/orientdb/databases \
     -v <backup_path>:/orientdb/backup \
     -e ORIENTDB_ROOT_PASSWORD=rootpwd \
-    orientdb
+    %%IMAGE%%
 ```
 
 **NOTE**: don't provide an **empty** config folder as a volume, because OrientDB will start up with a very minimal configuration.
@@ -36,13 +36,13 @@ $ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 \
 The OrientDB image contains a full-fledged installation, so it is possible to run the [console](http://orientdb.com/docs/last/Console-Commands.html):
 
 ```console
-$ docker run --rm -it orientdb /orientdb/bin/console.sh
+$ docker run --rm -it %%IMAGE%% /orientdb/bin/console.sh
 ```
 
 or even the ETL tool:
 
 ```console
-$ docker run  --rm -it -v <config_path>:/orientdb/config orientdb /orientdb/bin/oetl.sh ../config/oetl-config.json
+$ docker run  --rm -it -v <config_path>:/orientdb/config %%IMAGE%% /orientdb/bin/oetl.sh ../config/oetl-config.json
 ```
 
 ### Override configuration parameters
@@ -56,7 +56,7 @@ $ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 \
     -v <backup_path>:/orientdb/backup \
     -e ORIENTDB_ROOT_PASSWORD=rootpwd \
     -e ORIENTDB_NODE_NAME=odb1 \
-    orientdb /orientdb/bin/server.sh  -Ddistributed=true
+    %%IMAGE%% /orientdb/bin/server.sh  -Ddistributed=true
 ```
 
 For further configuration options please refer to the [Configuration](http://orientdb.com/docs/last/Configuration.html) section of the online documentation.
 Environment parameters such as the heap size can be passed via the command line:
 ```console
 $ docker run -d --name orientdb -p 2424:2424 -p 2480:2480 \
     -e ORIENTDB_ROOT_PASSWORD=rootpwd \
-    orientdb /orientdb/bin/server.sh -Xmx8g
+    %%IMAGE%% /orientdb/bin/server.sh -Xmx8g
 ```

+ 1 - 1
owncloud/content.md

@@ -13,7 +13,7 @@ ownCloud is a self-hosted file sync and share server. It provides access to your
 Starting the ownCloud 8.1 instance listening on port 80 is as easy as the following:
 
 ```console
-$ docker run -d -p 80:80 owncloud:8.1
+$ docker run -d -p 80:80 %%IMAGE%%:8.1
 ```
 
 Then go to http://localhost/ and go through the wizard. By default this container uses SQLite for data storage, but the wizard should allow for connecting to an existing database. Additionally, tags for 6.0, 7.0, or 8.0 are available.
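 
 If you would rather point the wizard at a real database than SQLite, a hedged sketch using a linked MariaDB container (names and passwords are placeholders; `mysql` would then be the database host entered in the wizard):
 
 ```console
 $ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=owncloud -d mariadb
 $ docker run -d -p 80:80 --link some-mysql:mysql %%IMAGE%%:8.1
 ```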

+ 19 - 19
percona/content.md

@@ -10,12 +10,12 @@ It aims to retain close compatibility to the official MySQL releases, while focu
 
 # How to use this image
 
-## Start a `%%REPO%%` server instance
+## Start a `%%IMAGE%%` server instance
 
 Starting a Percona instance is simple:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 ... where `some-%%REPO%%` is the name you want to assign to your container, `my-secret-pw` is the password to be set for the MySQL root user and `tag` is the tag specifying the MySQL version you want. See the list above for relevant tags.
@@ -32,18 +32,18 @@ $ docker run --name some-app --link some-%%REPO%%:mysql -d application-that-uses
 
 ## Connect to Percona from the MySQL command line client
 
-The following command starts another %%REPO%% container instance and runs the `mysql` command line client against your original %%REPO%% container, allowing you to execute SQL statements against your database instance:
+The following command starts another `%%IMAGE%%` container instance and runs the `mysql` command line client against your original `%%IMAGE%%` container, allowing you to execute SQL statements against your database instance:
 
 ```console
-$ docker run -it --link some-%%REPO%%:mysql --rm %%REPO%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
+$ docker run -it --link some-%%REPO%%:mysql --rm %%IMAGE%% sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
 ```
 
-... where `some-%%REPO%%` is the name of your original %%REPO%% container.
+... where `some-%%REPO%%` is the name of your original `%%IMAGE%%` container.
 
 This image can also be used as a client for non-Docker or remote Percona instances:
 
 ```console
-$ docker run -it --rm %%REPO%% mysql -hsome.mysql.host -usome-mysql-user -p
+$ docker run -it --rm %%IMAGE%% mysql -hsome.mysql.host -usome-mysql-user -p
 ```
 
 More information about the MySQL command line client can be found in the [MySQL documentation](http://dev.mysql.com/doc/en/mysql.html).
@@ -54,7 +54,7 @@ Run `docker stack deploy -c stack.yml %%REPO%%` (or `docker-compose -f stack.yml
 
 ## Container shell access and viewing MySQL logs
 
-The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%REPO%%` container:
+The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a bash shell inside your `%%IMAGE%%` container:
 
 ```console
 $ docker exec -it some-%%REPO%% bash
@@ -68,12 +68,12 @@ $ docker logs some-%%REPO%%
 
 ## Using a custom MySQL configuration file
 
-The Percona startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%REPO%%` container.
+The Percona startup configuration is specified in the file `/etc/mysql/my.cnf`, and that file in turn includes any files found in the `/etc/mysql/conf.d` directory that end with `.cnf`. Settings in files in this directory will augment and/or override settings in `/etc/mysql/my.cnf`. If you want to use a customized MySQL configuration, you can create your alternative configuration file in a directory on the host machine and then mount that directory location as `/etc/mysql/conf.d` inside the `%%IMAGE%%` container.
 
-If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%REPO%%` container like this (note that only the directory path of the custom config file is used in this command):
+If `/my/custom/config-file.cnf` is the path and name of your custom configuration file, you can start your `%%IMAGE%%` container like this (note that only the directory path of the custom config file is used in this command):
 
 ```console
-$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+$ docker run --name some-%%REPO%% -v /my/custom:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 ```
 
 This will start a new container `some-%%REPO%%` where the Percona instance uses the combined startup settings from `/etc/mysql/my.cnf` and `/etc/mysql/conf.d/config-file.cnf`, with settings from the latter taking precedence.
@@ -89,18 +89,18 @@ $ chcon -Rt svirt_sandbox_file_t /my/custom
 Many configuration options can be passed as flags to `mysqld`. This will give you the flexibility to customize the container without needing a `cnf` file. For example, if you want to change the default encoding and collation for all tables to use UTF-8 (`utf8mb4`) just run the following:
 
 ```console
-$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
+$ docker run --name some-%%REPO%% -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
 ```
 
 If you would like to see a complete list of available options, just run:
 
 ```console
-$ docker run -it --rm %%REPO%%:tag --verbose --help
+$ docker run -it --rm %%IMAGE%%:tag --verbose --help
 ```
 
 ## Environment Variables
 
-When you start the `%%REPO%%` image, you can adjust the configuration of the Percona instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
+When you start the `%%IMAGE%%` image, you can adjust the configuration of the Percona instance by passing one or more environment variables on the `docker run` command line. Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.
 
 ### `MYSQL_ROOT_PASSWORD`
 
@@ -133,20 +133,20 @@ Sets root (*not* the user specified in `MYSQL_USER`!) user as expired once init
 As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 
 ```console
-$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%REPO%%:tag
+$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d %%IMAGE%%:tag
 ```
 
 Currently, this is only supported for `MYSQL_ROOT_PASSWORD`, `MYSQL_ROOT_HOST`, `MYSQL_DATABASE`, `MYSQL_USER`, and `MYSQL_PASSWORD`.
 
 # Initializing a fresh instance
 
-When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your %%REPO%% services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and provide [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default to the database specified by the `MYSQL_DATABASE` variable.
+When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions `.sh`, `.sql` and `.sql.gz` that are found in `/docker-entrypoint-initdb.d`. Files will be executed in alphabetical order. You can easily populate your `%%IMAGE%%` services by [mounting a SQL dump into that directory](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-file-as-a-data-volume) and providing [custom images](https://docs.docker.com/reference/builder/) with contributed data. SQL files will be imported by default into the database specified by the `MYSQL_DATABASE` variable.
 
 # Caveats
 
 ## Where to Store Data
 
-Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%REPO%%` images to familiarize themselves with the options available, including:
+Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the `%%IMAGE%%` images to familiarize themselves with the options available, including:
 
 -	Let Docker manage the storage of your database data [by writing the database files to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
 -	Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the database files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly.
@@ -154,10 +154,10 @@ Important note: There are several ways to store data used by applications that r
 The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
 
 1.	Create a data directory on a suitable volume on your host system, e.g. `/my/own/datadir`.
-2.	Start your `%%REPO%%` container like this:
+2.	Start your `%%IMAGE%%` container like this:
 
 	```console
-	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%REPO%%:tag
+	$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d %%IMAGE%%:tag
 	```
 
 The `-v /my/own/datadir:/var/lib/mysql` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/var/lib/mysql` inside the container, where MySQL by default will write its data files.
@@ -174,7 +174,7 @@ If there is no database initialized when the container starts, then a default da
 
 ## Usage against an existing database
 
-If you start your `%%REPO%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
+If you start your `%%IMAGE%%` container instance with a data directory that already contains a database (specifically, a `mysql` subdirectory), the `$MYSQL_ROOT_PASSWORD` variable should be omitted from the run command line; it will in any case be ignored, and the pre-existing database will not be changed in any way.
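
As a sketch (reusing the `/my/own/datadir` host directory from the example above, which is assumed to already hold an initialized database), such a run simply omits the password variable:

```console
# assumes /my/own/datadir already contains an initialized database (including a mysql subdirectory)
$ docker run --name some-%%REPO%% -v /my/own/datadir:/var/lib/mysql -d %%IMAGE%%:tag
```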
 
 ## Creating database dumps
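
One common approach (a sketch, assuming a running container named `some-%%REPO%%` that was started with `MYSQL_ROOT_PASSWORD` as shown earlier) is to run `mysqldump` inside the container with `docker exec` and redirect its output to the host:

```console
$ docker exec some-%%REPO%% sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > /some/path/on/your/host/all-databases.sql
```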
 

+ 6 - 6
perl/content.md

@@ -11,7 +11,7 @@ Perl is a high-level, general-purpose, interpreted, dynamic programming language
 ## Create a `Dockerfile` in your Perl app project
 
 ```dockerfile
-FROM perl:5.20
+FROM %%IMAGE%%:5.20
 COPY . /usr/src/myapp
 WORKDIR /usr/src/myapp
 CMD [ "perl", "./your-daemon-or-script.pl" ]
@@ -29,15 +29,15 @@ $ docker run -it --rm --name my-running-app my-perl-app
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Perl script by using the Perl Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp perl:5.20 perl your-daemon-or-script.pl
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:5.20 perl your-daemon-or-script.pl
 ```
 
 ## Example: Creating a reusable Carton image for Perl projects
 
-Suppose you have a project that uses [Carton](https://metacpan.org/pod/Carton) to manage Perl dependencies. You can create a `perl:carton` image that makes use of the [ONBUILD](https://docs.docker.com/engine/reference/builder/#onbuild) instruction in its `Dockerfile`, like this:
+Suppose you have a project that uses [Carton](https://metacpan.org/pod/Carton) to manage Perl dependencies. You can create a `%%IMAGE%%:carton` image that makes use of the [ONBUILD](https://docs.docker.com/engine/reference/builder/#onbuild) instruction in its `Dockerfile`, like this:
 
 ```dockerfile
-FROM perl:5.26
+FROM %%IMAGE%%:5.26
 
 RUN cpanm Carton \
     && mkdir -p /usr/src/app
@@ -49,9 +49,9 @@ ONBUILD RUN carton install
 ONBUILD COPY . /usr/src/app
 ```
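
For instance, assuming the `Dockerfile` above sits in the current directory, the base image might be built and tagged along these lines (a sketch):

```console
# builds the reusable Carton base image from the Dockerfile above
$ docker build -t %%IMAGE%%:carton .
```

Derived projects are then built with a plain `docker build`, at which point the `ONBUILD` triggers run.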
 
-Then, in your Carton project, you can now reduce your project's `Dockerfile` into a single line of `FROM perl:carton`, which may be enough to build a stand-alone image.
+Then, in your Carton project, you can now reduce your project's `Dockerfile` into a single line of `FROM %%IMAGE%%:carton`, which may be enough to build a stand-alone image.
 
-Having a single `perl:carton` base image is useful especially if you have multiple Carton-based projects in development, to avoid "boilerplate" coding of installing Carton and/or copying the project source files into the derived image. Keep in mind, though, about certain things to consider when using the Perl image in this way:
+Having a single `%%IMAGE%%:carton` base image is especially useful if you have multiple Carton-based projects in development, to avoid "boilerplate" coding of installing Carton and/or copying the project source files into the derived image. Keep in mind, though, that there are certain things to consider when using the Perl image in this way:
 
 -	This kind of base image will hide the useful bits (such as the `COPY`/`RUN` above) in the image, separating it from more specific Dockerfiles using the base image. This might lead to confusion when creating further derived images, so be aware of how [ONBUILD triggers](https://docs.docker.com/engine/reference/builder/#onbuild) work and plan appropriately.
 -	There is the cost of maintaining an extra base image build, so if you're working on a single Carton project and/or plan to publish it, then it may be preferable to derive directly from a versioned `perl` image instead.

+ 1 - 1
photon/content.md

@@ -10,7 +10,7 @@ See the [FAQ](http://vmware.github.io/photon/assets/files/photon_faqs.pdf) for m
 
 ## How to use these images
 
-Photon OS images are intended for use in the **FROM** field of an application's `Dockerfile`. For example, to use VMware Photon 1.0RC as the base of an image, specify `FROM photon:1.0RC`.
+Photon OS images are intended for use in the **FROM** field of an application's `Dockerfile`. For example, to use VMware Photon 1.0RC as the base of an image, specify `FROM %%IMAGE%%:1.0RC`.
 
 ## Support
 

+ 5 - 5
php-zendserver/content.md

@@ -33,12 +33,12 @@ Zend Server is shared on [Docker-Hub] as **php-zendserver**.
 
 To start a single Zend Server instance, execute:
 
-	    $ docker run php-zendserver
+	    $ docker run %%IMAGE%%
 
 -	You can specify the PHP and Zend Server version by adding ':<php-version>' or ':<ZS-version>-php<version>' to the 'docker run' command.
 
		For example:
-		$docker run php-zendserver:8.0-php5.6
+		$ docker run %%IMAGE%%:8.0-php5.6
 
 #### Available versions:
 
@@ -50,11 +50,11 @@ To start a single Zend Server instance, execute:
 
 To start a Zend Server cluster, execute the following command for each cluster node:
 
-	    $ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend php-zendserver
+	    $ docker run -e MYSQL_HOSTNAME=<db-ip> -e MYSQL_PORT=3306 -e MYSQL_USERNAME=<username> -e MYSQL_PASSWORD=<password> -e MYSQL_DBNAME=zend %%IMAGE%%
 
 #### Bring your own license
 
-To use your own Zend Server license: $ docker run php-zendserver -e ZEND_LICENSE_KEY=<license-key> -e ZEND_LICENSE_ORDER=<order-number>
+To use your own Zend Server license, pass the license key and order number as environment variables: $ docker run -e ZEND_LICENSE_KEY=<license-key> -e ZEND_LICENSE_ORDER=<order-number> %%IMAGE%%
 
 #### Launching the Container from Dockerfile
 
@@ -82,7 +82,7 @@ Once started, the container will output the information required to access the P
 
 To access the container **remotely**, port forwarding must be configured, either manually or using Docker. For example, this command maps container port 80 to host port 88, and container port 10081 (the Zend Server UI port) to host port 10088:
 
-	    $ docker run -p 88:80 -p 10088:10081 php-zendserver
+	    $ docker run -p 88:80 -p 10088:10081 %%IMAGE%%
 
 ##### For clustered instances:
 

+ 12 - 12
php/content.md

@@ -15,7 +15,7 @@ For PHP projects run through the command line interface (CLI), you can do the fo
 ### Create a `Dockerfile` in your PHP project
 
 ```dockerfile
-FROM php:7.0-cli
+FROM %%IMAGE%%:7.0-cli
 COPY . /usr/src/myapp
 WORKDIR /usr/src/myapp
 CMD [ "php", "./your-script.php" ]
@@ -33,7 +33,7 @@ $ docker run -it --rm --name my-running-app my-php-app
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a PHP script by using the PHP Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp php:7.0-cli php your-script.php
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:7.0-cli php your-script.php
 ```
 
 ## With Apache
@@ -43,7 +43,7 @@ More commonly, you will probably want to run PHP in conjunction with Apache http
 ### Create a `Dockerfile` in your PHP project
 
 ```dockerfile
-FROM php:7.0-apache
+FROM %%IMAGE%%:7.0-apache
 COPY src/ /var/www/html/
 ```
 
@@ -57,7 +57,7 @@ $ docker run -d --name my-running-app my-php-app
 We recommend that you add a custom `php.ini` configuration. `COPY` it into `/usr/local/etc/php` by adding one more line to the Dockerfile above and running the same commands to build and run:
 
 ```dockerfile
-FROM php:7.0-apache
+FROM %%IMAGE%%:7.0-apache
 COPY config/php.ini /usr/local/etc/php/
 COPY src/ /var/www/html/
 ```
@@ -69,7 +69,7 @@ Where `src/` is the directory containing all your PHP code and `config/` contain
 If you don't want to include a `Dockerfile` in your project, it is sufficient to do the following:
 
 ```console
-$ docker run -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html php:7.0-apache
+$ docker run -d -p 80:80 --name my-apache-php-app -v "$PWD":/var/www/html %%IMAGE%%:7.0-apache
 ```
 
 ### How to install more PHP extensions
@@ -79,7 +79,7 @@ We provide the helper scripts `docker-php-ext-configure`, `docker-php-ext-instal
 In order to keep the images smaller, PHP's source is kept in a compressed tar file. To facilitate linking of PHP's source with any extension, we also provide the helper script `docker-php-source` to easily extract the tar or delete the extracted source. Note: if you do use `docker-php-source` to extract the source, be sure to delete it in the same layer of the docker image.
 
 ```Dockerfile
-FROM php:7.0-apache
+FROM %%IMAGE%%:7.0-apache
 RUN docker-php-source extract \
 	# do important things \
 	&& docker-php-source delete
@@ -90,7 +90,7 @@ RUN docker-php-source extract \
 For example, if you want to have a PHP-FPM image with `iconv`, `mcrypt` and `gd` extensions, you can inherit the base image that you like, and write your own `Dockerfile` like this:
 
 ```dockerfile
-FROM php:7.0-fpm
+FROM %%IMAGE%%:7.0-fpm
 RUN apt-get update && apt-get install -y \
 		libfreetype6-dev \
 		libjpeg62-turbo-dev \
@@ -108,14 +108,14 @@ Remember, you must install dependencies for your extensions manually. If an exte
 Some extensions are not provided with the PHP source, but are instead available through [PECL](https://pecl.php.net/). To install a PECL extension, use `pecl install` to download and compile it, then use `docker-php-ext-enable` to enable it:
 
 ```dockerfile
-FROM php:7.1-fpm
+FROM %%IMAGE%%:7.1-fpm
 RUN pecl install redis-3.1.0 \
 	&& pecl install xdebug-2.5.0 \
 	&& docker-php-ext-enable redis xdebug
 ```
 
 ```dockerfile
-FROM php:5.6-fpm
+FROM %%IMAGE%%:5.6-fpm
 RUN apt-get update && apt-get install -y libmemcached-dev zlib1g-dev \
 	&& pecl install memcached-2.2.0 \
 	&& docker-php-ext-enable memcached
@@ -126,7 +126,7 @@ RUN apt-get update && apt-get install -y libmemcached-dev zlib1g-dev \
 Some extensions are not provided via either Core or PECL; these can be installed too, although the process is less automated:
 
 ```dockerfile
-FROM php:5.6-apache
+FROM %%IMAGE%%:5.6-apache
 RUN curl -fsSL 'https://xcache.lighttpd.net/pub/Releases/3.2.0/xcache-3.2.0.tar.gz' -o xcache.tar.gz \
 	&& mkdir -p xcache \
 	&& tar -xf xcache.tar.gz -C xcache --strip-components=1 \
@@ -145,7 +145,7 @@ RUN curl -fsSL 'https://xcache.lighttpd.net/pub/Releases/3.2.0/xcache-3.2.0.tar.
 The `docker-php-ext-*` scripts *can* accept an arbitrary path, but it must be absolute (to disambiguate from built-in extension names), so the above example could also be written as the following:
 
 ```dockerfile
-FROM php:5.6-apache
+FROM %%IMAGE%%:5.6-apache
 RUN curl -fsSL 'https://xcache.lighttpd.net/pub/Releases/3.2.0/xcache-3.2.0.tar.gz' -o xcache.tar.gz \
 	&& mkdir -p /tmp/xcache \
 	&& tar -xf xcache.tar.gz -C /tmp/xcache --strip-components=1 \
@@ -160,7 +160,7 @@ RUN curl -fsSL 'https://xcache.lighttpd.net/pub/Releases/3.2.0/xcache-3.2.0.tar.
 Some applications may wish to change the default `DocumentRoot` in Apache (away from `/var/www/html`). The following demonstrates one way to do so using an environment variable (which can then be modified at container runtime as well):
 
 ```dockerfile
-FROM php:7.1-apache
+FROM %%IMAGE%%:7.1-apache
 
 ENV APACHE_DOCUMENT_ROOT /path/to/new/root
 

+ 1 - 1
piwik/content.md

@@ -15,7 +15,7 @@ Piwik is the leading open-source analytics platform that gives you more than jus
 # How to use this image
 
 ```console
-$ docker run --name some-%%REPO%% --link some-mysql:db -d %%REPO%%
+$ docker run --name some-%%REPO%% --link some-mysql:db -d %%IMAGE%%
 ```
 
 Now you can access PHP-FPM running on port 9000 inside the container. If you want to access it from the Internet, we recommend putting a reverse proxy in front. You can find more information on that in the [docker-compose](#docker-compose) section.

+ 6 - 6
plone/content.md

@@ -16,7 +16,7 @@
 This will download and start the latest Plone 5 container, based on [Debian](https://www.debian.org/).
 
 ```console
-$ docker run -p 8080:8080 plone
+$ docker run -p 8080:8080 %%IMAGE%%
 ```
 
 This image includes `EXPOSE 8080` (the Plone port), so standard container linking will make it automatically available to the linked containers. Now you can add a Plone Site at http://localhost:8080 - default Zope user and password are `admin/admin`.
@@ -26,14 +26,14 @@ This image includes `EXPOSE 8080` (the Plone port), so standard container linkin
 Start ZEO server
 
 ```console
-$ docker run --name=zeo plone zeoserver
+$ docker run --name=zeo %%IMAGE%% zeoserver
 ```
 
 Start 2 Plone clients
 
 ```console
-$ docker run --link=zeo -e ZEO_ADDRESS=zeo:8100 -p 8081:8080 plone
-$ docker run --link=zeo -e ZEO_ADDRESS=zeo:8100 -p 8082:8080 plone
+$ docker run --link=zeo -e ZEO_ADDRESS=zeo:8100 -p 8081:8080 %%IMAGE%%
+$ docker run --link=zeo -e ZEO_ADDRESS=zeo:8100 -p 8082:8080 %%IMAGE%%
 ```
 
 ### Start Plone in debug mode
@@ -41,7 +41,7 @@ $ docker run --link=zeo -e ZEO_ADDRESS=zeo:8100 -p 8082:8080 plone
 You can also start Plone in debug mode (`fg`) by running
 
 ```console
-$ docker run -p 8080:8080 plone fg
+$ docker run -p 8080:8080 %%IMAGE%% fg
 ```
 
 ### Add-ons
@@ -49,7 +49,7 @@ $ docker run -p 8080:8080 plone fg
 You can enable Plone add-ons via the `PLONE_ADDONS` environment variable
 
 ```console
-$ docker run -p 8080:8080 -e PLONE_ADDONS="eea.facetednavigation Products.PloneFormGen" plone
+$ docker run -p 8080:8080 -e PLONE_ADDONS="eea.facetednavigation Products.PloneFormGen" %%IMAGE%%
 ```
 
 For more information on how to extend this image with your own custom settings, adding more add-ons, building it or mounting volumes, please refer to our [documentation](https://github.com/plone/plone.docker/blob/master/docs/usage.rst).

+ 9 - 9
postgres/content.md

@@ -13,7 +13,7 @@ PostgreSQL implements the majority of the SQL:2011 standard, is ACID-compliant a
 ## start a postgres instance
 
 ```console
-$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
+$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d %%IMAGE%%
 ```
 
 This image includes `EXPOSE 5432` (the postgres port), so standard container linking will make it automatically available to the linked containers. The default `postgres` user and database are created in the entrypoint with `initdb`.
@@ -30,7 +30,7 @@ $ docker run --name some-app --link some-postgres:postgres -d application-that-u
 ## ... or via `psql`
 
 ```console
-$ docker run -it --rm --link some-postgres:postgres postgres psql -h postgres -U postgres
+$ docker run -it --rm --link some-postgres:postgres %%IMAGE%% psql -h postgres -U postgres
 psql (9.5.0)
 Type "help" for help.
 
@@ -81,7 +81,7 @@ This optional environment variable can be used to define another location for th
 As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 
 ```console
-$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres
+$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d %%IMAGE%%
 ```
 
 Currently, this is only supported for `POSTGRES_INITDB_ARGS`, `POSTGRES_PASSWORD`, `POSTGRES_USER`, and `POSTGRES_DB`.
@@ -93,11 +93,11 @@ As of [docker-library/postgres#253](https://github.com/docker-library/postgres/p
 The main caveat to note is that `postgres` doesn't care what UID it runs as (as long as the owner of `/var/lib/postgresql/data` matches), but `initdb` *does* care (and needs the user to exist in `/etc/passwd`):
 
 ```console
-$ docker run -it --rm --user www-data postgres
+$ docker run -it --rm --user www-data %%IMAGE%%
 The files belonging to this database system will be owned by user "www-data".
 ...
 
-$ docker run -it --rm --user 1000:1000 postgres
+$ docker run -it --rm --user 1000:1000 %%IMAGE%%
 initdb: could not look up effective user ID 1000: user does not exist
 ```
 
@@ -106,7 +106,7 @@ The two easiest ways to get around this:
 1.	bind-mount `/etc/passwd` read-only from the host (if the UID you desire is a valid user on your host):
 
 	```console
-	$ docker run -it --rm --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro postgres
+	$ docker run -it --rm --user "$(id -u):$(id -g)" -v /etc/passwd:/etc/passwd:ro %%IMAGE%%
 	The files belonging to this database system will be owned by user "jsmith".
 	...
 	```
@@ -115,12 +115,12 @@ The two easiest ways to get around this:
 
 	```console
 	$ docker volume create pgdata
-	$ docker run -it --rm -v pgdata:/var/lib/postgresql/data postgres
+	$ docker run -it --rm -v pgdata:/var/lib/postgresql/data %%IMAGE%%
 	The files belonging to this database system will be owned by user "postgres".
 	...
 	( once it's finished initializing successfully and is waiting for connections, stop it )
 	$ docker run -it --rm -v pgdata:/var/lib/postgresql/data bash chown -R 1000:1000 /var/lib/postgresql/data
-	$ docker run -it --rm --user 1000:1000 -v pgdata:/var/lib/postgresql/data postgres
+	$ docker run -it --rm --user 1000:1000 -v pgdata:/var/lib/postgresql/data %%IMAGE%%
 	LOG:  database system was shut down at 2017-01-20 00:03:23 UTC
 	LOG:  MultiXact member wraparound protections are now enabled
 	LOG:  autovacuum launcher started
@@ -151,7 +151,7 @@ Additionally, as of [docker-library/postgres#253](https://github.com/docker-libr
 You can also extend the image with a simple `Dockerfile` to set a different locale. The following example will set the default locale to `de_DE.utf8`:
 
 ```dockerfile
-FROM postgres:9.4
+FROM %%IMAGE%%:9.4
 RUN localedef -i de_DE -c -f UTF-8 -A /usr/share/locale/locale.alias de_DE.UTF-8
 ENV LANG de_DE.utf8
 ```

+ 4 - 4
pypy/content.md

@@ -13,14 +13,14 @@ PyPy started out as a Python interpreter written in the Python language itself.
 ## Create a `Dockerfile` in your Python app project
 
 ```dockerfile
-FROM pypy:3-onbuild
+FROM %%IMAGE%%:3-onbuild
 CMD [ "pypy3", "./your-daemon-or-script.py" ]
 ```
 
 or (if you need to use PyPy 2):
 
 ```dockerfile
-FROM pypy:2-onbuild
+FROM %%IMAGE%%:2-onbuild
 CMD [ "pypy", "./your-daemon-or-script.py" ]
 ```
 
@@ -38,11 +38,11 @@ $ docker run -it --rm --name my-running-app my-python-app
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Python script by using the PyPy Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:3 pypy3 your-daemon-or-script.py
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:3 pypy3 your-daemon-or-script.py
 ```
 
 or (again, if you need to use Python 2):
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp pypy:2 pypy your-daemon-or-script.py
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:2 pypy your-daemon-or-script.py
 ```

+ 4 - 4
r-base/content.md

@@ -19,7 +19,7 @@ R is a GNU project. The source code for the R software environment is written pr
 Launch R directly for interactive work:
 
 ```console
-$ docker run -ti --rm r-base
+$ docker run -ti --rm %%IMAGE%%
 ```
 
 ## Batch mode
@@ -27,13 +27,13 @@ $ docker run -ti --rm r-base
 Mount the working directory to run R batch commands. We recommend specifying a non-root user when mounting a volume into the container to avoid permission changes, as illustrated here:
 
 ```console
-$ docker run -ti --rm -v "$PWD":/home/docker -w /home/docker -u docker r-base R CMD check .
+$ docker run -ti --rm -v "$PWD":/home/docker -w /home/docker -u docker %%IMAGE%% R CMD check .
 ```
 
 Alternatively, just run a bash session on the container first. This allows a user to run batch commands and also edit and run scripts:
 
 ```console
-$ docker run -ti --rm r-base /usr/bin/bash
+$ docker run -ti --rm %%IMAGE%% /usr/bin/bash
 $ vim.tiny myscript.R
 ```
 
@@ -48,7 +48,7 @@ $ Rscript myscript.R
 Use `r-base` as a base for your own Dockerfiles. For instance, something along the lines of the following will compile and run your project:
 
 ```dockerfile
-FROM r-base
+FROM %%IMAGE%%
 COPY . /usr/local/src/myscripts
 WORKDIR /usr/local/src/myscripts
 CMD ["Rscript", "myscript.R"]

+ 8 - 8
rabbitmq/content.md

@@ -13,7 +13,7 @@ RabbitMQ is open source message broker software (sometimes called message-orient
 One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should specify `-h`/`--hostname` explicitly for each daemon so that we don't get a random hostname and can keep track of our data:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
+$ docker run -d --hostname my-rabbit --name some-rabbit %%IMAGE%%:3
 ```
 
 If you give that a minute, then do `docker logs some-rabbit`, you'll see in the output a block similar to:
@@ -51,13 +51,13 @@ See the [RabbitMQ "Clustering Guide"](https://www.rabbitmq.com/clustering.html#e
 For setting a consistent cookie (especially useful for clustering but also for remote/cross-container administration via `rabbitmqctl`), use `RABBITMQ_ERLANG_COOKIE`:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3
+$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' %%IMAGE%%:3
 ```
 
 This can then be used from a separate instance to connect:
 
 ```console
-$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3 bash
+$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' %%IMAGE%%:3 bash
 root@f2a2d3d27c75:/# rabbitmqctl -n rabbit@my-rabbit list_users
 Listing users ...
 guest   [administrator]
@@ -66,7 +66,7 @@ guest   [administrator]
 Alternatively, one can also use `RABBITMQ_NODENAME` to make repeated `rabbitmqctl` invocations simpler:
 
 ```console
-$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit rabbitmq:3 bash
+$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit %%IMAGE%%:3 bash
 root@f2a2d3d27c75:/# rabbitmqctl list_users
 Listing users ...
 guest   [administrator]
@@ -77,13 +77,13 @@ guest   [administrator]
 There is a second set of tags provided with the [management plugin](https://www.rabbitmq.com/management.html) installed and enabled by default, which is available on the standard management port of 15672, with the default username and password of `guest` / `guest`:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
+$ docker run -d --hostname my-rabbit --name some-rabbit %%IMAGE%%:3-management
 ```
 
 You can access it by visiting `http://container-ip:15672` in a browser or, if you need access outside the host, on port 8080:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3-management
+$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 %%IMAGE%%:3-management
 ```
 
 You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser.
@@ -93,7 +93,7 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser
 If you wish to change the default username and password of `guest` / `guest`, you can do so with the `RABBITMQ_DEFAULT_USER` and `RABBITMQ_DEFAULT_PASS` environmental variables:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password rabbitmq:3-management
+$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password %%IMAGE%%:3-management
 ```
 
 You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser and use `user`/`password` to gain access to the management console.
@@ -103,7 +103,7 @@ You can then go to `http://localhost:8080` or `http://host-ip:8080` in a browser
 If you wish to change the default vhost, you can do so with the `RABBITMQ_DEFAULT_VHOST` environmental variables:
 
 ```console
-$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_VHOST=my_vhost rabbitmq:3-management
+$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_DEFAULT_VHOST=my_vhost %%IMAGE%%:3-management
 ```
 
 ## Enabling HiPE
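
As a hedged sketch (it assumes the image's `RABBITMQ_HIPE_COMPILE` variable is available and that your Erlang/RabbitMQ combination supports HiPE), precompilation can be requested at startup:

```console
# RABBITMQ_HIPE_COMPILE=1 asks the image to precompile RabbitMQ with HiPE before starting
$ docker run -d --hostname my-rabbit --name some-rabbit -e RABBITMQ_HIPE_COMPILE=1 %%IMAGE%%:3
```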

+ 2 - 2
rakudo-star/content.md

@@ -19,7 +19,7 @@ Perl 6 Language Documentation: [http://doc.perl6.org/](http://doc.perl6.org/)
 Simply running a container with the image will launch a Perl 6 REPL:
 
 ```console
-$ docker run -it rakudo-star
+$ docker run -it %%IMAGE%%
 > say 'Hello, Perl!'
 Hello, Perl!
 ```
@@ -27,7 +27,7 @@ Hello, Perl!
 You can also provide perl6 command line switches to `docker run`:
 
 ```console
-$ docker run -it rakudo-star -e 'say "Hello!"'
+$ docker run -it %%IMAGE%% -e 'say "Hello!"'
 ```
 
 # Contributing/Getting Help

+ 1 - 1
rapidoid/content.md

@@ -11,7 +11,7 @@ Rapidoid is an extremely fast HTTP server and modern Java web framework / applic
 To quickly start Rapidoid and display some basic usage help, run:
 
 ```console
-$ docker run --rm %%REPO%% --help
+$ docker run --rm %%IMAGE%% --help
 ```
 
 Rapidoid can be used in different ways:

+ 5 - 5
redis/content.md

@@ -11,7 +11,7 @@ Redis is an open-source, networked, in-memory, key-value data store with optiona
 ## start a redis instance
 
 ```console
-$ docker run --name some-redis -d redis
+$ docker run --name some-redis -d %%IMAGE%%
 ```
 
 This image includes `EXPOSE 6379` (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
@@ -19,7 +19,7 @@ This image includes `EXPOSE 6379` (the redis port), so standard container linkin
 ## start with persistent storage
 
 ```console
-$ docker run --name some-redis -d redis redis-server --appendonly yes
+$ docker run --name some-redis -d %%IMAGE%% redis-server --appendonly yes
 ```
 
 If persistence is enabled, data is stored in the `VOLUME /data`, which can be used with `--volumes-from some-volume-container` or `-v /docker/host/dir:/data` (see [docs.docker volumes](https://docs.docker.com/engine/tutorials/dockervolumes/)).
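
For example, combining the host-directory variant with append-only persistence might look like this (where `/docker/host/dir` stands in for a directory of your choosing):

```console
# /docker/host/dir is a placeholder for a host directory that will hold the appendonly file
$ docker run --name some-redis -v /docker/host/dir:/data -d %%IMAGE%% redis-server --appendonly yes
```
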
@@ -35,7 +35,7 @@ $ docker run --name some-app --link some-redis:redis -d application-that-uses-re
 ## ... or via `redis-cli`
 
 ```console
-$ docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
+$ docker run -it --link some-redis:redis --rm %%IMAGE%% redis-cli -h redis -p 6379
 ```
 
 ## Additionally, If you want to use your own redis.conf ...
@@ -43,7 +43,7 @@ $ docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
 You can create your own Dockerfile that adds a `redis.conf` from the build context into `/usr/local/etc/redis/`, like so.
 
 ```dockerfile
-FROM redis
+FROM %%IMAGE%%
 COPY redis.conf /usr/local/etc/redis/redis.conf
 CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
 ```
@@ -51,7 +51,7 @@ CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
 Alternatively, you can specify something along the same lines with `docker run` options.
 
 ```console
-$ docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
+$ docker run -v /myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf --name myredis %%IMAGE%% redis-server /usr/local/etc/redis/redis.conf
 ```
 
 Where `/myredis/conf/` is a local directory containing your `redis.conf` file. Using this method means that there is no need for you to have a Dockerfile for your redis container.

+ 4 - 4
redmine/content.md

@@ -13,7 +13,7 @@ Redmine is a free and open source, web-based project management and issue tracki
 This is the simplest setup; just run redmine.
 
 ```console
-$ docker run -d --name some-redmine redmine
+$ docker run -d --name some-redmine %%IMAGE%%
 ```
 
 > not for multi-user production use ([redmine wiki](http://www.redmine.org/projects/redmine/wiki/RedmineInstall#Supported-database-back-ends))
@@ -39,7 +39,7 @@ Running Redmine with a database server is the recommended way.
 2.	start redmine
 
 	```console
-	$ docker run -d --name some-%%REPO%% --link some-postgres:postgres %%REPO%%
+	$ docker run -d --name some-%%REPO%% --link some-postgres:postgres %%IMAGE%%
 	```
 
 ## %%STACK%%
@@ -67,7 +67,7 @@ The Docker documentation is a good starting point for understanding the differen
 2.	Start your `%%REPO%%` container like this:
 
 	```console
-	$ docker run -d --name some-%%REPO%% -v /my/own/datadir:/usr/src/redmine/files --link some-postgres:postgres %%REPO%%
+	$ docker run -d --name some-%%REPO%% -v /my/own/datadir:/usr/src/redmine/files --link some-postgres:postgres %%IMAGE%%
 	```
 
 The `-v /my/own/datadir:/usr/src/redmine/files` part of the command mounts the `/my/own/datadir` directory from the underlying host system as `/usr/src/redmine/files` inside the container, where Redmine will store uploaded files.
@@ -131,7 +131,7 @@ This variable is used to create an initial `config/secrets.yml` and set the `sec
 As an alternative to passing sensitive information via environment variables, `_FILE` may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in `/run/secrets/<secret_name>` files. For example:
 
 ```console
-$ docker run -d --name some-%%REPO%% -e REDMINE_DB_MYSQL_FILE=/run/secrets/mysql-host -e REDMINE_DB_PASSWORD_FILE=/run/secrets/mysql-root %%REPO%%:tag
+$ docker run -d --name some-%%REPO%% -e REDMINE_DB_MYSQL_FILE=/run/secrets/mysql-host -e REDMINE_DB_PASSWORD_FILE=/run/secrets/mysql-root %%IMAGE%%:tag
 ```
 
 Currently, this is only supported for `REDMINE_DB_MYSQL`, `REDMINE_DB_POSTGRES`, `REDMINE_DB_PORT`, `REDMINE_DB_USERNAME`, `REDMINE_DB_PASSWORD`, `REDMINE_DB_DATABASE`, `REDMINE_DB_ENCODING`, and `REDMINE_SECRET_KEY_BASE`.

+ 1 - 1
registry/content.md

@@ -5,7 +5,7 @@ This image contains an implementation of the Docker Registry HTTP API V2 for use
 ## Run a local registry: Quick Version
 
 ```console
-$ docker run -d -p 5000:5000 --restart always --name registry registry:2
+$ docker run -d -p 5000:5000 --restart always --name registry %%IMAGE%%:2
 ```
 
 Now, use it from within Docker:
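
A minimal sketch of pushing an image through this local registry (using `ubuntu` purely as an example image) could be:

```console
# pull an image, retag it for the local registry, then push it there
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu
$ docker push localhost:5000/ubuntu
```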

+ 1 - 1
rethinkdb/content.md

@@ -11,7 +11,7 @@ RethinkDB is an open-source, distributed database built to store JSON documents
 The default CMD of the image is `rethinkdb --bind all`, so the RethinkDB daemon will bind to all network interfaces available to the container (by default, RethinkDB only accepts connections from `localhost`).
 
 ```bash
-docker run --name some-rethink -v "$PWD:/data" -d rethinkdb
+docker run --name some-rethink -v "$PWD:/data" -d %%IMAGE%%
 ```
 
 ## Connect the instance to an application
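
In the spirit of the other examples in these docs, a hedged sketch of linking an application container (`application-that-uses-rethinkdb` is a placeholder for your own image) would be:

```console
# the alias "rdb" is an arbitrary choice; the client driver port inside the container is 28015
$ docker run --name some-app --link some-rethink:rdb -d application-that-uses-rethinkdb
```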

+ 3 - 3
rocket.chat/content.md

@@ -17,7 +17,7 @@ $ docker run --name db -d mongo:3.0 --smallfiles
 Then start Rocket.Chat linked to this mongo instance:
 
 ```console
-$ docker run --name rocketchat --link db -d rocket.chat
+$ docker run --name rocketchat --link db -d %%IMAGE%%
 ```
 
 This will start a Rocket.Chat instance listening on the default Meteor port of 3000 on the container.
@@ -25,7 +25,7 @@ This will start a Rocket.Chat instance listening on the default Meteor port of 3
 If you'd like to be able to access the instance directly at standard port on the host machine:
 
 ```console
-$ docker run --name rocketchat -p 80:3000 --env ROOT_URL=http://localhost --link db -d rocket.chat
+$ docker run --name rocketchat -p 80:3000 --env ROOT_URL=http://localhost --link db -d %%IMAGE%%
 ```
 
 Then, access it via `http://localhost` in a browser. Replace `localhost` in `ROOT_URL` with your own domain name if you are hosting at your own domain.
@@ -33,5 +33,5 @@ Then, access it via `http://localhost` in a browser. Replace `localhost` in `ROO
 If you're using a third party Mongo provider, or working with Kubernetes, you need to override the `MONGO_URL` environment variable:
 
 ```console
-$ docker run --name rocketchat -p 80:3000 --env ROOT_URL=http://localhost --env MONGO_URL=mongodb://mymongourl/mydb -d rocket.chat
+$ docker run --name rocketchat -p 80:3000 --env ROOT_URL=http://localhost --env MONGO_URL=mongodb://mymongourl/mydb -d %%IMAGE%%
 ```

+ 7 - 7
ros/content.md

@@ -11,7 +11,7 @@ The Robot Operating System (ROS) is a set of software libraries and tools that h
 ## Create a `Dockerfile` in your ROS app project
 
 ```dockerfile
-FROM ros:indigo
+FROM %%IMAGE%%:indigo
 # place here your application's setup specifics
 CMD [ "roslaunch", "my-ros-app my-ros-app.launch" ]
 ```
@@ -49,7 +49,7 @@ ROS uses the `~/.ros/` directory for storing logs, and debugging info. If you wi
 For example, to use your own `.ros` folder that already resides in your local home directory (here for a username of `ubuntu`), simply launch the container with an additional volume argument:
 
 ```console
-$ docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
+$ docker run -v "/home/ubuntu/.ros/:/root/.ros/" %%IMAGE%%
 ```
 
 ### Devices
@@ -69,7 +69,7 @@ If we want our all ROS nodes to easily talk to each other, we'll can use a virtu
 > Build a ROS image that includes ROS tutorials using this `Dockerfile`:
 
 ```dockerfile
-FROM ros:indigo-ros-base
+FROM %%IMAGE%%:indigo-ros-base
 # install ros tutorials packages
 RUN apt-get update && apt-get install -y \
     ros-indigo-ros-tutorials \
@@ -80,7 +80,7 @@ RUN apt-get update && apt-get install -y
 > Then to build the image from within the same directory:
 
 ```console
-$ docker build --tag ros:ros-tutorials .
+$ docker build --tag %%IMAGE%%:ros-tutorials .
 ```
 
 #### Create network
@@ -99,7 +99,7 @@ $ docker build --tag ros:ros-tutorials .
 $ docker run -it --rm \
     --net foo \
     --name master \
-    ros:ros-tutorials \
+    %%IMAGE%%:ros-tutorials \
     roscore
 ```
 
@@ -111,7 +111,7 @@ $ docker run -it --rm \
     --name talker \
     --env ROS_HOSTNAME=talker \
     --env ROS_MASTER_URI=http://master:11311 \
-    ros:ros-tutorials \
+    %%IMAGE%%:ros-tutorials \
     rosrun roscpp_tutorials talker
 ```
 
@@ -123,7 +123,7 @@ $ docker run -it --rm \
     --name listener \
     --env ROS_HOSTNAME=listener \
     --env ROS_MASTER_URI=http://master:11311 \
-    ros:ros-tutorials \
+    %%IMAGE%%:ros-tutorials \
     rosrun roscpp_tutorials listener
 ```
 

+ 3 - 3
ruby/content.md

@@ -11,7 +11,7 @@ Ruby is a dynamic, reflective, object-oriented, general-purpose, open-source pro
 ## Create a `Dockerfile` in your Ruby app project
 
 ```dockerfile
-FROM ruby:2.1-onbuild
+FROM %%IMAGE%%:2.1-onbuild
 CMD ["./your-daemon-or-script.rb"]
 ```
 
@@ -32,7 +32,7 @@ $ docker run -it --name my-running-script my-ruby-app
 The `onbuild` tag expects a `Gemfile.lock` in your app directory. This `docker run` will help you generate one. Run it in the root of your app, next to the `Gemfile`:
 
 ```console
-$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
+$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app %%IMAGE%%:2.1 bundle install
 ```
 
 ## Run a single Ruby script
@@ -40,7 +40,7 @@ $ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
 For many simple, single file projects, you may find it inconvenient to write a complete `Dockerfile`. In such cases, you can run a Ruby script by using the Ruby Docker image directly:
 
 ```console
-$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp ruby:2.1 ruby your-daemon-or-script.rb
+$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:2.1 ruby your-daemon-or-script.rb
 ```
 
 ## Encoding

+ 7 - 7
sentry/content.md

@@ -25,13 +25,13 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m
 3.	Generate a new secret key to be shared by all `%%REPO%%` containers. This value will then be used as the `SENTRY_SECRET_KEY` environment variable.
 
 	```console
-	$ docker run --rm sentry config generate-secret-key
+	$ docker run --rm %%IMAGE%% config generate-secret-key
 	```
 
 4.	If this is a new database, you'll need to run `upgrade`
 
 	```console
-	$ docker run -it --rm -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis sentry upgrade
+	$ docker run -it --rm -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis %%IMAGE%% upgrade
 	```
 
 	**Note: the `-it` is important as the initial upgrade will prompt to create an initial user and will fail without it**
@@ -39,14 +39,14 @@ Sentry is a realtime event logging and aggregation platform. It specializes in m
 5.	Now start up Sentry server
 
 	```console
-	$ docker run -d --name my-sentry -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-redis:redis --link sentry-postgres:postgres sentry
+	$ docker run -d --name my-sentry -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-redis:redis --link sentry-postgres:postgres %%IMAGE%%
 	```
 
 6.	The default config needs a celery beat and celery workers, start as many workers as you need (each with a unique name)
 
 	```console
-	$ docker run -d --name sentry-cron -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis sentry run cron
-	$ docker run -d --name sentry-worker-1 -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis sentry run worker
+	$ docker run -d --name sentry-cron -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis %%IMAGE%% run cron
+	$ docker run -d --name sentry-worker-1 -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-postgres:postgres --link sentry-redis:redis %%IMAGE%% run worker
 	```
 
 ### Port mapping
@@ -58,7 +58,7 @@ If you'd like to be able to access the instance from the host without the contai
 If you did not create a superuser during `upgrade`, use the following to create one:
 
 ```console
-$ docker run -it --rm -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-redis:redis --link sentry-postgres:postgres sentry createuser
+$ docker run -it --rm -e SENTRY_SECRET_KEY='<secret-key>' --link sentry-redis:redis --link sentry-postgres:postgres %%IMAGE%% createuser
 ```
 
 ## Environment variables
@@ -70,7 +70,7 @@ When you start the `%%REPO%%` image, you can adjust the configuration of the Sen
 A secret key used for cryptographic functions within Sentry. This key should be unique and consistent across all running instances. You can generate a new secret key doing something like:
 
 ```console
-$ docker run --rm sentry config generate-secret-key
+$ docker run --rm %%IMAGE%% config generate-secret-key
 ```
 
 ### `SENTRY_POSTGRES_HOST`, `SENTRY_POSTGRES_PORT`, `SENTRY_DB_NAME`, `SENTRY_DB_USER`, `SENTRY_DB_PASSWORD`

+ 7 - 7
silverpeas/content.md

@@ -57,7 +57,7 @@ $ docker run --name silverpeas -p 8080:8000 -d \
     -v silverpeas-log:/opt/silverpeas/log \
     -v silverpeas-data:/opt/silverpeas/data \
     --link postgresql:database \
-    silverpeas
+    %%IMAGE%%
 ```
 
 By default, `database` is the hostname used by Silverpeas for its persistence backend. So, as the PostgreSQL database is linked here under the alias `database`, we don't have to explicitly indicate its hostname with the `DB_SERVER` environment variable. The Silverpeas images expose port 8000, and here this port is mapped to port 8080 on the host; Silverpeas is then accessible at `http://localhost:8080/silverpeas`. You can sign in to Silverpeas with the administrator account `SilverAdmin` and the password `SilverAdmin`.
@@ -76,7 +76,7 @@ $ docker run --name silverpeas -p 8080:8000 -d \
     -v silverpeas-log:/opt/silverpeas/log \
     -v silverpeas-data:/opt/silverpeas/data \
     --link postgresql:database \
-    silverpeas
+    %%IMAGE%%
 ```
 
 where `/etc/silverpeas/config.properties` is your own configuration file on the host. For security reasons, we strongly recommend explicitly setting the administrator's credentials with the properties `SILVERPEAS_ADMIN_LOGIN` and `SILVERPEAS_ADMIN_PASSWORD` in the `config.properties` file. (Don't forget to also set the administrator's email address with the property `SILVERPEAS_ADMIN_EMAIL`.)
@@ -114,7 +114,7 @@ $ docker run --name silverpeas -p 8080:8000 -d \
     -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
     -v silverpeas-log:/opt/silverpeas/log \
     -v silverpeas-data:/opt/silverpeas/data \
-    silverpeas
+    %%IMAGE%%
 ```
 
 where `database` is the hostname referred to by the `DB_SERVER` parameter in your `/etc/silverpeas/config.properties` file as the host running the database system, and it is mapped here to the actual IP address of that host. The hostname is added to the `/etc/hosts` file in the container.
@@ -155,7 +155,7 @@ $ docker create --name silverpeas-store \
     -v silverpeas-log:/opt/silverpeas/log \
     -v silverpeas-workflows:/opt/silverpeas/xmlcomponents/workflows \
     -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
-    silverpeas \
+    %%IMAGE%% \
     /bin/true
 ```
 
@@ -165,7 +165,7 @@ Then to mount the volumes in the Silverpeas container:
 $ docker run --name silverpeas -p 8080:8000 -d \
     --link postgresql:database \
     --volumes-from silverpeas-store \
-    silverpeas
+    %%IMAGE%%
 ```
 
 If you have to customize the settings of Silverpeas or add, for example, a new database definition, then specify these settings with the Data Volume Container, so that they remain available to future versions of Silverpeas, which will then be configured just like your previous Silverpeas installation:
@@ -178,7 +178,7 @@ $ docker create --name silverpeas-store \
     -v /etc/silverpeas/config.properties:/opt/silverpeas/configuration/config.properties \
     -v /etc/silverpeas/CustomerSettings.xml:/opt/silverpeas/configuration/silverpeas/CustomerSettings.xml \
     -v /etc/silverpeas/my-datasource.cli:/opt/silverpeas/configuration/jboss/my-datasource.cli \
-    silverpeas \
+    %%IMAGE%% \
     /bin/true
 ```
 
@@ -211,7 +211,7 @@ $ docker run --name silverpeas -p 8080:8000 -d \
     --link postgresql:database \
     --link libreoffice:libreoffice \
     --volumes-from silverpeas-store \
-    silverpeas
+    %%IMAGE%%
 ```
 
 # Logs

+ 10 - 10
solr/content.md

@@ -15,7 +15,7 @@ Learn more on [Apache Solr homepage](http://lucene.apache.org/solr/) and in the
 To run a single Solr server:
 
 ```console
-$ docker run --name my_solr -d -p 8983:8983 -t solr
+$ docker run --name my_solr -d -p 8983:8983 -t %%IMAGE%%
 ```
 
 Then with a web browser go to `http://localhost:8983/` to see the Admin Console (adjust the hostname for your docker host).
@@ -41,7 +41,7 @@ In the UI, find the "Core selector" popup menu and select the "gettingstarted" c
 For convenience, there is a single command that starts Solr, creates a collection called "demo", and loads sample data into it:
 
 ```console
-$ docker run --name solr_demo -d -P solr solr-demo
+$ docker run --name solr_demo -d -P %%IMAGE%% solr-demo
 ```
 
 ## Loading your own data
@@ -56,7 +56,7 @@ $ docker exec -it --user=solr my_solr bin/post -c gettingstarted mydata.xml
 or by using Docker host volumes:
 
 ```console
-$ docker run --name my_solr -d -p 8983:8983 -t -v $HOME/mydata:/opt/solr/mydata solr
+$ docker run --name my_solr -d -p 8983:8983 -t -v $HOME/mydata:/opt/solr/mydata %%IMAGE%%
 $ docker exec -it --user=solr my_solr bin/solr create_core -c gettingstarted
 $ docker exec -it --user=solr my_solr bin/post -c gettingstarted mydata/mydata.xml
 ```
@@ -70,7 +70,7 @@ In addition to the `docker exec` method explained above, you can create a core a
 If you run:
 
 ```console
-$ docker run -d -P solr solr-create -c mycore
+$ docker run -d -P %%IMAGE%% solr-create -c mycore
 ```
 
 the container will:
@@ -84,7 +84,7 @@ the container will:
 You can combine this with mounted volumes to pass in core configuration from your host:
 
 ```console
-$ docker run -d -P -v $PWD/myconfig:/myconfig solr solr-create -c mycore -d /myconfig
+$ docker run -d -P -v $PWD/myconfig:/myconfig %%IMAGE%% solr-create -c mycore -d /myconfig
 ```
 
 When using the `solr-create` command, Solr will log to the standard docker log (inspect with `docker logs`), and the collection creation will happen in the background and log to `/opt/docker-solr/init.log`.
@@ -94,8 +94,8 @@ This first way closely mirrors the manual core creation steps and uses Solr's ow
 The second way of creating a core at start time is using the `solr-precreate` command. This will create the core in the filesystem before running Solr. You should pass it the core name, and optionally the directory to copy the config from (this defaults to Solr's built-in "basic_configs"). For example:
 
 ```console
-$ docker run -d -P solr solr-precreate mycore
-$ docker run -d -P -v $PWD/myconfig:/myconfig solr solr-precreate mycore /myconfig
+$ docker run -d -P %%IMAGE%% solr-precreate mycore
+$ docker run -d -P -v $PWD/myconfig:/myconfig %%IMAGE%% solr-precreate mycore /myconfig
 ```
 
 This method stores the core in an intermediate subdirectory called "mycores". This allows you to use mounted volumes:
@@ -103,7 +103,7 @@ This method stores the core in an intermediate subdirectory called "mycores". Th
 ```console
 $ mkdir mycores
 $ sudo chown 8983:8983 mycores
-$ docker run -d -P -v $PWD/mycores:/opt/solr/server/solr/mycores solr solr-precreate mycore
+$ docker run -d -P -v $PWD/mycores:/opt/solr/server/solr/mycores %%IMAGE%% solr-precreate mycore
 ```
 
 This second way is quicker, easier to monitor because it logs to the docker log, and can fail immediately if something is wrong. But, because it makes assumptions about Solr's "basic_configs", future upstream changes could break that.
@@ -118,7 +118,7 @@ With Docker Compose you can create a Solr container with the index stored in a n
 version: '2'
 services:
   solr:
-    image: solr
+    image: %%IMAGE%%
     ports:
      - "8983:8983"
     volumes:
@@ -150,7 +150,7 @@ grep '^SOLR_HEAP=' /opt/solr/bin/solr.in.sh
 you can run:
 
 ```console
-$ docker run --name solr_heap1 -d -P -v $PWD/docs/set-heap.sh:/docker-entrypoint-initdb.d/set-heap.sh solr
+$ docker run --name solr_heap1 -d -P -v $PWD/docs/set-heap.sh:/docker-entrypoint-initdb.d/set-heap.sh %%IMAGE%%
 $ sleep 5
 $ docker logs solr_heap1 | head
 /opt/docker-solr/scripts/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/set-heap.sh

+ 1 - 1
sonarqube/content.md

@@ -13,7 +13,7 @@ SonarQube is an open source platform for continuous inspection of code quality.
 The server is started this way:
 
 ```console
-$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
+$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 %%IMAGE%%
 ```
 
 To analyse a project:
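
For example, for a Maven-based project this can be done from the project directory (a sketch; it assumes the server started above is reachable at `localhost:9000`):

```console
# runs the SonarQube analysis via the sonar-maven-plugin
$ mvn sonar:sonar -Dsonar.host.url=http://localhost:9000
```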

+ 2 - 2
sourcemage/content.md

@@ -11,13 +11,13 @@ All of our scripts are [GPL](https://www.gnu.org/licenses/gpl.html)'d and our pa
 These images are based on our [chroot images](https://sourcemage.org/Install/Chroot). To use them, simply do the following:
 
 ```shell
-$ docker run -it sourcemage
+$ docker run -it %%IMAGE%%
 ```
 
 or
 
 ```shell
-$ docker run -it sourcemage:0.62
+$ docker run -it %%IMAGE%%:0.62
 ```
 
 ---

+ 5 - 5
spiped/content.md

@@ -11,7 +11,7 @@ Spiped (pronounced "ess-pipe-dee") is a utility for creating symmetrically encry
 This image automatically takes the key from the `/spiped/key` file (`-k`) and runs spiped in foreground (`-F`). Other than that it takes the same options *spiped* itself does. You can list the available flags by running the image without arguments:
 
 ```console
-$ docker run -it --rm spiped
+$ docker run -it --rm %%IMAGE%%
 usage: spiped {-e | -d} -s <source socket> -t <target socket> -k <key file>
     [-DFj] [-f | -g] [-n <max # connections>] [-o <connection timeout>]
     [-p <pidfile>] [-r <rtime> | -R]
@@ -20,19 +20,19 @@ usage: spiped {-e | -d} -s <source socket> -t <target socket> -k <key file>
 For example running spiped to take encrypted connections on port 8025 and forward them to port 25 on localhost would look like this:
 
 ```console
-$ docker run -d -v /path/to/keyfile:/spiped/key:ro -p 8025:8025 --init spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:25'
+$ docker run -d -v /path/to/keyfile:/spiped/key:ro -p 8025:8025 --init %%IMAGE%% -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:25'
 ```
 
 Usually you would combine this image with another linked container. The following example would take encrypted connections on port 9200 and forward them to port 9200 in the container with the name `elasticsearch`:
 
 ```console
-$ docker run -d -v /path/to/keyfile:/spiped/key:ro -p 9200:9200 --link elasticsearch:elasticsearch --init spiped -d -s '[0.0.0.0]:9200' -t 'elasticsearch:9200'
+$ docker run -d -v /path/to/keyfile:/spiped/key:ro -p 9200:9200 --link elasticsearch:elasticsearch --init %%IMAGE%% -d -s '[0.0.0.0]:9200' -t 'elasticsearch:9200'
 ```
 
 If you don't need to bind to a privileged port, you can pass `--user spiped` to make *spiped* run as an unprivileged user:
 
 ```console
-$ docker run -d -v /path/to/keyfile:/spiped/key:ro --user spiped -p 9200:9200 --link elasticsearch:elasticsearch --init spiped -d -s '[0.0.0.0]:9200' -t 'elasticsearch:9200'
+$ docker run -d -v /path/to/keyfile:/spiped/key:ro --user spiped -p 9200:9200 --link elasticsearch:elasticsearch --init %%IMAGE%% -d -s '[0.0.0.0]:9200' -t 'elasticsearch:9200'
 ```
 
 ### Generating a key
@@ -40,7 +40,7 @@ $ docker run -d -v /path/to/keyfile:/spiped/key:ro --user spiped -p 9200:9200 --
 You can save a new keyfile named `spiped-keyfile` to the folder `/path/to/keyfile/` by running:
 
 ```console
-$ docker run -it --rm -v /path/to/keyfile:/spiped/key spiped spiped-generate-key.sh
+$ docker run -it --rm -v /path/to/keyfile:/spiped/key %%IMAGE%% spiped-generate-key.sh
 ```
 
 Afterwards transmit `spiped-keyfile` securely to another host (e.g. by using scp).
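
For example (a sketch with placeholder host and paths):

```console
# copies the generated key to the peer host over SSH
$ scp /path/to/keyfile/spiped-keyfile user@other-host:/path/to/keyfile/
```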

+ 8 - 8
storm/content.md

@@ -13,7 +13,7 @@ Apache Storm is a distributed computation framework written predominantly in the
 Assuming you have `topology.jar` in the current directory.
 
 ```console
-$ docker run -it -v $(pwd)/topology.jar:/topology.jar storm storm jar /topology.jar org.apache.storm.starter.ExclamationTopology
+$ docker run -it -v $(pwd)/topology.jar:/topology.jar %%IMAGE%% storm jar /topology.jar org.apache.storm.starter.ExclamationTopology
 ```
 
 ## Setting up a minimal Storm cluster
@@ -27,25 +27,25 @@ $ docker run -it -v $(pwd)/topology.jar:/topology.jar storm storm jar /topology.
 2.	The Nimbus daemon has to be connected with the Zookeeper. It's also a "fail fast" system.
 
 	```console
-	$ docker run -d --restart always --name some-nimbus --link some-zookeeper:zookeeper storm storm nimbus
+	$ docker run -d --restart always --name some-nimbus --link some-zookeeper:zookeeper %%IMAGE%% storm nimbus
 	```
 
 3.	Finally start a single Supervisor node. It will talk to the Nimbus and Zookeeper.
 
 	```console
-	$ docker run -d --restart always --name supervisor --link some-zookeeper:zookeeper --link some-nimbus:nimbus storm storm supervisor
+	$ docker run -d --restart always --name supervisor --link some-zookeeper:zookeeper --link some-nimbus:nimbus %%IMAGE%% storm supervisor
 	```
 
 4.	Now you can submit a topology to our cluster.
 
 	```console
-	$ docker run --link some-nimbus:nimbus -it --rm -v $(pwd)/topology.jar:/topology.jar storm storm jar /topology.jar org.apache.storm.starter.WordCountTopology topology
+	$ docker run --link some-nimbus:nimbus -it --rm -v $(pwd)/topology.jar:/topology.jar %%IMAGE%% storm jar /topology.jar org.apache.storm.starter.WordCountTopology topology
 	```
 
 5.	Optionally, you can start the Storm UI.
 
 	```console
-	$ docker run -d -p 8080:8080 --restart always --name ui --link some-nimbus:nimbus storm storm ui
+	$ docker run -d -p 8080:8080 --restart always --name ui --link some-nimbus:nimbus %%IMAGE%% storm ui
 	```
 
 ## %%COMPOSE%%
@@ -59,13 +59,13 @@ This image uses [default configuration](https://github.com/apache/storm/blob/v1.
 1.	Using command line arguments.
 
 	```console
-	$ docker run -d --restart always --name nimbus storm storm nimbus -c storm.zookeeper.servers='["zookeeper"]'
+	$ docker run -d --restart always --name nimbus %%IMAGE%% storm nimbus -c storm.zookeeper.servers='["zookeeper"]'
 	```
 
 2.	Assuming you have `storm.yaml` in the current directory you can mount it as a volume.
 
 	```console
-	$ docker run -it -v $(pwd)/storm.yaml:/conf/storm.yaml storm storm nimbus
+	$ docker run -it -v $(pwd)/storm.yaml:/conf/storm.yaml %%IMAGE%% storm nimbus
 	```
 
 ## Logging
@@ -77,7 +77,7 @@ This image uses [default logging configuration](https://github.com/apache/storm/
 No data is persisted by default. For convenience, there are `/data` and `/logs` directories in the image, owned by the `storm` user. Use them to persist data and logs with volumes.
 
 ```console
-$ docker run -it -v /logs -v /data storm storm nimbus
+$ docker run -it -v /logs -v /data %%IMAGE%% storm nimbus
 ```
 
 *Please note that using paths other than the predefined ones is likely to cause permission denied errors, because for [security reasons](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#user) Storm runs as the non-root `storm` user.*

+ 4 - 4
swarm/content.md

@@ -12,16 +12,16 @@ Like the other Docker projects, `swarm` follows the "batteries included but remo
 
 ```bash
 # create a cluster
-$ docker run --rm swarm create
+$ docker run --rm %%IMAGE%% create
 6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>
 
 # on each of your nodes, start the swarm agent
 #  <node_ip> doesn't have to be public (eg. 192.168.0.X),
 #  as long as the swarm manager can access it.
-$ docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>
+$ docker run -d %%IMAGE%% join --addr=<node_ip:2375> token://<cluster_id>
 
 # start the manager on any machine or your laptop
-$ docker run -t -p <swarm_port>:2375 -t swarm manage token://<cluster_id>
+$ docker run -t -p <swarm_port>:2375 -t %%IMAGE%% manage token://<cluster_id>
 
 # use the regular docker cli
 $ docker -H tcp://<swarm_ip:swarm_port> info
@@ -31,7 +31,7 @@ $ docker -H tcp://<swarm_ip:swarm_port> logs ...
 ...
 
 # list nodes in your cluster
-$ docker run --rm swarm list token://<cluster_id>
+$ docker run --rm %%IMAGE%% list token://<cluster_id>
 <node_ip:2375>
 ```
 

Some files were not shown because too many files have changed in this diff.