
Merge pull request #2478 from infosiftr/deprecations

Remove deprecated/removed repos
yosifkit, 1 year ago
commit b9bcaab04a
54 changed files with 0 additions and 1898 deletions
  1. centos/README-short.txt (+0 -1)
  2. centos/README.md (+0 -169)
  3. centos/content.md (+0 -108)
  4. centos/deprecated.md (+0 -1)
  5. centos/github-repo (+0 -1)
  6. centos/issues.md (+0 -1)
  7. centos/license.md (+0 -1)
  8. centos/logo.png (BIN)
  9. centos/maintainer.md (+0 -1)
  10. centos/metadata.json (+0 -5)
  11. consul/README-short.txt (+0 -1)
  12. consul/README.md (+0 -258)
  13. consul/content.md (+0 -197)
  14. consul/deprecated.md (+0 -1)
  15. consul/github-repo (+0 -1)
  16. consul/license.md (+0 -1)
  17. consul/logo.svg (+0 -7)
  18. consul/maintainer.md (+0 -1)
  19. consul/metadata.json (+0 -5)
  20. express-gateway/README-short.txt (+0 -1)
  21. express-gateway/README.md (+0 -124)
  22. express-gateway/content.md (+0 -63)
  23. express-gateway/deprecated.md (+0 -1)
  24. express-gateway/github-repo (+0 -1)
  25. express-gateway/license.md (+0 -1)
  26. express-gateway/logo.png (BIN)
  27. express-gateway/maintainer.md (+0 -1)
  28. express-gateway/metadata.json (+0 -5)
  29. jobber/README-short.txt (+0 -1)
  30. jobber/README.md (+0 -74)
  31. jobber/content.md (+0 -13)
  32. jobber/deprecated.md (+0 -1)
  33. jobber/github-repo (+0 -1)
  34. jobber/license.md (+0 -1)
  35. jobber/maintainer.md (+0 -1)
  36. jobber/metadata.json (+0 -5)
  37. nats-streaming/README-short.txt (+0 -1)
  38. nats-streaming/README.md (+0 -340)
  39. nats-streaming/content.md (+0 -279)
  40. nats-streaming/deprecated.md (+0 -1)
  41. nats-streaming/github-repo (+0 -1)
  42. nats-streaming/license.md (+0 -1)
  43. nats-streaming/logo.png (BIN)
  44. nats-streaming/maintainer.md (+0 -1)
  45. nats-streaming/metadata.json (+0 -5)
  46. vault/README-short.txt (+0 -1)
  47. vault/README.md (+0 -129)
  48. vault/content.md (+0 -68)
  49. vault/deprecated.md (+0 -1)
  50. vault/github-repo (+0 -1)
  51. vault/license.md (+0 -1)
  52. vault/logo.svg (+0 -6)
  53. vault/maintainer.md (+0 -1)
  54. vault/metadata.json (+0 -7)

+ 0 - 1
centos/README-short.txt

@@ -1 +0,0 @@
-DEPRECATED; The official build of CentOS.

+ 0 - 169
centos/README.md

@@ -1,169 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "centos/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "centos/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-*All* tags of this image are EOL ([June 30, 2024](https://www.redhat.com/en/topics/linux/centos-linux-eol) / [docker-library/official-images#17094](https://github.com/docker-library/official-images/pull/17094), although the last meaningful update was November 16, 2020, long before the EOL date: [docker-library/official-images#9102](https://github.com/docker-library/official-images/pull/9102); see also https://www.centos.org/centos-linux-eol/ and [docker-library/docs#2205](https://github.com/docker-library/docs/pull/2205)). Please adjust your usage accordingly.
-
-# Quick reference
-
--	**Maintained by**:  
-	[The CentOS Project](https://github.com/CentOS/sig-cloud-instance-images)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://bugs.centos.org](https://bugs.centos.org) or [GitHub](https://github.com/CentOS/sig-cloud-instance-images/issues)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/centos/` directory](https://github.com/docker-library/repo-info/blob/master/repos/centos) ([history](https://github.com/docker-library/repo-info/commits/master/repos/centos))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/centos` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fcentos)  
-	[official-images repo's `library/centos` file](https://github.com/docker-library/official-images/blob/master/library/centos) ([history](https://github.com/docker-library/official-images/commits/master/library/centos))
-
--	**Source of this description**:  
-	[docs repo's `centos/` directory](https://github.com/docker-library/docs/tree/master/centos) ([history](https://github.com/docker-library/docs/commits/master/centos))
-
-# CentOS
-
-CentOS Linux is a community-supported distribution derived from sources freely provided to the public by [Red Hat](ftp://ftp.redhat.com/pub/redhat/linux/enterprise/) for Red Hat Enterprise Linux (RHEL). As such, CentOS Linux aims to be functionally compatible with RHEL. The CentOS Project mainly changes packages to remove upstream vendor branding and artwork. CentOS Linux is no-cost and free to redistribute. Each CentOS Linux version is maintained for up to 10 years (by means of security updates -- the duration of the support interval by Red Hat has varied over time with respect to Sources released). A new CentOS Linux version is released approximately every 2 years and each CentOS Linux version is periodically updated (roughly every 6 months) to support newer hardware. This results in a secure, low-maintenance, reliable, predictable, and reproducible Linux environment.
-
-> [wiki.centos.org](https://wiki.centos.org/FrontPage)
-
-![logo](https://raw.githubusercontent.com/docker-library/docs/c4df0024e2cad985326dc38f6b6ce39abeab59c5/centos/logo.png)
-
-# CentOS image documentation
-
-The `centos:latest` tag is always the most recent version currently available.
-
-## Rolling builds
-
-The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull centos:6` or `docker pull centos:7`
-
-## Minor tags
-
-Additionally, images with minor version tags that correspond to install media are also offered. **These images DO NOT receive updates**, as they are intended to match the installation ISO contents. If you choose to use these images, it is highly recommended that you include `RUN yum -y update && yum clean all` in your Dockerfile, or otherwise address any potential security concerns. To use these images, please specify the minor version tag:
-
-For example: `docker pull centos:5.11` or `docker pull centos:6.6`
-
-## Overlayfs and yum
-
-Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On CentOS 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent centos images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failures - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details.
-
-# Package documentation
-
-By default, the CentOS containers are built using yum's `nodocs` option, which helps reduce the size of the image. If you install a package and discover files missing, please comment out the line `tsflags=nodocs` in `/etc/yum.conf` and reinstall your package.
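As a minimal sketch of the adjustment described above, the `tsflags=nodocs` line can be commented out non-interactively with `sed` (the sample config below is hypothetical; in a real image the file would be `/etc/yum.conf`):

```shell
# Sketch: comment out "tsflags=nodocs" so that reinstalled packages include docs.
conf=$(mktemp)
printf '[main]\ngpgcheck=1\ntsflags=nodocs\nplugins=1\n' > "$conf"
sed -i 's/^tsflags=nodocs/#tsflags=nodocs/' "$conf"
grep 'tsflags' "$conf"   # → #tsflags=nodocs
```

After this change, reinstalling a package (`yum -y reinstall <package>`) would pull in its documentation files.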
-
-# Systemd integration
-
-Systemd is now included in both the centos:7 and centos:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below:
-
-## Dockerfile for systemd base image
-
-```dockerfile
-FROM centos:7
-ENV container docker
-RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
-systemd-tmpfiles-setup.service ] || rm -f $i; done); \
-rm -f /lib/systemd/system/multi-user.target.wants/*;\
-rm -f /etc/systemd/system/*.wants/*;\
-rm -f /lib/systemd/system/local-fs.target.wants/*; \
-rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
-rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
-rm -f /lib/systemd/system/basic.target.wants/*;\
-rm -f /lib/systemd/system/anaconda.target.wants/*;
-VOLUME [ "/sys/fs/cgroup" ]
-CMD ["/usr/sbin/init"]
-```
-
-This Dockerfile deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
-
-```console
-$ docker build --rm -t local/c7-systemd .
-```
-
-## Example systemd enabled app container
-
-In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.
-
-```dockerfile
-FROM local/c7-systemd
-RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
-EXPOSE 80
-CMD ["/usr/sbin/init"]
-```
-
-Build this image:
-
-```console
-$ docker build --rm -t local/c7-systemd-httpd .
-```
-
-## Running a systemd enabled app container
-
-In order to run a container with systemd, you will need to mount the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.
-
-```console
-$ docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
-```
-
-This container is running with systemd in a limited context, with the cgroups filesystem mounted. There have been reports that if you're using an Ubuntu host, you will need to add `-v /tmp/$(mktemp -d):/run` in addition to the cgroups mount.
-
-## A note about vsyscall
-
-CentOS 6 binaries and/or libraries are built to expect some system calls to be accessed via `vsyscall` mappings. Some Linux distributions have opted to disable `vsyscall` entirely (opting exclusively for more secure `vdso` mappings), causing segmentation faults.
-
-If running `docker run --rm -it centos:centos6.7 bash` immediately exits with status code `139`, check to see if your system has disabled vsyscall:
-
-```console
-$ cat /proc/self/maps | egrep 'vdso|vsyscall'
-7fffccfcc000-7fffccfce000 r-xp 00000000 00:00 0                          [vdso]
-$
-```
-
-vs
-
-```console
-$ cat /proc/self/maps | egrep 'vdso|vsyscall'
-7fffe03fe000-7fffe0400000 r-xp 00000000 00:00 0                          [vdso]
-ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
-```
-
-If you do not see a `vsyscall` mapping, and you need to run a CentOS 6 container, try adding `vsyscall=emulate` to the kernel options in your bootloader.
-
-Further reading: [lwn.net](https://lwn.net/Articles/446528/)
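The check above can also be scripted; here is a minimal sketch, assuming a Linux host with a readable `/proc/self/maps`:

```shell
# Sketch: warn when the kernel has no vsyscall mapping, in which case CentOS 6
# containers would die with exit status 139 (SIGSEGV).
if grep -q '\[vsyscall\]' /proc/self/maps; then
    echo "vsyscall mapping present"
else
    echo "no vsyscall mapping: consider adding vsyscall=emulate to the kernel options"
fi
```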
-
-# License
-
-View [license information](https://www.centos.org/legal/) for the software contained in this image.
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `centos/` directory](https://github.com/docker-library/repo-info/tree/master/repos/centos).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 108
centos/content.md

@@ -1,108 +0,0 @@
-# CentOS
-
-CentOS Linux is a community-supported distribution derived from sources freely provided to the public by [Red Hat](ftp://ftp.redhat.com/pub/redhat/linux/enterprise/) for Red Hat Enterprise Linux (RHEL). As such, CentOS Linux aims to be functionally compatible with RHEL. The CentOS Project mainly changes packages to remove upstream vendor branding and artwork. CentOS Linux is no-cost and free to redistribute. Each CentOS Linux version is maintained for up to 10 years (by means of security updates -- the duration of the support interval by Red Hat has varied over time with respect to Sources released). A new CentOS Linux version is released approximately every 2 years and each CentOS Linux version is periodically updated (roughly every 6 months) to support newer hardware. This results in a secure, low-maintenance, reliable, predictable, and reproducible Linux environment.
-
-> [wiki.centos.org](https://wiki.centos.org/FrontPage)
-
-%%LOGO%%
-
-# CentOS image documentation
-
-The `%%IMAGE%%:latest` tag is always the most recent version currently available.
-
-## Rolling builds
-
-The CentOS Project offers regularly updated images for all active releases. These images will be updated monthly or as needed for emergency fixes. These rolling updates are tagged with the major version number only. For example: `docker pull %%IMAGE%%:6` or `docker pull %%IMAGE%%:7`
-
-## Minor tags
-
-Additionally, images with minor version tags that correspond to install media are also offered. **These images DO NOT receive updates**, as they are intended to match the installation ISO contents. If you choose to use these images, it is highly recommended that you include `RUN yum -y update && yum clean all` in your Dockerfile, or otherwise address any potential security concerns. To use these images, please specify the minor version tag:
-
-For example: `docker pull %%IMAGE%%:5.11` or `docker pull %%IMAGE%%:6.6`
-
-## Overlayfs and yum
-
-Recent Docker versions support the [overlayfs](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) backend, which is enabled by default on most distros supporting it from Docker 1.13 onwards. On CentOS 6 and 7, **that backend requires yum-plugin-ovl to be installed and enabled**; while it is installed by default in recent %%IMAGE%% images, make sure you retain the `plugins=1` option in `/etc/yum.conf` if you update that file; otherwise, you may encounter errors related to rpmdb checksum failures - see [Docker ticket 10180](https://github.com/docker/docker/issues/10180) for more details.
-
-# Package documentation
-
-By default, the CentOS containers are built using yum's `nodocs` option, which helps reduce the size of the image. If you install a package and discover files missing, please comment out the line `tsflags=nodocs` in `/etc/yum.conf` and reinstall your package.
-
-# Systemd integration
-
-Systemd is now included in both the %%IMAGE%%:7 and %%IMAGE%%:latest base containers, but it is not active by default. In order to use systemd, you will need to include text similar to the example Dockerfile below:
-
-## Dockerfile for systemd base image
-
-```dockerfile
-FROM %%IMAGE%%:7
-ENV container docker
-RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
-systemd-tmpfiles-setup.service ] || rm -f $i; done); \
-rm -f /lib/systemd/system/multi-user.target.wants/*;\
-rm -f /etc/systemd/system/*.wants/*;\
-rm -f /lib/systemd/system/local-fs.target.wants/*; \
-rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
-rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
-rm -f /lib/systemd/system/basic.target.wants/*;\
-rm -f /lib/systemd/system/anaconda.target.wants/*;
-VOLUME [ "/sys/fs/cgroup" ]
-CMD ["/usr/sbin/init"]
-```
-
-This Dockerfile deletes a number of unit files which might cause issues. From here, you are ready to build your base image.
-
-```console
-$ docker build --rm -t local/c7-systemd .
-```
-
-## Example systemd enabled app container
-
-In order to use the systemd enabled base container created above, you will need to create your `Dockerfile` similar to the one below.
-
-```dockerfile
-FROM local/c7-systemd
-RUN yum -y install httpd; yum clean all; systemctl enable httpd.service
-EXPOSE 80
-CMD ["/usr/sbin/init"]
-```
-
-Build this image:
-
-```console
-$ docker build --rm -t local/c7-systemd-httpd .
-```
-
-## Running a systemd enabled app container
-
-In order to run a container with systemd, you will need to mount the cgroups volumes from the host. Below is an example command that will run the systemd enabled httpd container created earlier.
-
-```console
-$ docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 local/c7-systemd-httpd
-```
-
-This container is running with systemd in a limited context, with the cgroups filesystem mounted. There have been reports that if you're using an Ubuntu host, you will need to add `-v /tmp/$(mktemp -d):/run` in addition to the cgroups mount.
-
-## A note about vsyscall
-
-CentOS 6 binaries and/or libraries are built to expect some system calls to be accessed via `vsyscall` mappings. Some Linux distributions have opted to disable `vsyscall` entirely (opting exclusively for more secure `vdso` mappings), causing segmentation faults.
-
-If running `docker run --rm -it centos:centos6.7 bash` immediately exits with status code `139`, check to see if your system has disabled vsyscall:
-
-```console
-$ cat /proc/self/maps | egrep 'vdso|vsyscall'
-7fffccfcc000-7fffccfce000 r-xp 00000000 00:00 0                          [vdso]
-$
-```
-
-vs
-
-```console
-$ cat /proc/self/maps | egrep 'vdso|vsyscall'
-7fffe03fe000-7fffe0400000 r-xp 00000000 00:00 0                          [vdso]
-ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
-```
-
-If you do not see a `vsyscall` mapping, and you need to run a CentOS 6 container, try adding `vsyscall=emulate` to the kernel options in your bootloader.
-
-Further reading: [lwn.net](https://lwn.net/Articles/446528/)

+ 0 - 1
centos/deprecated.md

@@ -1 +0,0 @@
-*All* tags of this image are EOL ([June 30, 2024](https://www.redhat.com/en/topics/linux/centos-linux-eol) / [docker-library/official-images#17094](https://github.com/docker-library/official-images/pull/17094), although the last meaningful update was November 16, 2020, long before the EOL date: [docker-library/official-images#9102](https://github.com/docker-library/official-images/pull/9102); see also https://www.centos.org/centos-linux-eol/ and [docker-library/docs#2205](https://github.com/docker-library/docs/pull/2205)). Please adjust your usage accordingly.

+ 0 - 1
centos/github-repo

@@ -1 +0,0 @@
-https://github.com/CentOS/sig-cloud-instance-images

+ 0 - 1
centos/issues.md

@@ -1 +0,0 @@
-[https://bugs.centos.org](https://bugs.centos.org) or [GitHub](%%GITHUB-REPO%%/issues)

+ 0 - 1
centos/license.md

@@ -1 +0,0 @@
-View [license information](https://www.centos.org/legal/) for the software contained in this image.

BIN
centos/logo.png


+ 0 - 1
centos/maintainer.md

@@ -1 +0,0 @@
-[The CentOS Project](%%GITHUB-REPO%%)

+ 0 - 5
centos/metadata.json

@@ -1,5 +0,0 @@
-{
-  "hub": {
-    "categories": []
-  }
-}

+ 0 - 1
consul/README-short.txt

@@ -1 +0,0 @@
-Consul is a datacenter runtime that provides service discovery, configuration, and orchestration.

+ 0 - 258
consul/README.md

@@ -1,258 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "consul/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "consul/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-Beginning with Consul 1.16, we will stop publishing official Docker Hub images and publish only our Verified Publisher images. Users of Docker images should pull from [hashicorp/consul](https://hub.docker.com/r/hashicorp/consul) instead of [consul](https://hub.docker.com/_/consul). Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/consul.
-
-# Quick reference
-
--	**Maintained by**:  
-	[HashiCorp](https://github.com/hashicorp/docker-consul)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://github.com/hashicorp/docker-consul/issues](https://github.com/hashicorp/docker-consul/issues?q=)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/consul/` directory](https://github.com/docker-library/repo-info/blob/master/repos/consul) ([history](https://github.com/docker-library/repo-info/commits/master/repos/consul))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/consul` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fconsul)  
-	[official-images repo's `library/consul` file](https://github.com/docker-library/official-images/blob/master/library/consul) ([history](https://github.com/docker-library/official-images/commits/master/library/consul))
-
--	**Source of this description**:  
-	[docs repo's `consul/` directory](https://github.com/docker-library/docs/tree/master/consul) ([history](https://github.com/docker-library/docs/commits/master/consul))
-
-# Consul
-
-Consul is a distributed, highly-available, and multi-datacenter aware tool for service discovery, configuration, and orchestration. Consul enables rapid deployment, configuration, and maintenance of service-oriented architectures at massive scale. For more information, please see:
-
--	[Consul documentation](https://www.consul.io/)
--	[Consul on GitHub](https://github.com/hashicorp/consul)
-
-![logo](https://raw.githubusercontent.com/docker-library/docs/8adb88e1e328c244711742f65319ed4064cff9a2/consul/logo.svg?sanitize=true)
-
-# Consul and Docker
-
-Consul has several moving parts so we'll start with a brief introduction to Consul's architecture and then detail how Consul interacts with Docker. Please see the [Consul Architecture](https://www.consul.io/docs/architecture) guide for more detail on all these concepts.
-
-Each host in a Consul cluster runs the Consul agent, a long running daemon that can be started in client or server mode. Each cluster has at least 1 agent in server mode, and usually 3 or 5 for high availability. The server agents participate in a [consensus protocol](https://www.consul.io/docs/internals/consensus.html), maintain a centralized view of the cluster's state, and respond to queries from other agents in the cluster. The rest of the agents in client mode participate in a [gossip protocol](https://www.consul.io/docs/internals/gossip.html) to discover other agents and check them for failures, and they forward queries about the cluster to the server agents.
-
-Applications running on a given host communicate only with their local Consul agent, using its HTTP APIs or DNS interface. Services on the host are also registered with the local Consul agent, which syncs the information with the Consul servers. Doing the most basic DNS-based service discovery using Consul, an application queries for `foo.service.consul` and gets a randomly shuffled subset of all the hosts providing service "foo". This allows applications to locate services and balance the load without any intermediate proxies. Several HTTP APIs are also available for applications doing a deeper integration with Consul's service discovery capabilities, as well as its other features such as the key/value store.
-
-These concepts also apply when running Consul in Docker. Typically, you'll run a single Consul agent container on each host, running alongside the Docker daemon. You'll also need to configure some of the agents as servers (at least 3 for a basic HA setup). Consul should always be run with `--net=host` in Docker because Consul's consensus and gossip protocols are sensitive to delays and packet loss, so the extra layers involved with other networking types are usually undesirable and unnecessary. We will talk more about this below.
-
-We don't cover Consul's multi-datacenter capability here, but as long as `--net=host` is used, there should be no special considerations for Docker.
-
-# Using the Container
-
-We chose Alpine as a lightweight base with a reasonably small surface area for security concerns, but with enough functionality for development, interactive debugging, and useful health, watch, and exec scripts running under Consul in the container. As of Consul 0.7, the image also includes `curl` since it is so commonly used for health checks.
-
-Consul always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. We also use [gosu](https://github.com/tianon/gosu) to run Consul as a non-root "consul" user for better security. These binaries are all built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image.
-
-Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run consul members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
-
-The container exposes `VOLUME /consul/data`, which is a path where Consul will place its persisted state. This isn't used in any way when running in development mode. For client agents, this stores some information about the cluster and the client's health checks in case the container is restarted. For server agents, this stores the client information plus snapshots and data related to the consensus algorithm and other state like Consul's key/value store and catalog. For servers it is highly desirable to keep this volume's data around when restarting containers to recover from outage scenarios. If this is bind mounted then ownership will be changed to the consul user when the container starts.
-
-The container has a Consul configuration directory set up at `/consul/config` and the agent will load any configuration files placed here by binding a volume or by composing a new image and adding files. Alternatively, configuration can be added by passing the configuration JSON via environment variable `CONSUL_LOCAL_CONFIG`. If this is bind mounted then ownership will be changed to the consul user when the container starts.
-
-Since Consul is almost always run with `--net=host` in Docker, some care is required when configuring Consul's IP addresses. Consul has the concept of its cluster address as well as its client address. The cluster address is the address at which other Consul agents may contact a given agent. The client address is the address where other processes on the host contact Consul in order to make HTTP or DNS requests. You will typically need to tell Consul what its cluster address is when starting so that it binds to the correct interface and advertises a workable interface to the rest of the Consul agents. You'll see this in the examples below as the `-bind=<external ip>` argument to Consul.
-
-The entry point also includes a small utility to look up a client or bind address by interface name. To use this, set the `CONSUL_CLIENT_INTERFACE` and/or `CONSUL_BIND_INTERFACE` environment variables to the name of the interface you'd like Consul to use and a `-client=<interface ip>` and/or `-bind=<interface ip>` argument will be computed and passed to Consul at startup.
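As an illustration of what such a lookup might do, here is a sketch using `iproute2`; this is not the entrypoint's actual code, and `lo` is just an example interface name:

```shell
# Sketch: derive "-bind=<interface ip>" from an interface name, similar in
# spirit to what setting CONSUL_BIND_INTERFACE triggers. Assumes iproute2.
iface=lo
addr=$(ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1)
printf '%s\n' "-bind=$addr"   # prints -bind=127.0.0.1 for the standard loopback
```

In a real deployment you would use the external-facing interface (for example `eth0`, as in the environment-variable examples below) rather than the loopback.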
-
-## Running Consul for Development
-
-```console
-$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
-```
-
-This runs a completely in-memory Consul server agent with default bridge networking and no services exposed on the host, which is useful for development but should not be used in production. For example, if that server is running at internal address 172.17.0.2, you can run a three node cluster for development by starting up two more instances and telling them to join the first node.
-
-```console
-$ docker run -d -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.2
-... server 2 starts
-$ docker run -d -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.2
-... server 3 starts
-```
-
-Then we can query for all the members in the cluster by running a Consul CLI command in the first container:
-
-```console
-$ docker exec -t dev-consul consul members
-Node          Address          Status  Type    Build  Protocol  DC
-579db72c1ae1  172.17.0.3:8301  alive   server  0.6.3  2         dc1
-93fe2309ef19  172.17.0.4:8301  alive   server  0.6.3  2         dc1
-c9caabfd4c2a  172.17.0.2:8301  alive   server  0.6.3  2         dc1
-```
-
-Remember that Consul doesn't use the data volume in this mode - once the container stops all of your state will be wiped out, so please don't use this mode for production. Running completely on the bridge network with the development server is useful for testing multiple instances of Consul on a single machine, which is normally difficult to do because of port conflicts.
-
-Development mode also starts a version of Consul's web UI on port 8500. This can be added to the other Consul configurations by supplying the `-ui` option to Consul on the command line. The web assets are bundled inside the Consul binary in the container.
-
-## Running Consul Agent in Client Mode
-
-```console
-$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' consul agent -bind=<external ip> -retry-join=<root agent ip>
-==> Starting Consul agent...
-==> Starting Consul agent RPC...
-==> Consul agent running!
-         Node name: 'linode'
-        Datacenter: 'dc1'
-            Server: false (bootstrap: false)
-       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
-      Cluster Addr: <external ip> (LAN: 8301, WAN: 8302)
-    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
-             Atlas: <disabled>
-...
-```
-
-This runs a Consul client agent sharing the host's network and advertising the external IP address to the rest of the cluster. Note that the agent defaults to binding its client interfaces to 127.0.0.1, which is the host's loopback interface. This would be a good configuration to use if other containers on the host also use `--net=host`, and it also exposes the agent to processes running directly on the host outside a container, such as HashiCorp's Nomad.
-
-The `-retry-join` parameter specifies the external IP of one other agent in the cluster to use to join at startup. There are several ways to control how an agent joins the cluster, see the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-join`, `-retry-join`, and `-atlas-join` options.
-
-Note also we've set [`leave_on_terminate`](https://www.consul.io/docs/agent/options.html#leave_on_terminate) using the `CONSUL_LOCAL_CONFIG` environment variable. This is recommended for clients, and it will default to `true` in Consul 0.7 and later, at which point setting it explicitly will no longer be necessary.
-
-At startup, the agent will read config JSON files from `/consul/config`. Data will be persisted in the `/consul/data` volume.
-
-Here are some example queries on a host with an external IP of 66.175.220.234:
-
-```console
-$ curl http://localhost:8500/v1/health/service/consul?pretty
-[
-    {
-        "Node": {
-            "Node": "linode",
-            "Address": "66.175.220.234",
-...
-```
-
-```console
-$ dig @localhost -p 8600 consul.service.consul
-; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @localhost -p 8600 consul.service.consul
-; (2 servers found)
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61616
-;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
-;; WARNING: recursion requested but not available
-
-;; QUESTION SECTION:
-;consul.service.consul.         IN      A
-
-;; ANSWER SECTION:
-consul.service.consul.  0       IN      A       66.175.220.234
-...
-```
-
-If you want to expose the Consul interfaces to other containers via a different network, such as the bridge network, use the `-client` option for Consul:
-
-```console
-$ docker run -d --net=host consul agent -bind=<external ip> -client=<bridge ip> -retry-join=<root agent ip>
-==> Starting Consul agent...
-==> Starting Consul agent RPC...
-==> Consul agent running!
-         Node name: 'linode'
-        Datacenter: 'dc1'
-            Server: false (bootstrap: false)
-       Client Addr: <bridge ip> (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
-      Cluster Addr: <external ip> (LAN: 8301, WAN: 8302)
-    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
-             Atlas: <disabled>
-...
-```
-
-With this configuration, Consul's client interfaces will be bound to the bridge IP and available to other containers on that network, but not on the host network. Note that we still keep the cluster address out on the host network for performance. Consul will also accept the `-client=0.0.0.0` option to bind to all interfaces.
-
-## Running Consul Agent in Server Mode
-
-```console
-$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' consul agent -server -bind=<external ip> -retry-join=<root agent ip> -bootstrap-expect=<number of server agents>
-```
-
-This runs a Consul server agent sharing the host's network. All of the network considerations and behavior we covered above for the client agent also apply to the server agent. A single server on its own won't be able to form a quorum and will be waiting for other servers to join.
-
-Just like the client agent, the `-retry-join` parameter specifies the external IP of one other agent in the cluster to use to join at startup. There are several ways to control how an agent joins the cluster, see the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-join`, `-retry-join`, and `-atlas-join` options. The server agent also consumes a `-bootstrap-expect` option that specifies how many server agents to watch for before bootstrapping the cluster for the first time. This provides an easy way to get an orderly startup with a new cluster. See the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-bootstrap` and `-bootstrap-expect` options.
-
-Note also we've set [`skip_leave_on_interrupt`](https://www.consul.io/docs/agent/options.html#skip_leave_on_interrupt) using the `CONSUL_LOCAL_CONFIG` environment variable. This is recommended for servers and will be defaulted to `true` in Consul 0.7 and later, so this will no longer be necessary.
-
-At startup, the agent will read config JSON files from `/consul/config`. Data will be persisted in the `/consul/data` volume.
-
-Once the cluster is bootstrapped and quorum is achieved, you must use care to keep the minimum number of servers operating in order to avoid an outage state for the cluster. The deployment table in the [consensus](https://www.consul.io/docs/internals/consensus.html) guide outlines the number of servers required for different configurations. There's also an [adding/removing servers](https://www.consul.io/docs/guides/servers.html) guide that describes that process, which is relevant to Docker configurations as well. The [outage recovery](https://www.consul.io/docs/guides/outage.html) guide has steps to perform if servers are permanently lost. In general it's best to restart or replace servers one at a time, making sure servers are healthy before proceeding to the next server.
-
-## Exposing Consul's DNS Server on Port 53
-
-By default, Consul's DNS server is exposed on port 8600. Because this is cumbersome to configure with facilities like `resolv.conf`, you may want to expose DNS on port 53. Consul 0.7 and later supports this by setting an environment variable that runs `setcap` on the Consul binary, allowing it to bind to privileged ports. Note that not all Docker storage backends support this feature (notably AUFS).
-
-Here's an example:
-
-```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul -dns-port=53 -recursor=8.8.8.8
-```
-
-This example also includes a recursor configuration that uses Google's DNS servers for non-Consul lookups. You may want to adjust this based on your particular DNS configuration. If you are binding Consul's client interfaces to the host's loopback address, then you should be able to configure your host's `resolv.conf` to route DNS requests to Consul by including "127.0.0.1" as the primary DNS server. This would expose Consul's DNS to all applications running on the host, but due to Docker's built-in DNS server, you can't point to this directly from inside your containers; Docker will issue an error message if you attempt to do this. You must configure Consul to listen on a non-localhost address that is reachable from within other containers.
-
-Once you bind Consul's client interfaces to the bridge or other network, you can use the `--dns` option in your *other containers* in order for them to use Consul's DNS server, mapped to port 53. Here's an example:
-
-```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' consul agent -dns-port=53 -recursor=8.8.8.8 -bind=<bridge ip>
-```
-
-Now start another container and point it at Consul's DNS, using the bridge address of the host:
-
-```console
-$ docker run -i --dns=<bridge ip> -t ubuntu sh -c "apt-get update && apt-get install -y dnsutils && dig consul.service.consul"
-...
-;; ANSWER SECTION:
-consul.service.consul.  0       IN      A       66.175.220.234
-...
-```
-
-In the example above, adding the bridge address to the host's `/etc/resolv.conf` file should expose it to all containers without running with the `--dns` option.
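-
-For example, assuming Docker's default bridge address of `172.17.0.1` (an assumption; check yours with `ip addr show docker0`), the top of the host's `/etc/resolv.conf` might read:
-
-```
-nameserver 172.17.0.1
-```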
-
-## Service Discovery with Containers
-
-There are several approaches you can use to register services running in containers with Consul. For manual configuration, your containers can use the local agent's APIs to register and deregister themselves, see the [Agent API](https://www.consul.io/docs/agent/http/agent.html) for more details. Another strategy is to create a derived Consul container for each host type which includes JSON config files for Consul to parse at startup, see [Services](https://www.consul.io/docs/agent/services.html) for more information. Both of these approaches are fairly cumbersome, and the configured services may fall out of sync if containers die or additional containers are started.
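-
-As a sketch of the derived-container approach, a hypothetical service definition file (say, `web.json`) baked into `/consul/config` could look like:
-
-```json
-{
-  "service": {
-    "name": "web",
-    "tags": ["nginx"],
-    "port": 80,
-    "check": {
-      "http": "http://localhost:80/",
-      "interval": "10s"
-    }
-  }
-}
-```
-
-The agent parses and registers this at startup, so the service appears in the catalog as soon as the container is running.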
-
-If you run your containers under [HashiCorp's Nomad](https://www.nomadproject.io/) scheduler, it has [first class support for Consul](https://www.nomadproject.io/docs/jobspec/servicediscovery.html). The Nomad agent runs on each host alongside the Consul agent. When jobs are scheduled on a given host, the Nomad agent automatically takes care of syncing the Consul agent with the service information. This is very easy to manage, and even services on hosts running outside of Docker containers can be managed by Nomad and registered with Consul. You can find out more about running Docker under Nomad in the [Docker Driver](https://www.nomadproject.io/docs/drivers/docker.html) guide.
-
-Other open source options include [Registrator](http://gliderlabs.com/registrator/latest/) from Glider Labs and [ContainerPilot](https://www.joyent.com/containerpilot) from Joyent. Registrator works by running a Registrator instance on each host, alongside the Consul agent. Registrator monitors the Docker daemon for container stop and start events, and handles service registration with Consul using the container names and exposed ports as the service information. ContainerPilot manages service registration using tooling running inside the container to register services with Consul on start, manage a Consul TTL health check while running, and deregister services when the container stops.
-
-## Running Health Checks in Docker Containers
-
-Consul has the ability to execute health checks inside containers. If the Docker daemon is exposed to the Consul agent and the `DOCKER_HOST` environment variable is set, then checks can be configured with the Docker container ID to execute in. See the [health checks](https://www.consul.io/docs/agent/checks.html) guide for more details.
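-
-A hedged sketch of such a check definition (the container ID and script path are placeholders; consult the health checks guide for the exact schema in your Consul version):
-
-```json
-{
-  "check": {
-    "name": "app-health",
-    "docker_container_id": "<container id>",
-    "shell": "/bin/sh",
-    "script": "/usr/local/bin/check.sh",
-    "interval": "30s"
-  }
-}
-```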
-
-# License
-
-View [license information](https://raw.githubusercontent.com/hashicorp/consul/master/LICENSE) for the software contained in this image.
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `consul/` directory](https://github.com/docker-library/repo-info/tree/master/repos/consul).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 197
consul/content.md

@@ -1,197 +0,0 @@
-# Consul
-
-Consul is a distributed, highly-available, and multi-datacenter aware tool for service discovery, configuration, and orchestration. Consul enables rapid deployment, configuration, and maintenance of service-oriented architectures at massive scale. For more information, please see:
-
--	[Consul documentation](https://www.consul.io/)
--	[Consul on GitHub](https://github.com/hashicorp/consul)
-
-%%LOGO%%
-
-# Consul and Docker
-
-Consul has several moving parts so we'll start with a brief introduction to Consul's architecture and then detail how Consul interacts with Docker. Please see the [Consul Architecture](https://www.consul.io/docs/architecture) guide for more detail on all these concepts.
-
-Each host in a Consul cluster runs the Consul agent, a long running daemon that can be started in client or server mode. Each cluster has at least 1 agent in server mode, and usually 3 or 5 for high availability. The server agents participate in a [consensus protocol](https://www.consul.io/docs/internals/consensus.html), maintain a centralized view of the cluster's state, and respond to queries from other agents in the cluster. The rest of the agents in client mode participate in a [gossip protocol](https://www.consul.io/docs/internals/gossip.html) to discover other agents and check them for failures, and they forward queries about the cluster to the server agents.
-
-Applications running on a given host communicate only with their local Consul agent, using its HTTP APIs or DNS interface. Services on the host are also registered with the local Consul agent, which syncs the information with the Consul servers. Doing the most basic DNS-based service discovery using Consul, an application queries for `foo.service.consul` and gets a randomly shuffled subset of all the hosts providing service "foo". This allows applications to locate services and balance the load without any intermediate proxies. Several HTTP APIs are also available for applications doing a deeper integration with Consul's service discovery capabilities, as well as its other features such as the key/value store.
-
-These concepts also apply when running Consul in Docker. Typically, you'll run a single Consul agent container on each host, running alongside the Docker daemon. You'll also need to configure some of the agents as servers (at least 3 for a basic HA setup). Consul should always be run with `--net=host` in Docker because Consul's consensus and gossip protocols are sensitive to delays and packet loss, so the extra layers involved with other networking types are usually undesirable and unnecessary. We will talk more about this below.
-
-We don't cover Consul's multi-datacenter capability here, but as long as `--net=host` is used, there should be no special considerations for Docker.
-
-# Using the Container
-
-We chose Alpine as a lightweight base with a reasonably small surface area for security concerns, but with enough functionality for development, interactive debugging, and useful health, watch, and exec scripts running under Consul in the container. As of Consul 0.7, the image also includes `curl` since it is so commonly used for health checks.
-
-Consul always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. We also use [gosu](https://github.com/tianon/gosu) to run Consul as a non-root "consul" user for better security. These binaries are all built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image.
-
-Running the Consul container with no arguments will give you a Consul server in [development mode](https://www.consul.io/docs/agent/options.html#_dev). The provided entry point script will also look for Consul subcommands and run `consul` as the correct user and with that subcommand. For example, you can execute `docker run %%IMAGE%% members` and it will run the `consul members` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `agent` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
-
-The container exposes `VOLUME /consul/data`, which is a path where Consul will place its persisted state. This isn't used in any way when running in development mode. For client agents, this stores some information about the cluster and the client's health checks in case the container is restarted. For server agents, this stores the client information plus snapshots and data related to the consensus algorithm and other state like Consul's key/value store and catalog. For servers it is highly desirable to keep this volume's data around when restarting containers to recover from outage scenarios. If this is bind mounted then ownership will be changed to the consul user when the container starts.
-
-The container has a Consul configuration directory set up at `/consul/config` and the agent will load any configuration files placed here by binding a volume or by composing a new image and adding files. Alternatively, configuration can be added by passing the configuration JSON via environment variable `CONSUL_LOCAL_CONFIG`. If this is bind mounted then ownership will be changed to the consul user when the container starts.
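-
-For example, a minimal sketch of passing configuration through that environment variable (the keys and values are illustrative, not required):
-
-```console
-$ docker run -d -e 'CONSUL_LOCAL_CONFIG={"datacenter":"dc1","log_level":"DEBUG"}' %%IMAGE%% agent -dev
-```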
-
-Since Consul is almost always run with `--net=host` in Docker, some care is required when configuring Consul's IP addresses. Consul has the concept of its cluster address as well as its client address. The cluster address is the address at which other Consul agents may contact a given agent. The client address is the address where other processes on the host contact Consul in order to make HTTP or DNS requests. You will typically need to tell Consul what its cluster address is when starting so that it binds to the correct interface and advertises a workable interface to the rest of the Consul agents. You'll see this in the examples below as the `-bind=<external ip>` argument to Consul.
-
-The entry point also includes a small utility to look up a client or bind address by interface name. To use this, set the `CONSUL_CLIENT_INTERFACE` and/or `CONSUL_BIND_INTERFACE` environment variables to the name of the interface you'd like Consul to use and a `-client=<interface ip>` and/or `-bind=<interface ip>` argument will be computed and passed to Consul at startup.
-
-## Running Consul for Development
-
-```console
-$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 %%IMAGE%%
-```
-
-This runs a completely in-memory Consul server agent with default bridge networking and no services exposed on the host, which is useful for development but should not be used in production. For example, if that server is running at internal address 172.17.0.2, you can run a three node cluster for development by starting up two more instances and telling them to join the first node.
-
-```console
-$ docker run -d -e CONSUL_BIND_INTERFACE=eth0 %%IMAGE%% agent -dev -join=172.17.0.2
-... server 2 starts
-$ docker run -d -e CONSUL_BIND_INTERFACE=eth0 %%IMAGE%% agent -dev -join=172.17.0.2
-... server 3 starts
-```
-
-Then we can query for all the members in the cluster by running a Consul CLI command in the first container:
-
-```console
-$ docker exec -t dev-consul %%IMAGE%% members
-Node          Address          Status  Type    Build  Protocol  DC
-579db72c1ae1  172.17.0.3:8301  alive   server  0.6.3  2         dc1
-93fe2309ef19  172.17.0.4:8301  alive   server  0.6.3  2         dc1
-c9caabfd4c2a  172.17.0.2:8301  alive   server  0.6.3  2         dc1
-```
-
-Remember that Consul doesn't use the data volume in this mode - once the container stops all of your state will be wiped out, so please don't use this mode for production. Running completely on the bridge network with the development server is useful for testing multiple instances of Consul on a single machine, which is normally difficult to do because of port conflicts.
-
-Development mode also starts a version of Consul's web UI on port 8500. This can be added to the other Consul configurations by supplying the `-ui` option to Consul on the command line. The web assets are bundled inside the Consul binary in the container.
-
-## Running Consul Agent in Client Mode
-
-```console
-$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' %%IMAGE%% agent -bind=<external ip> -retry-join=<root agent ip>
-==> Starting Consul agent...
-==> Starting Consul agent RPC...
-==> Consul agent running!
-         Node name: 'linode'
-        Datacenter: 'dc1'
-            Server: false (bootstrap: false)
-       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
-      Cluster Addr: <external ip> (LAN: 8301, WAN: 8302)
-    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
-             Atlas: <disabled>
-...
-```
-
-This runs a Consul client agent sharing the host's network and advertising the external IP address to the rest of the cluster. Note that the agent defaults to binding its client interfaces to 127.0.0.1, which is the host's loopback interface. This would be a good configuration to use if other containers on the host also use `--net=host`, and it also exposes the agent to processes running directly on the host outside a container, such as HashiCorp's Nomad.
-
-The `-retry-join` parameter specifies the external IP of one other agent in the cluster to use to join at startup. There are several ways to control how an agent joins the cluster, see the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-join`, `-retry-join`, and `-atlas-join` options.
-
-Note also we've set [`leave_on_terminate`](https://www.consul.io/docs/agent/options.html#leave_on_terminate) using the `CONSUL_LOCAL_CONFIG` environment variable. This is recommended for clients, and it will default to `true` in Consul 0.7 and later, so setting it will no longer be necessary.
-
-At startup, the agent will read config JSON files from `/consul/config`. Data will be persisted in the `/consul/data` volume.
-
-Here are some example queries on a host with an external IP of 66.175.220.234:
-
-```console
-$ curl http://localhost:8500/v1/health/service/consul?pretty
-[
-    {
-        "Node": {
-            "Node": "linode",
-            "Address": "66.175.220.234",
-...
-```
-
-```console
-$ dig @localhost -p 8600 consul.service.consul
-; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> @localhost -p 8600 consul.service.consul
-; (2 servers found)
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61616
-;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
-;; WARNING: recursion requested but not available
-
-;; QUESTION SECTION:
-;consul.service.consul.         IN      A
-
-;; ANSWER SECTION:
-consul.service.consul.  0       IN      A       66.175.220.234
-...
-```
-
-If you want to expose the Consul interfaces to other containers via a different network, such as the bridge network, use the `-client` option for Consul:
-
-```console
-$ docker run -d --net=host %%IMAGE%% agent -bind=<external ip> -client=<bridge ip> -retry-join=<root agent ip>
-==> Starting Consul agent...
-==> Starting Consul agent RPC...
-==> Consul agent running!
-         Node name: 'linode'
-        Datacenter: 'dc1'
-            Server: false (bootstrap: false)
-       Client Addr: <bridge ip> (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
-      Cluster Addr: <external ip> (LAN: 8301, WAN: 8302)
-    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
-             Atlas: <disabled>
-...
-```
-
-With this configuration, Consul's client interfaces will be bound to the bridge IP and available to other containers on that network, but not on the host network. Note that we still keep the cluster address out on the host network for performance. Consul will also accept the `-client=0.0.0.0` option to bind to all interfaces.
-
-## Running Consul Agent in Server Mode
-
-```console
-$ docker run -d --net=host -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' %%IMAGE%% agent -server -bind=<external ip> -retry-join=<root agent ip> -bootstrap-expect=<number of server agents>
-```
-
-This runs a Consul server agent sharing the host's network. All of the network considerations and behavior we covered above for the client agent also apply to the server agent. A single server on its own won't be able to form a quorum and will be waiting for other servers to join.
-
-Just like the client agent, the `-retry-join` parameter specifies the external IP of one other agent in the cluster to use to join at startup. There are several ways to control how an agent joins the cluster, see the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-join`, `-retry-join`, and `-atlas-join` options. The server agent also consumes a `-bootstrap-expect` option that specifies how many server agents to watch for before bootstrapping the cluster for the first time. This provides an easy way to get an orderly startup with a new cluster. See the [agent configuration](https://www.consul.io/docs/agent/options.html) guide for more details on the `-bootstrap` and `-bootstrap-expect` options.
-
-Note also we've set [`skip_leave_on_interrupt`](https://www.consul.io/docs/agent/options.html#skip_leave_on_interrupt) using the `CONSUL_LOCAL_CONFIG` environment variable. This is recommended for servers and will be defaulted to `true` in Consul 0.7 and later, so this will no longer be necessary.
-
-At startup, the agent will read config JSON files from `/consul/config`. Data will be persisted in the `/consul/data` volume.
-
-Once the cluster is bootstrapped and quorum is achieved, you must use care to keep the minimum number of servers operating in order to avoid an outage state for the cluster. The deployment table in the [consensus](https://www.consul.io/docs/internals/consensus.html) guide outlines the number of servers required for different configurations. There's also an [adding/removing servers](https://www.consul.io/docs/guides/servers.html) guide that describes that process, which is relevant to Docker configurations as well. The [outage recovery](https://www.consul.io/docs/guides/outage.html) guide has steps to perform if servers are permanently lost. In general it's best to restart or replace servers one at a time, making sure servers are healthy before proceeding to the next server.
-
-## Exposing Consul's DNS Server on Port 53
-
-By default, Consul's DNS server is exposed on port 8600. Because this is cumbersome to configure with facilities like `resolv.conf`, you may want to expose DNS on port 53. Consul 0.7 and later supports this by setting an environment variable that runs `setcap` on the Consul binary, allowing it to bind to privileged ports. Note that not all Docker storage backends support this feature (notably AUFS).
-
-Here's an example:
-
-```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% -dns-port=53 -recursor=8.8.8.8
-```
-
-This example also includes a recursor configuration that uses Google's DNS servers for non-Consul lookups. You may want to adjust this based on your particular DNS configuration. If you are binding Consul's client interfaces to the host's loopback address, then you should be able to configure your host's `resolv.conf` to route DNS requests to Consul by including "127.0.0.1" as the primary DNS server. This would expose Consul's DNS to all applications running on the host, but due to Docker's built-in DNS server, you can't point to this directly from inside your containers; Docker will issue an error message if you attempt to do this. You must configure Consul to listen on a non-localhost address that is reachable from within other containers.
-
-Once you bind Consul's client interfaces to the bridge or other network, you can use the `--dns` option in your *other containers* in order for them to use Consul's DNS server, mapped to port 53. Here's an example:
-
-```console
-$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=' %%IMAGE%% agent -dns-port=53 -recursor=8.8.8.8 -bind=<bridge ip>
-```
-
-Now start another container and point it at Consul's DNS, using the bridge address of the host:
-
-```console
-$ docker run -i --dns=<bridge ip> -t ubuntu sh -c "apt-get update && apt-get install -y dnsutils && dig consul.service.consul"
-...
-;; ANSWER SECTION:
-consul.service.consul.  0       IN      A       66.175.220.234
-...
-```
-
-In the example above, adding the bridge address to the host's `/etc/resolv.conf` file should expose it to all containers without running with the `--dns` option.
-
-## Service Discovery with Containers
-
-There are several approaches you can use to register services running in containers with Consul. For manual configuration, your containers can use the local agent's APIs to register and deregister themselves, see the [Agent API](https://www.consul.io/docs/agent/http/agent.html) for more details. Another strategy is to create a derived Consul container for each host type which includes JSON config files for Consul to parse at startup, see [Services](https://www.consul.io/docs/agent/services.html) for more information. Both of these approaches are fairly cumbersome, and the configured services may fall out of sync if containers die or additional containers are started.
-
-If you run your containers under [HashiCorp's Nomad](https://www.nomadproject.io/) scheduler, it has [first class support for Consul](https://www.nomadproject.io/docs/jobspec/servicediscovery.html). The Nomad agent runs on each host alongside the Consul agent. When jobs are scheduled on a given host, the Nomad agent automatically takes care of syncing the Consul agent with the service information. This is very easy to manage, and even services on hosts running outside of Docker containers can be managed by Nomad and registered with Consul. You can find out more about running Docker under Nomad in the [Docker Driver](https://www.nomadproject.io/docs/drivers/docker.html) guide.
-
-Other open source options include [Registrator](http://gliderlabs.com/registrator/latest/) from Glider Labs and [ContainerPilot](https://www.joyent.com/containerpilot) from Joyent. Registrator works by running a Registrator instance on each host, alongside the Consul agent. Registrator monitors the Docker daemon for container stop and start events, and handles service registration with Consul using the container names and exposed ports as the service information. ContainerPilot manages service registration using tooling running inside the container to register services with Consul on start, manage a Consul TTL health check while running, and deregister services when the container stops.
-
-## Running Health Checks in Docker Containers
-
-Consul has the ability to execute health checks inside containers. If the Docker daemon is exposed to the Consul agent and the `DOCKER_HOST` environment variable is set, then checks can be configured with the Docker container ID to execute in. See the [health checks](https://www.consul.io/docs/agent/checks.html) guide for more details.

+ 0 - 1
consul/deprecated.md

@@ -1 +0,0 @@
-Upcoming in Consul 1.16, we will stop publishing official Docker Hub images and publish only our Verified Publisher images. Users of Docker images should pull from [hashicorp/consul](https://hub.docker.com/r/hashicorp/consul) instead of [consul](https://hub.docker.com/_/consul). Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/consul.

+ 0 - 1
consul/github-repo

@@ -1 +0,0 @@
-https://github.com/hashicorp/docker-consul

+ 0 - 1
consul/license.md

@@ -1 +0,0 @@
-View [license information](https://raw.githubusercontent.com/hashicorp/consul/master/LICENSE) for the software contained in this image.

+ 0 - 7
consul/logo.svg

@@ -1,7 +0,0 @@
-<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 250 250">
-  <g fill="none" fill-rule="evenodd">
-    <path fill="#961D59" d="M121.452 145.097c-11.613 0-20.646-9.032-20.646-20.645s9.033-20.646 20.646-20.646c11.613 0 20.645 9.033 20.645 20.646 0 11.613-9.032 20.645-20.645 20.645"/>
-    <path fill="#D62783" d="M161.774 134.13c-5.16 0-9.677-4.195-9.677-9.678 0-5.162 4.193-9.678 9.677-9.678 5.16 0 9.678 4.194 9.678 9.678 0 5.483-4.517 9.677-9.678 9.677m34.84 9.03c-1.29 5.16-6.453 8.06-11.614 6.77-5.16-1.29-8.065-6.45-6.774-11.61 1.29-5.16 6.45-8.07 11.613-6.78 4.83 1.29 8.06 5.805 7.09 10.97 0 0 0 .32-.32.642m-6.78-24.517c-5.164 1.29-10.325-1.936-11.615-7.098-1.29-5.16 1.935-10.323 7.097-11.614 5.16-1.29 10.322 1.937 11.612 7.1.322 1.29.322 2.58 0 3.87-.323 3.55-2.904 6.774-7.097 7.742m34.19 23.225c-.97 5.16-5.81 8.71-10.97 7.743-5.16-.968-8.71-5.807-7.74-10.968.966-5.16 5.805-8.71 10.966-7.743 4.84.968 8.39 5.484 8.067 10.323-.32.322-.32.322-.32.645M216.29 118c-5.16.968-10-2.58-10.967-7.742-.968-5.16 2.58-10 7.742-10.968 5.16-.967 10 2.58 10.967 7.742 0 .968.323 1.613 0 2.58-.322 4.194-3.548 7.743-7.742 8.388m-6.774 57.097c-2.58 4.516-8.387 6.13-12.903 3.548-4.516-2.58-6.13-8.387-3.548-12.903 2.58-4.516 8.387-6.13 12.903-3.548 3.226 1.935 5.16 5.483 4.838 9.032-.322 1.29-.645 2.58-1.29 3.87m-3.548-87.74c-4.516 2.58-10.323.967-12.903-3.55-2.58-4.516-.968-10.322 3.548-12.903 4.516-2.58 10.322-.968 12.903 3.55.968 1.934 1.29 3.547 1.29 5.482-.322 2.904-1.935 5.807-4.838 7.42"/>
-    <path fill="#D62783" fill-rule="nonzero" d="M121.774 220.903c-25.806 0-49.677-10-68.064-28.064-17.742-18.39-27.742-42.59-27.742-68.07 0-25.81 10-49.68 28.064-68.07 18.065-18.06 42.258-28.06 67.742-28.06 21.29 0 41.613 6.773 58.387 19.68L168.23 63.8C154.68 53.485 138.55 48 121.78 48c-20.322 0-39.677 8.065-54.193 22.58-14.51 14.517-22.253 33.55-22.253 54.194 0 20.323 8.064 39.678 22.58 54.194 14.516 14.516 33.55 22.258 54.194 22.258 17.097 0 33.226-5.484 46.45-15.807l11.614 15.48c-16.773 12.9-37.095 20-58.386 20z"/>
-  </g>
-</svg>

+ 0 - 1
consul/maintainer.md

@@ -1 +0,0 @@
-../.common-templates/maintainer-hashicorp.md

+ 0 - 5
consul/metadata.json

@@ -1,5 +0,0 @@
-{
-  "hub": {
-    "categories": []
-  }
-}

+ 0 - 1
express-gateway/README-short.txt

@@ -1 +0,0 @@
-DEPRECATED; The Official Docker Image of Express Gateway, an API Gateway for APIs and Microservices

+ 0 - 124
express-gateway/README.md

@@ -1,124 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "express-gateway/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "express-gateway/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-This project is no longer maintained. Read [here](https://github.com/ExpressGateway/express-gateway/issues/1011#issuecomment-748354599) for more details or if you're interested in taking over the project.
-
-# Quick reference
-
--	**Maintained by**:  
-	[the Express Gateway Team](https://github.com/ExpressGateway/express-gateway)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://github.com/ExpressGateway/express-gateway/issues](https://github.com/ExpressGateway/express-gateway/issues?q=)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/express-gateway/` directory](https://github.com/docker-library/repo-info/blob/master/repos/express-gateway) ([history](https://github.com/docker-library/repo-info/commits/master/repos/express-gateway))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/express-gateway` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fexpress-gateway)  
-	[official-images repo's `library/express-gateway` file](https://github.com/docker-library/official-images/blob/master/library/express-gateway) ([history](https://github.com/docker-library/official-images/commits/master/library/express-gateway))
-
--	**Source of this description**:  
-	[docs repo's `express-gateway/` directory](https://github.com/docker-library/docs/tree/master/express-gateway) ([history](https://github.com/docker-library/docs/commits/master/express-gateway))
-
-# What is Express-Gateway?
-
-Express Gateway is an API Gateway that sits at the heart of any microservices architecture, regardless of what language or platform you're using. Express Gateway secures your microservices and exposes them through APIs using Node.js, ExpressJS and Express middleware. Developing, orchestrating and managing microservices can all be done quickly on one seamless platform, without having to introduce additional infrastructure.
-
-Express-Gateway's documentation can be found at [https://express-gateway.io/docs](https://express-gateway.io/docs).
-
-## Main Features
-
--	Built Entirely on Express and Express Middleware
--	Dynamic Centralized Config
--	API Consumer and Credentials Management
--	Plugins and Plugin Framework
--	Distributed Data Store
--	CLI
--	Admin API
-
-![logo](https://raw.githubusercontent.com/docker-library/docs/8ee4b026326a61ab0ccf22634eacbbbfbfaaf678/express-gateway/logo.png)
-
-## How to use this image
-
-Unless you're using identity features (such as `users`, `applications` and `credentials`), Express-Gateway does not require any data storage.
-
-If you don't need those features, skip directly to step **2**; otherwise, continue with step **1** below.
-
-### 1. Link Express-Gateway to a Redis container
-
-#### Start Redis
-
-Start a Redis container by executing:
-
-```shell
-$ docker run -d --name express-gateway-data-store \
-                -p 6379:6379 \
-                redis:alpine
-```
-
-### 2. Start the Express-Gateway instance
-
-Once the Redis instance has been started (if required), we can start the Express-Gateway instance and link it to the Redis container.
-
-```shell
-$ docker run -d --name express-gateway \
-    --link express-gateway-data-store:eg-database \
-    -v /my/own/datadir:/var/lib/eg \
-    -p 8080:8080 \
-    -p 9876:9876 \
-    express-gateway
-```
-
-*Note:* You might want to expose other ports to the host in case you're serving your APIs through **HTTPS**.
-
-*Note:* You need to mount a volume containing the configuration files in order for Express-Gateway to start correctly.
-
-You can now read the docs at [express-gateway.io/docs](http://express-gateway.io/docs) to learn more about Express-Gateway and configure it according to your needs.
-
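To sketch what such a mounted configuration might contain, here is a minimal hypothetical `gateway.config.yml` (the endpoint names and backend URL are illustrative, not part of this image, and a matching `system.config.yml` is also expected by Express-Gateway):

```yaml
# Hypothetical minimal gateway.config.yml; names and URLs are illustrative.
http:
  port: 8080
apiEndpoints:
  api:
    host: '*'
    paths: '/api/*'
serviceEndpoints:
  backend:
    url: 'http://backend:3000'
policies:
  - proxy
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      - proxy:
          - action:
              serviceEndpoint: backend
```

Mounted under `/var/lib/eg` as in the `docker run` example above, a configuration along these lines proxies requests matching `/api/*` to the named service endpoint.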
-### Install plugin
-
-You can install custom plugins on top of this image by creating a new `Dockerfile` that uses `express-gateway` as its base image and installs the required plugins as global yarn packages:
-
-```dockerfile
-FROM express-gateway
-RUN yarn global add express-gateway-plugin-name
-```
-
-# License
-
-View [license information](https://github.com/ExpressGateway/express-gateway/blob/master/LICENSE) for the software contained in this image.
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `express-gateway/` directory](https://github.com/docker-library/repo-info/tree/master/repos/express-gateway).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 63
express-gateway/content.md

@@ -1,63 +0,0 @@
-# What is Express-Gateway?
-
-Express Gateway is an API Gateway that sits at the heart of any microservices architecture, regardless of what language or platform you're using. Express Gateway secures your microservices and exposes them through APIs using Node.js, ExpressJS and Express middleware. Developing, orchestrating and managing microservices can all be done quickly on one seamless platform, without having to introduce additional infrastructure.
-
-Express-Gateway's documentation can be found at [https://express-gateway.io/docs](https://express-gateway.io/docs).
-
-## Main Features
-
--	Built Entirely on Express and Express Middleware
--	Dynamic Centralized Config
--	API Consumer and Credentials Management
--	Plugins and Plugin Framework
--	Distributed Data Store
--	CLI
--	Admin API
-
-%%LOGO%%
-
-## How to use this image
-
-Unless you're using identity features (such as `users`, `applications` and `credentials`), Express-Gateway does not require any data storage.
-
-If you don't need those features, skip directly to step **2**; otherwise, continue with step **1** below.
-
-### 1. Link Express-Gateway to a Redis container
-
-#### Start Redis
-
-Start a Redis container by executing:
-
-```shell
-$ docker run -d --name express-gateway-data-store \
-                -p 6379:6379 \
-                redis:alpine
-```
-
-### 2. Start the Express-Gateway instance
-
-Once the Redis instance has been started (if required), we can start the Express-Gateway instance and link it to the Redis container.
-
-```shell
-$ docker run -d --name express-gateway \
-    --link express-gateway-data-store:eg-database \
-    -v /my/own/datadir:/var/lib/eg \
-    -p 8080:8080 \
-    -p 9876:9876 \
-    %%IMAGE%%
-```
-
-*Note:* You might want to expose other ports to the host in case you're serving your APIs through **HTTPS**.
-
-*Note:* You need to mount a volume containing the configuration files in order for Express-Gateway to start correctly.
-
-You can now read the docs at [express-gateway.io/docs](http://express-gateway.io/docs) to learn more about Express-Gateway and configure it according to your needs.
-
-### Install plugin
-
-You can install custom plugins on top of this image by creating a new `Dockerfile` that uses `%%IMAGE%%` as its base image and installs the required plugins as global yarn packages:
-
-```dockerfile
-FROM %%IMAGE%%
-RUN yarn global add express-gateway-plugin-name
-```

+ 0 - 1
express-gateway/deprecated.md

@@ -1 +0,0 @@
-This project is no longer maintained. Read [here](https://github.com/ExpressGateway/express-gateway/issues/1011#issuecomment-748354599) for more details or if you're interested in taking over the project.

+ 0 - 1
express-gateway/github-repo

@@ -1 +0,0 @@
-https://github.com/ExpressGateway/express-gateway

+ 0 - 1
express-gateway/license.md

@@ -1 +0,0 @@
-View [license information](https://github.com/ExpressGateway/express-gateway/blob/master/LICENSE) for the software contained in this image.

BIN
express-gateway/logo.png


+ 0 - 1
express-gateway/maintainer.md

@@ -1 +0,0 @@
-[the Express Gateway Team](%%GITHUB-REPO%%)

+ 0 - 5
express-gateway/metadata.json

@@ -1,5 +0,0 @@
-{
-  "hub": {
-    "categories": []
-  }
-}

+ 0 - 1
jobber/README-short.txt

@@ -1 +0,0 @@
-DEPRECATED; Jobber is an alternative to cron, with sophisticated status-reporting and error-handling

+ 0 - 74
jobber/README.md

@@ -1,74 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "jobber/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "jobber/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-This project is not actively maintained. See [dshearer/jobber#334](https://github.com/dshearer/jobber/pull/334) for more details.
-
-# Quick reference
-
--	**Maintained by**:  
-	[Jobber](https://github.com/dshearer/jobber-docker)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://github.com/dshearer/jobber-docker/issues](https://github.com/dshearer/jobber-docker/issues?q=)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/jobber/` directory](https://github.com/docker-library/repo-info/blob/master/repos/jobber) ([history](https://github.com/docker-library/repo-info/commits/master/repos/jobber))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/jobber` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fjobber)  
-	[official-images repo's `library/jobber` file](https://github.com/docker-library/official-images/blob/master/library/jobber) ([history](https://github.com/docker-library/official-images/commits/master/library/jobber))
-
--	**Source of this description**:  
-	[docs repo's `jobber/` directory](https://github.com/docker-library/docs/tree/master/jobber) ([history](https://github.com/docker-library/docs/commits/master/jobber))
-
-# What is Jobber?
-
-Jobber is a utility for Unix-like systems that can run arbitrary commands, or "jobs", according to a schedule. It is meant to be a better alternative to the classic Unix utility cron.
-
-Along with the functionality of cron, Jobber also provides:
-
--	**Job execution history:** you can see what jobs have recently run, and whether they succeeded or failed.
--	**Sophisticated error handling:** you can control whether and when a job is run again after it fails. For example, after an initial failure of a job, Jobber can schedule future runs using an exponential backoff algorithm.
--	**Sophisticated error reporting:** you can control whether Jobber notifies you about each failed run, or only about jobs that have been disabled due to repeated failures.
-
-# How to use this image
-
-This image contains Jobber running as an unprivileged user named "jobberuser". The jobs are defined in the file `/home/jobberuser/.jobber`. By default, the only job is one that prints "Jobber is running!" every second. You should replace it with your own jobs. Refer to [the documentation](https://dshearer.github.io/jobber/doc/v1.4/#jobfile) to learn how to do this.
-
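As a sketch of the jobfile format the documentation above describes, a replacement `/home/jobberuser/.jobber` might look like this (the job itself is hypothetical; the section header and field names follow the v1.4 jobfile docs):

```
[jobs]
- name: PrintTime
  cmd: date >> /tmp/times.txt
  time: '0 * * * * *'
  onError: Continue
  notifyOnError: false
  notifyOnFailure: true
```

Jobber's `time` string uses sec/min/hour/mday/month/wday fields, so this hypothetical job would run at second 0 of every minute.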
-# License
-
-[Jobber's license](https://github.com/dshearer/jobber/blob/master/LICENSE)
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `jobber/` directory](https://github.com/docker-library/repo-info/tree/master/repos/jobber).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 13
jobber/content.md

@@ -1,13 +0,0 @@
-# What is Jobber?
-
-Jobber is a utility for Unix-like systems that can run arbitrary commands, or "jobs", according to a schedule. It is meant to be a better alternative to the classic Unix utility cron.
-
-Along with the functionality of cron, Jobber also provides:
-
--	**Job execution history:** you can see what jobs have recently run, and whether they succeeded or failed.
--	**Sophisticated error handling:** you can control whether and when a job is run again after it fails. For example, after an initial failure of a job, Jobber can schedule future runs using an exponential backoff algorithm.
--	**Sophisticated error reporting:** you can control whether Jobber notifies you about each failed run, or only about jobs that have been disabled due to repeated failures.
-
-# How to use this image
-
-This image contains Jobber running as an unprivileged user named "jobberuser". The jobs are defined in the file `/home/jobberuser/.jobber`. By default, the only job is one that prints "Jobber is running!" every second. You should replace it with your own jobs. Refer to [the documentation](https://dshearer.github.io/jobber/doc/v1.4/#jobfile) to learn how to do this.

+ 0 - 1
jobber/deprecated.md

@@ -1 +0,0 @@
-This project is not actively maintained. See [dshearer/jobber#334](https://github.com/dshearer/jobber/pull/334) for more details.

+ 0 - 1
jobber/github-repo

@@ -1 +0,0 @@
-https://github.com/dshearer/jobber-docker

+ 0 - 1
jobber/license.md

@@ -1 +0,0 @@
-[Jobber's license](https://github.com/dshearer/jobber/blob/master/LICENSE)

+ 0 - 1
jobber/maintainer.md

@@ -1 +0,0 @@
-[Jobber](%%GITHUB-REPO%%)

+ 0 - 5
jobber/metadata.json

@@ -1,5 +0,0 @@
-{
-  "hub": {
-    "categories": []
-  }
-}

+ 0 - 1
nats-streaming/README-short.txt

@@ -1 +0,0 @@
-DEPRECATED; An open-source, high-performance, cloud native messaging streaming system.

+ 0 - 340
nats-streaming/README.md

@@ -1,340 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "nats-streaming/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "nats-streaming/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-The NATS Streaming Server is being deprecated. Critical bug fixes and security fixes will be applied until June of 2023. NATS-enabled applications requiring persistence should use [JetStream](https://docs.nats.io/jetstream/jetstream).
-
-# Quick reference
-
--	**Maintained by**:  
-	[the NATS Project](https://github.com/nats-io/nats-streaming-docker)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://github.com/nats-io/nats-streaming-docker/issues](https://github.com/nats-io/nats-streaming-docker/issues?q=)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/nats-streaming/` directory](https://github.com/docker-library/repo-info/blob/master/repos/nats-streaming) ([history](https://github.com/docker-library/repo-info/commits/master/repos/nats-streaming))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/nats-streaming` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fnats-streaming)  
-	[official-images repo's `library/nats-streaming` file](https://github.com/docker-library/official-images/blob/master/library/nats-streaming) ([history](https://github.com/docker-library/official-images/commits/master/library/nats-streaming))
-
--	**Source of this description**:  
-	[docs repo's `nats-streaming/` directory](https://github.com/docker-library/docs/tree/master/nats-streaming) ([history](https://github.com/docker-library/docs/commits/master/nats-streaming))
-
-# [NATS Streaming](https://nats.io): A high-performance cloud native messaging streaming system.
-
-![logo](https://raw.githubusercontent.com/docker-library/docs/ad703934a62fabf54452755c8486698ff6fc5cc2/nats-streaming/logo.png)
-
-`nats-streaming` is a high performance streaming server for the NATS Messaging System.
-
-# Backward compatibility note
-
-Note that the Streaming server itself is backward compatible with previous releases. However, v0.15.0+ embeds NATS Server 2.0, which means that if you run with the embedded NATS server and try to route it to your existing v0.14.3 (or earlier) servers, it will fail due to a change in the NATS Server routing protocol. You can, however, use v0.15.0+ connected to an existing NATS cluster, and therefore have a mix of v0.15.0 and v0.14.3 (or earlier) streaming servers.
-
-# Windows Docker images
-
-Due to restrictions on how the Windows Docker image is built, running the image without arguments will run the NATS Streaming server with a memory based store on port 4222 and the monitoring port 8222. If you need to specify any additional arguments, or modify these options, you need to specify the executable name, like this:
-
-```bash
-$ docker run -p 4223:4223 -p 8223:8223 nats-streaming nats-streaming-server -p 4223 -m 8223
-```
-
-If you need to specify the entrypoint:
-
-```bash
-$ docker run --entrypoint c:/nats-streaming-server/nats-streaming-server -p 4222:4222 -p 8222:8222 nats-streaming
-```
-
-# Non Windows Docker images
-
-If you need to provide arguments to the NATS Streaming server, just pass them on the command line. For instance, to change the client and monitoring ports to 4223 and 8223 respectively:
-
-```bash
-$ docker run -p 4223:4223 -p 8223:8223 nats-streaming -p 4223 -m 8223
-```
-
-If you need to specify the entrypoint:
-
-```bash
-$ docker run --entrypoint /nats-streaming-server -p 4222:4222 -p 8222:8222 nats-streaming
-```
-
-# Example usage
-
-```bash
-# Run a NATS Streaming server
-# Each server exposes multiple ports
-# 4222 is for clients.
-# 8222 is an HTTP management port for information reporting.
-#
-# To actually publish the ports when running the container, use the Docker port mapping
-# flag "docker run -p <hostport>:<containerport>" to publish and map one or more ports,
-# or the -P flag to publish all exposed ports and map them to high-order ports.
-#
-# This should not be confused with the NATS Streaming Server own -p parameter.
-# For instance, to run the NATS Streaming Server and have it listen on port 4444,
-# you would have to run like this:
-#
-#   docker run -p 4444:4444 nats-streaming -p 4444
-#
-# Or, if you want to publish the port 4444 as a different port, for example 5555:
-#
-#   docker run -p 5555:4444 nats-streaming -p 4444
-#
-# Check "docker run" for more information.
-
-$ docker run -d -p 4222:4222 -p 8222:8222 nats-streaming
-```
-
-Output that you would get if you had started with `-ti` instead of `-d` (for daemon):
-
-```bash
-[1] 2022/10/11 14:57:50.404688 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 14:57:50.404739 [INF] STREAM: ServerID: fbZJjwGYLBpNM5I8z23NSN
-[1] 2022/10/11 14:57:50.404741 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 14:57:50.404743 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 14:57:50.406004 [INF] Starting nats-server
-[1] 2022/10/11 14:57:50.406009 [INF]   Version:  2.9.3
-[1] 2022/10/11 14:57:50.406011 [INF]   Git:      [25e82d7]
-[1] 2022/10/11 14:57:50.406013 [INF]   Name:     NDQOBTB34ECZWAKAJAREPEXQPXGKUEJEZINCHV2CIHGGJQCSCVPQPU5W
-[1] 2022/10/11 14:57:50.406015 [INF]   ID:       NDQOBTB34ECZWAKAJAREPEXQPXGKUEJEZINCHV2CIHGGJQCSCVPQPU5W
-[1] 2022/10/11 14:57:50.406423 [INF] Listening for client connections on 0.0.0.0:4222
-[1] 2022/10/11 14:57:50.406679 [INF] Server is ready
-[1] 2022/10/11 14:57:50.434935 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 14:57:50.434945 [INF] STREAM: No recovered state
-[1] 2022/10/11 14:57:50.435271 [INF] STREAM: Message store is MEMORY
-[1] 2022/10/11 14:57:50.435303 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 14:57:50.435306 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 14:57:50.435308 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 14:57:50.435310 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 14:57:50.435311 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 14:57:50.435313 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 14:57:50.435315 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 14:57:50.435316 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 14:57:50.435318 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 14:57:50.435320 [INF] STREAM: Streaming Server is ready
-```
-
-To use a file based store instead, you would run:
-
-```bash
-$ docker run -d -p 4222:4222 -p 8222:8222 nats-streaming -store file -dir datastore
-
-[1] 2022/10/11 14:59:45.818823 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 14:59:45.818874 [INF] STREAM: ServerID: mNhpLEpCO6WFqrnD9CYEZa
-[1] 2022/10/11 14:59:45.818876 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 14:59:45.818877 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 14:59:45.820192 [INF] Starting nats-server
-[1] 2022/10/11 14:59:45.820196 [INF]   Version:  2.9.3
-[1] 2022/10/11 14:59:45.820198 [INF]   Git:      [25e82d7]
-[1] 2022/10/11 14:59:45.820200 [INF]   Name:     NCDMFFEVOSPVVGQZVEQ3O5434LHF2KAPOR5LKAI7YEIAFIABTHQLZRLA
-[1] 2022/10/11 14:59:45.820202 [INF]   ID:       NCDMFFEVOSPVVGQZVEQ3O5434LHF2KAPOR5LKAI7YEIAFIABTHQLZRLA
-[1] 2022/10/11 14:59:45.820688 [INF] Listening for client connections on 0.0.0.0:4222
-[1] 2022/10/11 14:59:45.820849 [INF] Server is ready
-[1] 2022/10/11 14:59:45.848443 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 14:59:45.848737 [INF] STREAM: No recovered state
-[1] 2022/10/11 14:59:45.849050 [INF] STREAM: Message store is FILE
-[1] 2022/10/11 14:59:45.849054 [INF] STREAM: Store location: datastore
-[1] 2022/10/11 14:59:45.849070 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 14:59:45.849072 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 14:59:45.849073 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 14:59:45.849075 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 14:59:45.849076 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 14:59:45.849077 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 14:59:45.849078 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 14:59:45.849079 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 14:59:45.849080 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 14:59:45.849082 [INF] STREAM: Streaming Server is ready
-```
-
-You can also connect to a remote NATS Server running in a docker image. First, run NATS Server:
-
-```bash
-$ docker run -d --name=nats-main -p 4222:4222 -p 6222:6222 -p 8222:8222 nats
-```
-
-Now, start the Streaming server and link it to the above docker image:
-
-```bash
-$ docker run -d --link nats-main nats-streaming -store file -dir datastore -ns nats://nats-main:4222
-
-[1] 2022/10/11 15:00:56.780184 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 15:00:56.780235 [INF] STREAM: ServerID: jVQkB4KiIN4IVIuVoSumE0
-[1] 2022/10/11 15:00:56.780237 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 15:00:56.780241 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 15:00:56.809173 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 15:00:56.810336 [INF] STREAM: Recovered 0 channel(s)
-[1] 2022/10/11 15:00:56.810612 [INF] STREAM: Message store is FILE
-[1] 2022/10/11 15:00:56.810617 [INF] STREAM: Store location: datastore
-[1] 2022/10/11 15:00:56.810633 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 15:00:56.810635 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 15:00:56.810636 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 15:00:56.810637 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 15:00:56.810639 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 15:00:56.810640 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 15:00:56.810641 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 15:00:56.810642 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 15:00:56.810643 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 15:00:56.810644 [INF] STREAM: Streaming Server is ready
-```
-
-Notice that, unlike in the earlier examples, the output does not show an embedded NATS Server being started.
-
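Since `--link` is a legacy Docker flag, the same two-container setup can also be expressed with a user-defined network via Docker Compose; this is an illustrative sketch (service names and the mounted path are assumptions, not part of the image):

```yaml
# Hypothetical docker-compose.yml sketch; names and paths are illustrative.
version: "3"
services:
  nats-main:
    image: nats
    ports:
      - "4222:4222"
      - "8222:8222"
  nats-streaming:
    image: nats-streaming
    # Point the Streaming server at the external NATS server by service name.
    command: ["-store", "file", "-dir", "/data", "-ns", "nats://nats-main:4222"]
    volumes:
      - ./datastore:/data
    depends_on:
      - nats-main
```

On a user-defined Compose network, the `nats-main` service name resolves via DNS, playing the role the `--link` alias plays in the example above.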
-# Commandline Options
-
-```bash
-Streaming Server Options:
-    -cid, --cluster_id  <string>         Cluster ID (default: test-cluster)
-    -st,  --store <string>               Store type: MEMORY|FILE|SQL (default: MEMORY)
-          --dir <string>                 For FILE store type, this is the root directory
-    -mc,  --max_channels <int>           Max number of channels (0 for unlimited)
-    -msu, --max_subs <int>               Max number of subscriptions per channel (0 for unlimited)
-    -mm,  --max_msgs <int>               Max number of messages per channel (0 for unlimited)
-    -mb,  --max_bytes <size>             Max messages total size per channel (0 for unlimited)
-    -ma,  --max_age <duration>           Max duration a message can be stored ("0s" for unlimited)
-    -mi,  --max_inactivity <duration>    Max inactivity (no new message, no subscription) after which a channel can be garbage collected (0 for unlimited)
-    -ns,  --nats_server <string>         Connect to this external NATS Server URL (embedded otherwise)
-    -sc,  --stan_config <string>         Streaming server configuration file
-    -hbi, --hb_interval <duration>       Interval at which server sends heartbeat to a client
-    -hbt, --hb_timeout <duration>        How long server waits for a heartbeat response
-    -hbf, --hb_fail_count <int>          Number of failed heartbeats before server closes the client connection
-          --ft_group <string>            Name of the FT Group. A group can be 2 or more servers with a single active server and all sharing the same datastore
-    -sl,  --signal <signal>[=<pid>]      Send signal to nats-streaming-server process (stop, quit, reopen, reload - only for embedded NATS Server)
-          --encrypt <bool>               Specify if server should use encryption at rest
-          --encryption_cipher <string>   Cipher to use for encryption. Currently support AES and CHAHA (ChaChaPoly). Defaults to AES
-          --encryption_key <string>      Encryption Key. It is recommended to specify it through the NATS_STREAMING_ENCRYPTION_KEY environment variable instead
-          --replace_durable <bool>       Replace the existing durable subscription instead of reporting a duplicate durable error
-
-Streaming Server Clustering Options:
-    --clustered <bool>                     Run the server in a clustered configuration (default: false)
-    --cluster_node_id <string>             ID of the node within the cluster if there is no stored ID (default: random UUID)
-    --cluster_bootstrap <bool>             Bootstrap the cluster if there is no existing state by electing self as leader (default: false)
-    --cluster_peers <string, ...>          Comma separated list of cluster peer node IDs to bootstrap cluster state
-    --cluster_log_path <string>            Directory to store log replication data
-    --cluster_log_cache_size <int>         Number of log entries to cache in memory to reduce disk IO (default: 512)
-    --cluster_log_snapshots <int>          Number of log snapshots to retain (default: 2)
-    --cluster_trailing_logs <int>          Number of log entries to leave after a snapshot and compaction
-    --cluster_sync <bool>                  Do a file sync after every write to the replication log and message store
-    --cluster_raft_logging <bool>          Enable logging from the Raft library (disabled by default)
-    --cluster_allow_add_remove_node <bool> Enable the ability to send NATS requests to the leader to add/remove cluster nodes
-
-Streaming Server File Store Options:
-    --file_compact_enabled <bool>        Enable file compaction
-    --file_compact_frag <int>            File fragmentation threshold for compaction
-    --file_compact_interval <int>        Minimum interval (in seconds) between file compactions
-    --file_compact_min_size <size>       Minimum file size for compaction
-    --file_buffer_size <size>            File buffer size (in bytes)
-    --file_crc <bool>                    Enable file CRC-32 checksum
-    --file_crc_poly <int>                Polynomial used to make the table used for CRC-32 checksum
-    --file_sync <bool>                   Enable File.Sync on Flush
-    --file_slice_max_msgs <int>          Maximum number of messages per file slice (subject to channel limits)
-    --file_slice_max_bytes <size>        Maximum file slice size - including index file (subject to channel limits)
-    --file_slice_max_age <duration>      Maximum file slice duration starting when the first message is stored (subject to channel limits)
-    --file_slice_archive_script <string> Path to script to use if you want to archive a file slice being removed
-    --file_fds_limit <int>               Store will try to use no more file descriptors than this given limit
-    --file_parallel_recovery <int>       On startup, number of channels that can be recovered in parallel
-    --file_truncate_bad_eof <bool>       Truncate files for which there is an unexpected EOF on recovery, dataloss may occur
-    --file_read_buffer_size <size>       Size of messages read ahead buffer (0 to disable)
-    --file_auto_sync <duration>          Interval at which the store should be automatically flushed and sync'ed on disk (<= 0 to disable)
-
-Streaming Server SQL Store Options:
-    --sql_driver <string>            Name of the SQL Driver ("mysql" or "postgres")
-    --sql_source <string>            Datasource used when opening an SQL connection to the database
-    --sql_no_caching <bool>          Enable/Disable caching for improved performance
-    --sql_max_open_conns <int>       Maximum number of opened connections to the database
-    --sql_bulk_insert_limit <int>    Maximum number of messages stored with a single SQL "INSERT" statement
-
-Streaming Server TLS Options:
-    -secure <bool>                   Use a TLS connection to the NATS server without
-                                     verification; weaker than specifying certificates.
-    -tls_client_key <string>         Client key for the streaming server
-    -tls_client_cert <string>        Client certificate for the streaming server
-    -tls_client_cacert <string>      Client certificate CA for the streaming server
-
-Streaming Server Logging Options:
-    -SD, --stan_debug=<bool>         Enable STAN debugging output
-    -SV, --stan_trace=<bool>         Trace the raw STAN protocol
-    -SDV                             Debug and trace STAN
-         --syslog_name               On Windows, when running several servers as a service, use this name for the event source
-    (See additional NATS logging options below)
-
-Embedded NATS Server Options:
-    -a, --addr <string>              Bind to host address (default: 0.0.0.0)
-    -p, --port <int>                 Use port for clients (default: 4222)
-    -P, --pid <string>               File to store PID
-    -m, --http_port <int>            Use port for http monitoring
-    -ms,--https_port <int>           Use port for https monitoring
-    -c, --config <string>            Configuration file
-
-Logging Options:
-    -l, --log <string>               File to redirect log output
-    -T, --logtime=<bool>             Timestamp log entries (default: true)
-    -s, --syslog <bool>              Enable syslog as log method
-    -r, --remote_syslog <string>     Syslog server addr (udp://localhost:514)
-    -D, --debug=<bool>               Enable debugging output
-    -V, --trace=<bool>               Trace the raw protocol
-    -DV                              Debug and trace
-
-Authorization Options:
-        --user <string>              User required for connections
-        --pass <string>              Password required for connections
-        --auth <string>              Authorization token required for connections
-
-TLS Options:
-        --tls=<bool>                 Enable TLS, do not verify clients (default: false)
-        --tlscert <string>           Server certificate file
-        --tlskey <string>            Private key for server certificate
-        --tlsverify=<bool>           Enable TLS, verify client certificates
-        --tlscacert <string>         Client certificate CA for verification
-
-NATS Clustering Options:
-        --routes <string, ...>       Routes to solicit and connect
-        --cluster <string>           Cluster URL for solicited routes
-
-Common Options:
-    -h, --help                       Show this message
-    -v, --version                    Show version
-        --help_tls                   TLS help.
-```
-
-# Configuration
-
-Further details on how to configure the NATS Streaming server can be found [here](https://docs.nats.io/nats-streaming-server/configuring).
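As a sketch of what such a file might look like (hypothetical values; see the linked documentation for the authoritative format), a configuration file passed via the `-sc` flag could contain:

```
# Hypothetical streaming server configuration (sketch only)
id: "test-cluster"
store: "file"
dir: "/data/msg"

store_limits: {
    max_channels: 100
    max_msgs: 10000
    max_age: "24h"
}
```

It could then be mounted into the container and referenced, for example: `docker run -d -v $(pwd)/stan.conf:/etc/stan.conf %%IMAGE%% -sc /etc/stan.conf` (paths here are illustrative).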
-
-# License
-
-View [license information](https://github.com/nats-io/nats-streaming-server/blob/master/LICENSE) for the software contained in this image.
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `nats-streaming/` directory](https://github.com/docker-library/repo-info/tree/master/repos/nats-streaming).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 279
nats-streaming/content.md

@@ -1,279 +0,0 @@
-# [NATS Streaming](https://nats.io): A high-performance cloud native messaging streaming system.
-
-%%LOGO%%
-
-`nats-streaming` is a high performance streaming server for the NATS Messaging System.
-
-# Backward compatibility note
-
-Note that the Streaming server itself is backward compatible with previous releases; however, v0.15.0+ embeds a NATS Server 2.0, which means that if you run with the embedded NATS server and try to route it to your existing v0.14.3 or earlier servers, it will fail due to the NATS Server routing protocol change. You can, however, connect v0.15.0+ to an existing NATS cluster and therefore run a mix of v0.15.0+ and v0.14.3 or earlier streaming servers.
-
-# Windows Docker images
-
-Due to restrictions on how the Windows Docker image is built, running the image without arguments will run the NATS Streaming server with a memory-based store on port 4222 and the monitoring port 8222. If you need to specify any additional arguments, or modify these options, you need to specify the executable name as well, like this:
-
-```bash
-$ docker run -p 4223:4223 -p 8223:8223 %%IMAGE%% nats-streaming-server -p 4223 -m 8223
-```
-
-If you need to specify the entrypoint:
-
-```bash
-$ docker run --entrypoint c:/nats-streaming-server/nats-streaming-server -p 4222:4222 -p 8222:8222 %%IMAGE%%
-```
-
-# Non-Windows Docker images
-
-If you need to provide arguments to the NATS Streaming server, just pass them to the command line. For instance, to change the listen and monitoring port to 4223 and 8223 respectively:
-
-```bash
-$ docker run -p 4223:4223 -p 8223:8223 %%IMAGE%% -p 4223 -m 8223
-```
-
-If you need to specify the entrypoint:
-
-```bash
-$ docker run --entrypoint /nats-streaming-server -p 4222:4222 -p 8222:8222 %%IMAGE%%
-```
-
-# Example usage
-
-```bash
-# Run a NATS Streaming server
-# Each server exposes multiple ports
-# 4222 is for clients.
-# 8222 is an HTTP management port for information reporting.
-#
-# To actually publish the ports when running the container, use the Docker port mapping
-# flag "docker run -p <hostport>:<containerport>" to publish and map one or more ports,
-# or the -P flag to publish all exposed ports and map them to high-order ports.
-#
-# This should not be confused with the NATS Streaming Server's own -p parameter.
-# For instance, to run the NATS Streaming Server and have it listen on port 4444,
-# you would have to run like this:
-#
-#   docker run -p 4444:4444 %%IMAGE%% -p 4444
-#
-# Or, if you want to publish the port 4444 as a different port, for example 5555:
-#
-#   docker run -p 5555:4444 %%IMAGE%% -p 4444
-#
-# Check "docker run" for more information.
-
-$ docker run -d -p 4222:4222 -p 8222:8222 %%IMAGE%%
-```
-
-Output that you would get if you had started with `-ti` instead of `-d` (for daemon mode):
-
-```bash
-[1] 2022/10/11 14:57:50.404688 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 14:57:50.404739 [INF] STREAM: ServerID: fbZJjwGYLBpNM5I8z23NSN
-[1] 2022/10/11 14:57:50.404741 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 14:57:50.404743 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 14:57:50.406004 [INF] Starting nats-server
-[1] 2022/10/11 14:57:50.406009 [INF]   Version:  2.9.3
-[1] 2022/10/11 14:57:50.406011 [INF]   Git:      [25e82d7]
-[1] 2022/10/11 14:57:50.406013 [INF]   Name:     NDQOBTB34ECZWAKAJAREPEXQPXGKUEJEZINCHV2CIHGGJQCSCVPQPU5W
-[1] 2022/10/11 14:57:50.406015 [INF]   ID:       NDQOBTB34ECZWAKAJAREPEXQPXGKUEJEZINCHV2CIHGGJQCSCVPQPU5W
-[1] 2022/10/11 14:57:50.406423 [INF] Listening for client connections on 0.0.0.0:4222
-[1] 2022/10/11 14:57:50.406679 [INF] Server is ready
-[1] 2022/10/11 14:57:50.434935 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 14:57:50.434945 [INF] STREAM: No recovered state
-[1] 2022/10/11 14:57:50.435271 [INF] STREAM: Message store is MEMORY
-[1] 2022/10/11 14:57:50.435303 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 14:57:50.435306 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 14:57:50.435308 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 14:57:50.435310 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 14:57:50.435311 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 14:57:50.435313 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 14:57:50.435315 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 14:57:50.435316 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 14:57:50.435318 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 14:57:50.435320 [INF] STREAM: Streaming Server is ready
-```
-
-To use a file based store instead, you would run:
-
-```bash
-$ docker run -d -p 4222:4222 -p 8222:8222 %%IMAGE%% -store file -dir datastore
-
-[1] 2022/10/11 14:59:45.818823 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 14:59:45.818874 [INF] STREAM: ServerID: mNhpLEpCO6WFqrnD9CYEZa
-[1] 2022/10/11 14:59:45.818876 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 14:59:45.818877 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 14:59:45.820192 [INF] Starting nats-server
-[1] 2022/10/11 14:59:45.820196 [INF]   Version:  2.9.3
-[1] 2022/10/11 14:59:45.820198 [INF]   Git:      [25e82d7]
-[1] 2022/10/11 14:59:45.820200 [INF]   Name:     NCDMFFEVOSPVVGQZVEQ3O5434LHF2KAPOR5LKAI7YEIAFIABTHQLZRLA
-[1] 2022/10/11 14:59:45.820202 [INF]   ID:       NCDMFFEVOSPVVGQZVEQ3O5434LHF2KAPOR5LKAI7YEIAFIABTHQLZRLA
-[1] 2022/10/11 14:59:45.820688 [INF] Listening for client connections on 0.0.0.0:4222
-[1] 2022/10/11 14:59:45.820849 [INF] Server is ready
-[1] 2022/10/11 14:59:45.848443 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 14:59:45.848737 [INF] STREAM: No recovered state
-[1] 2022/10/11 14:59:45.849050 [INF] STREAM: Message store is FILE
-[1] 2022/10/11 14:59:45.849054 [INF] STREAM: Store location: datastore
-[1] 2022/10/11 14:59:45.849070 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 14:59:45.849072 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 14:59:45.849073 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 14:59:45.849075 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 14:59:45.849076 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 14:59:45.849077 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 14:59:45.849078 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 14:59:45.849079 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 14:59:45.849080 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 14:59:45.849082 [INF] STREAM: Streaming Server is ready
-```
-
-You can also connect to a remote NATS Server running in a separate Docker container. First, run the NATS Server:
-
-```bash
-$ docker run -d --name=nats-main -p 4222:4222 -p 6222:6222 -p 8222:8222 nats
-```
-
-Now, start the Streaming server and link it to the NATS container started above:
-
-```bash
-$ docker run -d --link nats-main %%IMAGE%% -store file -dir datastore -ns nats://nats-main:4222
-
-[1] 2022/10/11 15:00:56.780184 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.25.2
-[1] 2022/10/11 15:00:56.780235 [INF] STREAM: ServerID: jVQkB4KiIN4IVIuVoSumE0
-[1] 2022/10/11 15:00:56.780237 [INF] STREAM: Go version: go1.19.2
-[1] 2022/10/11 15:00:56.780241 [INF] STREAM: Git commit: [9e599667]
-[1] 2022/10/11 15:00:56.809173 [INF] STREAM: Recovering the state...
-[1] 2022/10/11 15:00:56.810336 [INF] STREAM: Recovered 0 channel(s)
-[1] 2022/10/11 15:00:56.810612 [INF] STREAM: Message store is FILE
-[1] 2022/10/11 15:00:56.810617 [INF] STREAM: Store location: datastore
-[1] 2022/10/11 15:00:56.810633 [INF] STREAM: ---------- Store Limits ----------
-[1] 2022/10/11 15:00:56.810635 [INF] STREAM: Channels:                  100 *
-[1] 2022/10/11 15:00:56.810636 [INF] STREAM: --------- Channels Limits --------
-[1] 2022/10/11 15:00:56.810637 [INF] STREAM:   Subscriptions:          1000 *
-[1] 2022/10/11 15:00:56.810639 [INF] STREAM:   Messages     :       1000000 *
-[1] 2022/10/11 15:00:56.810640 [INF] STREAM:   Bytes        :     976.56 MB *
-[1] 2022/10/11 15:00:56.810641 [INF] STREAM:   Age          :     unlimited *
-[1] 2022/10/11 15:00:56.810642 [INF] STREAM:   Inactivity   :     unlimited *
-[1] 2022/10/11 15:00:56.810643 [INF] STREAM: ----------------------------------
-[1] 2022/10/11 15:00:56.810644 [INF] STREAM: Streaming Server is ready
-```
-
-Notice that, unlike in the previous outputs, the embedded NATS Server was not started, since the Streaming server connected to the external NATS container instead.
-
-# Commandline Options
-
-```bash
-Streaming Server Options:
-    -cid, --cluster_id  <string>         Cluster ID (default: test-cluster)
-    -st,  --store <string>               Store type: MEMORY|FILE|SQL (default: MEMORY)
-          --dir <string>                 For FILE store type, this is the root directory
-    -mc,  --max_channels <int>           Max number of channels (0 for unlimited)
-    -msu, --max_subs <int>               Max number of subscriptions per channel (0 for unlimited)
-    -mm,  --max_msgs <int>               Max number of messages per channel (0 for unlimited)
-    -mb,  --max_bytes <size>             Max messages total size per channel (0 for unlimited)
-    -ma,  --max_age <duration>           Max duration a message can be stored ("0s" for unlimited)
-    -mi,  --max_inactivity <duration>    Max inactivity (no new message, no subscription) after which a channel can be garbage collected (0 for unlimited)
-    -ns,  --nats_server <string>         Connect to this external NATS Server URL (embedded otherwise)
-    -sc,  --stan_config <string>         Streaming server configuration file
-    -hbi, --hb_interval <duration>       Interval at which server sends heartbeat to a client
-    -hbt, --hb_timeout <duration>        How long server waits for a heartbeat response
-    -hbf, --hb_fail_count <int>          Number of failed heartbeats before server closes the client connection
-          --ft_group <string>            Name of the FT Group. A group can be 2 or more servers with a single active server and all sharing the same datastore
-    -sl,  --signal <signal>[=<pid>]      Send signal to nats-streaming-server process (stop, quit, reopen, reload - only for embedded NATS Server)
-          --encrypt <bool>               Specify if server should use encryption at rest
-          --encryption_cipher <string>   Cipher to use for encryption. Currently supports AES and CHACHA (ChaChaPoly). Defaults to AES
-          --encryption_key <string>      Encryption Key. It is recommended to specify it through the NATS_STREAMING_ENCRYPTION_KEY environment variable instead
-          --replace_durable <bool>       Replace the existing durable subscription instead of reporting a duplicate durable error
-
-Streaming Server Clustering Options:
-    --clustered <bool>                     Run the server in a clustered configuration (default: false)
-    --cluster_node_id <string>             ID of the node within the cluster if there is no stored ID (default: random UUID)
-    --cluster_bootstrap <bool>             Bootstrap the cluster if there is no existing state by electing self as leader (default: false)
-    --cluster_peers <string, ...>          Comma separated list of cluster peer node IDs to bootstrap cluster state
-    --cluster_log_path <string>            Directory to store log replication data
-    --cluster_log_cache_size <int>         Number of log entries to cache in memory to reduce disk IO (default: 512)
-    --cluster_log_snapshots <int>          Number of log snapshots to retain (default: 2)
-    --cluster_trailing_logs <int>          Number of log entries to leave after a snapshot and compaction
-    --cluster_sync <bool>                  Do a file sync after every write to the replication log and message store
-    --cluster_raft_logging <bool>          Enable logging from the Raft library (disabled by default)
-    --cluster_allow_add_remove_node <bool> Enable the ability to send NATS requests to the leader to add/remove cluster nodes
-
-Streaming Server File Store Options:
-    --file_compact_enabled <bool>        Enable file compaction
-    --file_compact_frag <int>            File fragmentation threshold for compaction
-    --file_compact_interval <int>        Minimum interval (in seconds) between file compactions
-    --file_compact_min_size <size>       Minimum file size for compaction
-    --file_buffer_size <size>            File buffer size (in bytes)
-    --file_crc <bool>                    Enable file CRC-32 checksum
-    --file_crc_poly <int>                Polynomial used to make the table used for CRC-32 checksum
-    --file_sync <bool>                   Enable File.Sync on Flush
-    --file_slice_max_msgs <int>          Maximum number of messages per file slice (subject to channel limits)
-    --file_slice_max_bytes <size>        Maximum file slice size - including index file (subject to channel limits)
-    --file_slice_max_age <duration>      Maximum file slice duration starting when the first message is stored (subject to channel limits)
-    --file_slice_archive_script <string> Path to script to use if you want to archive a file slice being removed
-    --file_fds_limit <int>               Store will try to use no more file descriptors than this given limit
-    --file_parallel_recovery <int>       On startup, number of channels that can be recovered in parallel
-    --file_truncate_bad_eof <bool>       Truncate files for which there is an unexpected EOF on recovery, dataloss may occur
-    --file_read_buffer_size <size>       Size of messages read ahead buffer (0 to disable)
-    --file_auto_sync <duration>          Interval at which the store should be automatically flushed and sync'ed on disk (<= 0 to disable)
-
-Streaming Server SQL Store Options:
-    --sql_driver <string>            Name of the SQL Driver ("mysql" or "postgres")
-    --sql_source <string>            Datasource used when opening an SQL connection to the database
-    --sql_no_caching <bool>          Enable/Disable caching for improved performance
-    --sql_max_open_conns <int>       Maximum number of opened connections to the database
-    --sql_bulk_insert_limit <int>    Maximum number of messages stored with a single SQL "INSERT" statement
-
-Streaming Server TLS Options:
-    -secure <bool>                   Use a TLS connection to the NATS server without
-                                     verification; weaker than specifying certificates.
-    -tls_client_key <string>         Client key for the streaming server
-    -tls_client_cert <string>        Client certificate for the streaming server
-    -tls_client_cacert <string>      Client certificate CA for the streaming server
-
-Streaming Server Logging Options:
-    -SD, --stan_debug=<bool>         Enable STAN debugging output
-    -SV, --stan_trace=<bool>         Trace the raw STAN protocol
-    -SDV                             Debug and trace STAN
-         --syslog_name               On Windows, when running several servers as a service, use this name for the event source
-    (See additional NATS logging options below)
-
-Embedded NATS Server Options:
-    -a, --addr <string>              Bind to host address (default: 0.0.0.0)
-    -p, --port <int>                 Use port for clients (default: 4222)
-    -P, --pid <string>               File to store PID
-    -m, --http_port <int>            Use port for http monitoring
-    -ms,--https_port <int>           Use port for https monitoring
-    -c, --config <string>            Configuration file
-
-Logging Options:
-    -l, --log <string>               File to redirect log output
-    -T, --logtime=<bool>             Timestamp log entries (default: true)
-    -s, --syslog <bool>              Enable syslog as log method
-    -r, --remote_syslog <string>     Syslog server addr (udp://localhost:514)
-    -D, --debug=<bool>               Enable debugging output
-    -V, --trace=<bool>               Trace the raw protocol
-    -DV                              Debug and trace
-
-Authorization Options:
-        --user <string>              User required for connections
-        --pass <string>              Password required for connections
-        --auth <string>              Authorization token required for connections
-
-TLS Options:
-        --tls=<bool>                 Enable TLS, do not verify clients (default: false)
-        --tlscert <string>           Server certificate file
-        --tlskey <string>            Private key for server certificate
-        --tlsverify=<bool>           Enable TLS, verify client certificates
-        --tlscacert <string>         Client certificate CA for verification
-
-NATS Clustering Options:
-        --routes <string, ...>       Routes to solicit and connect
-        --cluster <string>           Cluster URL for solicited routes
-
-Common Options:
-    -h, --help                       Show this message
-    -v, --version                    Show version
-        --help_tls                   TLS help.
-```
-
-# Configuration
-
-Further details on how to configure the NATS Streaming server can be found [here](https://docs.nats.io/nats-streaming-server/configuring).
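As a sketch of what such a file might look like (hypothetical values; see the linked documentation for the authoritative format), a configuration file passed via the `-sc` flag could contain:

```
# Hypothetical streaming server configuration (sketch only)
id: "test-cluster"
store: "file"
dir: "/data/msg"

store_limits: {
    max_channels: 100
    max_msgs: 10000
    max_age: "24h"
}
```

It could then be mounted into the container and referenced, for example: `docker run -d -v $(pwd)/stan.conf:/etc/stan.conf %%IMAGE%% -sc /etc/stan.conf` (paths here are illustrative).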

+ 0 - 1
nats-streaming/deprecated.md

@@ -1 +0,0 @@
-The NATS Streaming Server is being deprecated. Critical bug fixes and security fixes will be applied until June of 2023. NATS enabled applications requiring persistence should use [JetStream](https://docs.nats.io/jetstream/jetstream).

+ 0 - 1
nats-streaming/github-repo

@@ -1 +0,0 @@
-https://github.com/nats-io/nats-streaming-docker

+ 0 - 1
nats-streaming/license.md

@@ -1 +0,0 @@
-View [license information](https://github.com/nats-io/nats-streaming-server/blob/master/LICENSE) for the software contained in this image.

BIN
nats-streaming/logo.png


+ 0 - 1
nats-streaming/maintainer.md

@@ -1 +0,0 @@
-../nats/maintainer.md

+ 0 - 5
nats-streaming/metadata.json

@@ -1,5 +0,0 @@
-{
-  "hub": {
-    "categories": []
-  }
-}

+ 0 - 1
vault/README-short.txt

@@ -1 +0,0 @@
-Vault is a tool for securely accessing secrets via a unified interface and tight access control.

+ 0 - 129
vault/README.md

@@ -1,129 +0,0 @@
-<!--
-
-********************************************************************************
-
-WARNING:
-
-    DO NOT EDIT "vault/README.md"
-
-    IT IS AUTO-GENERATED
-
-    (from the other files in "vault/" combined with a set of templates)
-
-********************************************************************************
-
--->
-
-# **DEPRECATION NOTICE**
-
-Beginning with Vault 1.14, we will stop publishing official Docker Hub images and publish only our Verified Publisher images. Users of Docker images should pull from [hashicorp/vault](https://hub.docker.com/r/hashicorp/vault) instead of [vault](https://hub.docker.com/_/vault). Verified Publisher images can be found at https://hub.docker.com/r/hashicorp/vault.
-
-# Quick reference
-
--	**Maintained by**:  
-	[HashiCorp](https://github.com/hashicorp/docker-vault)
-
--	**Where to get help**:  
-	[the Docker Community Slack](https://dockr.ly/comm-slack), [Server Fault](https://serverfault.com/help/on-topic), [Unix & Linux](https://unix.stackexchange.com/help/on-topic), or [Stack Overflow](https://stackoverflow.com/help/on-topic)
-
-# Supported tags and respective `Dockerfile` links
-
-**No supported tags**
-
-# Quick reference (cont.)
-
--	**Where to file issues**:  
-	[https://github.com/hashicorp/docker-vault/issues](https://github.com/hashicorp/docker-vault/issues?q=)
-
--	**Supported architectures**: ([more info](https://github.com/docker-library/official-images#architectures-other-than-amd64))  
-	**No supported architectures**
-
--	**Published image artifact details**:  
-	[repo-info repo's `repos/vault/` directory](https://github.com/docker-library/repo-info/blob/master/repos/vault) ([history](https://github.com/docker-library/repo-info/commits/master/repos/vault))  
-	(image metadata, transfer size, etc)
-
--	**Image updates**:  
-	[official-images repo's `library/vault` label](https://github.com/docker-library/official-images/issues?q=label%3Alibrary%2Fvault)  
-	[official-images repo's `library/vault` file](https://github.com/docker-library/official-images/blob/master/library/vault) ([history](https://github.com/docker-library/official-images/commits/master/library/vault))
-
--	**Source of this description**:  
-	[docs repo's `vault/` directory](https://github.com/docker-library/docs/tree/master/vault) ([history](https://github.com/docker-library/docs/commits/master/vault))
-
-# Vault
-
-Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. For more information, please see:
-
--	[Vault documentation](https://www.vaultproject.io/)
--	[Vault on GitHub](https://github.com/hashicorp/vault)
-
-![logo](https://raw.githubusercontent.com/docker-library/docs/90d4d43bdfccd5cb21e5fd964d32b0074af0f357/vault/logo.svg?sanitize=true)
-
-# Using the Container
-
-We chose Alpine as a lightweight base with a reasonably small surface area for security concerns, but with enough functionality for development and interactive debugging.
-
-Vault always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. This binary is built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image.
-
-Running the Vault container with no arguments will give you a Vault server in [development mode](https://www.vaultproject.io/docs/concepts/dev-server.html). The provided entry point script will also look for Vault subcommands and run `vault` with that subcommand. For example, you can execute `docker run vault status` and it will run the `vault status` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `server` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
-
-The container exposes two optional `VOLUME`s:
-
--	`/vault/logs`, to use for writing persistent audit logs. By default nothing is written here; the `file` audit backend must be enabled with a path under this directory.
--	`/vault/file`, to use for writing persistent storage data when using the `file` data storage backend. By default nothing is written here (a `dev` server uses an in-memory data store); the `file` data storage backend must be enabled in Vault's configuration before the container is started.
-
-The container has a Vault configuration directory set up at `/vault/config` and the server will load any HCL or JSON configuration files placed here by binding a volume or by composing a new image and adding files. Alternatively, configuration can be added by passing the configuration JSON via environment variable `VAULT_LOCAL_CONFIG`.
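Since the server accepts both HCL and JSON, a file dropped into `/vault/config` might look like the following (a sketch with assumed values and filename, not an authoritative template):

```hcl
# Hypothetical /vault/config/server.hcl (sketch only)
storage "file" {
  path = "/vault/file"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = true
}

ui = true
```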
-
-## Memory Locking and 'setcap'
-
-The container will attempt to lock memory to prevent sensitive values from being swapped to disk and as a result must have `--cap-add=IPC_LOCK` provided to `docker run`. Since the Vault binary runs as a non-root user, `setcap` is used to give the binary the ability to lock memory. With some Docker storage plugins in some distributions this call will not work correctly; it seems to fail most often with AUFS. The memory locking behavior can be disabled by setting the `SKIP_SETCAP` environment variable to any non-empty value.
-
-## Running Vault for Development
-
-```console
-$ docker run --cap-add=IPC_LOCK -d --name=dev-vault vault
-```
-
-This runs a completely in-memory Vault server, which is useful for development but should not be used in production.
-
-When running in development mode, two additional options can be set via environment variables:
-
--	`VAULT_DEV_ROOT_TOKEN_ID`: This sets the ID of the initial generated root token to the given value
--	`VAULT_DEV_LISTEN_ADDRESS`: This sets the IP:port of the development server listener (defaults to 0.0.0.0:8200)
-
-As an example:
-
-```console
-$ docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:1234' vault
-```
-
-## Running Vault in Server Mode for Development
-
-```console
-$ docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"storage": {"file": {"path": "/vault/file"}}, "listener": [{"tcp": { "address": "0.0.0.0:8200", "tls_disable": true}}], "default_lease_ttl": "168h", "max_lease_ttl": "720h", "ui": true}' -p 8200:8200 vault server
-```
-
-This runs a Vault server with TLS disabled, using the `file` storage backend at path `/vault/file`, with a default secret lease duration of one week and a maximum of 30 days. Disabling TLS and using the `file` storage backend are not recommended for production use.
-
-Note the `--cap-add=IPC_LOCK`: this is required in order for Vault to lock memory, which prevents sensitive values from being swapped to disk. This is highly recommended. In a non-development environment, if you do not wish to use this functionality, you must add `"disable_mlock": true` to the configuration.
-
-At startup, the server will read configuration HCL and JSON files from `/vault/config` (any information passed into `VAULT_LOCAL_CONFIG` is written into `local.json` in this directory and read as part of reading the directory for configuration files). Please see Vault's [configuration documentation](https://www.vaultproject.io/docs/config/index.html) for a full list of options.
-
-We suggest volume mounting a directory into the Docker image in order to give both the configuration and TLS certificates to Vault. You can accomplish this with:
-
-```console
-$ docker run --volume config/:/vault/config ...
-```
-
-For more scalability and reliability, we suggest running containerized Vault in an orchestration environment like k8s or OpenShift.
-
-Since 0.6.3 this container also supports the `VAULT_REDIRECT_INTERFACE` and `VAULT_CLUSTER_INTERFACE` environment variables. If set, the IP addresses used for the redirect and cluster addresses in Vault's configuration will be the address of the named interface inside the container (e.g. `eth0`).
-
-# License
-
-View [license information](https://raw.githubusercontent.com/hashicorp/vault/master/LICENSE) for the software contained in this image.
-
-As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
-
-Some additional license information which was able to be auto-detected might be found in [the `repo-info` repository's `vault/` directory](https://github.com/docker-library/repo-info/tree/master/repos/vault).
-
-As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.

+ 0 - 68
vault/content.md

@@ -1,68 +0,0 @@
-# Vault
-
-Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. For more information, please see:
-
--	[Vault documentation](https://www.vaultproject.io/)
--	[Vault on GitHub](https://github.com/hashicorp/vault)
-
-%%LOGO%%
-
-# Using the Container
-
-We chose Alpine as a lightweight base with a reasonably small surface area for security concerns, but with enough functionality for development and interactive debugging.
-
-Vault always runs under [dumb-init](https://github.com/Yelp/dumb-init), which handles reaping zombie processes and forwards signals on to all processes running in the container. This binary is built by HashiCorp and signed with our [GPG key](https://www.hashicorp.com/security.html), so you can verify the signed package used to build a given base image.
-
-Running the Vault container with no arguments will give you a Vault server in [development mode](https://www.vaultproject.io/docs/concepts/dev-server.html). The provided entry point script will also look for Vault subcommands and run `vault` with that subcommand. For example, you can execute `docker run vault status` and it will run the `vault status` command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the `server` subcommand. Any other command gets `exec`-ed inside the container under `dumb-init`.
-
-The container exposes two optional `VOLUME`s:
-
--	`/vault/logs`, to use for writing persistent audit logs. By default nothing is written here; the `file` audit backend must be enabled with a path under this directory.
--	`/vault/file`, to use for writing persistent storage data when using the `file` data storage backend. By default nothing is written here (a `dev` server uses an in-memory data store); the `file` data storage backend must be enabled in Vault's configuration before the container is started.
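-
-For example, both volumes can be backed by host directories in a single run (a sketch; the host paths `vault-logs` and `vault-data` are illustrative, and the `file` audit device and `file` storage backend must still be enabled in Vault itself):
-
-```console
-$ docker run --cap-add=IPC_LOCK \
-    --volume "$(pwd)/vault-logs:/vault/logs" \
-    --volume "$(pwd)/vault-data:/vault/file" \
-    %%IMAGE%% server
-```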
-
-The container has a Vault configuration directory set up at `/vault/config` and the server will load any HCL or JSON configuration files placed here by binding a volume or by composing a new image and adding files. Alternatively, configuration can be added by passing the configuration JSON via environment variable `VAULT_LOCAL_CONFIG`.
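-
-As a minimal sketch of the environment-variable route (the configuration values here are illustrative, not a production setup):
-
-```console
-$ docker run --cap-add=IPC_LOCK \
-    -e 'VAULT_LOCAL_CONFIG={"storage": {"file": {"path": "/vault/file"}}}' \
-    %%IMAGE%% server
-```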
-
-## Memory Locking and 'setcap'
-
-The container will attempt to lock memory to prevent sensitive values from being swapped to disk and as a result must have `--cap-add=IPC_LOCK` provided to `docker run`. Since the Vault binary runs as a non-root user, `setcap` is used to give the binary the ability to lock memory. With some Docker storage plugins in some distributions this call will not work correctly; it seems to fail most often with AUFS. The memory locking behavior can be disabled by setting the `SKIP_SETCAP` environment variable to any non-empty value.
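-
-For example, on a storage driver where the `setcap` call fails, it can be skipped; note that the binary is then unable to lock memory, so server mode needs `disable_mlock` as well (a sketch, not recommended for production):
-
-```console
-$ docker run -e 'SKIP_SETCAP=1' -e 'VAULT_LOCAL_CONFIG={"storage": {"file": {"path": "/vault/file"}}, "listener": [{"tcp": {"address": "0.0.0.0:8200", "tls_disable": true}}], "disable_mlock": true}' %%IMAGE%% server
-```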
-
-## Running Vault for Development
-
-```console
-$ docker run --cap-add=IPC_LOCK -d --name=dev-vault %%IMAGE%%
-```
-
-This runs a completely in-memory Vault server, which is useful for development but should not be used in production.
-
-When running in development mode, two additional options can be set via environment variables:
-
--	`VAULT_DEV_ROOT_TOKEN_ID`: This sets the ID of the initial generated root token to the given value
--	`VAULT_DEV_LISTEN_ADDRESS`: This sets the IP:port of the development server listener (defaults to 0.0.0.0:8200)
-
-As an example:
-
-```console
-$ docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:1234' %%IMAGE%%
-```
-
-## Running Vault in Server Mode for Development
-
-```console
-$ docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"storage": {"file": {"path": "/vault/file"}}, "listener": [{"tcp": { "address": "0.0.0.0:8200", "tls_disable": true}}], "default_lease_ttl": "168h", "max_lease_ttl": "720h", "ui": true}' -p 8200:8200 %%IMAGE%% server
-```
-
-This runs a Vault server with TLS disabled, the `file` storage backend at path `/vault/file`, a default secret lease duration of one week, and a maximum of 30 days. Disabling TLS and using the `file` storage backend are not recommended for production use.
-
-Note the `--cap-add=IPC_LOCK`: this is required in order for Vault to lock memory, which prevents it from being swapped to disk. This is highly recommended. In a non-development environment, if you do not wish to use this functionality, you must set `"disable_mlock": true` in the configuration information.
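-
-With the port published as above, the entry point's subcommand handling (described under "Using the Container") lets you query the server from a second container; `host.docker.internal` is an assumption here and depends on your Docker platform and network setup:
-
-```console
-$ docker run --rm -e 'VAULT_ADDR=http://host.docker.internal:8200' %%IMAGE%% status
-```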
-
-At startup, the server will read configuration HCL and JSON files from `/vault/config` (any information passed into `VAULT_LOCAL_CONFIG` is written into `local.json` in this directory and read as part of reading the directory for configuration files). Please see Vault's [configuration documentation](https://www.vaultproject.io/docs/config/index.html) for a full list of options.
-
-We suggest volume mounting a directory into the Docker image in order to give both the configuration and TLS certificates to Vault. You can accomplish this with:
-
-```console
-$ docker run --volume "$(pwd)/config:/vault/config" ...
-```
-
-For more scalability and reliability, we suggest running containerized Vault in an orchestration environment like Kubernetes or OpenShift.
-
-Since 0.6.3, this container also supports the `VAULT_REDIRECT_INTERFACE` and `VAULT_CLUSTER_INTERFACE` environment variables. If set, the IP addresses used for the redirect and cluster addresses in Vault's configuration will be the address of the named interface inside the container (e.g. `eth0`).
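-
-For example, to have Vault derive both addresses from the container's `eth0` interface (the interface name is illustrative):
-
-```console
-$ docker run --cap-add=IPC_LOCK -e 'VAULT_REDIRECT_INTERFACE=eth0' -e 'VAULT_CLUSTER_INTERFACE=eth0' %%IMAGE%% server
-```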

+ 0 - 1
vault/deprecated.md

@@ -1 +0,0 @@
-Upcoming in Vault 1.14, we will stop publishing official Docker Hub images and publish only our Verified Publisher images. Users of Docker images should pull from [hashicorp/vault](https://hub.docker.com/r/hashicorp/vault) instead of [vault](https://hub.docker.com/_/vault).

+ 0 - 1
vault/github-repo

@@ -1 +0,0 @@
-https://github.com/hashicorp/docker-vault

+ 0 - 1
vault/license.md

@@ -1 +0,0 @@
-View [license information](https://raw.githubusercontent.com/hashicorp/vault/master/LICENSE) for the software contained in this image.

+ 0 - 6
vault/logo.svg

@@ -1,6 +0,0 @@
-<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 250 250" width="250" height="250">
-  <g fill="none">
-    <path fill="#000" d="M225 25L124.655 225 25 25"/>
-    <path fill="#FFF" d="M101.897 65h11.724v11.724H101.9V65zm17.24 0h11.725v11.724h-11.724V65zm17.242 0h11.72v11.724h-11.72V65zm-34.486 17.586h11.724V94.31h-11.724V82.586zm17.24 0h11.725V94.31h-11.724V82.586zm17.242 0H148.1V94.31h-11.72V82.586zm-34.483 17.242h11.724v11.724h-11.724V99.828zm17.24 0h11.725v11.724h-11.725V99.828zm0 17.586h11.725v11.724h-11.725v-11.724zm17.587-17.586h11.725v11.724H136.72V99.828z"/>
-  </g>
-</svg>

+ 0 - 1
vault/maintainer.md

@@ -1 +0,0 @@
-../.common-templates/maintainer-hashicorp.md

+ 0 - 7
vault/metadata.json

@@ -1,7 +0,0 @@
-{
-  "hub": {
-    "categories": [
-      "security"
-    ]
-  }
-}