
Merge pull request #6984 from docker/bump-1.25.0-rc3

Bump 1.25.0-rc3
Djordje Lukic 6 years ago
commit ea22d5821c

+ 3 - 0
.github/ISSUE_TEMPLATE/bug_report.md

@@ -1,6 +1,9 @@
 ---
 name: Bug report
 about: Report a bug encountered while using docker-compose
+title: ''
+labels: kind/bug
+assignees: ''
 
 ---
 

+ 3 - 0
.github/ISSUE_TEMPLATE/feature_request.md

@@ -1,6 +1,9 @@
 ---
 name: Feature request
 about: Suggest an idea to improve Compose
+title: ''
+labels: kind/feature
+assignees: ''
 
 ---
 

+ 3 - 0
.github/ISSUE_TEMPLATE/question-about-using-compose.md

@@ -1,6 +1,9 @@
 ---
 name: Question about using Compose
 about: This is not the appropriate channel
+title: ''
+labels: kind/question
+assignees: ''
 
 ---
 

+ 59 - 0
.github/stale.yml

@@ -0,0 +1,59 @@
+# Configuration for probot-stale - https://github.com/probot/stale
+
+# Number of days of inactivity before an Issue or Pull Request becomes stale
+daysUntilStale: 180
+
+# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
+# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
+daysUntilClose: 7
+
+# Only issues or pull requests with all of these labels are check if stale. Defaults to `[]` (disabled)
+onlyLabels: []
+
+# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
+exemptLabels:
+  - kind/feature
+
+# Set to true to ignore issues in a project (defaults to false)
+exemptProjects: false
+
+# Set to true to ignore issues in a milestone (defaults to false)
+exemptMilestones: false
+
+# Set to true to ignore issues with an assignee (defaults to false)
+exemptAssignees: true
+
+# Label to use when marking as stale
+staleLabel: stale
+
+# Comment to post when marking as stale. Set to `false` to disable
+markComment: >
+  This issue has been automatically marked as stale because it has not had
+  recent activity. It will be closed if no further activity occurs. Thank you
+  for your contributions.
+
+# Comment to post when removing the stale label.
+unmarkComment: >
+  This issue has been automatically marked as not stale anymore due to the recent activity.
+
+# Comment to post when closing a stale Issue or Pull Request.
+closeComment: >
+  This issue has been automatically closed because it had not recent activity during the stale period.
+
+# Limit the number of actions per hour, from 1-30. Default is 30
+limitPerRun: 30
+
+# Limit to only `issues` or `pulls`
+only: issues
+
+# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls':
+# pulls:
+#   daysUntilStale: 30
+#   markComment: >
+#     This pull request has been automatically marked as stale because it has not had
+#     recent activity. It will be closed if no further activity occurs. Thank you
+#     for your contributions.
+
+# issues:
+#   exemptLabels:
+#     - confirmed

+ 15 - 3
CHANGELOG.md

@@ -1,18 +1,24 @@
 Change log
 ==========
 
-1.25.0-rc2 (2019-08-06)
+1.25.0-rc3 (2019-10-28)
 -------------------
 
 ### Features
 
+- Add BuildKit support, use `DOCKER_BUILDKIT=1` and `COMPOSE_NATIVE_BUILDER=1`
+
+- Bump paramiko to 2.6.0
+
+- Add working dir, config files and env file in service labels
+
 - Add tag `docker-compose:latest`
 
 - Add `docker-compose:<version>-alpine` image/tag
 
 - Add `docker-compose:<version>-debian` image/tag
 
-- Bumped `docker-py` 4.0.1
+- Bumped `docker-py` 4.1.0
 
 - Supports `requests` up to 2.22.0 version
 
@@ -28,7 +34,7 @@ Change log
 
 - Added `--no-interpolate` to `docker-compose config`
 
-- Bump OpenSSL for macOS build (`1.1.0j` to `1.1.1a`)
+- Bump OpenSSL for macOS build (`1.1.0j` to `1.1.1c`)
 
 - Added `--no-rm` to `build` command
 
@@ -48,6 +54,12 @@ Change log
 
 ### Bugfixes
 
+- Fix same file 'extends' optimization
+
+- Use python POSIX support to get tty size
+
+- Format image size as decimal to be align with Docker CLI
+
 - Fixed stdin_open
 
 - Fixed `--remove-orphans` when used with `up --no-start`

+ 7 - 4
Dockerfile

@@ -2,8 +2,8 @@ ARG DOCKER_VERSION=18.09.7
 ARG PYTHON_VERSION=3.7.4
 ARG BUILD_ALPINE_VERSION=3.10
 ARG BUILD_DEBIAN_VERSION=slim-stretch
-ARG RUNTIME_ALPINE_VERSION=3.10.0
-ARG RUNTIME_DEBIAN_VERSION=stretch-20190708-slim
+ARG RUNTIME_ALPINE_VERSION=3.10.1
+ARG RUNTIME_DEBIAN_VERSION=stretch-20190812-slim
 
 ARG BUILD_PLATFORM=alpine
 
@@ -30,15 +30,18 @@ RUN apk add --no-cache \
 ENV BUILD_BOOTLOADER=1
 
 FROM python:${PYTHON_VERSION}-${BUILD_DEBIAN_VERSION} AS build-debian
-RUN apt-get update && apt-get install -y \
+RUN apt-get update && apt-get install --no-install-recommends -y \
     curl \
     gcc \
     git \
     libc-dev \
+    libffi-dev \
     libgcc-6-dev \
+    libssl-dev \
     make \
     openssl \
-    python2.7-dev
+    python2.7-dev \
+    zlib1g-dev
 
 FROM build-${BUILD_PLATFORM} AS build
 COPY docker-compose-entrypoint.sh /usr/local/bin/

+ 1 - 1
Dockerfile.s390x

@@ -1,4 +1,4 @@
-FROM s390x/alpine:3.6
+FROM s390x/alpine:3.10.1
 
 ARG COMPOSE_VERSION=1.16.1
 

+ 3 - 3
Jenkinsfile

@@ -2,7 +2,7 @@
 
 def buildImage = { String baseImage ->
   def image
-  wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
+  wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
     stage("build image for \"${baseImage}\"") {
       checkout(scm)
       def imageName = "dockerbuildbot/compose:${baseImage}-${gitCommit()}"
@@ -29,7 +29,7 @@ def buildImage = { String baseImage ->
 
 def get_versions = { String imageId, int number ->
   def docker_versions
-  wrappedNode(label: "ubuntu && !zfs") {
+  wrappedNode(label: "ubuntu && amd64 && !zfs") {
     def result = sh(script: """docker run --rm \\
         --entrypoint=/code/.tox/py27/bin/python \\
         ${imageId} \\
@@ -55,7 +55,7 @@ def runTests = { Map settings ->
   }
 
   { ->
-    wrappedNode(label: "ubuntu && !zfs", cleanWorkspace: true) {
+    wrappedNode(label: "ubuntu && amd64 && !zfs", cleanWorkspace: true) {
       stage("test python=${pythonVersions} / docker=${dockerVersions} / baseImage=${baseImage}") {
         checkout(scm)
         def storageDriver = sh(script: 'docker info | awk -F \': \' \'$1 == "Storage Driver" { print $2; exit }\'', returnStdout: true).trim()

+ 1 - 1
README.md

@@ -6,7 +6,7 @@ Compose is a tool for defining and running multi-container Docker applications.
 With Compose, you use a Compose file to configure your application's services.
 Then, using a single command, you create and start all the services
 from your configuration. To learn more about all the features of Compose
-see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features).
+see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/index.md#features).
 
 Compose is great for development, testing, and staging environments, as well as
 CI workflows. You can learn more about each case in

+ 1 - 1
compose/__init__.py

@@ -1,4 +1,4 @@
 from __future__ import absolute_import
 from __future__ import unicode_literals
 
-__version__ = '1.25.0-rc2'
+__version__ = '1.25.0-rc3'

+ 27 - 3
compose/cli/command.py

@@ -13,6 +13,9 @@ from .. import config
 from .. import parallel
 from ..config.environment import Environment
 from ..const import API_VERSIONS
+from ..const import LABEL_CONFIG_FILES
+from ..const import LABEL_ENVIRONMENT_FILE
+from ..const import LABEL_WORKING_DIR
 from ..project import Project
 from .docker_client import docker_client
 from .docker_client import get_tls_version
@@ -57,7 +60,8 @@ def project_from_options(project_dir, options, additional_options={}):
         environment=environment,
         override_dir=override_dir,
         compatibility=options.get('--compatibility'),
-        interpolate=(not additional_options.get('--no-interpolate'))
+        interpolate=(not additional_options.get('--no-interpolate')),
+        environment_file=environment_file
     )
 
 
@@ -125,7 +129,7 @@ def get_client(environment, verbose=False, version=None, tls_config=None, host=N
 
 def get_project(project_dir, config_path=None, project_name=None, verbose=False,
                 host=None, tls_config=None, environment=None, override_dir=None,
-                compatibility=False, interpolate=True):
+                compatibility=False, interpolate=True, environment_file=None):
     if not environment:
         environment = Environment.from_env_file(project_dir)
     config_details = config.find(project_dir, config_path, environment, override_dir)
@@ -145,10 +149,30 @@ def get_project(project_dir, config_path=None, project_name=None, verbose=False,
 
     with errors.handle_connection_errors(client):
         return Project.from_config(
-            project_name, config_data, client, environment.get('DOCKER_DEFAULT_PLATFORM')
+            project_name,
+            config_data,
+            client,
+            environment.get('DOCKER_DEFAULT_PLATFORM'),
+            execution_context_labels(config_details, environment_file),
         )
 
 
+def execution_context_labels(config_details, environment_file):
+    extra_labels = [
+        '{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(config_details.working_dir)),
+        '{0}={1}'.format(LABEL_CONFIG_FILES, config_files_label(config_details)),
+    ]
+    if environment_file is not None:
+        extra_labels.append('{0}={1}'.format(LABEL_ENVIRONMENT_FILE,
+                                             os.path.normpath(environment_file)))
+    return extra_labels
+
+
+def config_files_label(config_details):
+    return ",".join(
+        map(str, (os.path.normpath(c.filename) for c in config_details.config_files)))
+
+
 def get_project_name(working_dir, project_name=None, environment=None):
     def normalize_name(name):
         return re.sub(r'[^-_a-z0-9]', '', name.lower())

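The `execution_context_labels` helper added above can be exercised on its own; a minimal sketch, where the namedtuples are stand-ins for compose's own `ConfigDetails`/`ConfigFile` types:

```python
import os
from collections import namedtuple

# Stand-ins for compose's config types (assumption: the real objects expose
# `working_dir` and `config_files`, each config file having a `filename`).
ConfigFile = namedtuple('ConfigFile', 'filename')
ConfigDetails = namedtuple('ConfigDetails', 'working_dir config_files')

LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'


def execution_context_labels(config_details, environment_file):
    # Mirrors the helper added in this commit: record where the project was
    # started from, which config files were used, and (optionally) the env file.
    extra_labels = [
        '{0}={1}'.format(LABEL_WORKING_DIR, os.path.abspath(config_details.working_dir)),
        '{0}={1}'.format(LABEL_CONFIG_FILES, ','.join(
            os.path.normpath(c.filename) for c in config_details.config_files)),
    ]
    if environment_file is not None:
        extra_labels.append('{0}={1}'.format(
            LABEL_ENVIRONMENT_FILE, os.path.normpath(environment_file)))
    return extra_labels


details = ConfigDetails('.', [ConfigFile('./docker-compose.yml')])
print(execution_context_labels(details, './.env'))
```

These labels end up on every container the project creates, which is what lets later tooling recover the working dir and config files from `docker inspect` output.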
+ 14 - 7
compose/cli/formatter.py

@@ -2,25 +2,32 @@ from __future__ import absolute_import
 from __future__ import unicode_literals
 
 import logging
-import os
+import shutil
 
 import six
 import texttable
 
 from compose.cli import colors
 
+if hasattr(shutil, "get_terminal_size"):
+    from shutil import get_terminal_size
+else:
+    from backports.shutil_get_terminal_size import get_terminal_size
+
 
 def get_tty_width():
-    tty_size = os.popen('stty size 2> /dev/null', 'r').read().split()
-    if len(tty_size) != 2:
+    try:
+        width, _ = get_terminal_size()
+        return int(width)
+    except OSError:
         return 0
-    _, width = tty_size
-    return int(width)
 
 
-class Formatter(object):
+class Formatter:
     """Format tabular data for printing."""
-    def table(self, headers, rows):
+
+    @staticmethod
+    def table(headers, rows):
         table = texttable.Texttable(max_width=get_tty_width())
         table.set_cols_dtype(['t' for h in headers])
         table.add_rows([headers] + rows)

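The rewritten `get_tty_width` above leans on the standard library instead of shelling out to `stty size`; a runnable sketch of the same approach:

```python
import shutil


def get_tty_width():
    # shutil.get_terminal_size (Python 3.3+; backported on py2 above) consults
    # the COLUMNS env var first, then the terminal, then a (80, 24) fallback,
    # so it works even when stdout is not attached to a tty.
    try:
        width, _ = shutil.get_terminal_size()
        return int(width)
    except OSError:
        return 0


print(get_tty_width())
```

Unlike the old `os.popen('stty size ...')` call, this never spawns a subprocess and behaves the same on Windows.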
+ 7 - 1
compose/cli/log_printer.py

@@ -230,7 +230,13 @@ def watch_events(thread_map, event_stream, presenters, thread_args):
 
         # Container crashed so we should reattach to it
         if event['id'] in crashed_containers:
-            event['container'].attach_log_stream()
+            container = event['container']
+            if not container.is_restarting:
+                try:
+                    container.attach_log_stream()
+                except APIError:
+                    # Just ignore errors when reattaching to already crashed containers
+                    pass
             crashed_containers.remove(event['id'])
 
         thread_map[event['id']] = build_thread(

+ 17 - 7
compose/cli/main.py

@@ -263,14 +263,17 @@ class TopLevelCommand(object):
         Usage: build [options] [--build-arg key=val...] [SERVICE...]
 
         Options:
+            --build-arg key=val     Set build-time variables for services.
             --compress              Compress the build context using gzip.
             --force-rm              Always remove intermediate containers.
+            -m, --memory MEM        Set memory limit for the build container.
             --no-cache              Do not use cache when building the image.
             --no-rm                 Do not remove intermediate containers after a successful build.
-            --pull                  Always attempt to pull a newer version of the image.
-            -m, --memory MEM        Sets memory limit for the build container.
-            --build-arg key=val     Set build-time variables for services.
             --parallel              Build images in parallel.
+            --progress string       Set type of progress output (auto, plain, tty).
+                                    EXPERIMENTAL flag for native builder.
+                                    To enable, run with COMPOSE_DOCKER_CLI_BUILD=1)
+            --pull                  Always attempt to pull a newer version of the image.
             -q, --quiet             Don't print anything to STDOUT
         """
         service_names = options['SERVICE']
@@ -283,6 +286,8 @@ class TopLevelCommand(object):
                 )
             build_args = resolve_build_args(build_args, self.toplevel_environment)
 
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
+
         self.project.build(
             service_names=options['SERVICE'],
             no_cache=bool(options.get('--no-cache', False)),
@@ -293,7 +298,9 @@ class TopLevelCommand(object):
             build_args=build_args,
             gzip=options.get('--compress', False),
             parallel_build=options.get('--parallel', False),
-            silent=options.get('--quiet', False)
+            silent=options.get('--quiet', False),
+            cli=native_builder,
+            progress=options.get('--progress'),
         )
 
     def bundle(self, options):
@@ -613,7 +620,7 @@ class TopLevelCommand(object):
                 image_id,
                 size
             ])
-        print(Formatter().table(headers, rows))
+        print(Formatter.table(headers, rows))
 
     def kill(self, options):
         """
@@ -747,7 +754,7 @@ class TopLevelCommand(object):
                     container.human_readable_state,
                     container.human_readable_ports,
                 ])
-            print(Formatter().table(headers, rows))
+            print(Formatter.table(headers, rows))
 
     def pull(self, options):
         """
@@ -987,7 +994,7 @@ class TopLevelCommand(object):
                 rows.append(process)
 
             print(container.name)
-            print(Formatter().table(headers, rows))
+            print(Formatter.table(headers, rows))
 
     def unpause(self, options):
         """
@@ -1071,6 +1078,8 @@ class TopLevelCommand(object):
         for excluded in [x for x in opts if options.get(x) and no_start]:
             raise UserError('--no-start and {} cannot be combined.'.format(excluded))
 
+        native_builder = self.toplevel_environment.get_boolean('COMPOSE_DOCKER_CLI_BUILD')
+
         with up_shutdown_context(self.project, service_names, timeout, detached):
             warn_for_swarm_mode(self.project.client)
 
@@ -1090,6 +1099,7 @@ class TopLevelCommand(object):
                     reset_container_image=rebuild,
                     renew_anonymous_volumes=options.get('--renew-anon-volumes'),
                     silent=options.get('--quiet-pull'),
+                    cli=native_builder,
                 )
 
             try:

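Both `build` and `up` above gate the native builder on the same `COMPOSE_DOCKER_CLI_BUILD` toggle. A simplified stand-in for `Environment.get_boolean` (the real parser lives in `compose.config.environment`; the accepted truthy spellings here are an assumption) shows how such a toggle reads:

```python
import os


def get_boolean(name, default=False):
    # Hypothetical stand-in: read an environment variable and interpret the
    # common truthy spellings. Unset variables fall back to the default.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.lower() in ('1', 'true', 'yes', 'y')


os.environ['COMPOSE_DOCKER_CLI_BUILD'] = '1'
print(get_boolean('COMPOSE_DOCKER_CLI_BUILD'))
```

When the flag is truthy, compose hands the build to the docker CLI (and thus BuildKit) instead of its bundled builder, which is why `--parallel` and `--compress` are warned about as no-ops in that mode.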
+ 2 - 2
compose/cli/utils.py

@@ -133,12 +133,12 @@ def generate_user_agent():
 
 def human_readable_file_size(size):
     suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB', ]
-    order = int(math.log(size, 2) / 10) if size else 0
+    order = int(math.log(size, 1000)) if size else 0
     if order >= len(suffixes):
         order = len(suffixes) - 1
 
     return '{0:.4g} {1}'.format(
-        size / float(1 << (order * 10)),
+        size / pow(10, order * 3),
         suffixes[order]
     )
 

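The size-formatting change above (the "format image size as decimal" bugfix from the changelog) switches from binary to decimal units; restated as a standalone sketch:

```python
import math


def human_readable_file_size(size):
    # Decimal (SI) units, matching `docker images`: 1 kB = 1000 B.
    # The old code used binary math (1 << 10 = 1024), so the same byte count
    # rendered smaller than what the Docker CLI reported.
    suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB']
    order = int(math.log(size, 1000)) if size else 0
    order = min(order, len(suffixes) - 1)
    return '{0:.4g} {1}'.format(size / pow(10, order * 3), suffixes[order])


# 1536000 bytes: decimal math gives '1.536 MB'; the old binary math gave
# 1536000 / 1048576 = '1.465 MB'.
print(human_readable_file_size(1536000))
```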
+ 1 - 1
compose/config/config.py

@@ -615,7 +615,7 @@ class ServiceExtendsResolver(object):
         config_path = self.get_extended_config_path(extends)
         service_name = extends['service']
 
-        if config_path == self.config_file.filename:
+        if config_path == os.path.abspath(self.config_file.filename):
             try:
                 service_config = self.config_file.get_service(service_name)
             except KeyError:

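The one-line `extends` fix above (the "same file 'extends' optimization" bugfix) hinges on path normalization: the config file may be stored under a relative name while `get_extended_config_path` returns an absolute path, so a raw string comparison misses the same-file case. A small illustration:

```python
import os

# A relative filename vs. the absolute path the resolver produces for it.
current = 'docker-compose.yml'
extended = os.path.abspath('docker-compose.yml')

# Raw comparison: the strings differ even though they name the same file.
raw_match = (current == extended)

# Normalized comparison, as in the fix: both sides made absolute first.
normalized_match = (os.path.abspath(current) == extended)

print(raw_match, normalized_match)
```

Without the fix the resolver fell through to re-loading the file from disk instead of taking the in-memory fast path.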
+ 3 - 0
compose/const.py

@@ -11,6 +11,9 @@ IS_WINDOWS_PLATFORM = (sys.platform == "win32")
 LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
 LABEL_ONE_OFF = 'com.docker.compose.oneoff'
 LABEL_PROJECT = 'com.docker.compose.project'
+LABEL_WORKING_DIR = 'com.docker.compose.project.working_dir'
+LABEL_CONFIG_FILES = 'com.docker.compose.project.config_files'
+LABEL_ENVIRONMENT_FILE = 'com.docker.compose.project.environment_file'
 LABEL_SERVICE = 'com.docker.compose.service'
 LABEL_NETWORK = 'com.docker.compose.network'
 LABEL_VERSION = 'com.docker.compose.version'

+ 1 - 1
compose/network.py

@@ -226,7 +226,7 @@ def check_remote_network_config(remote, local):
         raise NetworkConfigChangedError(local.true_name, 'enable_ipv6')
 
     local_labels = local.labels or {}
-    remote_labels = remote.get('Labels', {})
+    remote_labels = remote.get('Labels') or {}
     for k in set.union(set(remote_labels.keys()), set(local_labels.keys())):
         if k.startswith('com.docker.'):  # We are only interested in user-specified labels
             continue

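The `remote.get('Labels') or {}` change above guards against an explicit `null` in the inspect payload, which `dict.get`'s default argument does not cover: the default only applies when the key is *missing*, not when it is present with value `None`. A two-line illustration:

```python
# The Docker API can return {"Labels": null} for a network with no labels.
remote = {'Labels': None}

labels_default = remote.get('Labels', {})    # key exists -> default unused -> None
labels_guarded = remote.get('Labels') or {}  # None (or missing) collapses to {}

print(labels_default, labels_guarded)
```

With the old form, the subsequent `remote_labels.keys()` call raised `AttributeError` on `None`.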
+ 29 - 5
compose/project.py

@@ -6,6 +6,7 @@ import logging
 import operator
 import re
 from functools import reduce
+from os import path
 
 import enum
 import six
@@ -82,7 +83,7 @@ class Project(object):
         return labels
 
     @classmethod
-    def from_config(cls, name, config_data, client, default_platform=None):
+    def from_config(cls, name, config_data, client, default_platform=None, extra_labels=[]):
         """
         Construct a Project from a config.Config object.
         """
@@ -135,6 +136,7 @@ class Project(object):
                     pid_mode=pid_mode,
                     platform=service_dict.pop('platform', None),
                     default_platform=default_platform,
+                    extra_labels=extra_labels,
                     **service_dict)
             )
 
@@ -355,7 +357,8 @@ class Project(object):
         return containers
 
     def build(self, service_names=None, no_cache=False, pull=False, force_rm=False, memory=None,
-              build_args=None, gzip=False, parallel_build=False, rm=True, silent=False):
+              build_args=None, gzip=False, parallel_build=False, rm=True, silent=False, cli=False,
+              progress=None):
 
         services = []
         for service in self.get_services(service_names):
@@ -364,8 +367,17 @@ class Project(object):
             elif not silent:
                 log.info('%s uses an image, skipping' % service.name)
 
+        if cli:
+            log.warning("Native build is an experimental feature and could change at any time")
+            if parallel_build:
+                log.warning("Flag '--parallel' is ignored when building with "
+                            "COMPOSE_DOCKER_CLI_BUILD=1")
+            if gzip:
+                log.warning("Flag '--compress' is ignored when building with "
+                            "COMPOSE_DOCKER_CLI_BUILD=1")
+
         def build_service(service):
-            service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent)
+            service.build(no_cache, pull, force_rm, memory, build_args, gzip, rm, silent, cli, progress)
         if parallel_build:
             _, errors = parallel.parallel_execute(
                 services,
@@ -509,8 +521,12 @@ class Project(object):
            reset_container_image=False,
            renew_anonymous_volumes=False,
            silent=False,
+           cli=False,
            ):
 
+        if cli:
+            log.warning("Native build is an experimental feature and could change at any time")
+
         self.initialize()
         if not ignore_orphans:
             self.find_orphan_containers(remove_orphans)
@@ -523,7 +539,7 @@ class Project(object):
             include_deps=start_deps)
 
         for svc in services:
-            svc.ensure_image_exists(do_build=do_build, silent=silent)
+            svc.ensure_image_exists(do_build=do_build, silent=silent, cli=cli)
         plans = self._get_convergence_plans(
             services, strategy, always_recreate_deps=always_recreate_deps)
 
@@ -793,7 +809,15 @@ def get_secrets(service, service_secrets, secret_defs):
                 )
             )
 
-        secrets.append({'secret': secret, 'file': secret_def.get('file')})
+        secret_file = secret_def.get('file')
+        if not path.isfile(str(secret_file)):
+            log.warning(
+                "Service \"{service}\" uses an undefined secret file \"{secret_file}\", "
+                "the following file should be created \"{secret_file}\"".format(
+                    service=service, secret_file=secret_file
+                )
+            )
+        secrets.append({'secret': secret, 'file': secret_file})
 
     return secrets
 

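The warning added to `get_secrets` above can be sketched in isolation. `check_secret_file` here is a hypothetical helper, not compose's actual function name; it captures the guard added in this hunk:

```python
import logging
from os import path

log = logging.getLogger(__name__)


def check_secret_file(service, secret_file):
    # Mirrors the guard added to get_secrets(): warn, rather than fail, when a
    # file-based secret points at a file that does not exist yet. str() keeps
    # the isfile() check safe when no 'file' key was given (secret_file=None).
    if not path.isfile(str(secret_file)):
        log.warning(
            'Service "%s" uses an undefined secret file "%s", '
            'the following file should be created "%s"',
            service, secret_file, secret_file)
        return False
    return True


print(check_secret_file('web', '/definitely/missing/secret.txt'))
```

The secret is still appended to the list either way; the check only makes the failure mode visible before container creation.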
+ 179 - 33
compose/service.py

@@ -2,10 +2,12 @@ from __future__ import absolute_import
 from __future__ import unicode_literals
 
 import itertools
+import json
 import logging
 import os
 import re
 import sys
+import tempfile
 from collections import namedtuple
 from collections import OrderedDict
 from operator import attrgetter
@@ -59,8 +61,12 @@ from .utils import parse_seconds_float
 from .utils import truncate_id
 from .utils import unique_everseen
 
-log = logging.getLogger(__name__)
+if six.PY2:
+    import subprocess32 as subprocess
+else:
+    import subprocess
 
+log = logging.getLogger(__name__)
 
 HOST_CONFIG_KEYS = [
     'cap_add',
@@ -130,7 +136,6 @@ class NoSuchImageError(Exception):
 
 ServiceName = namedtuple('ServiceName', 'project service number')
 
-
 ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')
 
 
@@ -166,20 +171,21 @@ class BuildAction(enum.Enum):
 
 
 class Service(object):
     def __init__(
-        self,
-        name,
-        client=None,
-        project='default',
-        use_networking=False,
-        links=None,
-        volumes_from=None,
-        network_mode=None,
-        networks=None,
-        secrets=None,
-        scale=1,
-        pid_mode=None,
-        default_platform=None,
-        **options
+            self,
+            name,
+            client=None,
+            project='default',
+            use_networking=False,
+            links=None,
+            volumes_from=None,
+            network_mode=None,
+            networks=None,
+            secrets=None,
+            scale=1,
+            pid_mode=None,
+            default_platform=None,
+            extra_labels=[],
+            **options
     ):
         self.name = name
         self.client = client
@@ -194,6 +200,7 @@ class Service(object):
         self.scale_num = scale
         self.default_platform = default_platform
         self.options = options
+        self.extra_labels = extra_labels
 
     def __repr__(self):
         return '<Service: {}>'.format(self.name)
@@ -208,7 +215,7 @@ class Service(object):
             for container in self.client.containers(
                 all=stopped,
                 filters=filters)])
-        )
+                      )
         if result:
             return result
 
@@ -338,9 +345,9 @@ class Service(object):
             raise OperationFailedError("Cannot create container for service %s: %s" %
                                        (self.name, ex.explanation))
 
-    def ensure_image_exists(self, do_build=BuildAction.none, silent=False):
+    def ensure_image_exists(self, do_build=BuildAction.none, silent=False, cli=False):
         if self.can_be_built() and do_build == BuildAction.force:
-            self.build()
+            self.build(cli=cli)
             return
 
         try:
@@ -356,7 +363,7 @@ class Service(object):
         if do_build == BuildAction.skip:
             raise NeedsBuildError(self)
 
-        self.build()
+        self.build(cli=cli)
         log.warning(
             "Image for service {} was built because it did not already exist. To "
             "rebuild this image you must use `docker-compose build` or "
@@ -397,8 +404,8 @@ class Service(object):
             return ConvergencePlan('start', containers)
 
         if (
-            strategy is ConvergenceStrategy.always or
-            self._containers_have_diverged(containers)
+                strategy is ConvergenceStrategy.always or
+                self._containers_have_diverged(containers)
         ):
             return ConvergencePlan('recreate', containers)
 
@@ -475,6 +482,7 @@ class Service(object):
                 container, timeout=timeout, attach_logs=not detached,
                 start_new_container=start, renew_anonymous_volumes=renew_anonymous_volumes
             )
+
         containers, errors = parallel_execute(
             containers,
             recreate,
@@ -616,6 +624,8 @@ class Service(object):
         try:
             container.start()
         except APIError as ex:
+            if "driver failed programming external connectivity" in ex.explanation:
+                log.warn("Host is already in use by another container")
             raise OperationFailedError("Cannot start service %s: %s" % (self.name, ex.explanation))
         return container
 
@@ -696,11 +706,11 @@ class Service(object):
         net_name = self.network_mode.service_name
         pid_namespace = self.pid_mode.service_name
         return (
-            self.get_linked_service_names() +
-            self.get_volumes_from_names() +
-            ([net_name] if net_name else []) +
-            ([pid_namespace] if pid_namespace else []) +
-            list(self.options.get('depends_on', {}).keys())
+                self.get_linked_service_names() +
+                self.get_volumes_from_names() +
+                ([net_name] if net_name else []) +
+                ([pid_namespace] if pid_namespace else []) +
+                list(self.options.get('depends_on', {}).keys())
         )
 
     def get_dependency_configs(self):
@@ -890,7 +900,7 @@ class Service(object):
 
         container_options['labels'] = build_container_labels(
             container_options.get('labels', {}),
-            self.labels(one_off=one_off),
+            self.labels(one_off=one_off) + self.extra_labels,
             number,
             self.config_hash if add_config_hash else None,
             slug
@@ -1049,7 +1059,7 @@ class Service(object):
         return [build_spec(secret) for secret in self.secrets]
 
     def build(self, no_cache=False, pull=False, force_rm=False, memory=None, build_args_override=None,
-              gzip=False, rm=True, silent=False):
+              gzip=False, rm=True, silent=False, cli=False, progress=None):
         output_stream = open(os.devnull, 'w')
         if not silent:
             output_stream = sys.stdout
@@ -1070,7 +1080,8 @@ class Service(object):
                 'Impossible to perform platform-targeted builds for API version < 1.35'
             )
 
-        build_output = self.client.build(
+        builder = self.client if not cli else _CLIBuilder(progress)
+        build_output = builder.build(
             path=path,
             tag=self.image_name,
             rm=rm,
@@ -1542,9 +1553,9 @@ def warn_on_masked_volume(volumes_option, container_volumes, service):
 
     for volume in volumes_option:
         if (
-            volume.external and
-            volume.internal in container_volumes and
-            container_volumes.get(volume.internal) != volume.external
+                volume.external and
+                volume.internal in container_volumes and
+                container_volumes.get(volume.internal) != volume.external
         ):
             log.warning((
                 "Service \"{service}\" is using volume \"{volume}\" from the "
@@ -1591,6 +1602,7 @@ def build_mount(mount_spec):
         read_only=mount_spec.read_only, consistency=mount_spec.consistency, **kwargs
     )
 
+
 # Labels
 
 
@@ -1645,6 +1657,7 @@ def format_environment(environment):
         if isinstance(value, six.binary_type):
             value = value.decode('utf-8')
         return '{key}={value}'.format(key=key, value=value)
+
     return [format_env(*item) for item in environment.items()]
 
 
@@ -1701,3 +1714,136 @@ def rewrite_build_path(path):
         path = WINDOWS_LONGPATH_PREFIX + os.path.normpath(path)
 
     return path
+
+
+class _CLIBuilder(object):
+    def __init__(self, progress):
+        self._progress = progress
+
+    def build(self, path, tag=None, quiet=False, fileobj=None,
+              nocache=False, rm=False, timeout=None,
+              custom_context=False, encoding=None, pull=False,
+              forcerm=False, dockerfile=None, container_limits=None,
+              decode=False, buildargs=None, gzip=False, shmsize=None,
+              labels=None, cache_from=None, target=None, network_mode=None,
+              squash=None, extra_hosts=None, platform=None, isolation=None,
+              use_config_proxy=True):
+        """
+        Args:
+            path (str): Path to the directory containing the Dockerfile
+            buildargs (dict): A dictionary of build arguments
+            cache_from (:py:class:`list`): A list of images used for build
+                cache resolution
+            container_limits (dict): A dictionary of limits applied to each
+                container created by the build process. Valid keys:
+                - memory (int): set memory limit for build
+                - memswap (int): Total memory (memory + swap), -1 to disable
+                    swap
+                - cpushares (int): CPU shares (relative weight)
+                - cpusetcpus (str): CPUs in which to allow execution, e.g.,
+                    ``"0-3"``, ``"0,1"``
+            custom_context (bool): Optional if using ``fileobj``
+            decode (bool): If set to ``True``, the returned stream will be
+                decoded into dicts on the fly. Default ``False``
+            dockerfile (str): path within the build context to the Dockerfile
+            encoding (str): The encoding for a stream. Set to ``gzip`` for
+                compressing
+            extra_hosts (dict): Extra hosts to add to /etc/hosts in building
+                containers, as a mapping of hostname to IP address.
+            fileobj: A file object to use as the Dockerfile. (Or a file-like
+                object)
+            forcerm (bool): Always remove intermediate containers, even after
+                unsuccessful builds
+            isolation (str): Isolation technology used during build.
+                Default: `None`.
+            labels (dict): A dictionary of labels to set on the image
+            network_mode (str): networking mode for the run commands during
+                build
+            nocache (bool): Don't use the cache when set to ``True``
+            platform (str): Platform in the format ``os[/arch[/variant]]``
+            pull (bool): Downloads any updates to the FROM image in Dockerfiles
+            quiet (bool): Whether to return the status
+            rm (bool): Remove intermediate containers. The ``docker build``
+                command now defaults to ``--rm=true``, but we have kept the old
+                default of `False` to preserve backward compatibility
+            shmsize (int): Size of `/dev/shm` in bytes. The size must be
+                greater than 0. If omitted the system uses 64MB
+            squash (bool): Squash the resulting images layers into a
+                single layer.
+            tag (str): A tag to add to the final image
+            target (str): Name of the build-stage to build in a multi-stage
+                Dockerfile
+            timeout (int): HTTP timeout
+            use_config_proxy (bool): If ``True``, and if the docker client
+                configuration file (``~/.docker/config.json`` by default)
+                contains a proxy configuration, the corresponding environment
+                variables will be set in the container being built.
+        Returns:
+            A generator for the build output.
+        """
+        if dockerfile:
+            dockerfile = os.path.join(path, dockerfile)
+        iidfile = tempfile.mktemp()
+
+        command_builder = _CommandBuilder()
+        command_builder.add_params("--build-arg", buildargs)
+        command_builder.add_list("--cache-from", cache_from)
+        command_builder.add_arg("--file", dockerfile)
+        command_builder.add_flag("--force-rm", forcerm)
+        command_builder.add_arg("--memory", container_limits.get("memory"))
+        command_builder.add_flag("--no-cache", nocache)
+        command_builder.add_arg("--progress", self._progress)
+        command_builder.add_flag("--pull", pull)
+        command_builder.add_arg("--tag", tag)
+        command_builder.add_arg("--target", target)
+        command_builder.add_arg("--iidfile", iidfile)
+        args = command_builder.build([path])
+
+        magic_word = "Successfully built "
+        appear = False
+        with subprocess.Popen(args, stdout=subprocess.PIPE, universal_newlines=True) as p:
+            while True:
+                line = p.stdout.readline()
+                if not line:
+                    break
+                if line.startswith(magic_word):
+                    appear = True
+                yield json.dumps({"stream": line})
+
+        with open(iidfile) as f:
+            line = f.readline()
+            image_id = line.split(":")[1].strip()
+        os.remove(iidfile)
+
+        # In case of `DOCKER_BUILDKIT=1`
+        # there is no success message already present in the output.
+        # Since that's the way `Service::build` gets the `image_id`
+        # it has to be added `manually`
+        if not appear:
+            yield json.dumps({"stream": "{}{}\n".format(magic_word, image_id)})
+
+
+class _CommandBuilder(object):
+    def __init__(self):
+        self._args = ["docker", "build"]
+
+    def add_arg(self, name, value):
+        if value:
+            self._args.extend([name, str(value)])
+
+    def add_flag(self, name, flag):
+        if flag:
+            self._args.extend([name])
+
+    def add_params(self, name, params):
+        if params:
+            for key, val in params.items():
+                self._args.extend([name, "{}={}".format(key, val)])
+
+    def add_list(self, name, values):
+        if values:
+            for val in values:
+                self._args.extend([name, val])
+
+    def build(self, args):
+        return self._args + args
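The `_CommandBuilder` added above is plain argument assembly and can be exercised on its own. A self-contained sketch (the class is re-declared here so the snippet runs standalone, and the flag values are made up for illustration): each `add_*` method appends CLI tokens only when a value is actually set, so unset build options never emit empty flags.

```python
class CommandBuilder(object):
    def __init__(self):
        self._args = ["docker", "build"]

    def add_arg(self, name, value):
        # Name/value pair, skipped entirely when value is falsy
        if value:
            self._args.extend([name, str(value)])

    def add_flag(self, name, flag):
        # Bare flag, emitted only when enabled
        if flag:
            self._args.append(name)

    def add_params(self, name, params):
        # Repeated key=value pairs, e.g. --build-arg FOO=bar
        if params:
            for key, val in params.items():
                self._args.extend([name, "{}={}".format(key, val)])

    def build(self, args):
        return self._args + args


builder = CommandBuilder()
builder.add_params("--build-arg", {"HTTP_PROXY": "http://proxy:3128"})
builder.add_flag("--pull", False)  # falsy, so no token is emitted
builder.add_arg("--tag", "example/web:latest")
print(builder.build(["."]))
# ['docker', 'build', '--build-arg', 'HTTP_PROXY=http://proxy:3128',
#  '--tag', 'example/web:latest', '.']
```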

+ 1 - 1
requirements-build.txt

@@ -1 +1 @@
-pyinstaller==3.4
+pyinstaller==3.5

+ 5 - 4
requirements.txt

@@ -1,9 +1,10 @@
+backports.shutil_get_terminal_size==1.0.0
 backports.ssl-match-hostname==3.5.0.1; python_version < '3'
 cached-property==1.3.0
 certifi==2017.4.17
 chardet==3.0.4
 colorama==0.4.0; sys_platform == 'win32'
-docker==4.0.1
+docker==4.1.0
 docker-pycreds==0.4.0
 dockerpty==0.4.1
 docopt==0.6.2
@@ -11,14 +12,14 @@ enum34==1.1.6; python_version < '3.4'
 functools32==3.2.3.post2; python_version < '3.2'
 idna==2.5
 ipaddress==1.0.18
-jsonschema==2.6.0
-paramiko==2.4.2
+jsonschema==3.0.1
+paramiko==2.6.0
 pypiwin32==219; sys_platform == 'win32' and python_version < '3.6'
 pypiwin32==223; sys_platform == 'win32' and python_version >= '3.6'
 PySocks==1.6.7
 PyYAML==4.2b1
 requests==2.22.0
-six==1.10.0
+six==1.12.0
 texttable==1.6.2
 urllib3==1.24.2; python_version == '3.3'
 websocket-client==0.32.0

+ 20 - 0
script/Jenkinsfile.fossa

@@ -0,0 +1,20 @@
+pipeline {
+    agent any
+    stages {
+        stage("License Scan") {
+            agent {
+                label 'ubuntu-1604-aufs-edge'
+            }
+
+            steps {
+                withCredentials([
+                    string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')
+                ]) {
+                    checkout scm
+                    sh "FOSSA_API_KEY='${FOSSA_API_KEY}' BRANCH_NAME='${env.BRANCH_NAME}' make -f script/fossa.mk fossa-analyze"
+                    sh "FOSSA_API_KEY='${FOSSA_API_KEY}' make -f script/fossa.mk fossa-test"
+                }
+            }
+        }
+    }
+}

+ 2 - 1
script/build/linux

@@ -12,6 +12,7 @@ docker build -t "${TAG}" . \
        --build-arg GIT_COMMIT="${DOCKER_COMPOSE_GITSHA}"
 TMP_CONTAINER=$(docker create "${TAG}")
 mkdir -p dist
-docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose dist/docker-compose-Linux-x86_64
+ARCH=$(uname -m)
+docker cp "${TMP_CONTAINER}":/usr/local/bin/docker-compose "dist/docker-compose-Linux-${ARCH}"
 docker container rm -f "${TMP_CONTAINER}"
 docker image rm -f "${TAG}"

+ 4 - 3
script/build/linux-entrypoint

@@ -20,10 +20,11 @@ echo "${DOCKER_COMPOSE_GITSHA}" > compose/GITSHA
 export PATH="${CODE_PATH}/pyinstaller:${PATH}"
 
 if [ ! -z "${BUILD_BOOTLOADER}" ]; then
-    # Build bootloader for alpine
-    git clone --single-branch --branch master https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
+    # Build bootloader for alpine; develop is the main branch
+    git clone --single-branch --branch develop https://github.com/pyinstaller/pyinstaller.git /tmp/pyinstaller
     cd /tmp/pyinstaller/bootloader
-    git checkout v3.4
+    # Checkout commit corresponding to version in requirements-build
+    git checkout v3.5
     "${VENV}"/bin/python3 ./waf configure --no-lsb all
     "${VENV}"/bin/pip3 install ..
     cd "${CODE_PATH}"

+ 0 - 2
script/circle/bintray-deploy.sh

@@ -1,7 +1,5 @@
 #!/bin/bash
 
-set -x
-
 curl -f -u$BINTRAY_USERNAME:$BINTRAY_API_KEY -X GET \
   https://api.bintray.com/repos/docker-compose/${CIRCLE_BRANCH}
 

+ 16 - 0
script/fossa.mk

@@ -0,0 +1,16 @@
+# Variables for Fossa
+BUILD_ANALYZER?=docker/fossa-analyzer
+FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true
+
+fossa-analyze:
+	docker run --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
+		-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
+		-w /go/src/github.com/docker/compose \
+		$(BUILD_ANALYZER) analyze ${FOSSA_OPTS} --branch ${BRANCH_NAME}
+
+ # This command is used to run the fossa test command
+fossa-test:
+	docker run -i -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
+		-v $(CURDIR)/$*:/go/src/github.com/docker/compose \
+		-w /go/src/github.com/docker/compose \
+		$(BUILD_ANALYZER) test

+ 1 - 1
script/run/run.sh

@@ -15,7 +15,7 @@
 
 set -e
 
-VERSION="1.25.0-rc2"
+VERSION="1.25.0-rc3"
 IMAGE="docker/compose:$VERSION"
 
 

+ 4 - 2
setup.py

@@ -39,7 +39,7 @@ install_requires = [
     'docker[ssh] >= 3.7.0, < 5',
     'dockerpty >= 0.4.1, < 1',
     'six >= 1.3.0, < 2',
-    'jsonschema >= 2.5.1, < 3',
+    'jsonschema >= 2.5.1, < 4',
 ]
 
 
@@ -52,9 +52,11 @@ if sys.version_info[:2] < (3, 4):
     tests_require.append('mock >= 1.0.1, < 4')
 
 extras_require = {
+    ':python_version < "3.2"': ['subprocess32 >= 3.5.4, < 4'],
     ':python_version < "3.4"': ['enum34 >= 1.0.4, < 2'],
     ':python_version < "3.5"': ['backports.ssl_match_hostname >= 3.5, < 4'],
-    ':python_version < "3.3"': ['ipaddress >= 1.0.16, < 2'],
+    ':python_version < "3.3"': ['backports.shutil_get_terminal_size == 1.0.0',
+                                'ipaddress >= 1.0.16, < 2'],
     ':sys_platform == "win32"': ['colorama >= 0.4, < 1'],
     'socks': ['PySocks >= 1.5.6, != 1.5.7, < 2'],
 }

+ 5 - 5
tests/acceptance/cli_test.py

@@ -360,7 +360,7 @@ class CLITestCase(DockerClientTestCase):
             'services': {
                 'web': {
                     'command': 'echo uwu',
-                    'image': 'alpine:3.4',
+                    'image': 'alpine:3.10.1',
                     'ports': ['3341/tcp', '4449/tcp']
                 }
             },
@@ -559,7 +559,7 @@ class CLITestCase(DockerClientTestCase):
             'services': {
                 'foo': {
                     'command': '/bin/true',
-                    'image': 'alpine:3.7',
+                    'image': 'alpine:3.10.1',
                     'scale': 3,
                     'restart': 'always:7',
                     'mem_limit': '300M',
@@ -2816,8 +2816,8 @@ class CLITestCase(DockerClientTestCase):
         result = self.dispatch(['images'])
 
         assert 'busybox' in result.stdout
-        assert 'multiple-composefiles_another_1' in result.stdout
-        assert 'multiple-composefiles_simple_1' in result.stdout
+        assert '_another_1' in result.stdout
+        assert '_simple_1' in result.stdout
 
     @mock.patch.dict(os.environ)
     def test_images_tagless_image(self):
@@ -2865,4 +2865,4 @@ class CLITestCase(DockerClientTestCase):
 
         assert re.search(r'foo1.+test[ \t]+dev', result.stdout) is not None
         assert re.search(r'foo2.+test[ \t]+prod', result.stdout) is not None
-        assert re.search(r'foo3.+_foo3[ \t]+latest', result.stdout) is not None
+        assert re.search(r'foo3.+test[ \t]+latest', result.stdout) is not None

+ 1 - 1
tests/fixtures/compatibility-mode/docker-compose.yml

@@ -1,7 +1,7 @@
 version: '3.5'
 services:
   foo:
-    image: alpine:3.7
+    image: alpine:3.10.1
     command: /bin/true
     deploy:
       replicas: 3

+ 1 - 1
tests/fixtures/default-env-file/alt/.env

@@ -1,4 +1,4 @@
-IMAGE=alpine:3.4
+IMAGE=alpine:3.10.1
 COMMAND=echo uwu
 COMMAND=echo uwu
 PORT1=3341
 PORT1=3341
 PORT2=4449
 PORT2=4449

+ 1 - 0
tests/fixtures/images-service-tag/docker-compose.yml

@@ -8,3 +8,4 @@ services:
     image: test:prod
   foo3:
     build: .
+    image: test:latest

+ 3 - 3
tests/fixtures/networks/docker-compose.yml

@@ -2,17 +2,17 @@ version: "2"
 
 services:
   web:
-    image: alpine:3.7
+    image: alpine:3.10.1
     command: top
     networks: ["front"]
   app:
-    image: alpine:3.7
+    image: alpine:3.10.1
     command: top
     networks: ["front", "back"]
     links:
       - "db:database"
   db:
-    image: alpine:3.7
+    image: alpine:3.10.1
     command: top
     networks: ["back"]
 

+ 39 - 0
tests/integration/service_test.py

@@ -38,6 +38,8 @@ from compose.container import Container
 from compose.errors import OperationFailedError
 from compose.parallel import ParallelStreamWriter
 from compose.project import OneOffFilter
+from compose.project import Project
+from compose.service import BuildAction
 from compose.service import ConvergencePlan
 from compose.service import ConvergenceStrategy
 from compose.service import NetworkMode
@@ -966,6 +968,43 @@ class ServiceTest(DockerClientTestCase):
 
         assert self.client.inspect_image('composetest_web')
 
+    def test_build_cli(self):
+        base_dir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, base_dir)
+
+        with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
+            f.write("FROM busybox\n")
+
+        service = self.create_service('web',
+                                      build={'context': base_dir},
+                                      environment={
+                                          'COMPOSE_DOCKER_CLI_BUILD': '1',
+                                          'DOCKER_BUILDKIT': '1',
+                                      })
+        service.build(cli=True)
+        self.addCleanup(self.client.remove_image, service.image_name)
+        assert self.client.inspect_image('composetest_web')
+
+    def test_up_build_cli(self):
+        base_dir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, base_dir)
+
+        with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f:
+            f.write("FROM busybox\n")
+
+        web = self.create_service('web',
+                                  build={'context': base_dir},
+                                  environment={
+                                      'COMPOSE_DOCKER_CLI_BUILD': '1',
+                                      'DOCKER_BUILDKIT': '1',
+                                  })
+        project = Project('composetest', [web], self.client)
+        project.up(do_build=BuildAction.force)
+
+        containers = project.containers(['web'])
+        assert len(containers) == 1
+        assert containers[0].name.startswith('composetest_web_')
+
     def test_build_non_ascii_filename(self):
         base_dir = tempfile.mkdtemp()
         self.addCleanup(shutil.rmtree, base_dir)

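The two new integration tests above drive the CLI build path with `COMPOSE_DOCKER_CLI_BUILD=1` and `DOCKER_BUILDKIT=1` set in the service environment. A minimal sketch of how a caller might gate the new `cli` flag on that variable (the helper name `should_use_cli_build` is hypothetical, not part of this diff):

```python
import os


def should_use_cli_build(environment=None):
    # Hypothetical helper: treat COMPOSE_DOCKER_CLI_BUILD as a boolean
    # switch, falling back to the process environment when none is given.
    env = environment if environment is not None else os.environ
    return env.get("COMPOSE_DOCKER_CLI_BUILD", "0").lower() in ("1", "true")


print(should_use_cli_build({"COMPOSE_DOCKER_CLI_BUILD": "1"}))  # True
print(should_use_cli_build({}))                                 # False
```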
+ 11 - 0
tests/unit/cli/log_printer_test.py

@@ -152,6 +152,17 @@ class TestWatchEvents(object):
                 *thread_args)
         assert container_id in thread_map
 
+    def test_container_attach_event(self, thread_map, mock_presenters):
+        container_id = 'abcd'
+        mock_container = mock.Mock(is_restarting=False)
+        mock_container.attach_log_stream.side_effect = APIError("race condition")
+        event_die = {'action': 'die', 'id': container_id}
+        event_start = {'action': 'start', 'id': container_id, 'container': mock_container}
+        event_stream = [event_die, event_start]
+        thread_args = 'foo', 'bar'
+        watch_events(thread_map, event_stream, mock_presenters, thread_args)
+        assert mock_container.attach_log_stream.called
+
     def test_other_event(self, thread_map, mock_presenters):
         container_id = 'abcd'
         event_stream = [{'action': 'create', 'id': container_id}]

+ 13 - 9
tests/unit/cli/utils_test.py

@@ -29,16 +29,20 @@ class HumanReadableFileSizeTest(unittest.TestCase):
         assert human_readable_file_size(100) == '100 B'
 
     def test_1kb(self):
-        assert human_readable_file_size(1024) == '1 kB'
+        assert human_readable_file_size(1000) == '1 kB'
+        assert human_readable_file_size(1024) == '1.024 kB'
 
     def test_1023b(self):
-        assert human_readable_file_size(1023) == '1023 B'
+        assert human_readable_file_size(1023) == '1.023 kB'
+
+    def test_999b(self):
+        assert human_readable_file_size(999) == '999 B'
 
     def test_units(self):
-        assert human_readable_file_size((2 ** 10) ** 0) == '1 B'
-        assert human_readable_file_size((2 ** 10) ** 1) == '1 kB'
-        assert human_readable_file_size((2 ** 10) ** 2) == '1 MB'
-        assert human_readable_file_size((2 ** 10) ** 3) == '1 GB'
-        assert human_readable_file_size((2 ** 10) ** 4) == '1 TB'
-        assert human_readable_file_size((2 ** 10) ** 5) == '1 PB'
-        assert human_readable_file_size((2 ** 10) ** 6) == '1 EB'
+        assert human_readable_file_size((10 ** 3) ** 0) == '1 B'
+        assert human_readable_file_size((10 ** 3) ** 1) == '1 kB'
+        assert human_readable_file_size((10 ** 3) ** 2) == '1 MB'
+        assert human_readable_file_size((10 ** 3) ** 3) == '1 GB'
+        assert human_readable_file_size((10 ** 3) ** 4) == '1 TB'
+        assert human_readable_file_size((10 ** 3) ** 5) == '1 PB'
+        assert human_readable_file_size((10 ** 3) ** 6) == '1 EB'
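The updated tests switch `human_readable_file_size` from binary (1024-based) to decimal (SI, 1000-based) units. One implementation consistent with the new expectations (a sketch written against the tests, not necessarily the exact code in `compose/cli/utils.py`):

```python
def human_readable_file_size(size):
    # Decimal (SI) units, matching the updated tests:
    # 1000 -> '1 kB', 1024 -> '1.024 kB', 999 -> '999 B'.
    suffixes = ['B', 'kB', 'MB', 'GB', 'TB', 'PB', 'EB']
    order = 0
    while size >= 1000 and order < len(suffixes) - 1:
        size /= 1000.0
        order += 1
    # Trim trailing zeros so whole numbers print without a decimal point.
    formatted = ('%.3f' % size).rstrip('0').rstrip('.')
    return '{} {}'.format(formatted, suffixes[order])


print(human_readable_file_size(1024))  # 1.024 kB
```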

+ 8 - 2
tests/unit/config/config_test.py

@@ -18,6 +18,7 @@ from ...helpers import build_config_details
 from ...helpers import BUSYBOX_IMAGE_WITH_TAG
 from ...helpers import BUSYBOX_IMAGE_WITH_TAG
 from compose.config import config
 from compose.config import config
 from compose.config import types
 from compose.config import types
+from compose.config.config import ConfigFile
 from compose.config.config import resolve_build_args
 from compose.config.config import resolve_build_args
 from compose.config.config import resolve_environment
 from compose.config.config import resolve_environment
 from compose.config.environment import Environment
 from compose.config.environment import Environment
@@ -3620,7 +3621,7 @@ class InterpolationTest(unittest.TestCase):
             'version': '3.5',
             'version': '3.5',
             'services': {
             'services': {
                 'foo': {
                 'foo': {
-                    'image': 'alpine:3.7',
+                    'image': 'alpine:3.10.1',
                     'deploy': {
                     'deploy': {
                         'replicas': 3,
                         'replicas': 3,
                         'restart_policy': {
                         'restart_policy': {
@@ -3646,7 +3647,7 @@ class InterpolationTest(unittest.TestCase):
 
         service_dict = cfg.services[0]
         assert service_dict == {
-            'image': 'alpine:3.7',
+            'image': 'alpine:3.10.1',
             'scale': 3,
             'restart': {'MaximumRetryCount': 7, 'Name': 'always'},
             'mem_limit': '300M',
@@ -4887,6 +4888,11 @@ class ExtendsTest(unittest.TestCase):
             assert types.SecurityOpt.parse('apparmor:unconfined') in svc['security_opt']
             assert types.SecurityOpt.parse('seccomp:unconfined') in svc['security_opt']
 
+    @mock.patch.object(ConfigFile, 'from_filename', wraps=ConfigFile.from_filename)
+    def test_extends_same_file_optimization(self, from_filename_mock):
+        load_from_filename('tests/fixtures/extends/no-file-specified.yml')
+        from_filename_mock.assert_called_once()
+
 
 @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
 class ExpandPathTest(unittest.TestCase):
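The `test_extends_same_file_optimization` test above relies on `mock.patch.object(..., wraps=...)`, which lets the real method run while the mock records every call. A self-contained sketch of that spying pattern, using a hypothetical `Loader` class rather than Compose's `ConfigFile`:

```python
# Sketch of the `wraps=` spying pattern used in the new test above.
# `Loader` is a hypothetical stand-in for compose.config.config.ConfigFile.
from unittest import mock


class Loader:
    @classmethod
    def from_filename(cls, name):
        # The real behavior still executes while the patch is active.
        return 'loaded:' + name


with mock.patch.object(Loader, 'from_filename',
                       wraps=Loader.from_filename) as spy:
    result = Loader.from_filename('compose.yml')

spy.assert_called_once()         # the mock counted the call...
assert result == 'loaded:compose.yml'  # ...but the original code still ran
```

This is how the test can assert the file was parsed exactly once without stubbing out the parsing itself.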

+ 5 - 0
tests/unit/network_test.py

@@ -168,3 +168,8 @@ class NetworkTest(unittest.TestCase):
         mock_log.warning.assert_called_once_with(mock.ANY)
         _, args, kwargs = mock_log.warning.mock_calls[0]
         assert 'label "com.project.touhou.character" has changed' in args[0]
+
+    def test_remote_config_labels_none(self):
+        remote = {'Labels': None}
+        local = Network(None, 'test_project', 'test_network')
+        check_remote_network_config(remote, local)
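The new `test_remote_config_labels_none` case exercises a remote network description whose `Labels` key is `None` rather than an empty dict, which the Engine API can return. The defensive pattern under test boils down to normalizing `None` to `{}` before comparing; a hypothetical sketch (`diff_labels` is illustrative, not a Compose function):

```python
# Illustrative only: normalize a possibly-None remote Labels mapping
# before diffing it against local labels, the guard the new test covers.
def diff_labels(remote, local_labels):
    remote_labels = remote.get('Labels') or {}  # None -> {}
    return {k: v for k, v in local_labels.items()
            if remote_labels.get(k) != v}


# With Labels=None, every local label reads as "changed" but nothing crashes.
diff_labels({'Labels': None}, {'com.example': 'x'})
```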

+ 84 - 0
tests/unit/project_test.py

@@ -3,6 +3,8 @@ from __future__ import absolute_import
 from __future__ import unicode_literals
 
 import datetime
+import os
+import tempfile
 
 import docker
 import pytest
@@ -11,6 +13,7 @@ from docker.errors import NotFound
 from .. import mock
 from .. import unittest
 from ..helpers import BUSYBOX_IMAGE_WITH_TAG
+from compose.config import ConfigurationError
 from compose.config.config import Config
 from compose.config.types import VolumeFromSpec
 from compose.const import COMPOSEFILE_V1 as V1
@@ -21,6 +24,7 @@ from compose.const import DEFAULT_TIMEOUT
 from compose.const import LABEL_SERVICE
 from compose.container import Container
 from compose.errors import OperationFailedError
+from compose.project import get_secrets
 from compose.project import NoSuchService
 from compose.project import Project
 from compose.project import ProjectError
@@ -841,3 +845,83 @@ class ProjectTest(unittest.TestCase):
         with mock.patch('compose.service.Service.push') as fake_push:
             project.push()
             assert fake_push.call_count == 2
+
+    def test_get_secrets_no_secret_def(self):
+        service = 'foo'
+        secret_source = 'bar'
+
+        secret_defs = mock.Mock()
+        secret_defs.get.return_value = None
+        secret = mock.Mock(source=secret_source)
+
+        with self.assertRaises(ConfigurationError):
+            get_secrets(service, [secret], secret_defs)
+
+    def test_get_secrets_external_warning(self):
+        service = 'foo'
+        secret_source = 'bar'
+
+        secret_def = mock.Mock()
+        secret_def.get.return_value = True
+
+        secret_defs = mock.Mock()
+        secret_defs.get.side_effect = secret_def
+        secret = mock.Mock(source=secret_source)
+
+        with mock.patch('compose.project.log') as mock_log:
+            get_secrets(service, [secret], secret_defs)
+
+        mock_log.warning.assert_called_with("Service \"{service}\" uses secret \"{secret}\" "
+                                            "which is external. External secrets are not available"
+                                            " to containers created by docker-compose."
+                                            .format(service=service, secret=secret_source))
+
+    def test_get_secrets_uid_gid_mode_warning(self):
+        service = 'foo'
+        secret_source = 'bar'
+
+        _, filename_path = tempfile.mkstemp()
+        self.addCleanup(os.remove, filename_path)
+
+        def mock_get(key):
+            return {'external': False, 'file': filename_path}[key]
+
+        secret_def = mock.MagicMock()
+        secret_def.get = mock.MagicMock(side_effect=mock_get)
+
+        secret_defs = mock.Mock()
+        secret_defs.get.return_value = secret_def
+
+        secret = mock.Mock(uid=True, gid=True, mode=True, source=secret_source)
+
+        with mock.patch('compose.project.log') as mock_log:
+            get_secrets(service, [secret], secret_defs)
+
+        mock_log.warning.assert_called_with("Service \"{service}\" uses secret \"{secret}\" with uid, "
+                                            "gid, or mode. These fields are not supported by this "
+                                            "implementation of the Compose file"
+                                            .format(service=service, secret=secret_source))
+
+    def test_get_secrets_secret_file_warning(self):
+        service = 'foo'
+        secret_source = 'bar'
+        not_a_path = 'NOT_A_PATH'
+
+        def mock_get(key):
+            return {'external': False, 'file': not_a_path}[key]
+
+        secret_def = mock.MagicMock()
+        secret_def.get = mock.MagicMock(side_effect=mock_get)
+
+        secret_defs = mock.Mock()
+        secret_defs.get.return_value = secret_def
+
+        secret = mock.Mock(uid=False, gid=False, mode=False, source=secret_source)
+
+        with mock.patch('compose.project.log') as mock_log:
+            get_secrets(service, [secret], secret_defs)
+
+        mock_log.warning.assert_called_with("Service \"{service}\" uses an undefined secret file "
+                                            "\"{secret_file}\", the following file should be created "
+                                            "\"{secret_file}\""
+                                            .format(service=service, secret_file=not_a_path))
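The `get_secrets` tests above all follow one technique: patch the module-level logger, invoke the code under test, then assert on the exact warning text. A self-contained sketch of that pattern, where `warn_if_external` is a hypothetical stand-in rather than Compose's `get_secrets`:

```python
# Sketch of the logger-assertion pattern used in the new tests above.
import logging
from unittest import mock

log = logging.getLogger('sketch')


def warn_if_external(service, secret_def, source):
    # Mirrors the shape of the external-secret warning asserted above;
    # the real message lives in compose.project.get_secrets.
    if secret_def.get('external'):
        log.warning('Service "{service}" uses secret "{secret}" which is external.'
                    .format(service=service, secret=source))


# Patch the logger method itself so the assertion sees the formatted string.
with mock.patch.object(log, 'warning') as mock_warning:
    warn_if_external('web', {'external': True}, 'db_password')

mock_warning.assert_called_once_with(
    'Service "web" uses secret "db_password" which is external.')
```

Asserting on the fully formatted message (rather than `mock.ANY`) pins down both the trigger condition and the exact user-facing wording.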