[ { "data": "Cloud Native Buildpacks (CNBs) transform your application source code into container images that can run on any cloud. With buildpacks, organizations can concentrate the knowledge of container build best practices within a specialized team, instead of having application developers across the organization individually maintain their own Dockerfiles. This makes it easier to know what is inside application images, enforce security and compliance requirements, and perform upgrades with minimal effort and intervention. The CNB project was initiated by Pivotal and Heroku in January 2018 and joined the Cloud Native Computing Foundation (CNCF) as an Apache-2.0 licensed project in October 2018. It is currently an incubating project within the CNCF. See how-to guides, concepts, and tutorials tailored to specific personas: CircleCI is a continuous integration and delivery platform. The CNB project maintains an integration, called an orb, which allows users to run pack commands inside their pipelines. kpack is a Kubernetes-native platform that uses unprivileged Kubernetes primitives to perform buildpacks builds and keep application images up-to-date. kpack is part of the Buildpacks Community organization. Tekton is an open-source CI/CD system running on k8s. The CNB project has created two reference tasks for performing buildpacks builds, both of which use the lifecycle directly (i.e. they do not use pack). Reference documents for various key aspects of the project. We love talks to share the latest development updates, explain buildpacks basics and more, receive feedback and questions, and get to know other members of the community. Check out some of our most recent and exciting conference talks below. More talks are available in our Conference Talks Playlist on YouTube. If you are interested in giving a talk about buildpacks, the linked slides may provide a useful starting point. Please feel free to reach out in Slack if youd like input or help from the CNB team! Feel free to look through the archive of previous community meetings in our Working Group Playlist on YouTube. If you would like to attend a Working Group meeting, check out our community page. Cloud Native Buildpacks is an incubating project in the CNCF. We welcome contribution from the community. Here you will find helpful information for interacting with the core team and contributing to the project. The best place to contact the Cloud Native Buildpack team is on the CNCF Slack in the #buildpacks or mailing list. Find out the various ways that you can contribute to the Cloud Native Buildpacks project using our contributors guide. This is a community driven project and our roadmap is publicly available on our Github page. We encourage you to contribute with feature requests. We are a Cloud Native Computing Foundation incubating project. Copyright 2022 The Linux Foundation . All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Buildpacks", "subcategory": "Application Definition & Image Build" }
[ { "data": "In this tutorial, well explain how to use pack and buildpacks to create a runnable app image from source code. In order to run the build process in an isolated fashion, pack uses Docker or a Docker-compatible daemon to create the containers where buildpacks execute. That means youll need to make sure you have both pack and a daemon installed: Install Docker or alternatively, see this page about working with podman. NOTE: pack is only one implementation of the Cloud Native Buildpacks Platform Specification. Additionally, not all Cloud Native Buildpacks Platforms require Docker. Before we set out, youll need to know the basics of buildpacks and how they work. A buildpack is something youve probably used without knowing it, as theyre currently being used in many cloud platforms. A buildpacks job is to gather everything your app needs to build and run, and it often does this job quickly and quietly. That said, while buildpacks are often a behind-the-scenes detail, they are at the heart of transforming your source code into a runnable app image. What enables buildpacks to be transparent is auto-detection. This happens when a platform sequentially tests groups of buildpacks against your apps source code. The first group that successfully detects your source code will become the selected set of buildpacks for your app. Detection criteria is specific to each buildpack for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files. A builder is an image that contains all the components necessary to execute a build. A builder image is created by taking a build image and adding a lifecycle, buildpacks, and files that configure aspects of the build including the buildpack detection order and the location(s) of the run image. Lets see all this in action using pack build. Run the following commands in a shell to clone and build this simple Java app. ``` git clone https://github.com/buildpacks/samples ``` ``` cd samples/apps/java-maven ``` ``` pack build myapp --builder cnbs/sample-builder:jammy ``` NOTE: This is your first time running pack build for myapp, so youll notice that the build might take longer than usual. Subsequent builds will take advantage of various forms of caching. If youre curious, try running pack build myapp a second time to see the difference in build time. Thats it! Youve now got a runnable app image called myapp available on your local Docker daemon. We did say this was a brief journey after all. Take note that your app was built without needing to install a JDK, run Maven, or otherwise configure a build environment. pack and buildpacks took care of that for you. To test out your new app image locally, you can run it with Docker: ``` docker run --rm -p 8080:8080 myapp ``` Now hit localhost:8080 in your favorite browser and take a minute to enjoy the view. pack uses buildpacks to help you easily create OCI images that you can run just about anywhere. Try deploying your new image to your favorite cloud! In case you need it, pack build has a handy flag called --publish that will build your image directly onto a Docker registry. You can learn more about pack features in the documentation. Windows image builds are now supported! Windows build guide We are a Cloud Native Computing Foundation incubating project. Copyright 2022 The Linux Foundation . All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page" } ]
{ "category": "App Definition and Development", "file_name": "app-journey.md", "project_name": "Buildpacks", "subcategory": "Application Definition & Image Build" }
[ { "data": "Chef Habitat Builder acts as the core of Chefs Application Delivery Enterprise hub. Chef Habitat Builder was first launched as a cloud service and as the repository of all available plan templates built by Chef and the supporting community. Due to the fact that the application source code is stored alongside the build package, many users expressed a preference for storing packages and running Chef Habitat Builder on-prem. As a result, Chef Habitat Builder can be consumed either as a cloud based or on-premises solution. Plan files are stored in the Chef Habitat Builder SaaS, where they can be viewed and accessed by the Chef Habitat community and then shared with the on-premises version of the builder where they can then be copied and maintained locally. For more information on how the SaaS and On-Prem versions of Chef Habitat Builder work together, read the blog - Chef Habitat Builder On-Prem Enhancements that Extend Support to Airgap Environments and Simplify Set-Up Was this page helpful? Help us improve this document. Still stuck? How can we improve this document? Thank you for your feedback! Page Last Modified: February 23, 2022 Copyright 2024 Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved." } ]
{ "category": "App Definition and Development", "file_name": "builder_overview.md", "project_name": "Chef Habitat", "subcategory": "Application Definition & Image Build" }
[ { "data": "Chef Habitat is a workload-packaging, orchestration, and deployment system that allows you to build, package, deploy, and manage applications and services without worrying about which infrastructure your application will deploy on, and without any rewriting or refactoring if you switch to a different infrastructure. Habitat separates the platform-independent parts of your applicationthe build dependencies, runtime dependencies, lifecycle events, and application codebasefrom the operating system or deployment environment that the application will run on, and bundles it into an immutable Habitat Package. The package is sent to the Chef Habitat Builder (SaaS or on-prem), which acts as a package store like Docker Hub where you can store, build, and deploy your Habitat package. Habitat Supervisor pulls packages from Habitat Builder, and will start, stop, run, monitor, and update your application based on the plan and lifecycle hooks you define in the package. Habitat Supervisor runs on bare metal, virtual machines, containers, or Platform-as-a-Service environments. A package under management by a Supervisor is called a service. Services can be joined together in a service group, which is a collection of services with the same package and topology type that are connected together across a Supervisor network. Chef Habitat Builder acts as the core of Chefs Application Delivery Enterprise hub. It provides a repository for all available Chef Habitat packages built by Chef and the supporting community, as well as search and an API for clients. You can store application plans on the Chef Habitat Builder SaaS where the Chef Habitat community can view and access them. You can also deploy the on-prem version of Chef Habitat Builder where you can store and maintain your apps in a secure environment. For more information, see the Chef Habitat Builder documentation. A Habitat Package is an artifact that contains the application codebase, lifecycle hooks, and a manifest that defines build and runtime dependencies of the application. The package is bundled into a Habitat Artifact (.HART) file, which is a binary distribution of a given package built with Chef" }, { "data": "The package is immutable and cryptographically signed with a key so you can verify that the artifact came from the place you expected it to come from. Artifacts can be exported to run in a variety of runtimes with zero refactoring or rewriting. A plan is the set of instructions, templates, and configuration files that define how you download, configure, make, install, and manage the lifecycle of the application artifact. The plan is defined in the habitat directory at the root of your project repository. The habitat directory includes a plan file (plan.sh for Linux systems or plan.ps1 for Windows), a default.toml file, an optional config directory for configuration templates, and an optional hooks directory for lifecycle hooks. You can create this directory at the root of your application with hab plan init. For more information, see the plan documentation. See the services documentation for more information. See the Habitat Studio documentation for more information. Chef Habitat Supervisor is a process manager that has two primary responsibilities: In the Supervisor you can define topologies for you application, such as leader-follower or standalone, or more complex applications that include databases. The supervisor also allows you to inject tunables into your application. 
Allowing you to defer decisions about how your application behaves until runtime. See the Habitat Supervisor documentation for more information. Chef Habitat allows you to build and package your applications and deploy them anywhere without having to refactor or rewrite your package for each platform. Everything that the application needs to run is defined, without assuming anything about the underlying infrastructure that the application is running on. This will allow you to repackage and modernize legacy workloads in-place to increase their manageability, make them portable, and migrate them to modern operating systems or even cloud-native infrastructure like containers. You can also develop your application if you are unsure of the infrastructure your application will run on, or in the event that business requirements change and you have to switch your application to a different environment. Was this page helpful? Help us improve this document. Still stuck? How can we improve this document? Thank you for your feedback! Page Last Modified: July 10, 2023 Copyright 2024 Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved." } ]
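To make the plan concepts above concrete, here is a minimal, illustrative sketch of a plan.sh; the origin, package name, dependencies, and build steps are assumptions for a hypothetical Node.js app, not content from the page, and hab plan init generates a fuller scaffold:

```
# habitat/plan.sh -- minimal illustrative plan (origin and name are placeholders)
pkg_name=myapp
pkg_origin=myorigin
pkg_version="0.1.0"
pkg_deps=(core/node)       # runtime dependencies resolved by the Supervisor
pkg_build_deps=(core/git)  # build-time-only dependencies

do_build() {
  npm install --production
}

do_install() {
  # Copy the built application into the package's install prefix.
  cp -r . "${pkg_prefix}/app"
}
```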
{ "category": "App Definition and Development", "file_name": "habitat.md", "project_name": "Chef Habitat", "subcategory": "Application Definition & Image Build" }
[ { "data": "Codezero is an overlay network that empowers development teams to turn Kubernetes clusters into Teamspaces. A Teamspace is a collaborative development environment where developers can locally Consume services discoverable in a Service Catalog. Services featured in the catalog operate either within the Kubernetes cluster, or on a team member's local machine. Developers can Serve local Variants of services through this catalog to other team members. Consider the application above. Services A, B and C are deployed to a development cluster or namespace. You would either have to replicate the entire application locally or, replace Service B with the new version in the development environment in order to test. The version of the app one experiences is determined by the path a ray of traffic takes across the services. With a Teamspace, in order to work on Service B, you simply run the service locally. This Local Service B Variant receives traffic based on Conditions you specify. The Local Variant then delivers traffic back by Consuming Service C. Traffic that does not meet the specified condition flows through the Default Service B Variant running in the cluster untouched. Local Variants need not be containerized. They are simply services running on a local port but through the service catalog appear like they are deployed to the Kubernetes cluster. Developers can, therefore, use preferred local tooling like IDEs, debuggers, profilers and test tools (e.g. Postman) during the development process. Teamspaces are language agnostic and operate at the network level. Any authorized member can define Conditions that reshape traffic across the services available in the catalog to instantly create a Logical Ephemeral Environment. While the Teamspace is long running, this temporary traffic shaped environment comprising of a mix of remote and local services can be used to rapidly build and test software before code is pushed. You do not have to be a Kubernetes admin or a networking guru to develop using a Teamspace. Once set up, most developers need not have any direct knowledge of, or access to the underlying Kubernetes Clusters. This documentation is geared to both Kubernetes Admins who want to create Teamspaces as well as Developers who simply want to work with Teamspaces. We recommend you go through this documentation in the order it is presented as we build on previously defined concepts. Happy Learning! The Guides cover setting up and administering a Teamspace. You will require a Kubernetes Cluster to create a Teamspace. The Kubernetes QuickStart has several options to get started if you do not currently have a custer. Due to inherent limitations, you cannot use a local cluster like Minikube or Kind with Codezero. The Tutorials focus on using a Teamspace once setup. We have a Sample Kubernetes Project that comprises some of the most common Microservices Patterns you would encounter in a Kubernetes cluster. This project is used across all the Tutorials and Videos in this documentation. The Tutorials walk you through scenarios you will encounter in just about any modern microservices application development." } ]
{ "category": "App Definition and Development", "file_name": "docs.codezero.io.md", "project_name": "CodeZero", "subcategory": "Application Definition & Image Build" }
[ { "data": "A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJob. Here is an example Job config. It computes to 2000 places and prints it out. It takes around 10s to complete. ``` apiVersion: batch/v1 kind: Job metadata: name: pi spec: template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 ``` You can run the example with this command: ``` kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml ``` The output is similar to this: ``` job.batch/pi created ``` Check on the status of the Job with kubectl: ``` Name: pi Namespace: default Selector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi ... Annotations: batch.kubernetes.io/job-tracking: \"\" Parallelism: 1 Completions: 1 Start Time: Mon, 02 Dec 2019 15:20:11 +0200 Completed At: Mon, 02 Dec 2019 15:21:16 +0200 Duration: 65s Pods Statuses: 0 Running / 1 Succeeded / 0 Failed Pod Template: Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi Containers: pi: Image: perl:5.34.0 Port: <none> Host Port: <none> Command: perl -Mbignum=bpi -wle print bpi(2000) Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message - - - Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4 Normal Completed 18s job-controller Job completed ``` ``` apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi name: pi namespace: default resourceVersion: \"4751\" uid: 204fb678-040b-497f-9266-35ffa8716d14 spec: backoffLimit: 4 completionMode: NonIndexed completions: 1 parallelism: 1 selector: matchLabels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 suspend: false template: metadata: creationTimestamp: null labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi spec: containers: command: perl -Mbignum=bpi -wle print bpi(2000) image: perl:5.34.0 imagePullPolicy: IfNotPresent name: pi resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: active: 1 ready: 0 startTime: \"2022-11-10T17:53:57Z\" uncountedTerminatedPods: {} ``` To view completed Pods of a Job, use kubectl get pods. 
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: ``` pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}') echo $pods ``` The output is similar to this: ``` pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression with the name from each Pod in the returned list. View the standard output of one of the pods: ``` kubectl logs $pods ``` Another way to view the logs of a Job: ``` kubectl logs jobs/pi ``` The output is similar to this: ``` 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields. When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod" }, { "data": "For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name must be no longer than 63 characters. A Job also needs a .spec section. Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid. The .spec.template is the only required field of the .spec. The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind. 
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy. Only a RestartPolicy equal to Never or OnFailure is allowed. The .spec.selector field is optional. In almost all cases you should not specify it. See section specifying your own pod selector. There are three main types of task suitable to run as a Job: For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1. For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1. For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer. For more information about how to make use of the different types of job, see the job patterns section. The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased. Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons: Jobs with fixed completion count - that is, jobs that have non null .spec.completions - can have a completion mode that is specified in .spec.completionMode: NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly NonIndexed. Indexed: the Pods of a Job get an associated completion index from 0 to .spec.completions-1. The index is available through four mechanisms: The Job is considered complete when there is one successfully completed Pod for each index. For more information about how to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment. A container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = \"OnFailure\", then the Pod stays on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify .spec.template.spec.restartPolicy = \"Never\". See pod lifecycle for more information on restartPolicy. An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = \"Never\". When a Pod fails, then the Job controller starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous" }, { "data": "By default, each pod failure is counted towards the .spec.backoffLimit limit, see pod backoff failure policy. However, you can customize handling of pod failures by setting the Job's pod failure policy. Additionally, you can choose to count the pod failures independently for each index of an Indexed Job by setting the .spec.backoffLimitPerIndex field (for more information, see backoff limit per index). 
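As a companion to the completion modes and parallelism settings described above, here is a minimal sketch of a fixed completion count Job in Indexed mode; the name, image, and command are placeholders rather than anything from the original page:

```
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-sketch        # placeholder name
spec:
  completions: 5              # one successful Pod required for each index 0-4
  parallelism: 2              # at most two Pods run at any time
  completionMode: Indexed     # each Pod gets a completion index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        # For Indexed Jobs the completion index is exposed as JOB_COMPLETION_INDEX.
        command: ["sh", "-c", "echo processing work item $JOB_COMPLETION_INDEX"]
```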
Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = \"Never\", the same program may sometimes be started twice. If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. When the feature gates PodDisruptionConditions and JobPodFailurePolicy are both enabled, and the .spec.podFailurePolicy field is set, the Job controller does not consider a terminating Pod (a pod that has a .metadata.deletionTimestamp field set) as a failure until that Pod is terminal (its .status.phase is Failed or Succeeded). However, the Job controller creates a replacement Pod as soon as the termination becomes apparent. Once the pod terminates, the Job controller evaluates .backoffLimit and .podFailurePolicy for the relevant Job, taking this now-terminated Pod into consideration. If either of these requirements is not satisfied, the Job controller counts a terminating Pod as an immediate failure, even if that Pod later terminates with phase: \"Succeeded\". There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The number of retries is calculated in two ways: If either of the calculations reaches the .spec.backoffLimit, the Job is considered failed. When you run an indexed Job, you can choose to handle retries for pod failures independently for each index. To do so, set the .spec.backoffLimitPerIndex to specify the maximal number of pod failures per index. When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the .status.failedIndexes field. The succeeded indexes, those with a successfully executed pods, are recorded in the .status.completedIndexes field, regardless of whether you set the backoffLimitPerIndex field. Note that a failing index does not interrupt execution of other indexes. Once all indexes finish for a Job where you specified a backoff limit per index, if at least one of those indexes did fail, the Job controller marks the overall Job as failed, by setting the Failed condition in the status. The Job gets marked as failed even if some, potentially nearly all, of the indexes were processed successfully. You can additionally limit the maximal number of indexes marked failed by setting the .spec.maxFailedIndexes field. When the number of failed indexes exceeds the maxFailedIndexes field, the Job controller triggers termination of all remaining running Pods for that Job. Once all pods are terminated, the entire Job is marked failed by the Job controller, by setting the Failed condition in the Job status. 
Here is an example manifest for a Job that defines a backoffLimitPerIndex: ``` apiVersion: batch/v1 kind: Job metadata: name: job-backoff-limit-per-index-example spec: completions: 10 parallelism: 3 completionMode: Indexed # required for the feature backoffLimitPerIndex: 1 # maximal number of failures per index maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution template: spec: restartPolicy:" }, { "data": "# required for the feature containers: name: example image: python command: # The jobs fails as there is at least one failed index python3 -c | import os, sys print(\"Hello world\") if int(os.environ.get(\"JOBCOMPLETIONINDEX\")) % 2 == 0: sys.exit(1) ``` In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated. Once the job is finished, the Job status looks as follows: ``` kubectl get -o yaml job job-backoff-limit-per-index-example ``` ``` status: completedIndexes: 1,3,5,7,9 failedIndexes: 0,2,4,6,8 succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes conditions: message: Job has failed indexes reason: FailedIndexes status: \"True\" type: Failed ``` Additionally, you may want to use the per-index backoff along with a pod failure policy. When using per-index backoff, there is a new FailIndex action available which allows you to avoid unnecessary retries within an index. A Pod failure policy, defined with the .spec.podFailurePolicy field, enables your cluster to handle Pod failures based on the container exit codes and the Pod conditions. In some situations, you may want to have a better control when handling Pod failures than the control provided by the Pod backoff failure policy, which is based on the Job's .spec.backoffLimit. These are some examples of use cases: You can configure a Pod failure policy, in the .spec.podFailurePolicy field, to meet the above use cases. This policy can handle Pod failures based on the container exit codes and the Pod conditions. Here is a manifest for a Job that defines a podFailurePolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-pod-failure-policy-example spec: completions: 12 parallelism: 3 template: spec: restartPolicy: Never containers: name: main image: docker.io/library/bash:5 command: [\"bash\"] # example command simulating a bug which triggers the FailJob action args: -c echo \"Hello world!\" && sleep 5 && exit 42 backoffLimit: 6 podFailurePolicy: rules: action: FailJob onExitCodes: containerName: main # optional operator: In # one of: In, NotIn values: [42] action: Ignore # one of: Ignore, FailJob, Count onPodConditions: type: DisruptionTarget # indicates Pod disruption ``` In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the main container specifically: The second rule of the Pod failure policy, specifying the Ignore action for failed Pods with condition DisruptionTarget excludes Pod disruptions from being counted towards the .spec.backoffLimit limit of retries. These are some requirements and semantics of the API: When creating an Indexed Job, you can define when a Job can be declared as succeeded using a .spec.successPolicy, based on the pods that succeeded. By default, a Job succeeds when the number of succeeded Pods equals .spec.completions. 
These are some situations where you might want additional control for declaring a Job succeeded: You can configure a success policy, in the .spec.successPolicy field, to meet the above use cases. This policy can handle Job success based on the succeeded pods. After the Job meets the success policy, the job controller terminates the lingering Pods. A success policy is defined by rules. Each rule can take one of the following forms: Note that when you specify multiple rules in the .spec.successPolicy.rules, the job controller evaluates the rules in order. Once the Job meets a rule, the job controller ignores remaining" }, { "data": "Here is a manifest for a Job with successPolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-success spec: parallelism: 10 completions: 10 completionMode: Indexed # Required for the success policy successPolicy: rules: succeededIndexes: 0,2-3 succeededCount: 1 template: spec: containers: name: main image: python command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded, python3 -c | import os, sys if os.environ.get(\"JOBCOMPLETIONINDEX\") == \"2\": sys.exit(0) else: sys.exit(1) restartPolicy: Never ``` In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the specified indexes, 0, 2, or 3, succeed. The Job that meets the success policy gets the SuccessCriteriaMet condition. After the removal of the lingering Pods is issued, the Job gets the Complete condition. Note that the succeededIndexes is represented as intervals separated by a hyphen. The number are listed in represented by the first and last element of the series, separated by a hyphen. When a Job completes, no more Pods are created, but the Pods are usually not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too. By default, a Job will run uninterrupted unless a Pod fails (restartPolicy=Never) or a Container exits in error (restartPolicy=OnFailure), at which point the Job defers to the .spec.backoffLimit described above. Once .spec.backoffLimit has been reached the Job will be marked as failed and any running Pods will be terminated. Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded. Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached. 
Example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-timeout spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level. Keep in mind that the restartPolicy applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is type: Failed. That is, the Job termination mechanisms activated with .spec.activeDeadlineSeconds and .spec.backoffLimit result in a permanent Job failure that requires manual intervention to resolve. Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy. Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the" }, { "data": "field of the Job. When the TTL controller cleans up the Job, it will delete the Job cascadingly, i.e. delete its dependent objects, such as Pods, together with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored. For example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-ttl spec: ttlSecondsAfterFinished: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes. If the field is set to 0, the Job will be eligible to be automatically deleted immediately after it finishes. If the field is unset, this Job won't be cleaned up by the TTL controller after it finishes. It is recommended to set ttlSecondsAfterFinished field because unmanaged jobs (Jobs that you created directly, and not indirectly through other workload APIs such as CronJob) have a default deletion policy of orphanDependents causing Pods created by an unmanaged Job to be left around after that Job is fully deleted. Even though the control plane eventually garbage collects the Pods from a deleted Job after they either fail or complete, sometimes those lingering pods may cause cluster performance degradation or in worst case cause the cluster to go offline due to this degradation. You can use LimitRanges and ResourceQuotas to place a cap on the amount of resources that a particular namespace can consume. The Job object can be used to process a set of independent but related work items. These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on. In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the user wants to manage together a batch job. There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are: The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to examples and more detailed description. | Pattern | Single Job object | Fewer pods than work items? 
| Use app unmodified? | |:-|:--|:|:-| | Queue with Pod Per Work Item | | nan | sometimes | | Queue with Variable Pod Count | | | nan | | Indexed Job with Static Work Assignment | | nan | | | Job with Pod-to-Pod Communication | | sometimes | sometimes | | Job Template Expansion | nan | nan | | When you specify completions with .spec.completions, each Pod created by the Job controller has an identical spec. This means that all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things. This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number of work items. | Pattern | .spec.completions |" }, { "data": "| |:-|:--|:--| | Queue with Pod Per Work Item | W | any | | Queue with Variable Pod Count | nan | any | | Indexed Job with Static Work Assignment | W | any | | Job with Pod-to-Pod Communication | W | W | | Job Template Expansion | 1 | should be 1 | When a Job is created, the Job controller will immediately begin creating Pods to satisfy the Job's requirements and will continue to do so until the Job is complete. However, you may want to temporarily suspend a Job's execution and resume it later, or start Jobs in suspended state and have a custom controller decide later when to start them. To suspend a Job, you can update the .spec.suspend field of the Job to true; later, when you want to resume it again, update it to false. Creating a Job with .spec.suspend set to true will create it in the suspended state. When a Job is resumed from suspension, its .status.startTime field will be reset to the current time. This means that the .spec.activeDeadlineSeconds timer will be stopped and reset when a Job is suspended and resumed. When you suspend a Job, any running Pods that don't have a status of Completed will be terminated with a SIGTERM signal. The Pod's graceful termination period will be honored and your Pod must handle this signal in this period. This may involve saving progress for later or undoing changes. Pods terminated this way will not count towards the Job's completions count. An example Job definition in the suspended state can be like so: ``` kubectl get job myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job metadata: name: myjob spec: suspend: true parallelism: 1 completions: 5 template: spec: ... ``` You can also toggle Job suspension by patching the Job using the command line. Suspend an active Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}' ``` Resume a suspended Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}' ``` The Job's status can be used to determine if a Job is suspended or has been suspended in the past: ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: conditions: lastProbeTime: \"2021-02-05T13:14:33Z\" lastTransitionTime: \"2021-02-05T13:14:33Z\" status: \"True\" type: Suspended startTime: \"2021-02-05T13:13:48Z\" ``` The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of that condition is \"False\", then the Job was previously suspended and is now running. If such a condition does not exist in the Job's status, the Job has never been stopped. 
Events are also created when the Job is suspended and resumed: ``` kubectl describe jobs/myjob ``` ``` Name: myjob ... Events: Type Reason Age From Message - - - Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl Normal Suspended 11m job-controller Job suspended Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44 Normal Resumed 3s job-controller Job resumed ``` The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were created, but Pod creation restarted as soon as the Job was resumed. In most cases, a parallel job will want the pods to run with constraints, like all in the same zone, or all either on GPU model x or y but not a mix of both. The suspend field is the first step towards achieving those semantics. Suspend allows a custom queue controller to decide when a job should start; However, once a job is unsuspended, a custom queue controller has no influence on where the pods of a job will actually land. This feature allows updating a Job's scheduling directives before it starts, which gives custom queue controllers the ability to influence pod placement while at the same time offloading actual pod-to-node assignment to" }, { "data": "This is allowed only for suspended Jobs that have never been unsuspended before. The fields in a Job's pod template that can be updated are node affinity, node selector, tolerations, labels, annotations and scheduling gates. Normally, when you create a Job object, you do not specify .spec.selector. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs. However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of the Job. Be very careful when doing this. If you specify a label selector which is not unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated job may be deleted, or this Job may count other Pods as completing it, or one or both Jobs may refuse to create Pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying .spec.selector. Here is an example of a case when you might want to use this feature. Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=orphan. Before deleting it, you make a note of what selector it uses: ``` kubectl get job old -o yaml ``` The output is similar to this: ``` kind: Job metadata: name: old ... spec: selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they are controlled by Job new as well. 
You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically. ``` kind: Job metadata: name: new ... spec: manualSelector: true selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch. The control plane keeps track of the Pods that belong to any Job and notices if any such Pod is removed from the API server. To do that, the Job controller creates Pods with the finalizer batch.kubernetes.io/job-tracking. The controller removes the finalizer only after the Pod has been accounted for in the Job status, allowing the Pod to be removed by other controllers or users. You can scale Indexed Jobs up or down by mutating both .spec.parallelism and .spec.completions together such that .spec.parallelism == .spec.completions. When the ElasticIndexedJobfeature gate on the API server is disabled, .spec.completions is immutable. Use cases for elastic Indexed Jobs include batch workloads which require scaling an indexed Job, such as MPI, Horovord, Ray, and PyTorch training jobs. By default, the Job controller recreates Pods as soon they either fail or are terminating (have a deletion timestamp). This means that, at a given time, when some of the Pods are terminating, the number of running Pods for a Job can be greater than parallelism or greater than one Pod per index (if you are using an Indexed" }, { "data": "You may choose to create replacement Pods only when the terminating Pod is fully terminal (has status.phase: Failed). To do this, set the .spec.podReplacementPolicy: Failed. The default replacement policy depends on whether the Job has a podFailurePolicy set. With no Pod failure policy defined for a Job, omitting the podReplacementPolicy field selects the TerminatingOrFailed replacement policy: the control plane creates replacement Pods immediately upon Pod deletion (as soon as the control plane sees that a Pod for this Job has deletionTimestamp set). For Jobs with a Pod failure policy set, the default podReplacementPolicy is Failed, and no other value is permitted. See Pod failure policy to learn more about Pod failure policies for Jobs. ``` kind: Job metadata: name: new ... spec: podReplacementPolicy: Failed ... ``` Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating. ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: terminating: 3 # three Pods are terminating and have not yet reached the Failed phase ``` This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller. You indicate the controller that reconciles the Job by setting a custom value for the spec.managedBy field - any value other than kubernetes.io/job-controller. The value of the field is immutable. When developing an external Job controller be aware that your controller needs to operate in a fashion conformant with the definitions of the API spec and status fields of the Job object. Please review these in detail in the Job API. We also recommend that you run the e2e conformance tests for the Job object to verify your implementation. 
Finally, when developing an external Job controller make sure it does not use the batch.kubernetes.io/job-tracking finalizer, reserved for the built-in controller. When the node that a Pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create new Pods to replace terminated ones. For this reason, we recommend that you use a Job rather than a bare Pod, even if your application requires only a single Pod. Jobs are complementary to Replication Controllers. A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks). As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never. (Note: If RestartPolicy is not set, the default value is Always.) Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see spark example), runs a spark driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but maintains complete control over what Pods are created and how work is assigned to them. Was this page helpful? Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub Repository if you want to report a problem or suggest an" } ]
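One capability mentioned above that has no accompanying snippet is scaling an elastic Indexed Job by mutating .spec.parallelism and .spec.completions together. A hedged sketch follows; the Job name myjob is a placeholder, and it assumes the ElasticIndexedJob feature gate is enabled on the API server:

```
# Scale an Indexed Job to 10, keeping parallelism equal to completions.
kubectl patch job/myjob --type=json -p '[
  {"op": "replace", "path": "/spec/parallelism", "value": 10},
  {"op": "replace", "path": "/spec/completions", "value": 10}
]'
```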
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CloudTTY", "subcategory": "Application Definition & Image Build" }
[ { "data": "Welcome to Cyclops, a powerful user interface for managing and interacting with Kubernetes clusters. Cyclops is designed to simplify the management of containerized applications on Kubernetes, providing an intuitive and user-friendly experience for developers, system administrators, and DevOps professionals. Divide the responsibility between your infrastructure and your developer teams so that everyone can play to their strengths. Automate your processes and shrink the window for deployment mistakes. Cyclops is an innovative web-based tool designed to simplify the management of distributed systems, specifically focusing on the widely used Kubernetes platform. By providing a user-friendly interface, Cyclops abstracts complex Kubernetes configuration files into intuitive web forms, making it easier for developers to deploy applications and manage Kubernetes environments. It offers predefined fields and graphical representations of deployments, enhancing visibility and reducing the learning curve associated with Kubernetes. Cyclops aims to empower IT operations teams, DevOps teams, developers and business owners, enabling them to streamline processes, increase productivity, and achieve cost savings in managing Kubernetes clusters. Cyclops provides a comprehensive dashboard that offers an overview of the cluster's health, performance, and resource utilization. The dashboard presents key metrics and information about pods, nodes, deployments, services, and more, enabling users to monitor the cluster's status at a glance. With Cyclops, users can effortlessly deploy and scale their applications on the cluster. The application provides an intuitive interface to create, manage, and update deployments, allowing users to easily adjust the number of replicas, configure rolling updates, and monitor the deployment's progress. Cyclops lets you create templates of YAML configuration files for your applications with variables that can be assigned later. This empowers users to create parameterized and customizable configurations that can be easily adapted to different environments or use cases. Templating YAML configuration files simplifies the management of Kubernetes resources, promotes consistency, and streamlines the deployment process, making it more efficient and adaptable to varying requirements. Versioning templates provide a structured way to keep track of changes and updates made to templates over time. Each version represents a specific iteration or snapshot of the template at a particular point in" }, { "data": "By using versioning, it becomes easier to manage and track different versions of templates, facilitating collaboration, maintenance, and rollback if necessary. Helm has already established itself in the Kubernetes community as a tool for writing configuration files. We understand that nobody likes to change the way they are doing things. To make the transition easier, we integrated Helm into our system and made it possible to bring your old configuration files written with Helm into Cyclops. No need for starting over, continue were you left off! By dividing responsibilities, each team can work efficiently in their respective domains. The infrastructure team can dedicate their efforts to infrastructure optimization, scalability, and security, ensuring that the Kubernetes environment is robust and well-maintained. Simultaneously, the developer team can focus on delivering their product without having to learn Kubernetes in depth. 
This division of responsibilities enhances collaboration and fosters a smoother development workflow. Using a form-based UI eliminates the need for manual configuration and command-line interactions, making the deployment process more user-friendly and accessible to individuals with varying levels of technical expertise. Advanced users can write their own configuration files, but we offer some basic templates for users still new to Kubernetes to help them start off. Cyclops deploys your applications through forms with predefined fields. This means that your developers can edit only certain fields and input only values of a certain type. Forms drastically shrink the window for deployment mistakes, which are often costly for businesses, both financially and reputation-wise. Developers do not need to know the intricacies of Kubernetes, only the basics, which in turn will speed up their onboarding and bolster their productivity. Cyclops promotes consistency and standardization in deployment practices. By providing predefined templates or configuration presets, Cyclops ensures that deployments adhere to established best practices and guidelines. This consistency not only improves the reliability and stability of deployments but also facilitates collaboration among team members who can easily understand and reproduce each other's deployments. Cyclops offers a streamlined and intuitive interface for managing Kubernetes clusters, simplifying complex operations and enabling efficient application orchestration. Whether you're new to Kubernetes or an experienced user, Cyclops empowers you to interact with your cluster effectively and enhances your productivity. Start leveraging the power of Kubernetes with a user-friendly experience through Cyclops." } ]
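The templating and Helm-import features described above can be pictured with a generic Helm-style snippet; this is ordinary Helm templating rather than Cyclops-specific syntax, and the field names are illustrative assumptions:

```
# templates/deployment.yaml -- values such as replicas and image become form fields
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: {{ .Values.image }}
```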
{ "category": "App Definition and Development", "file_name": "about.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "To install Cyclops in your cluster, run commands below: ``` kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/cyclops-install.yaml && kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/demo-templates.yaml``` It will create a new namespace called cyclops and deploy everything you need for your Cyclops instance to run. Now all that is left is to expose Cyclops server outside the cluster: ``` kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops``` You can now access Cyclops in your browser on http://localhost:3000." } ]
{ "category": "App Definition and Development", "file_name": "manifest.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "Welcome to Cyclops, a powerful user interface for managing and interacting with Kubernetes clusters. Cyclops is designed to simplify the management of containerized applications on Kubernetes, providing an intuitive and user-friendly experience for developers, system administrators, and DevOps professionals. Divide the responsibility between your infrastructure and your developer teams so that everyone can play to their strengths. Automate your processes and shrink the window for deployment mistakes. Cyclops is an innovative web-based tool designed to simplify the management of distributed systems, specifically focusing on the widely used Kubernetes platform. By providing a user-friendly interface, Cyclops abstracts complex Kubernetes configuration files into intuitive web forms, making it easier for developers to deploy applications and manage Kubernetes environments. It offers predefined fields and graphical representations of deployments, enhancing visibility and reducing the learning curve associated with Kubernetes. Cyclops aims to empower IT operations teams, DevOps teams, developers and business owners, enabling them to streamline processes, increase productivity, and achieve cost savings in managing Kubernetes clusters. Cyclops provides a comprehensive dashboard that offers an overview of the cluster's health, performance, and resource utilization. The dashboard presents key metrics and information about pods, nodes, deployments, services, and more, enabling users to monitor the cluster's status at a glance. With Cyclops, users can effortlessly deploy and scale their applications on the cluster. The application provides an intuitive interface to create, manage, and update deployments, allowing users to easily adjust the number of replicas, configure rolling updates, and monitor the deployment's progress. Cyclops lets you create templates of YAML configuration files for your applications with variables that can be assigned later. This empowers users to create parameterized and customizable configurations that can be easily adapted to different environments or use cases. Templating YAML configuration files simplifies the management of Kubernetes resources, promotes consistency, and streamlines the deployment process, making it more efficient and adaptable to varying requirements. Versioning templates provide a structured way to keep track of changes and updates made to templates over time. Each version represents a specific iteration or snapshot of the template at a particular point in" }, { "data": "By using versioning, it becomes easier to manage and track different versions of templates, facilitating collaboration, maintenance, and rollback if necessary. Helm has already established itself in the Kubernetes community as a tool for writing configuration files. We understand that nobody likes to change the way they are doing things. To make the transition easier, we integrated Helm into our system and made it possible to bring your old configuration files written with Helm into Cyclops. No need for starting over, continue were you left off! By dividing responsibilities, each team can work efficiently in their respective domains. The infrastructure team can dedicate their efforts to infrastructure optimization, scalability, and security, ensuring that the Kubernetes environment is robust and well-maintained. Simultaneously, the developer team can focus on delivering their product without having to learn Kubernetes in depth. 
This division of responsibilities enhances collaboration and fosters a smoother development workflow. Using a form-based UI eliminates the need for manual configuration and command-line interactions, making the deployment process more user-friendly and accessible to individuals with varying levels of technical expertise. Advanced users can write their own configuration files, but we offer some basic templates for users still new to Kubernetes to help them start off. Cyclops deploys your applications trough forms with predefined fields. This means that your developers can edit only certain fields and input only values of certain type. Forms drastically shrink the window for deployment mistakes which are often costly for businesses, both financially and reputation-wise. Developers do not need to know the intricacies of Kubernetes, only the basics, which in return will speed up their onboarding and bolster their productivity. Cyclops promotes consistency and standardization in deployment practices. By providing predefined templates or configuration presets, Cyclops ensures that deployments adhere to established best practices and guidelines. This consistency not only improves the reliability and stability of deployments but also facilitates collaboration among team members who can easily understand and reproduce each other's deployments. Cyclops offers a streamlined and intuitive interface for managing Kubernetes clusters, simplifying complex operations and enabling efficient application orchestration. Whether you're new to Kubernetes or an experienced user, Cyclops empowers you to interact with your cluster effectively and enhances your productivity. Start leveraging the power of Kubernetes with a user-friendly experience through Cyclops." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "These are the general steps for setting up a Teamspace. Apart from setting up a new Kubernetes cluster, the following steps should take less than 10 minutes to complete. These steps should be carried out by someone comfortable around Kubernetes: Once a Teamspace is set up and certified, individual developers can then install the Codezero local tools to work with the Teamspace. Developers will not require credentials for the Kubernetes cluster as they authenticate to the Teamspace via the Hub. NOTE: We currently support Github and Google authentication." } ]
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "CodeZero", "subcategory": "Application Definition & Image Build" }
[ { "data": "In order to test out Cyclops you are going to need some things. First thing you are going to need is a Kubernetes cluster. If you have one that you can use to play with, great, if not you can try installing minikube. Minikube sets up a local Kubernetes cluster that you can use to test stuff out. Check the docs on how to install it. Another thing you will need is kubectl. It is a command line interface for running commands against your cluster. Once you have installed minikube and kubectl, run your local cluster with: ``` minikube start``` After some time you will have a running cluster that you can use for testing. To verify everything is in order, you can try fetching all namespaces from the cluster with: ``` kubectl get namespaces``` Output should be something like this: ``` NAME STATUS AGEdefault Active 10mkube-node-lease Active 10mkube-public Active 10mkube-system Active 10m...```" } ]
{ "category": "App Definition and Development", "file_name": "prerequisites.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "Using Depot's remote builders for local development allows you to get faster Docker image builds with the entire Docker layer cache instantly available across builds. The cache is shared across your entire team who has access to a given Depot project, allowing you to reuse build results and cache across your entire team for faster local development. Additionally, routing the image build to remote builders frees your local machine's CPU and memory resources. There is nothing additional you need to configure to share your build cache across your team for local builds. If your team members can access the Depot project, they will automatically share the same build cache. So, if you build an image locally, your team members can reuse the layers you built in their own builds. To leverage Depot locally, install the depot CLI tool and configure your Depot project, if you haven't already. With those two things complete, you can then login to Depot via the CLI: ``` depot login``` Once you're logged in, you can configure Depot inside of your git repository by running the init command: ``` depot init``` The init command writes a depot.json file to the root of your repository with the Depot project ID that you selected. Alternatively, you can skip the init command if you'd like and use the --project flag on the build command to specify the project ID. You can run a build with Depot locally by running the build command: ``` depot build -t my-image:latest .``` By default, Depot won't return you the built image locally. Instead, the built image and the layers produced will remain in the build cache. However, if you'd like to download the image locally, for instance, so you can docker run it, you can specify the --load flag: ``` depot build -t my-image:latest --load .``` You can also run a build with Depot locally via the docker build or docker buildx build commands. To do so, you'll need to run depot configure-docker to configure your Docker CLI to use Depot as the default builder: ``` depot configure-docker docker build -t my-image:latest .``` For a full guide on using Depot via your existing docker build of docker compose commands, see our Docker integration guide." } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "Distributed applications are commonly comprised of many microservices, with dozens - sometimes hundreds - of instances scaling across underlying infrastructure. As these distributed solutions grow in size and complexity, the potential for system failures inevitably increases. Service instances can fail or become unresponsive due to any number of issues, including hardware failures, unexpected throughput, or application lifecycle events, such as scaling out and application restarts. Designing and implementing a self-healing solution with the ability to detect, mitigate, and respond to failure is critical. Dapr provides a capability for defining and applying fault tolerance resiliency policies to your application. You can define policies for following resiliency patterns: These policies can be applied to any Dapr API calls when calling components with a resiliency spec. Applications can become unresponsive for a variety of reasons. For example, they are too busy to accept new work, could have crashed, or be in a deadlock state. Sometimes the condition can be transitory or persistent. Dapr provides a capability for monitoring app health through probes that check the health of your application and react to status changes. When an unhealthy app is detected, Dapr stops accepting new work on behalf of the application. Read more on how to apply app health checks to your application. Dapr provides a way to determine its health using an HTTP /healthz endpoint. With this endpoint, the daprd process, or sidecar, can be: Read more on about how to apply dapr health checks to your application. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Dapr", "subcategory": "Application Definition & Image Build" }
[ { "data": "How to install the depot CLI on all platforms, with links to CI configuration guides. For Mac, you can install the CLI with Homebrew: ``` brew install depot/tap/depot``` Or download the latest version from GitHub releases. Either install with our installation script: ``` curl -L https://depot.dev/install-cli.sh | sh curl -L https://depot.dev/install-cli.sh | sh -s 1.6.0``` Or download the latest version from GitHub releases." } ]
{ "category": "App Definition and Development", "file_name": "installation.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "For questions, concerns, or information about our security policies or to disclose a security vulnerability, please get in touch with us at security@depot.dev. A Depot organization represents a collection of projects that contain builder VMs and SSD cache disks. These VMs and disks are associated with a single organization and are not shared across organizations. When a build request arrives, the build is routed to the correct builder VM based on organization, project, and requested CPU architecture. Communication between the depot CLI and builder VM uses an encrypted HTTPS (TLS) connection. Cache volumes are encrypted at rest using our infrastructure providers' encryption capabilities. A builder in Depot and its SSD cache are tied to a single project and the organization that owns it. Builders are never shared across organizations. Instead, builds running on a given builder are connected to one and only one organization, the organization that owns the projects. Connections from the Depot CLI to the builder VM are routed through a stateless load balancer directly to the project's builder VM and are encrypted using TLS (HTTPS). Our services and applications run in the cloud using one of our infrastructure providers, AWS and GCP. Depot has no physical access to the underlying physical infrastructure. For more information, see AWS's security details and GCP's security details. All data transferred in and out of Depot is encrypted using hardened TLS. This includes connections between the Depot CLI and builder VMs, which are conducted via HTTPS. In addition, Depot's domain is protected by HTTP Strict Transport Security (HSTS). Cache volumes attached to project builders are encrypted at rest using our infrastructure providers' encryption capabilities. Depot does not access builders or cache volumes directly, except for use in debugging when explicit permission is granted from the organization owner. Today, Depot operates cloud infrastructure in regions that are geographically located inside the United States of America as well as the European Union (if a project chooses the EU as its region). Depot supports API-token-based authentication for various aspects of the application: Depot keeps up to date with software dependencies and has automated tools scanning for dependency vulnerabilities. Development environments are separated physically from Depot's production environment. You can add and remove user access to your organization via the settings page. Users can have one of two roles: We expect to expand the available roles and permissions in the future; don't hesitate to contact us if you have any special requirements. In addition to users, Depot also allows creating trust relationships with GitHub Actions. These relationships enable workflow runs initiated in GitHub Actions to access specific projects in your organization to run builds. Trust relationships can be configured in the project settings. Access to create project builds effectively equates to access to the builder VM due to the nature of how docker build works. Anyone with access to build a project can access that project's build cache files and potentially add, edit, or remove cache entries. You should be careful that you trust the users and trust relationships that you have given access to a project and use tools like OIDC trust relationships to limit access to only the necessary scope." } ]
{ "category": "App Definition and Development", "file_name": "security.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "Introduction Organizations looking to standardize their development environment can do so by adopting devfiles. In the simplest case, developers can just consume the devfiles that are available from the public community registry. If your organization needs custom devfiles that are authored and shared internally, then you need a role based approach so developers, devfile authors, and registry administrators can interact together. A devfile author, also known as a runtime provider, can be an individual or a group representing a runtime vendor. Devfile authors need sound knowledge of the supported runtime so they can create devfiles to build and run applications. If a runtime stack is not available in the public registry, an organization can choose to develop their own and keep it private for their in-house development. The public community registry is managed by the community and hosted by Red Hat. Share your devfile to the public community registry so other teams can benefit from your application. If an organization wants to keep their own devfiles private but wishes to share with other departments, they can assign a registry administrator. The registry administrator deploys, maintains, and manages the contents of their private registry and the default list of devfile registries available in a given cluster. Developers can use the supported tools to access devfiles. Many of the existing tools offer a way to register or catalog public and private devfile registries which then allows the tool to expose the devfiles for development. In addition, each registry comes packaged with an index server and a registry viewer so developers can browse and view the devfile contents before deciding which ones they want to adopt. Developers can also extend an existing parent devfile to customize the workflow of their specific application. The devfile can be packaged as part of the application source to ensure consistent behavior when moving across different tools. Note! Tools that support the devfile spec might have varying levels of support. Check their product pages for more information. An open standard defining containerized development environments." } ]
{ "category": "App Definition and Development", "file_name": "community.md", "project_name": "Devfile", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can add seats and manage invitations to your Docker Build Cloud Team in the Docker Build Cloud dashboard. Note If you have a Docker Build Cloud Business subscription, you can add and remove seats by working with your account executive, then assign your purchased seats in the Docker Build Cloud dashboard. The number of seats will be charged to your payment information on file, and are added immediately. The charge for the reduced seat count will be reflected on the next billing cycle. Optionally, you can cancel the seat downgrade any time before the next billing cycle. As an owner of the Docker Build Cloud team, you can invite members to access cloud builders. To invite team members to your team in Docker Build Cloud: Invitees receive an email with instructions on how they can accept the invite. After they accept, the seat will be marked as Allocated in the User management section in the Docker Build Cloud dashboard. For more information on the permissions granted to members, see Roles and permissions. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "Docker recommends you use the Docker Official Images in your projects. These images have clear documentation, promote best practices, and are regularly updated. Docker Official Images support most common use cases, making them perfect for new Docker users. Advanced users can benefit from more specialized image variants as well as review Docker Official Images as part of your Dockerfile learning process. The repository description for each Docker Official Image contains a Supported tags and respective Dockerfile links section that lists all the current tags with links to the Dockerfiles that created the image with those tags. The purpose of this section is to show what image variants are available. Tags listed on the same line all refer to the same underlying image. Multiple tags can point to the same image. For example, in the previous screenshot taken from the ubuntu Docker Official Images repository, the tags 24.04, noble-20240225, noble, and devel all refer to the same image. The latest tag for a Docker Official Image is often optimized for ease of use and includes a wide variety of useful software, such as developer and build tools. By tagging an image as latest, the image maintainers are essentially suggesting that image be used as the default. In other words, if you do not know what tag to use or are unfamiliar with the underlying software, you should probably start with the latest image. As your understanding of the software and image variants advances, you may find other image variants better suit your needs. A number of language stacks such as Node.js, Python, and Ruby have slim tag variants designed to provide a lightweight, production-ready base image with fewer packages. A typical consumption pattern for slim images is as the base image for the final stage of a multi-staged build. For example, you build your application in the first stage of the build using the latest variant and then copy your application into the final stage based upon the slim variant. Here is an example Dockerfile. ``` FROM node:latest AS build WORKDIR /app COPY package.json package-lock.json" }, { "data": "RUN npm ci COPY . ./ FROM node:slim WORKDIR /app COPY --from=build /app /app CMD [\"node\", \"app.js\"]``` Many Docker Official Images repositories also offer alpine variants. These images are built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than slim variants. The main caveat to note is that Alpine Linux uses musl libc instead of glibc. Additionally, to minimize image size, it's uncommon for Alpine-based images to include tools such as Git or Bash by default. Depending on the depth of libc requirements or assumptions in your programs, you may find yourself running into issues due to missing libraries or tools. When you use Alpine images as a base, consider the following options in order to make your program compatible with Alpine Linux and musl: Refer to the alpine image description on Docker Hub for examples on how to install packages if you are unfamiliar. Tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as focal, jammy, and noble), indicate the codename of the Linux distribution they use as a base image. 
Debian release codenames are based on Toy Story characters, and Ubuntu's take the form of \"Adjective Animal\". For example, the codename for Ubuntu 24.04 is \"Noble Numbat\". Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye). Docker Official Images tags may contain other hints to the purpose of their image variant in addition to those described here. Often these tag variants are explained in the Docker Official Images repository documentation. Reading through the How to use this image and Image Variants sections will help you to understand how to use these variants. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "compose-file.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "How to install the depot CLI on all platforms, with links to CI configuration guides. For Mac, you can install the CLI with Homebrew: ``` brew install depot/tap/depot``` Or download the latest version from GitHub releases. Either install with our installation script: ``` curl -L https://depot.dev/install-cli.sh | sh curl -L https://depot.dev/install-cli.sh | sh -s 1.6.0``` Or download the latest version from GitHub releases." } ]
{ "category": "App Definition and Development", "file_name": "local-development.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "Docker recommends you use the Docker Official Images in your projects. These images have clear documentation, promote best practices, and are regularly updated. Docker Official Images support most common use cases, making them perfect for new Docker users. Advanced users can benefit from more specialized image variants as well as review Docker Official Images as part of your Dockerfile learning process. The repository description for each Docker Official Image contains a Supported tags and respective Dockerfile links section that lists all the current tags with links to the Dockerfiles that created the image with those tags. The purpose of this section is to show what image variants are available. Tags listed on the same line all refer to the same underlying image. Multiple tags can point to the same image. For example, in the previous screenshot taken from the ubuntu Docker Official Images repository, the tags 24.04, noble-20240225, noble, and devel all refer to the same image. The latest tag for a Docker Official Image is often optimized for ease of use and includes a wide variety of useful software, such as developer and build tools. By tagging an image as latest, the image maintainers are essentially suggesting that image be used as the default. In other words, if you do not know what tag to use or are unfamiliar with the underlying software, you should probably start with the latest image. As your understanding of the software and image variants advances, you may find other image variants better suit your needs. A number of language stacks such as Node.js, Python, and Ruby have slim tag variants designed to provide a lightweight, production-ready base image with fewer packages. A typical consumption pattern for slim images is as the base image for the final stage of a multi-staged build. For example, you build your application in the first stage of the build using the latest variant and then copy your application into the final stage based upon the slim variant. Here is an example Dockerfile. ``` FROM node:latest AS build WORKDIR /app COPY package.json package-lock.json" }, { "data": "RUN npm ci COPY . ./ FROM node:slim WORKDIR /app COPY --from=build /app /app CMD [\"node\", \"app.js\"]``` Many Docker Official Images repositories also offer alpine variants. These images are built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than slim variants. The main caveat to note is that Alpine Linux uses musl libc instead of glibc. Additionally, to minimize image size, it's uncommon for Alpine-based images to include tools such as Git or Bash by default. Depending on the depth of libc requirements or assumptions in your programs, you may find yourself running into issues due to missing libraries or tools. When you use Alpine images as a base, consider the following options in order to make your program compatible with Alpine Linux and musl: Refer to the alpine image description on Docker Hub for examples on how to install packages if you are unfamiliar. Tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as focal, jammy, and noble), indicate the codename of the Linux distribution they use as a base image. 
Debian release codenames are based on Toy Story characters, and Ubuntu's take the form of \"Adjective Animal\". For example, the codename for Ubuntu 24.04 is \"Noble Numbat\". Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye). Docker Official Images tags may contain other hints to the purpose of their image variant in addition to those described here. Often these tag variants are explained in the Docker Official Images repository documentation. Reading through the How to use this image and Image Variants sections will help you to understand how to use these variants. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "faq.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "The Docker Official Images are a curated set of Docker repositories hosted on Docker Hub. Note Use of Docker Official Images is subject to Docker's Terms of Service. These images provide essential base repositories that serve as the starting point for the majority of users. These include operating systems such as Ubuntu and Alpine, programming language runtimes such as Python and Node, and other essential tools such as memcached and MySQL. The images are some of the most secure images on Docker Hub. This is particularly important as Docker Official Images are some of the most popular on Docker Hub. Typically, Docker Official images have few or no packages containing CVEs. The images exemplify Dockerfile best practices and provide clear documentation to serve as a reference for other Dockerfile authors. Images that are part of this program have a special badge on Docker Hub making it easier for you to identify projects that are part of Docker Official Images. Using Docker Official Images Contributing to Docker Official Images Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "release-notes.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "For more detailed information, see the release notes in the Compose repo. This release fixes a build issue with Docker Desktop for Windows introduced in Compose v2.24.0. Note The watch command is now generally available (GA). You can directly use it from the root command docker compose watch. For more information, see File watch. Note The format of docker compose ps and docker compose ps --format=json changed to better align with docker ps output. See compose#10918. For the full change log or additional information, check the Compose repository 2.12.2 release page. For the full change log or additional information, check the Compose repository 2.12.1 release page. CI update to the documentation repository path Upgraded to compose-go from 1.5.1 to 1.6.0 Updated to go 1.19.2 to address CVE-2022-2879, CVE-2022-2880, CVE-2022-41715 For the full change log or additional information, check the Compose repository 2.12.0 release page. Note For the full change log or additional information, check the Compose repository 2.11.2 release page. For the full change log or additional information, check the Compose repository 2.11.1 release page. For the full change log or additional information, check the Compose repository 2.11.0 release page. For the full change log or additional information, check the Compose repository 2.10.2 release page. For the full change log or additional information, check the Compose repository 2.10.1 release page. For the full change log, check the Compose repository 2.10.0 release page. Important Compose v2.9.0 contains changes to the environment variable's precedence that have since been reverted. We recommend using v2.10+ to avoid compatibility issues. Note This release reverts the breaking changes introduced in Compose v2.8.0 by compose-go v1.3.0. For the full change log or additional information, check the Compose repository 2.9.0 release page. Important This release introduced a breaking change via compose-go v1.3.0 and this PR. In this release, Docker Compose recreates new resources (networks, volumes, secrets, configs, etc.) with new names, using a - (dash) instead an _ (underscore) and tries to connect to or use these newly created resources instead of your existing ones! Please use Compose the v2.9.0 release instead. For the full change log or additional information, check the Compose repository 2.8.0 release page. For the full change log or additional information, check the Compose repository 2.7.0 release page. For the full change log or additional information, check the Compose repository 2.6.1 release page. For the full change log or additional information, check the Compose repository 2.6.0 release page. For the full change log or additional information, check the Compose repository 2.5.1 release page. For the full change log or additional information, check the Compose repository 2.5.0 release page. For the full change log or additional information, check the Compose repository 2.4.1 release page. For the full change log or additional information, check the Compose repository 2.4.0 release page. For the full change log or additional information, check the Compose repository 2.3.4 release page. (2022-03-8 to 2022-04-14) For the releases later than 1.29.2 and earlier than 2.3.4, please check the Compose repository release pages. (2021-05-10) Removed the prompt to use docker-compose in the up command. Bumped py to 1.10.0 in requirements-indirect.txt. (2021-04-13) Fixed invalid handler warning on Windows builds. 
Fixed config hash to trigger container re-creation on IPC mode updates. Fixed conversion map for placement.maxreplicasper_node. Removed extra scan suggestion on build. (2021-04-06) Added profile filter to docker-compose config. Added a depends_on condition to wait for successful service completion. Added an image scan message on build. Updated warning message for --no-ansi to mention --ansi never as alternative. Bumped docker-py to" }, { "data": "Bumped PyYAML to 5.4.1. Bumped python-dotenv to 0.17.0. (2021-03-23) Made --env-file relative to the current working directory. Environment file paths set with --env-file are now relative to the current working directory and override the default .env file located in the project directory. Fixed missing service property storage_opt by updating the Compose schema. Fixed build extra_hosts list format. Removed additional error message on exec. (2021-02-26) Fixed the OpenSSL version mismatch error when shelling out to the SSH client (via bump to docker-py 4.4.4 which contains the fix). Added missing build flags to the native builder: platform, isolation and extra_hosts. Removed info message on native build. Fixed the log fetching bug when service logging driver is set to 'none'. (2021-02-18) (2021-02-17) Fixed SSH hostname parsing when it contains a leading 's'/'h', and removed the quiet option that was hiding the error (via docker-py bump to 4.4.2). Fixed key error for --no-log-prefix option. Fixed incorrect CLI environment variable name for service profiles: COMPOSEPROFILES instead of COMPOSEPROFILE. Fixed the fish completion. Bumped cryptography to 3.3.2. Removed the log driver filter. For a list of PRs and issues fixed in this release, see Compose 1.28.3. (2021-01-26) Revert to Python 3.7 bump for Linux static builds Add bash completion for docker-compose logs|up --no-log-prefix (2021-01-20) Added support for NVIDIA GPUs through device requests. Added support for service profiles. Changed the SSH connection approach to the Docker CLI by shelling out to the local SSH client. Set the COMPOSEPARAMIKOSSH=1 environment variable to enable the old behavior. Added a flag to disable log prefix. Added a flag for ANSI output control. Docker Compose now uses the native Docker CLI's build command when building images. Set the COMPOSEDOCKERCLI_BUILD=0 environment variable to disable this feature. Made parallel_pull=True by default. Restored the warning for configs in non-swarm mode. Took --file into account when defining project_dir. Fixed a service attach bug on compose up. Added usage metrics. Synced schema with COMPOSE specification. Improved failure report for missing mandatory environment variables. Bumped attrs to 20.3.0. Bumped more_itertools to 8.6.0. Bumped cryptograhy to 3.2.1. Bumped cffi to 1.14.4. Bumped virtualenv to 20.2.2. Bumped bcrypt to 3.2.0. Bumped GitPython to 3.1.11. Bumped docker-py to 4.4.1. Bumped Python to 3.9. Linux: bumped Debian base image from stretch to buster (required for Python 3.9). macOS: Bumped OpenSSL 1.1.1g to 1.1.1h, and Python 3.7.7 to 3.9.0. Bumped PyInstaller to 4.1. Relaxed the restriction on base images to latest minor. Updated READMEs. (2020-09-24) Removed path checks for bind mounts. Fixed port rendering to output long form syntax for non-v1. Added protocol to the Docker socket address. (2020-09-16) Merged maxreplicasper_node on docker-compose config. Fixed depends_on serialization on docker-compose config. Fixed scaling when some containers are not running on docker-compose up. 
Enabled relative paths for driver_opts.device for local driver. Allowed strings for cpus fields. (2020-09-10) (2020-09-10) Fixed docker-compose run when service.scale is specified. Allowed the driver property for external networks as a temporary workaround for the Swarm network propagation issue. Pinned the new internal schema version to 3.9 as the default. Preserved the version number configured in the Compose file. (2020-09-07) Merged 2.x and 3.x Compose formats and aligned with COMPOSE_SPEC schema. Implemented service mode for ipc. Passed COMPOSEPROJECTNAME environment variable in container mode. Made run behave in the same way as up. Used docker build on docker-compose run when COMPOSEDOCKERCLI_BUILD environment variable is set. Used the docker-py default API version for engine queries (auto). Parsed network_mode on build. Ignored build context path validation when building is not" }, { "data": "Fixed float to bytes conversion via docker-py bump to 4.3.1. Fixed the scale bug when the deploy section is set. Fixed docker-py bump in setup.py. Fixed experimental build failure detection. Fixed context propagation to the Docker CLI. Bumped docker-py to 4.3.1. Bumped tox to 3.19.0. Bumped virtualenv to 20.0.30. Added script for Docs synchronization. (2020-07-02) (2020-06-30) Enforced docker-py 4.2.1 as minimum version when installing with pip. Fixed context load for non-docker endpoints. (2020-06-03) Added docker context support. Added missing test dependency ddt to setup.py. Added --attach-dependencies to command up for attaching to dependencies. Allowed compatibility option with COMPOSE_COMPATIBILITY environment variable. Bumped Pytest to 5.3.4 and add refactor compatibility with the new version. Bumped OpenSSL from 1.1.1f to 1.1.1g. Bumped certifi from 2019.11.28 to 2020.4.5.1. Bumped docker-py from 4.2.0 to 4.2.1. Properly escaped values coming from env_files. Synchronized compose-schemas with upstream (docker/cli). Removed None entries on exec command. Added distro package to get distro information. Added python-dotenv to delegate .env file processing. Stopped adjusting output on terminal width when piped into another command. Showed an error message when version attribute is malformed. Fixed HTTPS connection when DOCKER_HOST is remote. (2020-04-10) Bumped OpenSSL from 1.1.1d to 1.1.1f. Added Compose version 3.8. (2020-02-03) Fixed the CI script to enforce the minimal MacOS version to 10.11. Fixed docker-compose exec for keys with no value on environment files. (2020-01-23) Fixed the CI script to enforce the compilation with Python3. Updated the binary's sha256 on the release page. (2020-01-20) Fixed an issue that caused Docker Compose to crash when the version field was set to an invalid value. Docker Compose now displays an error message when invalid values are used in the version field. Fixed an issue that caused Docker Compose to render messages incorrectly when running commands outside a terminal. (2020-01-06) Decoded the APIError explanation to Unicode before using it to create and start a container. Docker Compose discards com.docker.compose.filepaths labels that have None as value. This usually occurs when labels originate from stdin. Added OS X binary as a directory to solve slow start up time issues caused by macOS Catalina binary scan. Passed the HOME environment variable in container mode when running with script/run/run.sh. Docker Compose now reports images that cannot be pulled, however, are required to be built. 
(2019-11-18) Set no-colors to true by changing CLICOLOR env variable to 0. Added working directory, config files, and env file to service labels. Added ARM build dependencies. Added BuildKit support (use DOCKERBUILDKIT=1 and COMPOSEDOCKERCLIBUILD=1). Raised Paramiko to version 2.6.0. Added the following tags: docker-compose:latest, docker-compose:<version>-alpine, and docker-compose:<version>-debian. Raised docker-py to version 4.1.0. Enhanced support for requests, up to version 2.22.0. Removed empty tag on build:cache_from. Dockerfile enhancement that provides for the generation of libmusl binaries for Alpine Linux. Pulling only of images that cannot be built. The scale attribute now accepts 0 as a value. Added a --quiet option and a --no-rm option to the docker-compose build command. Added a --no-interpolate option to the docker-compose config command. Raised OpenSSL for MacOS build from 1.1.0 to 1.1.1c. Added support for the docker-compose.yml file's credential_spec configuration option. Resolution of digests without having to pull the image. Upgraded pyyaml to version 4.2b1. Lowered the severity to warning for instances in which down attempts to remove a non-existent image. Mandated the use of improved API fields for project events, when possible. Updated setup.py for modern pypi/setuptools, and removed pandoc dependencies. Removed Dockerfile.armhf, which is no longer required. Made container service color deterministic, including the removal of the color red. Fixed non-ASCII character errors (Python 2" }, { "data": "Changed image sizing to decimal format, to align with Docker CLI. tty size acquired through Python POSIX support. Fixed same file extends optimization. Fixed stdin_open. Fixed the issue of --remove-orphans being ignored encountered during use with up --no-start option. Fixed docker-compose ps --all command. Fixed the depends_on dependency recreation behavior. Fixed bash completion for the docker-compose build --memory command. Fixed the misleading environmental variables warning that occurs when the docker-compose exec command is performed. Fixed the failure check in the parallelexecutewatch function. Fixed the race condition that occurs following the pulling of an image. Fixed error on duplicate mount points (a configuration error message now displays). Fixed the merge on networks section. Compose container is always connected to stdin by default. Fixed the presentation of failed services on the docker-compose start command when containers are not available. (2019-06-24) This release contains minor improvements and bug fixes. (2019-03-28) Added support for connecting to the Docker Engine using the ssh protocol. Added an --all flag to docker-compose ps to include stopped one-off containers in the command's output. Added bash completion for ps --all|-a. Added support for credential_spec. Added --parallel to docker build's options in bash and zsh completion. Fixed a bug where some valid credential helpers weren't properly handled by Compose when attempting to pull images from private registries. Fixed an issue where the output of docker-compose start before containers were created was misleading. Compose will no longer accept whitespace in variable names sourced from environment files. This matches the Docker CLI behavior. Compose will now report a configuration error if a service attempts to declare duplicate mount points in the volumes section. 
Fixed an issue with the containerized version of Compose that prevented users from writing to stdin during interactive sessions started by run or exec. One-off containers started by run no longer adopt the restart policy of the service, and are instead set to never restart. Fixed an issue that caused some container events to not appear in the output of the docker-compose events command. Missing images will no longer stop the execution of docker-compose down commands. A warning is now displayed instead. Force virtualenv version for macOS CI. Fixed merging of Compose files when network has None config. Fixed CTRL+C issues by enabling bootloaderignoresignals in pyinstaller. Bumped docker-py version to 3.7.2 to fix SSH and proxy configuration issues. Fixed release script and some typos on release documentation. (2018-11-28) Reverted a 1.23.0 change that appended random strings to container names created by docker-compose up, causing addressability issues. Note: Containers created by docker-compose run will continue to use randomly generated names to avoid collisions during parallel runs. Fixed an issue where some dockerfile paths would fail unexpectedly when attempting to build on Windows. Fixed a bug where build context URLs would fail to build on Windows. Fixed a bug that caused run and exec commands to fail for some otherwise accepted values of the --host parameter. Fixed an issue where overrides for the storage_opt and isolation keys in service definitions weren't properly applied. Fixed a bug where some invalid Compose files would raise an uncaught exception during validation. (2018-11-01) Fixed a bug where working with containers created with a version of Compose earlier than 1.23.0 would cause unexpected crashes. Fixed an issue where the behavior of the --project-directory flag would vary depending on which subcommand was used. (2018-10-30) The default naming scheme for containers created by Compose in this version has changed from <project><service><index> to <project><service><index>_<slug>, where <slug> is a randomly-generated hexadecimal" }, { "data": "Please make sure to update scripts relying on the old naming scheme accordingly before upgrading. Logs for containers restarting after a crash will now appear in the output of the up and logs commands. Added --hash option to the docker-compose config command, allowing users to print a hash string for each service's configuration to facilitate rolling updates. Added --parallel flag to the docker-compose build command, allowing Compose to build up to 5 images simultaneously. Output for the pull command now reports status / progress even when pulling multiple images in parallel. For images with multiple names, Compose will now attempt to match the one present in the service configuration in the output of the images command. Fixed an issue where parallel run commands for the same service would fail due to name collisions. Fixed an issue where paths longer than 260 characters on Windows clients would cause docker-compose build to fail. Fixed a bug where attempting to mount /var/run/docker.sock with Docker Desktop for Windows would result in failure. The --project-directory option is now used by Compose to determine where to look for the .env file. docker-compose build no longer fails when attempting to pull an image with credentials provided by the gcloud credential helper. Fixed the --exit-code-from option in docker-compose up to always report the actual exit code even when the watched container is not the cause of the exit. 
Fixed an issue that would prevent recreating a service in some cases where a volume would be mapped to the same mountpoint as a volume declared within the Dockerfile for that image. Fixed a bug that caused hash configuration with multiple networks to be inconsistent, causing some services to be unnecessarily restarted. Fixed a bug that would cause failures with variable substitution for services with a name containing one or more dot characters. Fixed a pipe handling issue when using the containerized version of Compose. Fixed a bug causing external: false entries in the Compose file to be printed as external: true in the output of docker-compose config. Fixed a bug where issuing a docker-compose pull command on services without a defined image key would cause Compose to crash. Volumes and binds are now mounted in the order they are declared in the service definition. (2018-07-17) Introduced version 3.7 of the docker-compose.yml specification. This version requires Docker Engine 18.06.0 or above. Added support for rollback_config in the deploy configuration Added support for the init parameter in service configurations Added support for extension fields in service, network, volume, secret, and config configurations Fixed a bug that prevented deployment with some Compose files when DOCKERDEFAULTPLATFORM was set Compose will no longer try to create containers or volumes with invalid starting characters Fixed several bugs that prevented Compose commands from working properly with containers created with an older version of Compose Fixed an issue with the output of docker-compose config with the --compatibility-mode flag enabled when the source file contains attachable networks Fixed a bug that prevented the gcloud credential store from working properly when used with the Compose binary on UNIX Fixed a bug that caused connection errors when trying to operate over a non-HTTPS TCP connection on Windows Fixed a bug that caused builds to fail on Windows if the Dockerfile was located in a subdirectory of the build context Fixed an issue that prevented proper parsing of UTF-8 BOM encoded Compose files on Windows Fixed an issue with handling of the double-wildcard () pattern in .dockerignore files when using docker-compose build Fixed a bug that caused auth values in legacy" }, { "data": "files to be ignored docker-compose build will no longer attempt to create image names starting with an invalid character (2018-05-03) (2018-04-27) In 1.21.0, we introduced a change to how project names are sanitized for internal use in resource names. This caused issues when manipulating an existing, deployed application whose name had changed as a result. This release properly detects resources using \"legacy\" naming conventions. Fixed an issue where specifying an in-context Dockerfile using an absolute path would fail despite being valid. Fixed a bug where IPAM option changes were incorrectly detected, preventing redeployments. Validation of v2 files now properly checks the structure of IPAM configs. Improved support for credentials stores on Windows to include binaries using extensions other than .exe. The list of valid extensions is determined by the contents of the PATHEXT environment variable. Fixed a bug where Compose would generate invalid binds containing duplicate elements with some v3.2 files, triggering errors at the Engine level during deployment. (2018-04-11) Introduced version 2.4 of the docker-compose.yml specification. This version requires Docker Engine 17.12.0 or above. 
Added support for the platform parameter in service definitions. If supplied, the parameter is also used when performing build for the service. Added support for the cpu_period parameter in service definitions (2.x only). Added support for the isolation parameter in service build configurations. Additionally, the isolation parameter in service definitions is used for builds as well if no build.isolation parameter is defined. (2.x only) Added support for the --workdir flag in docker-compose exec. Added support for the --compress flag in docker-compose build. docker-compose pull is now performed in parallel by default. You can opt out using the --no-parallel flag. The --parallel flag is now deprecated and will be removed in a future version. Dashes and underscores in project names are no longer stripped out. docker-compose build now supports the use of Dockerfile from outside the build context. Compose now checks that the volume's configuration matches the remote volume, and errors out if a mismatch is detected. Fixed a bug that caused Compose to raise unexpected errors when attempting to create several one-off containers in parallel. Fixed a bug with argument parsing when using docker-machine config to generate TLS flags for exec and run commands. Fixed a bug where variable substitution with an empty default value (e.g. ${VAR:-}) would print an incorrect warning. Improved resilience when encoding of the Compose file doesn't match the system's. Users are encouraged to use UTF-8 when possible. Fixed a bug where external overlay networks in Swarm would be incorrectly recognized as inexistent by Compose, interrupting otherwise valid operations. (2018-03-20) Introduced version 3.6 of the docker-compose.yml specification. This version must be used with Docker Engine 18.02.0 or above. Added support for the tmpfs.size property in volume mappings Added support for devicecgrouprules in service definitions Added support for the tmpfs.size property in long-form volume mappings The --build-arg option can now be used without specifying a service in docker-compose build Added a --log-level option to the top-level docker-compose command. Accepted values are debug, info, warning, error, critical. 
Default log level is info docker-compose run now allows users to unset the container's entrypoint Proxy configuration found in the" }, { "data": "file now populates environment and build args for containers created by Compose Added the --use-aliases flag to docker-compose run, indicating that network aliases declared in the service's config should be used for the running container Added the --include-deps flag to docker-compose pull docker-compose run now kills and removes the running container upon receiving SIGHUP docker-compose ps now shows the containers' health status if available Added the long-form --detach option to the exec, run and up commands Fixed .dockerignore handling, notably with regard to absolute paths and last-line precedence rules Fixed an issue where Compose would make costly DNS lookups when connecting to the Engine when using Docker For Mac Fixed a bug introduced in 1.19.0 which caused the default certificate path to not be honored by Compose Fixed a bug where Compose would incorrectly check whether a symlink's destination was accessible when part of a build context Fixed a bug where .dockerignore files containing lines of whitespace caused Compose to error out on Windows Fixed a bug where --tls* and --host options wouldn't be properly honored for interactive run and exec commands A seccomp:<filepath> entry in the security_opt config now correctly sends the contents of the file to the engine ANSI output for up and down operations should no longer affect the wrong lines Improved support for non-unicode locales Fixed a crash occurring on Windows when the user's home directory name contained non-ASCII characters Fixed a bug occurring during builds caused by files with a negative mtime values in the build context Fixed an encoding bug when streaming build progress (2018-02-07) Added --renew-anon-volumes (shorthand -V) to the up command, preventing Compose from recovering volume data from previous containers for anonymous volumes Added limit for number of simultaneous parallel operations, which should prevent accidental resource exhaustion of the server. Default is 64 and can be configured using the COMPOSEPARALLELLIMIT environment variable Added --always-recreate-deps flag to the up command to force recreating dependent services along with the dependency owner Added COMPOSEIGNOREORPHANS environment variable to forgo orphan container detection and suppress warnings Added COMPOSEFORCEWINDOWS_HOST environment variable to force Compose to parse volume definitions as if the Docker host was a Windows system, even if Compose itself is currently running on UNIX Bash completion should now be able to better differentiate between running, stopped and paused services Fixed a bug that would cause the build command to report a connection error when the build context contained unreadable files or FIFO objects. These file types will now be handled appropriately Fixed various issues around interactive run/exec sessions. 
Fixed a bug where setting TLS options with environment and CLI flags simultaneously would result in part of the configuration being ignored Fixed a bug where the DOCKERTLSVERIFY environment variable was being ignored by Compose Fixed a bug where the -d and --timeout flags in up were erroneously marked as incompatible Fixed a bug where the recreation of a service would break if the image associated with the previous container had been removed Fixed a bug where updating a mount's target would break Compose when trying to recreate the associated service Fixed a bug where tmpfs volumes declared using the extended syntax in Compose files using version 3.2 would be erroneously created as anonymous volumes instead Fixed a bug where type conversion errors would print a stacktrace instead of exiting gracefully Fixed some errors related to unicode handling Dependent services no longer get recreated along with the dependency owner if their configuration hasn't changed Added better validation of labels fields in Compose files. Label values containing scalar types (number, boolean) now get automatically converted to strings (2017-12-18) Introduced version 3.5 of the docker-compose.yml specification. This version requires Docker Engine" }, { "data": "or above Added support for the shm_size parameter in build configurations Added support for the isolation parameter in service definitions Added support for custom names for network, secret and config definitions Added support for extra_hosts in build configuration Added support for the long syntax for volume entries, as previously introduced in the 3.2 format. Using this syntax will create mounts instead of volumes. Added support for the oomkilldisable parameter in service definitions (2.x only) Added support for custom names for network definitions (2.x only) Values interpolated from the environment will now be converted to the proper type when used in non-string fields. Added support for --label in docker-compose run Added support for --timeout in docker-compose down Added support for --memory in docker-compose build Setting stopgraceperiod in service definitions now also sets the container's stop_timeout Fixed an issue where Compose was still handling service hostname according to legacy engine behavior, causing hostnames containing dots to be cut up Fixed a bug where the X-Y:Z syntax for ports was considered invalid by Compose Fixed an issue with CLI logging causing duplicate messages and inelegant output to occur Fixed an issue that caused stopgraceperiod to be ignored when using multiple Compose files Fixed a bug that caused docker-compose images to crash when using untagged images Fixed a bug where the valid ${VAR:-} syntax would cause Compose to error out Fixed a bug where env_file entries using an UTF-8 BOM were being read incorrectly Fixed a bug where missing secret files would generate an empty directory in their place Fixed character encoding issues in the CLI's error handlers Added validation for the test field in healthchecks Added validation for the subnet field in IPAM configurations Added validation for volumes properties when using the long syntax in service definitions The CLI now explicit prevents using -d and --timeout together in docker-compose up (2017-11-01) Introduced version 3.4 of the docker-compose.yml specification. This version requires to be used with Docker Engine 17.06.0 or above. 
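As a sketch of the 3.5 additions listed above (custom resource names, shm_size in build configurations, and the service-level isolation parameter); names and values here are illustrative, not taken from the release notes:
```
version: "3.5"
services:
  web:
    build:
      context: .
      shm_size: "2gb"          # size of /dev/shm for the build container
    isolation: default         # container isolation technology (mostly relevant on Windows)
    networks:
      - shared
networks:
  shared:
    name: preexisting-network  # custom network name, new in 3.5
```
(The options written oomkilldisable and stopgraceperiod above correspond to oom_kill_disable and stop_grace_period.)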
Added support for cache_from, network and target options in build configurations Added support for the order parameter in the update_config section Added support for setting a custom name in volume definitions using the name parameter Fixed a bug where extra_hosts values would be overridden by extension files instead of merging together Fixed a bug where the validation for v3.2 files would prevent using the consistency field in service volume definitions Fixed a bug that would cause a crash when configuration fields expecting unique items would contain duplicates Fixed a bug where mount overrides with a different mode would create a duplicate entry instead of overriding the original entry Fixed a bug where build labels declared as a list wouldn't be properly parsed Fixed a bug where the output of docker-compose config would be invalid for some versions if the file contained custom-named external volumes Improved error handling when issuing a build command on Windows using an unsupported file version Fixed an issue where networks with identical names would sometimes be created when running up commands concurrently. (2017-08-31) Introduced version 2.3 of the docker-compose.yml specification. This version requires to be used with Docker Engine 17.06.0 or above. Added support for the target parameter in build configurations Added support for the start_period parameter in healthcheck configurations Added support for the blkio_config parameter in service definitions Added support for setting a custom name in volume definitions using the name parameter (not available for version 2.0) Fixed a bug where nested extends instructions weren't resolved properly, causing \"file not found\" errors Fixed several issues with" }, { "data": "parsing Fixed issues where logs of TTY-enabled services were being printed incorrectly and causing MemoryError exceptions Fixed a bug where printing application logs would sometimes be interrupted by a UnicodeEncodeError exception on Python 3 The $ character in the output of docker-compose config is now properly escaped Fixed a bug where running docker-compose top would sometimes fail with an uncaught exception Fixed a bug where docker-compose pull with the --parallel flag would return a 0 exit code when failing Fixed an issue where keys in deploy.resources were not being validated Fixed an issue where the logging options in the output of docker-compose config would be set to null, an invalid value Fixed the output of the docker-compose images command when an image would come from a private repository using an explicit port number Fixed the output of docker-compose config when a port definition used 0 as the value for the published port (2017-07-26) The pid option in a service's definition now supports a service:<name> value. Added support for the storage_opt parameter in service definitions. This option is not available for the v3 format Added --quiet flag to docker-compose pull, suppressing progress output Some improvements to CLI output Volumes specified through the --volume flag of docker-compose run now complement volumes declared in the service's definition instead of replacing them Fixed a bug where using multiple Compose files would unset the scale value defined inside the Compose file. 
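A minimal sketch of the 3.4-format build options noted above (cache_from, network and target, plus the explicit volume name); the image and stage names are illustrative:
```
version: "3.4"
services:
  app:
    build:
      context: .
      target: runtime           # stop at a named multi-stage build stage
      network: host             # network used for RUN instructions during the build
      cache_from:
        - example/app:latest    # seed the build cache from a previously built image
volumes:
  data:
    name: app-data              # explicit volume name via the name parameter
```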
Fixed an issue where the credHelpers entries in the config.json file were not being honored by Compose Fixed a bug where using multiple Compose files with port declarations would cause failures in Python 3 environments Fixed a bug where some proxy-related options present in the user's environment would prevent Compose from running Fixed an issue where the output of docker-compose config would be invalid if the original file used Y or N values Fixed an issue preventing up operations on a previously created stack on Windows Engine. (2017-06-19) Added shorthand -u for --user flag in docker-compose exec Differences in labels between the Compose file and remote network will now print a warning instead of preventing redeployment. Fixed a bug where service's dependencies were being rescaled to their default scale when running a docker-compose run command Fixed a bug where docker-compose rm with the --stop flag was not behaving properly when provided with a list of services to remove Fixed a bug where cache_from in the build section would be ignored when using more than one Compose file. Fixed a bug that prevented binding the same port to different IPs when using more than one Compose file. Fixed a bug where override files would not be picked up by Compose if they had the .yaml extension Fixed a bug on Windows Engine where networks would be incorrectly flagged for recreation Fixed a bug where services declaring ports would cause crashes on some versions of Python 3 Fixed a bug where the output of docker-compose config would sometimes contain invalid port definitions (2017-05-02) Introduced version 2.2 of the docker-compose.yml specification. This version requires to be used with Docker Engine 1.13.0 or above Added support for init in service definitions. Added support for scale in service definitions. 
The configuration's value can be overridden using the --scale flag in docker-compose" }, { "data": "The scale command is disabled for this file format Fixed a bug where paths provided to compose via the -f option were not being resolved properly Fixed a bug where the extip::targetport notation in the ports section was incorrectly marked as invalid Fixed an issue where the exec command would sometimes not return control to the terminal when using the -d flag Fixed a bug where secrets were missing from the output of the config command for v3.2 files Fixed an issue where docker-compose would hang if no internet connection was available Fixed an issue where paths containing unicode characters passed via the -f flag were causing Compose to crash Fixed an issue where the output of docker-compose config would be invalid if the Compose file contained external secrets Fixed a bug where using --exit-code-from with up would fail if Compose was installed in a Python 3 environment Fixed a bug where recreating containers using a combination of tmpfs and volumes would result in an invalid config state (2017-04-04) Introduced version 3.2 of the docker-compose.yml specification Added support for cache_from in the build section of services Added support for the new expanded ports syntax in service definitions Added support for the new expanded volumes syntax in service definitions Added --volumes option to docker-compose config that lists named volumes declared for that project Added support for mem_reservation in service definitions (2.x only) Added support for dns_opt in service definitions (2.x only) Added a new docker-compose images command that lists images used by the current project's containers Added a --stop (shorthand -s) option to docker-compose rm that stops the running containers before removing them Added a --resolve-image-digests option to docker-compose config that pins the image version for each service to a permanent digest Added a --exit-code-from SERVICE option to docker-compose up. When used, docker-compose will exit on any container's exit with the code corresponding to the specified service's exit code Added a --parallel option to docker-compose pull that enables images for multiple services to be pulled simultaneously Added a --build-arg option to docker-compose build Added a --volume <volume_mapping> (shorthand -v) option to docker-compose run to declare runtime volumes to be mounted Added a --project-directory PATH option to docker-compose that will affect path resolution for the project When using --abort-on-container-exit in docker-compose up, the exit code for the container that caused the abort will be the exit code of the docker-compose up command Users can now configure which path separator character they want to use to separate the COMPOSE_FILE environment value using the COMPOSEPATHSEPARATOR environment variable Added support for port range to a single port in port mappings, such as 8000-8010:80. 
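The expanded (long) ports and volumes syntax introduced with the 3.2 format, mentioned above, looks roughly like this; the service and volume names are illustrative:
```
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - target: 80              # container port
        published: 8080         # host port
        protocol: tcp
        mode: host
    volumes:
      - type: volume
        source: webroot
        target: /usr/share/nginx/html
        read_only: true
volumes:
  webroot:
```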
docker-compose run --rm now removes anonymous volumes after execution, matching the behavior of docker run" }, { "data": "Fixed a bug where override files containing port lists would cause a TypeError to be raised Fixed a bug where the deploy key would be missing from the output of docker-compose config Fixed a bug where scaling services up or down would sometimes re-use obsolete containers Fixed a bug where the output of docker-compose config would be invalid if the project declared anonymous volumes Variable interpolation now properly occurs in the secrets section of the Compose file The secrets section now properly appears in the output of docker-compose config Fixed a bug where changes to some networks properties would not be detected against previously created networks Fixed a bug where docker-compose would crash when trying to write into a closed pipe Fixed an issue where Compose would not pick up on the value of COMPOSETLSVERSION when used in combination with command-line TLS flags (2017-02-17) Fixed a bug that was preventing secrets configuration from being loaded properly Fixed a bug where the docker-compose config command would fail if the config file contained secrets definitions Fixed an issue where Compose on some linux distributions would pick up and load an outdated version of the requests library Fixed an issue where socket-type files inside a build folder would cause docker-compose to crash when trying to build that service Fixed an issue where recursive wildcard patterns were not being recognized in .dockerignore files. (2017-02-09) (2017-02-08) Fixed a bug where extending a service defining a healthcheck dictionary would cause docker-compose to error out. Fixed an issue where the pid entry in a service definition was being ignored when using multiple Compose files. (2017-02-01) Fixed an issue where the presence of older versions of the docker-py package would cause unexpected crashes while running Compose Fixed an issue where healthcheck dependencies would be lost when using multiple compose files for a project Fixed a few issues that made the output of the config command invalid Fixed an issue where adding volume labels to v3 Compose files would result in an error Fixed an issue on Windows where build context paths containing unicode characters were being improperly encoded Fixed a bug where Compose would occasionally crash while streaming logs when containers would stop or restart (2017-01-18) Healthcheck configuration can now be done in the service definition using the healthcheck parameter Containers dependencies can now be set up to wait on positive healthchecks when declared using depends_on. See the documentation for the updated syntax. Note: This feature will not be ported to version 3 Compose files. Added support for the sysctls parameter in service definitions Added support for the userns_mode parameter in service definitions Compose now adds identifying labels to networks and volumes it creates Colored output now works properly on Windows. Fixed a bug where docker-compose run would fail to set up link aliases in interactive mode on Windows. Networks created by Compose are now always made attachable (Compose files v2.1 and up). Fixed a bug where falsy values of COMPOSECONVERTWINDOWS_PATHS (0, false, empty value) were being interpreted as true. 
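The healthcheck and depends_on additions described above (2.1 format) might be combined like this; the images and the pg_isready probe are illustrative:
```
version: "2.1"
services:
  db:
    image: postgres:9.6
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  web:
    image: example/web:latest       # illustrative image
    depends_on:
      db:
        condition: service_healthy  # wait for db to report healthy before starting
```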
Fixed a bug where forward slashes in some .dockerignore patterns weren't being parsed correctly on Windows (2016-11-16) Breaking changes Interactive mode for docker-compose run and docker-compose exec is now supported on Windows platforms. The docker binary is required to be present on the system for this feature to work. Introduced version 2.1 of the docker-compose.yml specification. This version requires to be used with Docker Engine 1.12 or above. Added support for the groupadd and oomscore_adj parameters in service definitions. Added support for the internal and enable_ipv6 parameters in network definitions. Compose now defaults to using the npipe protocol on Windows. Overriding a logging configuration will now properly merge the options mappings if the driver values do not conflict. Fixed several bugs related to npipe protocol support on Windows. Fixed an issue with Windows paths being incorrectly converted when using Docker on Windows Server. Fixed a bug where an empty restart value would sometimes result in an exception being raised. Fixed an issue where service logs containing unicode characters would sometimes cause an error to occur. Fixed a bug where unicode values in environment variables would sometimes raise a unicode exception when retrieved. Fixed an issue where Compose would incorrectly detect a configuration mismatch for overlay networks. (2016-09-22) Fixed a bug where users using a credentials store were not able to access their private images. Fixed a bug where users using identity tokens to authenticate were not able to access their private images. Fixed a bug where an HttpHeaders entry in the docker configuration file would cause Compose to crash when trying to build an image. Fixed a few bugs related to the handling of Windows paths in volume binding" }, { "data": "Fixed a bug where Compose would sometimes crash while trying to read a streaming response from the engine. Fixed an issue where Compose would crash when encountering an API error while streaming container logs. Fixed an issue where Compose would erroneously try to output logs from drivers not handled by the Engine's API. Fixed a bug where options from the docker-machine config command would not be properly interpreted by Compose. Fixed a bug where the connection to the Docker Engine would sometimes fail when running a large number of services simultaneously. Fixed an issue where Compose would sometimes print a misleading suggestion message when running the bundle command. Fixed a bug where connection errors would not be handled properly by Compose during the project initialization phase. Fixed a bug where a misleading error would appear when encountering a connection timeout. (2016-06-14) As announced in 1.7.0, docker-compose rm now removes containers created by docker-compose run by default. Setting entrypoint on a service now empties out any default command that was set on the image (i.e. any CMD instruction in the Dockerfile used to build it). This makes it consistent with the --entrypoint flag to docker run. Added docker-compose bundle, a command that builds a bundle file to be consumed by the new Docker Stack commands in Docker 1.12. Added docker-compose push, a command that pushes service images to a registry. Compose now supports specifying a custom TLS version for interaction with the Docker Engine using the COMPOSETLSVERSION environment variable. Fixed a bug where Compose would erroneously try to read .env at the project's root when it is a directory. 
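A sketch of the 2.1-format options introduced above: the service options rendered as groupadd and oomscore_adj are group_add and oom_score_adj, and networks gain internal and enable_ipv6. Names are illustrative, and enable_ipv6 assumes the Docker daemon has IPv6 configured.
```
version: "2.1"
services:
  app:
    image: example/app:latest   # illustrative image
    group_add:
      - audio                   # additional group inside the container
    oom_score_adj: -500
    networks:
      - backend
networks:
  backend:
    internal: true              # no external connectivity
    enable_ipv6: true           # requires IPv6 support on the Docker daemon
```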
docker-compose run -e VAR now passes VAR through from the shell to the container, as with docker run -e VAR. Improved config merging when multiple compose files are involved for several service sub-keys. Fixed a bug where volume mappings containing Windows drives would sometimes be parsed incorrectly. Fixed a bug in Windows environment where volume mappings of the host's root directory would be parsed incorrectly. Fixed a bug where docker-compose config would output an invalid Compose file if external networks were specified. Fixed an issue where unset buildargs would be assigned a string containing 'None' instead of the expected empty value. Fixed a bug where yes/no prompts on Windows would not show before receiving input. Fixed a bug where trying to docker-compose exec on Windows without the -d option would exit with a stacktrace. This will still fail for the time being, but should do so gracefully. Fixed a bug where errors during docker-compose up would show an unrelated stacktrace at the end of the process. docker-compose create and docker-compose start show more descriptive error messages when something goes wrong. (2016-05-04) Fixed a bug where the output of docker-compose config for v1 files would be an invalid configuration file. Fixed a bug where docker-compose config would not check the validity of links. Fixed an issue where docker-compose help would not output a list of available commands and generic options as expected. Fixed an issue where filtering by service when using docker-compose logs would not apply for newly created services. Fixed a bug where unchanged services would sometimes be recreated in in the up phase when using Compose with Python 3. Fixed an issue where API errors encountered during the up phase would not be recognized as a failure state by Compose. Fixed a bug where Compose would raise a NameError because of an undefined exception name on non-Windows platforms. Fixed a bug where the wrong version of docker-py would sometimes be installed alongside" }, { "data": "Fixed a bug where the host value output by docker-machine config default would not be recognized as valid options by the docker-compose command line. Fixed an issue where Compose would sometimes exit unexpectedly while reading events broadcasted by a Swarm cluster. Corrected a statement in the docs about the location of the .env file, which is indeed read from the current directory, instead of in the same location as the Compose file. (2016-04-13) docker-compose logs no longer follows log output by default. It now matches the behavior of docker logs and exits after the current logs are printed. Use -f to get the old default behavior. Booleans are no longer allows as values for mappings in the Compose file (for keys environment, labels and extra_hosts). Previously this was a warning. Boolean values should be quoted so they become string values. Compose now looks for a .env file in the directory where it's run and reads any environment variables defined inside, if they're not already set in the shell environment. This lets you easily set defaults for variables used in the Compose file, or for any of the COMPOSE_* or DOCKER_* variables. Added a --remove-orphans flag to both docker-compose up and docker-compose down to remove containers for services that were removed from the Compose file. Added a --all flag to docker-compose rm to include containers created by docker-compose run. This will become the default behavior in the next version of Compose. 
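Because bare booleans are no longer accepted as values in environment, labels and extra_hosts mappings (see above), such values should be quoted; a minimal sketch with illustrative keys:
```
web:
  image: busybox
  command: top
  environment:
    DEBUG: "true"                    # quoted so it is parsed as a string
  labels:
    com.example.production: "false"  # the same rule applies to labels
```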
Added support for all the same TLS configuration flags used by the docker client: --tls, --tlscert, --tlskey, etc. Compose files now support the tmpfs and shm_size options. Added the --workdir flag to docker-compose run docker-compose logs now shows logs for new containers that are created after it starts. The COMPOSE_FILE environment variable can now contain multiple files, separated by the host system's standard path separator (: on Mac/Linux, ; on Windows). You can now specify a static IP address when connecting a service to a network with the ipv4address and ipv6address options. Added --follow, --timestamp, and --tail flags to the docker-compose logs command. docker-compose up, and docker-compose start will now start containers in parallel where possible. docker-compose stop now stops containers in reverse dependency order instead of all at once. Added the --build flag to docker-compose up to force it to build a new image. It now shows a warning if an image is automatically built when the flag is not used. Added the docker-compose exec command for executing a process in a running container. docker-compose down now removes containers created by docker-compose run. A more appropriate error is shown when a timeout is hit during up when using a tty. Fixed a bug in docker-compose down where it would abort if some resources had already been removed. Fixed a bug where changes to network aliases would not trigger a service to be recreated. Fix a bug where a log message was printed about creating a new volume when it already existed. Fixed a bug where interrupting up would not always shut down containers. Fixed a bug where logopt and logdriver were not properly carried over when extending services in the v1 Compose file format. Fixed a bug where empty values for build args would cause file validation to fail. (2016-02-23) (2016-02-23) Fixed a bug where recreating a container multiple times would cause the new container to be started without the previous" }, { "data": "Fixed a bug where Compose would set the value of unset environment variables to an empty string, instead of a key without a value. Provide a better error message when Compose requires a more recent version of the Docker API. Add a missing config field network.aliases which allows setting a network scoped alias for a service. Fixed a bug where run would not start services listed in depends_on. Fixed a bug where networks and network_mode where not merged when using extends or multiple Compose files. Fixed a bug with service aliases where the short container id alias was only contained 10 characters, instead of the 12 characters used in previous versions. Added a missing log message when creating a new named volume. Fixed a bug where build.args was not merged when using extends or multiple Compose files. Fixed some bugs with config validation when null values or incorrect types were used instead of a mapping. Fixed a bug where a build section without a context would show a stack trace instead of a helpful validation message. Improved compatibility with swarm by only setting a container affinity to the previous instance of a services' container when the service uses an anonymous container volume. Previously the affinity was always set on all containers. Fixed the validation of some driver_opts would cause an error if a number was used instead of a string. Some improvements to the run.sh script used by the Compose container install option. 
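The static-address options mentioned above are ipv4_address and ipv6_address on the service's network entry, and they need a matching subnet in the network's ipam block; addresses and names here are illustrative:
```
version: "2"
services:
  app:
    image: busybox
    command: top
    networks:
      front:
        ipv4_address: 172.16.238.10   # static address within the subnet below
networks:
  front:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.238.0/24
```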
Fixed a bug with up --abort-on-container-exit where Compose would exit, but would not stop other containers. Corrected the warning message that is printed when a boolean value is used as a value in a mapping. (2016-01-15) Compose 1.6 introduces a new format for docker-compose.yml which lets you define networks and volumes in the Compose file as well as services. It also makes a few changes to the structure of some configuration options. You don't have to use it - your existing Compose files will run on Compose 1.6 exactly as they do today. Check the upgrade guide for full details. Support for networking has exited experimental status and is the recommended way to enable communication between containers. If you use the new file format, your app will use networking. If you aren't ready yet, just leave your Compose file as it is and it'll continue to work just the same. By default, you don't have to configure any networks. In fact, using networking with Compose involves even less configuration than using links. Consult the networking guide for how to use it. The experimental flags --x-networking and --x-network-driver, introduced in Compose 1.5, have been removed. You can now pass arguments to a build if you're using the new file format: ``` build: context: . args: buildno: 1 ``` You can now specify both a build and an image key if you're using the new file format. docker-compose build will build the image and tag it with the name you've specified, while docker-compose pull will attempt to pull it. There's a new events command for monitoring container events from the application, much like docker events. This is a good primitive for building tools on top of Compose for performing actions when particular things happen, such as containers starting and stopping. There's a new depends_on option for specifying dependencies between services. This enforces the order of startup, and ensures that when you run docker-compose up SERVICE on a service with dependencies, those are started as well. Added a new command config which validates and prints the Compose configuration after interpolating variables, resolving relative paths, and merging multiple files and" }, { "data": "Added a new command create for creating containers without starting them. Added a new command down to stop and remove all the resources created by up in a single command. Added support for the cpu_quota configuration option. Added support for the stop_signal configuration option. Commands start, restart, pause, and unpause now exit with an error status code if no containers were modified. Added a new --abort-on-container-exit flag to up which causes up to stop all container and exit once the first container exits. Removed support for FIGFILE, FIGPROJECT_NAME, and no longer reads fig.yml as a default Compose file location. Removed the migrate-to-labels command. Removed the --allow-insecure-ssl flag. Fixed a validation bug that prevented the use of a range of ports in the expose field. Fixed a validation bug that prevented the use of arrays in the entrypoint field if they contained duplicate entries. Fixed a bug that caused ulimits to be ignored when used with extends. Fixed a bug that prevented ipv6 addresses in extra_hosts. Fixed a bug that caused extends to be ignored when included from multiple Compose files. Fixed an incorrect warning when a container volume was defined in the Compose file. Fixed a bug that prevented the force shutdown behavior of up and logs. 
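Pulling together the 1.6 additions above (top-level networks and volumes, build arguments, and depends_on), a minimal version 2 sketch follows; service, image and volume names are illustrative:
```
version: "2"
services:
  web:
    build:
      context: .
      args:
        buildno: 1
    depends_on:
      - db                      # db is started before web by docker-compose up web
  db:
    image: postgres:9.5
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata: {}
```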
Fixed a bug that caused None to be printed as the network driver name when the default network driver was used. Fixed a bug where using the string form of dns or dns_search would cause an error. Fixed a bug where a container would be reported as \"Up\" when it was in the restarting state. Fixed a confusing error message when DOCKERCERTPATH was not set properly. Fixed a bug where attaching to a container would fail if it was using a non-standard logging driver (or none at all). (2015-12-03) Fixed a bug which broke the use of environment and env_file with extends, and caused environment keys without values to have a None value, instead of a value from the host environment. Fixed a regression in 1.5.1 that caused a warning about volumes to be raised incorrectly when containers were recreated. Fixed a bug which prevented building a Dockerfile that used ADD <url> Fixed a bug with docker-compose restart which prevented it from starting stopped containers. Fixed handling of SIGTERM and SIGINT to properly stop containers Add support for using a url as the value of build Improved the validation of the expose option (2015-11-12) Add the --force-rm option to build. Add the ulimit option for services in the Compose file. Fixed a bug where up would error with \"service needs to be built\" if a service changed from using image to using build. Fixed a bug that would cause incorrect output of parallel operations on some terminals. Fixed a bug that prevented a container from being recreated when the mode of a volumes_from was changed. Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause up or logs to crash. Fixed a regression in 1.5.0 where Compose would use a success exit status code when a command fails due to an HTTP timeout communicating with the docker daemon. Fixed a regression in 1.5.0 where name was being accepted as a valid service option which would override the actual name of the service. When using --x-networking Compose no longer sets the hostname to the container name. When using --x-networking Compose will only create the default network if at least one container is using the network. When printings logs during up or logs, flush the output buffer after each line to prevent buffering issues from hiding" }, { "data": "Recreate a container if one of its dependencies is being created. Previously a container was only recreated if it's dependencies already existed, but were being recreated as well. Add a warning when a volume in the Compose file is being ignored and masked by a container volume from a previous container. Improve the output of pull when run without a tty. When using multiple Compose files, validate each before attempting to merge them together. Previously invalid files would result in not helpful errors. Allow dashes in keys in the environment service option. Improve validation error messages by including the filename as part of the error message. (2015-11-03) With the introduction of variable substitution support in the Compose file, any Compose file that uses an environment variable ($VAR or ${VAR}) in the command: or entrypoint: field will break. Previously these values were interpolated inside the container, with a value from the container environment. In Compose 1.5.0, the values will be interpolated on the host, with a value from the host environment. To migrate a Compose file to 1.5.0, escape the variables with an extra $ (ex: $$VAR or $${VAR}). 
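For the migration note above, the difference between host-side interpolation and an escaped variable looks like this; DATA_DIR is an illustrative host environment variable assumed to be set:
```
web:
  image: busybox
  # $$HOME is passed to the container as a literal $HOME and resolved there;
  # ${DATA_DIR} is substituted from the host environment before the container starts.
  command: sh -c 'echo "container home is $$HOME"'
  volumes:
    - ${DATA_DIR}:/data
```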
See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution Compose is now available for Windows. Environment variables can be used in the Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution Multiple compose files can be specified, allowing you to override settings in the default Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/reference/docker-compose.md for more details. Compose now produces better error messages when a file contains invalid configuration. up now waits for all services to exit before shutting down, rather than shutting down as soon as one container exits. Experimental support for the new docker networking system can be enabled with the --x-networking flag. Read more here: https://github.com/docker/docker/blob/8fee1c20/docs/userguide/dockernetworks.md You can now optionally pass a mode to volumes_from. For example, volumes_from: [\"servicename:ro\"]. Since Docker now lets you create volumes with names, you can refer to those volumes by name in docker-compose.yml. For example, volumes: [\"mydatavolume:/data\"] will mount the volume named mydatavolume at the path /data inside the container. If the first component of an entry in volumes starts with a ., / or ~, it is treated as a path and expansion of relative paths is performed as necessary. Otherwise, it is treated as a volume name and passed straight through to Docker. Read more on named volumes and volume drivers here: https://github.com/docker/docker/blob/244d9c33/docs/userguide/dockervolumes.md docker-compose build --pull instructs Compose to pull the base image for each Dockerfile before building. docker-compose pull --ignore-pull-failures instructs Compose to continue if it fails to pull a single service's image, rather than aborting. You can now specify an IPC namespace in docker-compose.yml with the ipc option. Containers created by docker-compose run can now be named with the --name flag. If you install Compose with pip or use it as a library, it now works with Python 3. image now supports image digests (in addition to ids and tags). For example, image: \"busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d\" ports now supports ranges of ports. For example, ``` ports: \"3000-3005\" \"9000-9001:8000-8001\" ``` docker-compose run now supports a -p|--publish parameter, much like docker run -p, for publishing specific ports to the host. docker-compose pause and docker-compose unpause have been implemented, analogous to docker pause and docker unpause. When using extends to copy configuration from another service in the same Compose file, you can omit the file option. Compose can be installed and run as a Docker image. This is an experimental feature. All values for the log_driver option which are supported by the Docker daemon are now supported by" }, { "data": "docker-compose build can now be run successfully against a Swarm cluster. (2015-09-22) (2015-09-10) (2015-08-04) By default, docker-compose up now only recreates containers for services whose configuration has changed since they were created. This should result in a dramatic speed-up for many applications. The experimental --x-smart-recreate flag which introduced this feature in Compose 1.3.0 has been removed, and a --force-recreate flag has been added for when you want to recreate everything. 
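The 1.5.0 volume changes described above (named volumes and access modes on volumes_from) can be combined in a v1-format file like this; service names are illustrative:
```
app:
  image: busybox
  command: top
  volumes:
    - mydatavolume:/data      # no leading ./ or /, so this is treated as a named volume
    - ./config:/etc/app:ro    # leading "." marks a host path, expanded relative to the file
worker:
  image: busybox
  command: top
  volumes_from:
    - app:ro                  # optional access mode on volumes_from
```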
Several of Compose's commands - scale, stop, kill and rm - now perform actions on multiple containers in parallel, rather than in sequence, which will run much faster on larger applications. You can now specify a custom name for a service's container with container_name. Because Docker container names must be unique, this means you can't scale the service beyond one container. You no longer have to specify a file option when using extends - it will default to the current file. Service names can now contain dots, dashes and underscores. Compose can now read YAML configuration from standard input, rather than from a file, by specifying - as the filename. This makes it easier to generate configuration dynamically: ``` $ echo 'redis: {\"image\": \"redis\"}' | docker-compose --file - up ``` There's a new docker-compose version command which prints extended information about Compose's bundled dependencies. docker-compose.yml now supports logopt as well as logdriver, allowing you to pass extra configuration to a service's logging driver. docker-compose.yml now supports memswap_limit, similar to docker run --memory-swap. When mounting volumes with the volumes option, you can now pass in any mode supported by the daemon, not just :ro or :rw. For example, SELinux users can pass :z or :Z. You can now specify a custom volume driver with the volume_driver option in docker-compose.yml, much like docker run --volume-driver. A bug has been fixed where Compose would fail to pull images from private registries serving plain (unsecured) HTTP. The --allow-insecure-ssl flag, which was previously used to work around this issue, has been deprecated and now has no effect. A bug has been fixed where docker-compose build would fail if the build depended on a private Hub image or an image from a private registry. A bug has been fixed where Compose would crash if there were containers which the Docker daemon had not finished removing. Two bugs have been fixed where Compose would sometimes fail with a \"Duplicate bind mount\" error, or fail to attach volumes to a container, if there was a volume path specified in docker-compose.yml with a trailing slash. Thanks @mnowster, @dnephin, @ekristen, @funkyfuture, @jeffk and @lukemarsden! (2015-07-15) (2015-07-14) Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens! (2015-06-21) (2015-06-18) This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app - see the upgrading section of the install docs for details. Compose now requires Docker 1.6.0 or later. Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools. Compose no longer uses \"intermediate containers\" when recreating containers for a service. This makes docker-compose up less complex and more resilient to failure. docker-compose up has an experimental new behavior: it will only recreate containers for services whose configuration has changed in docker-compose.yml. 
This will eventually become the default, but for now you can take it for a spin: ``` $ docker-compose up --x-smart-recreate ``` When invoked in a subdirectory of a project, docker-compose will now climb up through parent directories until it finds a" }, { "data": "Several new configuration keys have been added to docker-compose.yml: Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily! (2015-04-16) docker-compose.yml now supports an extends option, which enables a service to inherit configuration from another service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the documentation. When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use build. Repeated invocations of docker-compose up when using Compose with a Swarm cluster now work reliably. Directories passed to build, filenames passed to env_file and volume host paths passed to volumes are now treated as relative to the directory of the configuration file, not the directory that docker-compose is being run in. In the majority of cases, those are the same, but if you use the -f|--file argument to specify a configuration file in another directory, this is a breaking change. A service can now share another service's network namespace with net: container:<service>. volumes_from and net: container:<service> entries are taken into account when resolving dependencies, so docker-compose up <service> will correctly start all dependencies of <service>. docker-compose run now accepts a --user argument to specify a user to run the command as, just like docker run. The up, stop and restart commands now accept a --timeout (or -t) argument to specify how long to wait when attempting to gracefully stop containers, just like docker stop. docker-compose rm now accepts -f as a shorthand for --force, just like docker rm. Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc! (2015-02-25) Fig has been renamed to Docker Compose, or just Compose for short. This has several implications for you: Besides that, theres a lot of new stuff in this release: Weve made a few small changes to ensure that Compose will work with Swarm, Dockers new clustering tool ( https://github.com/docker/swarm). Eventually you'll be able to point Compose at a Swarm cluster instead of a standalone Docker host and itll run your containers on the cluster with no extra work from you. As Swarm is still developing, integration is rough and lots of Compose features don't work yet. docker-compose run now has a --service-ports flag for exposing ports on the given service. This is useful for running your webapp with an interactive debugger, for example. You can now link to containers outside your app with the external_links option in docker-compose.yml. You can now prevent docker-compose up from automatically building images with the --no-build option. This will make fewer API calls and run faster. 
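The extends option described above pulls configuration from a service in another file; a minimal sketch, with common.yml as an illustrative file name:
```
# common.yml -- shared base definition
app:
  image: busybox
  command: top
  environment:
    - LOG_LEVEL=info

# docker-compose.yml -- inherits everything from app and adds ports
web:
  extends:
    file: common.yml
    service: app
  ports:
    - "8000:8000"
```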
If you dont specify a tag when using the image key, Compose will default to the latest tag, rather than pulling all tags. docker-compose kill now supports the -s flag, allowing you to specify the exact signal you want to send to a services containers. docker-compose.yml now has an env_file key, analogous to docker run --env-file, letting you specify multiple environment variables in a separate file. This is great if you have a lot of them, or if you want to keep sensitive information out of version control. docker-compose.yml now supports the dnssearch, capadd, capdrop, cpushares and restart options, analogous to docker runs --dns-search, --cap-add, --cap-drop, --cpu-shares and --restart" }, { "data": "Compose now ships with Bash tab completion - see the installation and usage docs at https://github.com/docker/compose/blob/1.1.0/docs/completion.md A number of bugs have been fixed - see the milestone for details: https://github.com/docker/compose/issues?q=milestone%3A1.1.0+ Thanks @dnephin, @squebe, @jbalonso, @raulcd, @benlangfield, @albers, @ggtools, @bersace, @dtenenba, @petercv, @drewkett, @TFenby, @paulRbr, @Aigeruth and @salehe! (2014-11-04) (2014-10-16) The highlights: Fig has joined Docker. Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself. This means the GitHub repository has moved to https://github.com/docker/fig and our IRC channel is now #docker-fig on Freenode. Fig can be used with the official Docker OS X installer. Boot2Docker will mount the home directory from your host machine so volumes work as expected. Fig supports Docker 1.3. It is now possible to connect to the Docker daemon using TLS by using the DOCKERCERTPATH and DOCKERTLSVERIFY environment variables. There is a new fig port command which outputs the host port binding of a service, in a similar way to docker port. There is a new fig pull command which pulls the latest images for a service. There is a new fig restart command which restarts a service's containers. Fig creates multiple containers in service by appending a number to the service name. For example, db1, db2. As a convenience, Fig will now give the first container an alias of the service name. For example, db. This link alias is also a valid hostname and added to /etc/hosts so you can connect to linked services using their hostname. For example, instead of resolving the environment variables DBPORT5432TCPADDR and DBPORT5432TCPPORT, you could just use the hostname db and port 5432 directly. Volume definitions now support ro mode, expanding ~ and expanding environment variables. .dockerignore is supported when building. The project name can be set with the FIGPROJECTNAME environment variable. The --env and --entrypoint options have been added to fig run. The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy. Other things: Thanks @dnephin, @d11wtq, @marksteve, @rubbish, @jbalonso, @timfreund, @alunduil, @mieciu, @shuron, @moss, @suzaku and @chmouel! Whew. (2014-07-28) Thanks @dnephin and @marksteve! (2014-07-11) Thanks @ryanbrainard and @d11wtq! (2014-07-11) Fig now starts links when you run fig run or fig up. For example, if you have a web service which depends on a db service, fig run web ... will start the db service. Environment variables can now be resolved from the environment that Fig is running in. 
Just specify it as a blank variable in your fig.yml and, if set, it'll be resolved:
```
environment:
  RACK_ENV: development
  SESSION_SECRET:
```
volumes_from is now supported in fig.yml. All of the volumes from the specified services and containers will be mounted:
```
volumes_from:
  - service_name
  - container_name
```
A host address can now be specified in ports:
```
ports:
  - "0.0.0.0:8000:8000"
  - "127.0.0.1:8001:8001"
```
The net and workdir options are now supported in fig.yml. The hostname option now works in the same way as the Docker CLI, splitting out into a domainname option. TTY behavior is far more robust, and resizes are supported correctly. Load YAML files safely. Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release! (2014-06-18) (2014-05-08) (2014-04-29) (2014-03-05) (2014-03-04) (2014-03-03) Thanks @marksteve, @Gazler and @teozkr! (2014-02-17) Thanks to @barnybug and @dustinlacewell for their work on this release. (2014-02-04) (2014-01-31) Big thanks to @cameronmaske, @mrchrisadams and @damianmoore for their help with this release. (2014-01-27) (2014-01-23) (2014-01-22) (2014-01-17) (2014-01-16) Big thanks to @tomstuart, @EnTeQuAk, @schickling, @aronasorman and @GeoffreyPlitt. (2014-01-02) (2013-12-20) Initial release.
{ "category": "App Definition and Development", "file_name": "new_template=doc_issue.yml&location=https%3a%2f%2fdocs.docker.com%2fcompose%2f&labels=status%2Ftriage.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can enhance your teams' builds with a Build Cloud subscription. This page describes the features available for the different subscription tiers. To compare features available for each tier, see Docker Build Cloud pricing. If you have an existing Docker Core subscription, a base level of Build Cloud minutes and cache are included. The features available vary depending on your Docker Core subscription tier. You can buy Docker Build Cloud Team if you dont have a Docker Core subscription, or upgrade any Docker Core tier to enhance your developers' experience with the following features: The Docker Build Cloud Team subscription is tied to a Docker organization. To use the build minutes or shared cache of a Docker Build Cloud Team subscription, users must be a part of the organization associated with the subscription. See Manage seats and invites. To learn how to buy this subscription for your Docker organization, see Buy your subscription - existing account or organization. If you havent created a Docker organization yet and dont have an existing Docker Core subscription, see Buy your subscription - new organization. For organizations without a Docker Core subscription, this plan also includes 50 shared minutes in addition to the Docker Build Cloud Team minutes. For enterprise features such as paying via invoice and additional build minutes, contact sales. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "install.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede.

Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand "inbound=outbound". We're just making it explicit.

You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service.

Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statement: for security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent.

Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security.

If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy.

Short version: We own the service and all of our content. In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed.

GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service.
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub.

If you'd like to use GitHub's trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository.

Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Programming Interface), including use of the API through a third party product that accesses GitHub.

Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service.
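For illustration, the minimal sketch below shows one way an API client might check its remaining request quota before issuing calls, so that it backs off instead of sending excessively frequent requests. It relies on the publicly documented rate-limit endpoint of the GitHub REST API; the helper names and the optional token argument are illustrative, not part of any GitHub-provided client.

```
import json
import time
import urllib.request

GITHUB_API = "https://api.github.com"


def remaining_core_quota(token=None):
    """Query the GitHub REST API rate-limit endpoint and return the
    remaining request quota and reset time for the 'core' resource."""
    request = urllib.request.Request(f"{GITHUB_API}/rate_limit")
    request.add_header("Accept", "application/vnd.github+json")
    if token:
        # Authenticated requests receive a higher quota than anonymous ones.
        request.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(request) as response:
        payload = json.load(response)
    core = payload["resources"]["core"]
    return core["remaining"], core["reset"]


def wait_if_exhausted(token=None):
    """Sleep until the rate-limit window resets instead of issuing
    requests that would be rejected as excessive."""
    remaining, reset_epoch = remaining_core_quota(token)
    if remaining == 0:
        time.sleep(max(0, reset_epoch - time.time()))


if __name__ == "__main__":
    remaining, reset_epoch = remaining_core_quota()
    print(f"Remaining requests: {remaining} (window resets at epoch {reset_epoch})")
```

A client built this way pauses when its quota is exhausted rather than retrying immediately, which is one way to stay within the rate limitations this section describes.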
Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement.

Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms.

Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better.

Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk.

As a user of Beta Previews, you may get access to special information that isn't available to the rest of the world. Due to the sensitive nature of this information, it's important for us to make sure that you keep that information secret.

Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHub's confidential information (collectively, "Confidential Information"), regardless of whether it is marked or identified as such. You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the "Purpose"), and not for any other purpose. You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we don't otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature).

Exceptions. Confidential Information will not include information that: (a) is or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) is known to you before we disclose it to you; (c) is independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) is disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law.

We're always trying to improve our products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, "Feedback"), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation.

Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change.

Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term.

Payment Based on Plan: For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made.

Payment Based on Usage: Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for details.

Invoicing: For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S. Dollars. User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement.
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement.

By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information.

Short version: You may close your Account at any time. If you do, we'll treat your information responsibly.

It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request.

We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade.

GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability.

Short version: We use email and other electronic means to stay in touch with our users.

For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper.
This section does not affect your non-waivable rights. Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent.

GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support.

Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect.

GitHub provides the Website and the Service "as is" and "as available", without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement.

GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service.

Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you.

You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from your use of or inability to use the Service or that otherwise arise under this Agreement. Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control.

Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.

If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes.
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys' fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your expense.

Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important; we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them.

We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository.

We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice.

Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California.

GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void.

Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding.

If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties' original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement.
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q (Changes to These Terms). These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement, oral or written, and any other communications between you and GitHub relating to the subject matter of these terms, including any confidentiality or nondisclosure agreements.

Questions about the Terms of Service? Contact us through the GitHub Support portal.

All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute.
{ "category": "App Definition and Development", "file_name": "github-privacy-statement.md", "project_name": "Fabric8 Kubernetes Client", "subcategory": "Application Definition & Image Build" }
