{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Buildpacks", "subcategory": "Application Definition & Image Build" }
[ { "data": "Cloud Native Buildpacks (CNBs) transform your application source code into container images that can run on any cloud. With buildpacks, organizations can concentrate the knowledge of container build best practices within a specialized team, instead of having application developers across the organization individually maintain their own Dockerfiles. This makes it easier to know what is inside application images, enforce security and compliance requirements, and perform upgrades with minimal effort and intervention. The CNB project was initiated by Pivotal and Heroku in January 2018 and joined the Cloud Native Computing Foundation (CNCF) as an Apache-2.0 licensed project in October 2018. It is currently an incubating project within the CNCF. See how-to guides, concepts, and tutorials tailored to specific personas: CircleCI is a continuous integration and delivery platform. The CNB project maintains an integration, called an orb, which allows users to run pack commands inside their pipelines. kpack is a Kubernetes-native platform that uses unprivileged Kubernetes primitives to perform buildpacks builds and keep application images up-to-date. kpack is part of the Buildpacks Community organization. Tekton is an open-source CI/CD system running on k8s. The CNB project has created two reference tasks for performing buildpacks builds, both of which use the lifecycle directly (i.e. they do not use pack). Reference documents for various key aspects of the project. We love talks to share the latest development updates, explain buildpacks basics and more, receive feedback and questions, and get to know other members of the community. Check out some of our most recent and exciting conference talks below. More talks are available in our Conference Talks Playlist on YouTube. If you are interested in giving a talk about buildpacks, the linked slides may provide a useful starting point. Please feel free to reach out in Slack if youd like input or help from the CNB team! Feel free to look through the archive of previous community meetings in our Working Group Playlist on YouTube. If you would like to attend a Working Group meeting, check out our community page. Cloud Native Buildpacks is an incubating project in the CNCF. We welcome contribution from the community. Here you will find helpful information for interacting with the core team and contributing to the project. The best place to contact the Cloud Native Buildpack team is on the CNCF Slack in the #buildpacks or mailing list. Find out the various ways that you can contribute to the Cloud Native Buildpacks project using our contributors guide. This is a community driven project and our roadmap is publicly available on our Github page. We encourage you to contribute with feature requests. We are a Cloud Native Computing Foundation incubating project. Copyright 2022 The Linux Foundation . All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page" } ]
{ "category": "App Definition and Development", "file_name": "app-journey.md", "project_name": "Buildpacks", "subcategory": "Application Definition & Image Build" }
[ { "data": "In this tutorial, well explain how to use pack and buildpacks to create a runnable app image from source code. In order to run the build process in an isolated fashion, pack uses Docker or a Docker-compatible daemon to create the containers where buildpacks execute. That means youll need to make sure you have both pack and a daemon installed: Install Docker or alternatively, see this page about working with podman. NOTE: pack is only one implementation of the Cloud Native Buildpacks Platform Specification. Additionally, not all Cloud Native Buildpacks Platforms require Docker. Before we set out, youll need to know the basics of buildpacks and how they work. A buildpack is something youve probably used without knowing it, as theyre currently being used in many cloud platforms. A buildpacks job is to gather everything your app needs to build and run, and it often does this job quickly and quietly. That said, while buildpacks are often a behind-the-scenes detail, they are at the heart of transforming your source code into a runnable app image. What enables buildpacks to be transparent is auto-detection. This happens when a platform sequentially tests groups of buildpacks against your apps source code. The first group that successfully detects your source code will become the selected set of buildpacks for your app. Detection criteria is specific to each buildpack for instance, an NPM buildpack might look for a package.json, and a Go buildpack might look for Go source files. A builder is an image that contains all the components necessary to execute a build. A builder image is created by taking a build image and adding a lifecycle, buildpacks, and files that configure aspects of the build including the buildpack detection order and the location(s) of the run image. Lets see all this in action using pack build. Run the following commands in a shell to clone and build this simple Java app. ``` git clone https://github.com/buildpacks/samples ``` ``` cd samples/apps/java-maven ``` ``` pack build myapp --builder cnbs/sample-builder:jammy ``` NOTE: This is your first time running pack build for myapp, so youll notice that the build might take longer than usual. Subsequent builds will take advantage of various forms of caching. If youre curious, try running pack build myapp a second time to see the difference in build time. Thats it! Youve now got a runnable app image called myapp available on your local Docker daemon. We did say this was a brief journey after all. Take note that your app was built without needing to install a JDK, run Maven, or otherwise configure a build environment. pack and buildpacks took care of that for you. To test out your new app image locally, you can run it with Docker: ``` docker run --rm -p 8080:8080 myapp ``` Now hit localhost:8080 in your favorite browser and take a minute to enjoy the view. pack uses buildpacks to help you easily create OCI images that you can run just about anywhere. Try deploying your new image to your favorite cloud! In case you need it, pack build has a handy flag called --publish that will build your image directly onto a Docker registry. You can learn more about pack features in the documentation. Windows image builds are now supported! Windows build guide We are a Cloud Native Computing Foundation incubating project. Copyright 2022 The Linux Foundation . All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page" } ]
{ "category": "App Definition and Development", "file_name": "builder_overview.md", "project_name": "Chef Habitat", "subcategory": "Application Definition & Image Build" }
[ { "data": "Chef Habitat Builder acts as the core of Chefs Application Delivery Enterprise hub. Chef Habitat Builder was first launched as a cloud service and as the repository of all available plan templates built by Chef and the supporting community. Due to the fact that the application source code is stored alongside the build package, many users expressed a preference for storing packages and running Chef Habitat Builder on-prem. As a result, Chef Habitat Builder can be consumed either as a cloud based or on-premises solution. Plan files are stored in the Chef Habitat Builder SaaS, where they can be viewed and accessed by the Chef Habitat community and then shared with the on-premises version of the builder where they can then be copied and maintained locally. For more information on how the SaaS and On-Prem versions of Chef Habitat Builder work together, read the blog - Chef Habitat Builder On-Prem Enhancements that Extend Support to Airgap Environments and Simplify Set-Up Was this page helpful? Help us improve this document. Still stuck? How can we improve this document? Thank you for your feedback! Page Last Modified: February 23, 2022 Copyright 2024 Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved." } ]
{ "category": "App Definition and Development", "file_name": "habitat.md", "project_name": "Chef Habitat", "subcategory": "Application Definition & Image Build" }
[ { "data": "Chef Habitat is a workload-packaging, orchestration, and deployment system that allows you to build, package, deploy, and manage applications and services without worrying about which infrastructure your application will deploy on, and without any rewriting or refactoring if you switch to a different infrastructure. Habitat separates the platform-independent parts of your applicationthe build dependencies, runtime dependencies, lifecycle events, and application codebasefrom the operating system or deployment environment that the application will run on, and bundles it into an immutable Habitat Package. The package is sent to the Chef Habitat Builder (SaaS or on-prem), which acts as a package store like Docker Hub where you can store, build, and deploy your Habitat package. Habitat Supervisor pulls packages from Habitat Builder, and will start, stop, run, monitor, and update your application based on the plan and lifecycle hooks you define in the package. Habitat Supervisor runs on bare metal, virtual machines, containers, or Platform-as-a-Service environments. A package under management by a Supervisor is called a service. Services can be joined together in a service group, which is a collection of services with the same package and topology type that are connected together across a Supervisor network. Chef Habitat Builder acts as the core of Chefs Application Delivery Enterprise hub. It provides a repository for all available Chef Habitat packages built by Chef and the supporting community, as well as search and an API for clients. You can store application plans on the Chef Habitat Builder SaaS where the Chef Habitat community can view and access them. You can also deploy the on-prem version of Chef Habitat Builder where you can store and maintain your apps in a secure environment. For more information, see the Chef Habitat Builder documentation. A Habitat Package is an artifact that contains the application codebase, lifecycle hooks, and a manifest that defines build and runtime dependencies of the application. The package is bundled into a Habitat Artifact (.HART) file, which is a binary distribution of a given package built with Chef" }, { "data": "The package is immutable and cryptographically signed with a key so you can verify that the artifact came from the place you expected it to come from. Artifacts can be exported to run in a variety of runtimes with zero refactoring or rewriting. A plan is the set of instructions, templates, and configuration files that define how you download, configure, make, install, and manage the lifecycle of the application artifact. The plan is defined in the habitat directory at the root of your project repository. The habitat directory includes a plan file (plan.sh for Linux systems or plan.ps1 for Windows), a default.toml file, an optional config directory for configuration templates, and an optional hooks directory for lifecycle hooks. You can create this directory at the root of your application with hab plan init. For more information, see the plan documentation. See the services documentation for more information. See the Habitat Studio documentation for more information. Chef Habitat Supervisor is a process manager that has two primary responsibilities: In the Supervisor you can define topologies for you application, such as leader-follower or standalone, or more complex applications that include databases. The supervisor also allows you to inject tunables into your application. 
This lets you defer decisions about how your application behaves until runtime. See the Habitat Supervisor documentation for more information. Chef Habitat allows you to build and package your applications and deploy them anywhere without having to refactor or rewrite your package for each platform. Everything that the application needs to run is defined, without assuming anything about the underlying infrastructure that the application is running on. This allows you to repackage and modernize legacy workloads in place to increase their manageability, make them portable, and migrate them to modern operating systems or even cloud-native infrastructure like containers. You can also develop your application if you are unsure of the infrastructure your application will run on, or if business requirements change and you have to switch your application to a different environment. Page Last Modified: July 10, 2023 Copyright 2024 Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved." } ]
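To make the plan layout described above concrete, here is a minimal plan.sh sketch for a Linux package. The origin, package name, version, and dependencies are illustrative placeholders rather than the exact scaffold produced by hab plan init:

```
# habitat/plan.sh -- minimal sketch; origin, name, and dependencies are placeholders.
pkg_name=sample-app
pkg_origin=myorigin
pkg_version="1.0.0"
pkg_maintainer="Your Name <you@example.com>"
pkg_license=("Apache-2.0")
pkg_deps=(core/node)        # runtime dependencies
pkg_build_deps=(core/git)   # build-time dependencies

do_build() {
  return 0                  # nothing to compile in this sketch
}

do_install() {
  # Copy the application source into the package's install prefix.
  cp -r "$PLAN_CONTEXT/.." "$pkg_prefix/app"
}
```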
{ "category": "App Definition and Development", "file_name": "docs.codezero.io.md", "project_name": "CodeZero", "subcategory": "Application Definition & Image Build" }
[ { "data": "Codezero is an overlay network that empowers development teams to turn Kubernetes clusters into Teamspaces. A Teamspace is a collaborative development environment where developers can locally Consume services discoverable in a Service Catalog. Services featured in the catalog operate either within the Kubernetes cluster, or on a team member's local machine. Developers can Serve local Variants of services through this catalog to other team members. Consider the application above. Services A, B and C are deployed to a development cluster or namespace. You would either have to replicate the entire application locally or, replace Service B with the new version in the development environment in order to test. The version of the app one experiences is determined by the path a ray of traffic takes across the services. With a Teamspace, in order to work on Service B, you simply run the service locally. This Local Service B Variant receives traffic based on Conditions you specify. The Local Variant then delivers traffic back by Consuming Service C. Traffic that does not meet the specified condition flows through the Default Service B Variant running in the cluster untouched. Local Variants need not be containerized. They are simply services running on a local port but through the service catalog appear like they are deployed to the Kubernetes cluster. Developers can, therefore, use preferred local tooling like IDEs, debuggers, profilers and test tools (e.g. Postman) during the development process. Teamspaces are language agnostic and operate at the network level. Any authorized member can define Conditions that reshape traffic across the services available in the catalog to instantly create a Logical Ephemeral Environment. While the Teamspace is long running, this temporary traffic shaped environment comprising of a mix of remote and local services can be used to rapidly build and test software before code is pushed. You do not have to be a Kubernetes admin or a networking guru to develop using a Teamspace. Once set up, most developers need not have any direct knowledge of, or access to the underlying Kubernetes Clusters. This documentation is geared to both Kubernetes Admins who want to create Teamspaces as well as Developers who simply want to work with Teamspaces. We recommend you go through this documentation in the order it is presented as we build on previously defined concepts. Happy Learning! The Guides cover setting up and administering a Teamspace. You will require a Kubernetes Cluster to create a Teamspace. The Kubernetes QuickStart has several options to get started if you do not currently have a custer. Due to inherent limitations, you cannot use a local cluster like Minikube or Kind with Codezero. The Tutorials focus on using a Teamspace once setup. We have a Sample Kubernetes Project that comprises some of the most common Microservices Patterns you would encounter in a Kubernetes cluster. This project is used across all the Tutorials and Videos in this documentation. The Tutorials walk you through scenarios you will encounter in just about any modern microservices application development." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "CloudTTY", "subcategory": "Application Definition & Image Build" }
[ { "data": "A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel. If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJob. Here is an example Job config. It computes to 2000 places and prints it out. It takes around 10s to complete. ``` apiVersion: batch/v1 kind: Job metadata: name: pi spec: template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never backoffLimit: 4 ``` You can run the example with this command: ``` kubectl apply -f https://kubernetes.io/examples/controllers/job.yaml ``` The output is similar to this: ``` job.batch/pi created ``` Check on the status of the Job with kubectl: ``` Name: pi Namespace: default Selector: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi ... Annotations: batch.kubernetes.io/job-tracking: \"\" Parallelism: 1 Completions: 1 Start Time: Mon, 02 Dec 2019 15:20:11 +0200 Completed At: Mon, 02 Dec 2019 15:21:16 +0200 Duration: 65s Pods Statuses: 0 Running / 1 Succeeded / 0 Failed Pod Template: Labels: batch.kubernetes.io/controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c batch.kubernetes.io/job-name=pi Containers: pi: Image: perl:5.34.0 Port: <none> Host Port: <none> Command: perl -Mbignum=bpi -wle print bpi(2000) Environment: <none> Mounts: <none> Volumes: <none> Events: Type Reason Age From Message - - - Normal SuccessfulCreate 21s job-controller Created pod: pi-xf9p4 Normal Completed 18s job-controller Job completed ``` ``` apiVersion: batch/v1 kind: Job metadata: annotations: batch.kubernetes.io/job-tracking: \"\" ... creationTimestamp: \"2022-11-10T17:53:53Z\" generation: 1 labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi name: pi namespace: default resourceVersion: \"4751\" uid: 204fb678-040b-497f-9266-35ffa8716d14 spec: backoffLimit: 4 completionMode: NonIndexed completions: 1 parallelism: 1 selector: matchLabels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 suspend: false template: metadata: creationTimestamp: null labels: batch.kubernetes.io/controller-uid: 863452e6-270d-420e-9b94-53a54146c223 batch.kubernetes.io/job-name: pi spec: containers: command: perl -Mbignum=bpi -wle print bpi(2000) image: perl:5.34.0 imagePullPolicy: IfNotPresent name: pi resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: active: 1 ready: 0 startTime: \"2022-11-10T17:53:57Z\" uncountedTerminatedPods: {} ``` To view completed Pods of a Job, use kubectl get pods. 
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: ``` pods=$(kubectl get pods --selector=batch.kubernetes.io/job-name=pi --output=jsonpath='{.items[*].metadata.name}') echo $pods ``` The output is similar to this: ``` pi-5rwd7 ``` Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression with the name from each Pod in the returned list. View the standard output of one of the pods: ``` kubectl logs $pods ``` Another way to view the logs of a Job: ``` kubectl logs jobs/pi ``` The output is similar to this: ``` 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901 ``` As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields. When the control plane creates new Pods for a Job, the .metadata.name of the Job is part of the basis for naming those Pods. The name of a Job must be a valid DNS subdomain value, but this can produce unexpected results for the Pod" }, { "data": "For best compatibility, the name should follow the more restrictive rules for a DNS label. Even when the name is a DNS subdomain, the name must be no longer than 63 characters. A Job also needs a .spec section. Job labels will have batch.kubernetes.io/ prefix for job-name and controller-uid. The .spec.template is the only required field of the .spec. The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind. 
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy. Only a RestartPolicy equal to Never or OnFailure is allowed. The .spec.selector field is optional. In almost all cases you should not specify it. See section specifying your own pod selector. There are three main types of task suitable to run as a Job: For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1. For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1. For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer. For more information about how to make use of the different types of job, see the job patterns section. The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased. Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons: Jobs with fixed completion count - that is, jobs that have non null .spec.completions - can have a completion mode that is specified in .spec.completionMode: NonIndexed (default): the Job is considered complete when there have been .spec.completions successfully completed Pods. In other words, each Pod completion is homologous to each other. Note that Jobs that have null .spec.completions are implicitly NonIndexed. Indexed: the Pods of a Job get an associated completion index from 0 to .spec.completions-1. The index is available through four mechanisms: The Job is considered complete when there is one successfully completed Pod for each index. For more information about how to use this mode, see Indexed Job for Parallel Processing with Static Work Assignment. A container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = \"OnFailure\", then the Pod stays on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify .spec.template.spec.restartPolicy = \"Never\". See pod lifecycle for more information on restartPolicy. An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = \"Never\". When a Pod fails, then the Job controller starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous" }, { "data": "By default, each pod failure is counted towards the .spec.backoffLimit limit, see pod backoff failure policy. However, you can customize handling of pod failures by setting the Job's pod failure policy. Additionally, you can choose to count the pod failures independently for each index of an Indexed Job by setting the .spec.backoffLimitPerIndex field (for more information, see backoff limit per index). 
Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = \"Never\", the same program may sometimes be started twice. If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. When the feature gates PodDisruptionConditions and JobPodFailurePolicy are both enabled, and the .spec.podFailurePolicy field is set, the Job controller does not consider a terminating Pod (a pod that has a .metadata.deletionTimestamp field set) as a failure until that Pod is terminal (its .status.phase is Failed or Succeeded). However, the Job controller creates a replacement Pod as soon as the termination becomes apparent. Once the pod terminates, the Job controller evaluates .backoffLimit and .podFailurePolicy for the relevant Job, taking this now-terminated Pod into consideration. If either of these requirements is not satisfied, the Job controller counts a terminating Pod as an immediate failure, even if that Pod later terminates with phase: \"Succeeded\". There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The number of retries is calculated in two ways: If either of the calculations reaches the .spec.backoffLimit, the Job is considered failed. When you run an indexed Job, you can choose to handle retries for pod failures independently for each index. To do so, set the .spec.backoffLimitPerIndex to specify the maximal number of pod failures per index. When the per-index backoff limit is exceeded for an index, Kubernetes considers the index as failed and adds it to the .status.failedIndexes field. The succeeded indexes, those with a successfully executed pods, are recorded in the .status.completedIndexes field, regardless of whether you set the backoffLimitPerIndex field. Note that a failing index does not interrupt execution of other indexes. Once all indexes finish for a Job where you specified a backoff limit per index, if at least one of those indexes did fail, the Job controller marks the overall Job as failed, by setting the Failed condition in the status. The Job gets marked as failed even if some, potentially nearly all, of the indexes were processed successfully. You can additionally limit the maximal number of indexes marked failed by setting the .spec.maxFailedIndexes field. When the number of failed indexes exceeds the maxFailedIndexes field, the Job controller triggers termination of all remaining running Pods for that Job. Once all pods are terminated, the entire Job is marked failed by the Job controller, by setting the Failed condition in the Job status. 
Here is an example manifest for a Job that defines a backoffLimitPerIndex: ``` apiVersion: batch/v1 kind: Job metadata: name: job-backoff-limit-per-index-example spec: completions: 10 parallelism: 3 completionMode: Indexed # required for the feature backoffLimitPerIndex: 1 # maximal number of failures per index maxFailedIndexes: 5 # maximal number of failed indexes before terminating the Job execution template: spec: restartPolicy:" }, { "data": "# required for the feature containers: name: example image: python command: # The jobs fails as there is at least one failed index python3 -c | import os, sys print(\"Hello world\") if int(os.environ.get(\"JOBCOMPLETIONINDEX\")) % 2 == 0: sys.exit(1) ``` In the example above, the Job controller allows for one restart for each of the indexes. When the total number of failed indexes exceeds 5, then the entire Job is terminated. Once the job is finished, the Job status looks as follows: ``` kubectl get -o yaml job job-backoff-limit-per-index-example ``` ``` status: completedIndexes: 1,3,5,7,9 failedIndexes: 0,2,4,6,8 succeeded: 5 # 1 succeeded pod for each of 5 succeeded indexes failed: 10 # 2 failed pods (1 retry) for each of 5 failed indexes conditions: message: Job has failed indexes reason: FailedIndexes status: \"True\" type: Failed ``` Additionally, you may want to use the per-index backoff along with a pod failure policy. When using per-index backoff, there is a new FailIndex action available which allows you to avoid unnecessary retries within an index. A Pod failure policy, defined with the .spec.podFailurePolicy field, enables your cluster to handle Pod failures based on the container exit codes and the Pod conditions. In some situations, you may want to have a better control when handling Pod failures than the control provided by the Pod backoff failure policy, which is based on the Job's .spec.backoffLimit. These are some examples of use cases: You can configure a Pod failure policy, in the .spec.podFailurePolicy field, to meet the above use cases. This policy can handle Pod failures based on the container exit codes and the Pod conditions. Here is a manifest for a Job that defines a podFailurePolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-pod-failure-policy-example spec: completions: 12 parallelism: 3 template: spec: restartPolicy: Never containers: name: main image: docker.io/library/bash:5 command: [\"bash\"] # example command simulating a bug which triggers the FailJob action args: -c echo \"Hello world!\" && sleep 5 && exit 42 backoffLimit: 6 podFailurePolicy: rules: action: FailJob onExitCodes: containerName: main # optional operator: In # one of: In, NotIn values: [42] action: Ignore # one of: Ignore, FailJob, Count onPodConditions: type: DisruptionTarget # indicates Pod disruption ``` In the example above, the first rule of the Pod failure policy specifies that the Job should be marked failed if the main container fails with the 42 exit code. The following are the rules for the main container specifically: The second rule of the Pod failure policy, specifying the Ignore action for failed Pods with condition DisruptionTarget excludes Pod disruptions from being counted towards the .spec.backoffLimit limit of retries. These are some requirements and semantics of the API: When creating an Indexed Job, you can define when a Job can be declared as succeeded using a .spec.successPolicy, based on the pods that succeeded. By default, a Job succeeds when the number of succeeded Pods equals .spec.completions. 
These are some situations where you might want additional control for declaring a Job succeeded: You can configure a success policy, in the .spec.successPolicy field, to meet the above use cases. This policy can handle Job success based on the succeeded pods. After the Job meets the success policy, the job controller terminates the lingering Pods. A success policy is defined by rules. Each rule can take one of the following forms: Note that when you specify multiple rules in the .spec.successPolicy.rules, the job controller evaluates the rules in order. Once the Job meets a rule, the job controller ignores remaining" }, { "data": "Here is a manifest for a Job with successPolicy: ``` apiVersion: batch/v1 kind: Job metadata: name: job-success spec: parallelism: 10 completions: 10 completionMode: Indexed # Required for the success policy successPolicy: rules: succeededIndexes: 0,2-3 succeededCount: 1 template: spec: containers: name: main image: python command: # Provided that at least one of the Pods with 0, 2, and 3 indexes has succeeded, python3 -c | import os, sys if os.environ.get(\"JOBCOMPLETIONINDEX\") == \"2\": sys.exit(0) else: sys.exit(1) restartPolicy: Never ``` In the example above, both succeededIndexes and succeededCount have been specified. Therefore, the job controller will mark the Job as succeeded and terminate the lingering Pods when either of the specified indexes, 0, 2, or 3, succeed. The Job that meets the success policy gets the SuccessCriteriaMet condition. After the removal of the lingering Pods is issued, the Job gets the Complete condition. Note that the succeededIndexes is represented as intervals separated by a hyphen. The number are listed in represented by the first and last element of the series, separated by a hyphen. When a Job completes, no more Pods are created, but the Pods are usually not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too. By default, a Job will run uninterrupted unless a Pod fails (restartPolicy=Never) or a Container exits in error (restartPolicy=OnFailure), at which point the Job defers to the .spec.backoffLimit described above. Once .spec.backoffLimit has been reached the Job will be marked as failed and any running Pods will be terminated. Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded. Note that a Job's .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached. 
Example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-timeout spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level. Keep in mind that the restartPolicy applies to the Pod, and not to the Job itself: there is no automatic Job restart once the Job status is type: Failed. That is, the Job termination mechanisms activated with .spec.activeDeadlineSeconds and .spec.backoffLimit result in a permanent Job failure that requires manual intervention to resolve. Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy. Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the" }, { "data": "field of the Job. When the TTL controller cleans up the Job, it will delete the Job cascadingly, i.e. delete its dependent objects, such as Pods, together with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored. For example: ``` apiVersion: batch/v1 kind: Job metadata: name: pi-with-ttl spec: ttlSecondsAfterFinished: 100 template: spec: containers: name: pi image: perl:5.34.0 command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: Never ``` The Job pi-with-ttl will be eligible to be automatically deleted, 100 seconds after it finishes. If the field is set to 0, the Job will be eligible to be automatically deleted immediately after it finishes. If the field is unset, this Job won't be cleaned up by the TTL controller after it finishes. It is recommended to set ttlSecondsAfterFinished field because unmanaged jobs (Jobs that you created directly, and not indirectly through other workload APIs such as CronJob) have a default deletion policy of orphanDependents causing Pods created by an unmanaged Job to be left around after that Job is fully deleted. Even though the control plane eventually garbage collects the Pods from a deleted Job after they either fail or complete, sometimes those lingering pods may cause cluster performance degradation or in worst case cause the cluster to go offline due to this degradation. You can use LimitRanges and ResourceQuotas to place a cap on the amount of resources that a particular namespace can consume. The Job object can be used to process a set of independent but related work items. These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on. In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the user wants to manage together a batch job. There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are: The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to examples and more detailed description. | Pattern | Single Job object | Fewer pods than work items? 
| Use app unmodified? | |:-|:--|:|:-| | Queue with Pod Per Work Item | | nan | sometimes | | Queue with Variable Pod Count | | | nan | | Indexed Job with Static Work Assignment | | nan | | | Job with Pod-to-Pod Communication | | sometimes | sometimes | | Job Template Expansion | nan | nan | | When you specify completions with .spec.completions, each Pod created by the Job controller has an identical spec. This means that all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things. This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number of work items. | Pattern | .spec.completions |" }, { "data": "| |:-|:--|:--| | Queue with Pod Per Work Item | W | any | | Queue with Variable Pod Count | nan | any | | Indexed Job with Static Work Assignment | W | any | | Job with Pod-to-Pod Communication | W | W | | Job Template Expansion | 1 | should be 1 | When a Job is created, the Job controller will immediately begin creating Pods to satisfy the Job's requirements and will continue to do so until the Job is complete. However, you may want to temporarily suspend a Job's execution and resume it later, or start Jobs in suspended state and have a custom controller decide later when to start them. To suspend a Job, you can update the .spec.suspend field of the Job to true; later, when you want to resume it again, update it to false. Creating a Job with .spec.suspend set to true will create it in the suspended state. When a Job is resumed from suspension, its .status.startTime field will be reset to the current time. This means that the .spec.activeDeadlineSeconds timer will be stopped and reset when a Job is suspended and resumed. When you suspend a Job, any running Pods that don't have a status of Completed will be terminated with a SIGTERM signal. The Pod's graceful termination period will be honored and your Pod must handle this signal in this period. This may involve saving progress for later or undoing changes. Pods terminated this way will not count towards the Job's completions count. An example Job definition in the suspended state can be like so: ``` kubectl get job myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job metadata: name: myjob spec: suspend: true parallelism: 1 completions: 5 template: spec: ... ``` You can also toggle Job suspension by patching the Job using the command line. Suspend an active Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":true}}' ``` Resume a suspended Job: ``` kubectl patch job/myjob --type=strategic --patch '{\"spec\":{\"suspend\":false}}' ``` The Job's status can be used to determine if a Job is suspended or has been suspended in the past: ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: conditions: lastProbeTime: \"2021-02-05T13:14:33Z\" lastTransitionTime: \"2021-02-05T13:14:33Z\" status: \"True\" type: Suspended startTime: \"2021-02-05T13:13:48Z\" ``` The Job condition of type \"Suspended\" with status \"True\" means the Job is suspended; the lastTransitionTime field can be used to determine how long the Job has been suspended for. If the status of that condition is \"False\", then the Job was previously suspended and is now running. If such a condition does not exist in the Job's status, the Job has never been stopped. 
Events are also created when the Job is suspended and resumed: ``` kubectl describe jobs/myjob ``` ``` Name: myjob ... Events: Type Reason Age From Message - - - Normal SuccessfulCreate 12m job-controller Created pod: myjob-hlrpl Normal SuccessfulDelete 11m job-controller Deleted pod: myjob-hlrpl Normal Suspended 11m job-controller Job suspended Normal SuccessfulCreate 3s job-controller Created pod: myjob-jvb44 Normal Resumed 3s job-controller Job resumed ``` The last four events, particularly the \"Suspended\" and \"Resumed\" events, are directly a result of toggling the .spec.suspend field. In the time between these two events, we see that no Pods were created, but Pod creation restarted as soon as the Job was resumed. In most cases, a parallel job will want the pods to run with constraints, like all in the same zone, or all either on GPU model x or y but not a mix of both. The suspend field is the first step towards achieving those semantics. Suspend allows a custom queue controller to decide when a job should start; However, once a job is unsuspended, a custom queue controller has no influence on where the pods of a job will actually land. This feature allows updating a Job's scheduling directives before it starts, which gives custom queue controllers the ability to influence pod placement while at the same time offloading actual pod-to-node assignment to" }, { "data": "This is allowed only for suspended Jobs that have never been unsuspended before. The fields in a Job's pod template that can be updated are node affinity, node selector, tolerations, labels, annotations and scheduling gates. Normally, when you create a Job object, you do not specify .spec.selector. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs. However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of the Job. Be very careful when doing this. If you specify a label selector which is not unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated job may be deleted, or this Job may count other Pods as completing it, or one or both Jobs may refuse to create Pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying .spec.selector. Here is an example of a case when you might want to use this feature. Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=orphan. Before deleting it, you make a note of what selector it uses: ``` kubectl get job old -o yaml ``` The output is similar to this: ``` kind: Job metadata: name: old ... spec: selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label batch.kubernetes.io/controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they are controlled by Job new as well. 
You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically. ``` kind: Job metadata: name: new ... spec: manualSelector: true selector: matchLabels: batch.kubernetes.io/controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002 ... ``` The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch. The control plane keeps track of the Pods that belong to any Job and notices if any such Pod is removed from the API server. To do that, the Job controller creates Pods with the finalizer batch.kubernetes.io/job-tracking. The controller removes the finalizer only after the Pod has been accounted for in the Job status, allowing the Pod to be removed by other controllers or users. You can scale Indexed Jobs up or down by mutating both .spec.parallelism and .spec.completions together such that .spec.parallelism == .spec.completions. When the ElasticIndexedJobfeature gate on the API server is disabled, .spec.completions is immutable. Use cases for elastic Indexed Jobs include batch workloads which require scaling an indexed Job, such as MPI, Horovord, Ray, and PyTorch training jobs. By default, the Job controller recreates Pods as soon they either fail or are terminating (have a deletion timestamp). This means that, at a given time, when some of the Pods are terminating, the number of running Pods for a Job can be greater than parallelism or greater than one Pod per index (if you are using an Indexed" }, { "data": "You may choose to create replacement Pods only when the terminating Pod is fully terminal (has status.phase: Failed). To do this, set the .spec.podReplacementPolicy: Failed. The default replacement policy depends on whether the Job has a podFailurePolicy set. With no Pod failure policy defined for a Job, omitting the podReplacementPolicy field selects the TerminatingOrFailed replacement policy: the control plane creates replacement Pods immediately upon Pod deletion (as soon as the control plane sees that a Pod for this Job has deletionTimestamp set). For Jobs with a Pod failure policy set, the default podReplacementPolicy is Failed, and no other value is permitted. See Pod failure policy to learn more about Pod failure policies for Jobs. ``` kind: Job metadata: name: new ... spec: podReplacementPolicy: Failed ... ``` Provided your cluster has the feature gate enabled, you can inspect the .status.terminating field of a Job. The value of the field is the number of Pods owned by the Job that are currently terminating. ``` kubectl get jobs/myjob -o yaml ``` ``` apiVersion: batch/v1 kind: Job status: terminating: 3 # three Pods are terminating and have not yet reached the Failed phase ``` This feature allows you to disable the built-in Job controller, for a specific Job, and delegate reconciliation of the Job to an external controller. You indicate the controller that reconciles the Job by setting a custom value for the spec.managedBy field - any value other than kubernetes.io/job-controller. The value of the field is immutable. When developing an external Job controller be aware that your controller needs to operate in a fashion conformant with the definitions of the API spec and status fields of the Job object. Please review these in detail in the Job API. We also recommend that you run the e2e conformance tests for the Job object to verify your implementation. 
Finally, when developing an external Job controller make sure it does not use the batch.kubernetes.io/job-tracking finalizer, reserved for the built-in controller. When the node that a Pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create new Pods to replace terminated ones. For this reason, we recommend that you use a Job rather than a bare Pod, even if your application requires only a single Pod. Jobs are complementary to Replication Controllers. A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks). As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never. (Note: If RestartPolicy is not set, the default value is Always.) Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see spark example), runs a spark driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but maintains complete control over what Pods are created and how work is assigned to them. Was this page helpful? Thanks for the feedback. If you have a specific, answerable question about how to use Kubernetes, ask it on Stack Overflow. Open an issue in the GitHub Repository if you want to report a problem or suggest an" } ]
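As a small illustration of the elastic Indexed Job scaling described above, the following sketch patches .spec.parallelism and .spec.completions together on a hypothetical Indexed Job named myjob; it assumes the ElasticIndexedJob feature gate is enabled on the API server:

```
# Scale the Indexed Job to 10 indexes by mutating parallelism and
# completions together (they must stay equal for elastic scaling).
kubectl patch job/myjob --type=strategic \
  --patch '{"spec":{"parallelism":10,"completions":10}}'

# Confirm the new values.
kubectl get job/myjob -o jsonpath='{.spec.parallelism} {.spec.completions}{"\n"}'
```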
{ "category": "App Definition and Development", "file_name": "about.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "Welcome to Cyclops, a powerful user interface for managing and interacting with Kubernetes clusters. Cyclops is designed to simplify the management of containerized applications on Kubernetes, providing an intuitive and user-friendly experience for developers, system administrators, and DevOps professionals. Divide the responsibility between your infrastructure and your developer teams so that everyone can play to their strengths. Automate your processes and shrink the window for deployment mistakes. Cyclops is an innovative web-based tool designed to simplify the management of distributed systems, specifically focusing on the widely used Kubernetes platform. By providing a user-friendly interface, Cyclops abstracts complex Kubernetes configuration files into intuitive web forms, making it easier for developers to deploy applications and manage Kubernetes environments. It offers predefined fields and graphical representations of deployments, enhancing visibility and reducing the learning curve associated with Kubernetes. Cyclops aims to empower IT operations teams, DevOps teams, developers and business owners, enabling them to streamline processes, increase productivity, and achieve cost savings in managing Kubernetes clusters. Cyclops provides a comprehensive dashboard that offers an overview of the cluster's health, performance, and resource utilization. The dashboard presents key metrics and information about pods, nodes, deployments, services, and more, enabling users to monitor the cluster's status at a glance. With Cyclops, users can effortlessly deploy and scale their applications on the cluster. The application provides an intuitive interface to create, manage, and update deployments, allowing users to easily adjust the number of replicas, configure rolling updates, and monitor the deployment's progress. Cyclops lets you create templates of YAML configuration files for your applications with variables that can be assigned later. This empowers users to create parameterized and customizable configurations that can be easily adapted to different environments or use cases. Templating YAML configuration files simplifies the management of Kubernetes resources, promotes consistency, and streamlines the deployment process, making it more efficient and adaptable to varying requirements. Versioning templates provide a structured way to keep track of changes and updates made to templates over time. Each version represents a specific iteration or snapshot of the template at a particular point in" }, { "data": "By using versioning, it becomes easier to manage and track different versions of templates, facilitating collaboration, maintenance, and rollback if necessary. Helm has already established itself in the Kubernetes community as a tool for writing configuration files. We understand that nobody likes to change the way they are doing things. To make the transition easier, we integrated Helm into our system and made it possible to bring your old configuration files written with Helm into Cyclops. No need for starting over, continue were you left off! By dividing responsibilities, each team can work efficiently in their respective domains. The infrastructure team can dedicate their efforts to infrastructure optimization, scalability, and security, ensuring that the Kubernetes environment is robust and well-maintained. Simultaneously, the developer team can focus on delivering their product without having to learn Kubernetes in depth. 
This division of responsibilities enhances collaboration and fosters a smoother development workflow. Using a form-based UI eliminates the need for manual configuration and command-line interactions, making the deployment process more user-friendly and accessible to individuals with varying levels of technical expertise. Advanced users can write their own configuration files, but we offer some basic templates to help users still new to Kubernetes get started. Cyclops deploys your applications through forms with predefined fields. This means that your developers can edit only certain fields and input only values of a certain type. Forms drastically shrink the window for deployment mistakes, which are often costly for businesses, both financially and reputation-wise. Developers do not need to know the intricacies of Kubernetes, only the basics, which in turn will speed up their onboarding and bolster their productivity. Cyclops promotes consistency and standardization in deployment practices. By providing predefined templates or configuration presets, Cyclops ensures that deployments adhere to established best practices and guidelines. This consistency not only improves the reliability and stability of deployments but also facilitates collaboration among team members, who can easily understand and reproduce each other's deployments. Cyclops offers a streamlined and intuitive interface for managing Kubernetes clusters, simplifying complex operations and enabling efficient application orchestration. Whether you're new to Kubernetes or an experienced user, Cyclops empowers you to interact with your cluster effectively and enhances your productivity. Start leveraging the power of Kubernetes with a user-friendly experience through Cyclops." } ]
{ "category": "App Definition and Development", "file_name": "manifest.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "To install Cyclops in your cluster, run commands below: ``` kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/cyclops-install.yaml && kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/demo-templates.yaml``` It will create a new namespace called cyclops and deploy everything you need for your Cyclops instance to run. Now all that is left is to expose Cyclops server outside the cluster: ``` kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops``` You can now access Cyclops in your browser on http://localhost:3000." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "Welcome to Cyclops, a powerful user interface for managing and interacting with Kubernetes clusters. Cyclops is designed to simplify the management of containerized applications on Kubernetes, providing an intuitive and user-friendly experience for developers, system administrators, and DevOps professionals. Divide the responsibility between your infrastructure and your developer teams so that everyone can play to their strengths. Automate your processes and shrink the window for deployment mistakes. Cyclops is an innovative web-based tool designed to simplify the management of distributed systems, specifically focusing on the widely used Kubernetes platform. By providing a user-friendly interface, Cyclops abstracts complex Kubernetes configuration files into intuitive web forms, making it easier for developers to deploy applications and manage Kubernetes environments. It offers predefined fields and graphical representations of deployments, enhancing visibility and reducing the learning curve associated with Kubernetes. Cyclops aims to empower IT operations teams, DevOps teams, developers and business owners, enabling them to streamline processes, increase productivity, and achieve cost savings in managing Kubernetes clusters. Cyclops provides a comprehensive dashboard that offers an overview of the cluster's health, performance, and resource utilization. The dashboard presents key metrics and information about pods, nodes, deployments, services, and more, enabling users to monitor the cluster's status at a glance. With Cyclops, users can effortlessly deploy and scale their applications on the cluster. The application provides an intuitive interface to create, manage, and update deployments, allowing users to easily adjust the number of replicas, configure rolling updates, and monitor the deployment's progress. Cyclops lets you create templates of YAML configuration files for your applications with variables that can be assigned later. This empowers users to create parameterized and customizable configurations that can be easily adapted to different environments or use cases. Templating YAML configuration files simplifies the management of Kubernetes resources, promotes consistency, and streamlines the deployment process, making it more efficient and adaptable to varying requirements. Versioning templates provide a structured way to keep track of changes and updates made to templates over time. Each version represents a specific iteration or snapshot of the template at a particular point in" }, { "data": "By using versioning, it becomes easier to manage and track different versions of templates, facilitating collaboration, maintenance, and rollback if necessary. Helm has already established itself in the Kubernetes community as a tool for writing configuration files. We understand that nobody likes to change the way they are doing things. To make the transition easier, we integrated Helm into our system and made it possible to bring your old configuration files written with Helm into Cyclops. No need for starting over, continue were you left off! By dividing responsibilities, each team can work efficiently in their respective domains. The infrastructure team can dedicate their efforts to infrastructure optimization, scalability, and security, ensuring that the Kubernetes environment is robust and well-maintained. Simultaneously, the developer team can focus on delivering their product without having to learn Kubernetes in depth. 
This division of responsibilities enhances collaboration and fosters a smoother development workflow. Using a form-based UI eliminates the need for manual configuration and command-line interactions, making the deployment process more user-friendly and accessible to individuals with varying levels of technical expertise. Advanced users can write their own configuration files, but we offer some basic templates for users still new to Kubernetes to help them start off. Cyclops deploys your applications trough forms with predefined fields. This means that your developers can edit only certain fields and input only values of certain type. Forms drastically shrink the window for deployment mistakes which are often costly for businesses, both financially and reputation-wise. Developers do not need to know the intricacies of Kubernetes, only the basics, which in return will speed up their onboarding and bolster their productivity. Cyclops promotes consistency and standardization in deployment practices. By providing predefined templates or configuration presets, Cyclops ensures that deployments adhere to established best practices and guidelines. This consistency not only improves the reliability and stability of deployments but also facilitates collaboration among team members who can easily understand and reproduce each other's deployments. Cyclops offers a streamlined and intuitive interface for managing Kubernetes clusters, simplifying complex operations and enabling efficient application orchestration. Whether you're new to Kubernetes or an experienced user, Cyclops empowers you to interact with your cluster effectively and enhances your productivity. Start leveraging the power of Kubernetes with a user-friendly experience through Cyclops." } ]
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "CodeZero", "subcategory": "Application Definition & Image Build" }
[ { "data": "These are the general steps for setting up a Teamspace. Apart from setting up a new Kubernetes cluster, the following steps should take less than 10 minutes to complete. These steps should be carried out by someone comfortable around Kubernetes: Once a Teamspace is set up and certified, individual developers can then install the Codezero local tools to work with the Teamspace. Developers will not require credentials for the Kubernetes cluster as they authenticate to the Teamspace via the Hub. NOTE: We currently support Github and Google authentication." } ]
{ "category": "App Definition and Development", "file_name": "prerequisites.md", "project_name": "Cyclops", "subcategory": "Application Definition & Image Build" }
[ { "data": "In order to test out Cyclops you are going to need some things. First thing you are going to need is a Kubernetes cluster. If you have one that you can use to play with, great, if not you can try installing minikube. Minikube sets up a local Kubernetes cluster that you can use to test stuff out. Check the docs on how to install it. Another thing you will need is kubectl. It is a command line interface for running commands against your cluster. Once you have installed minikube and kubectl, run your local cluster with: ``` minikube start``` After some time you will have a running cluster that you can use for testing. To verify everything is in order, you can try fetching all namespaces from the cluster with: ``` kubectl get namespaces``` Output should be something like this: ``` NAME STATUS AGEdefault Active 10mkube-node-lease Active 10mkube-public Active 10mkube-system Active 10m...```" } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "Using Depot's remote builders for local development allows you to get faster Docker image builds with the entire Docker layer cache instantly available across builds. The cache is shared across your entire team who has access to a given Depot project, allowing you to reuse build results and cache across your entire team for faster local development. Additionally, routing the image build to remote builders frees your local machine's CPU and memory resources. There is nothing additional you need to configure to share your build cache across your team for local builds. If your team members can access the Depot project, they will automatically share the same build cache. So, if you build an image locally, your team members can reuse the layers you built in their own builds. To leverage Depot locally, install the depot CLI tool and configure your Depot project, if you haven't already. With those two things complete, you can then login to Depot via the CLI: ``` depot login``` Once you're logged in, you can configure Depot inside of your git repository by running the init command: ``` depot init``` The init command writes a depot.json file to the root of your repository with the Depot project ID that you selected. Alternatively, you can skip the init command if you'd like and use the --project flag on the build command to specify the project ID. You can run a build with Depot locally by running the build command: ``` depot build -t my-image:latest .``` By default, Depot won't return you the built image locally. Instead, the built image and the layers produced will remain in the build cache. However, if you'd like to download the image locally, for instance, so you can docker run it, you can specify the --load flag: ``` depot build -t my-image:latest --load .``` You can also run a build with Depot locally via the docker build or docker buildx build commands. To do so, you'll need to run depot configure-docker to configure your Docker CLI to use Depot as the default builder: ``` depot configure-docker docker build -t my-image:latest .``` For a full guide on using Depot via your existing docker build of docker compose commands, see our Docker integration guide." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Dapr", "subcategory": "Application Definition & Image Build" }
[ { "data": "Distributed applications are commonly comprised of many microservices, with dozens - sometimes hundreds - of instances scaling across underlying infrastructure. As these distributed solutions grow in size and complexity, the potential for system failures inevitably increases. Service instances can fail or become unresponsive due to any number of issues, including hardware failures, unexpected throughput, or application lifecycle events, such as scaling out and application restarts. Designing and implementing a self-healing solution with the ability to detect, mitigate, and respond to failure is critical. Dapr provides a capability for defining and applying fault tolerance resiliency policies to your application. You can define policies for following resiliency patterns: These policies can be applied to any Dapr API calls when calling components with a resiliency spec. Applications can become unresponsive for a variety of reasons. For example, they are too busy to accept new work, could have crashed, or be in a deadlock state. Sometimes the condition can be transitory or persistent. Dapr provides a capability for monitoring app health through probes that check the health of your application and react to status changes. When an unhealthy app is detected, Dapr stops accepting new work on behalf of the application. Read more on how to apply app health checks to your application. Dapr provides a way to determine its health using an HTTP /healthz endpoint. With this endpoint, the daprd process, or sidecar, can be: Read more on about how to apply dapr health checks to your application. Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve." } ]
{ "category": "App Definition and Development", "file_name": "installation.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "How to install the depot CLI on all platforms, with links to CI configuration guides. For Mac, you can install the CLI with Homebrew: ``` brew install depot/tap/depot``` Or download the latest version from GitHub releases. Either install with our installation script: ``` curl -L https://depot.dev/install-cli.sh | sh curl -L https://depot.dev/install-cli.sh | sh -s 1.6.0``` Or download the latest version from GitHub releases." } ]
{ "category": "App Definition and Development", "file_name": "security.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "For questions, concerns, or information about our security policies or to disclose a security vulnerability, please get in touch with us at security@depot.dev. A Depot organization represents a collection of projects that contain builder VMs and SSD cache disks. These VMs and disks are associated with a single organization and are not shared across organizations. When a build request arrives, the build is routed to the correct builder VM based on organization, project, and requested CPU architecture. Communication between the depot CLI and builder VM uses an encrypted HTTPS (TLS) connection. Cache volumes are encrypted at rest using our infrastructure providers' encryption capabilities. A builder in Depot and its SSD cache are tied to a single project and the organization that owns it. Builders are never shared across organizations. Instead, builds running on a given builder are connected to one and only one organization, the organization that owns the projects. Connections from the Depot CLI to the builder VM are routed through a stateless load balancer directly to the project's builder VM and are encrypted using TLS (HTTPS). Our services and applications run in the cloud using one of our infrastructure providers, AWS and GCP. Depot has no physical access to the underlying physical infrastructure. For more information, see AWS's security details and GCP's security details. All data transferred in and out of Depot is encrypted using hardened TLS. This includes connections between the Depot CLI and builder VMs, which are conducted via HTTPS. In addition, Depot's domain is protected by HTTP Strict Transport Security (HSTS). Cache volumes attached to project builders are encrypted at rest using our infrastructure providers' encryption capabilities. Depot does not access builders or cache volumes directly, except for use in debugging when explicit permission is granted from the organization owner. Today, Depot operates cloud infrastructure in regions that are geographically located inside the United States of America as well as the European Union (if a project chooses the EU as its region). Depot supports API-token-based authentication for various aspects of the application: Depot keeps up to date with software dependencies and has automated tools scanning for dependency vulnerabilities. Development environments are separated physically from Depot's production environment. You can add and remove user access to your organization via the settings page. Users can have one of two roles: We expect to expand the available roles and permissions in the future; don't hesitate to contact us if you have any special requirements. In addition to users, Depot also allows creating trust relationships with GitHub Actions. These relationships enable workflow runs initiated in GitHub Actions to access specific projects in your organization to run builds. Trust relationships can be configured in the project settings. Access to create project builds effectively equates to access to the builder VM due to the nature of how docker build works. Anyone with access to build a project can access that project's build cache files and potentially add, edit, or remove cache entries. You should be careful that you trust the users and trust relationships that you have given access to a project and use tools like OIDC trust relationships to limit access to only the necessary scope." } ]
{ "category": "App Definition and Development", "file_name": "community.md", "project_name": "Devfile", "subcategory": "Application Definition & Image Build" }
[ { "data": "Introduction Organizations looking to standardize their development environment can do so by adopting devfiles. In the simplest case, developers can just consume the devfiles that are available from the public community registry. If your organization needs custom devfiles that are authored and shared internally, then you need a role based approach so developers, devfile authors, and registry administrators can interact together. A devfile author, also known as a runtime provider, can be an individual or a group representing a runtime vendor. Devfile authors need sound knowledge of the supported runtime so they can create devfiles to build and run applications. If a runtime stack is not available in the public registry, an organization can choose to develop their own and keep it private for their in-house development. The public community registry is managed by the community and hosted by Red Hat. Share your devfile to the public community registry so other teams can benefit from your application. If an organization wants to keep their own devfiles private but wishes to share with other departments, they can assign a registry administrator. The registry administrator deploys, maintains, and manages the contents of their private registry and the default list of devfile registries available in a given cluster. Developers can use the supported tools to access devfiles. Many of the existing tools offer a way to register or catalog public and private devfile registries which then allows the tool to expose the devfiles for development. In addition, each registry comes packaged with an index server and a registry viewer so developers can browse and view the devfile contents before deciding which ones they want to adopt. Developers can also extend an existing parent devfile to customize the workflow of their specific application. The devfile can be packaged as part of the application source to ensure consistent behavior when moving across different tools. Note! Tools that support the devfile spec might have varying levels of support. Check their product pages for more information. An open standard defining containerized development environments." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can add seats and manage invitations to your Docker Build Cloud Team in the Docker Build Cloud dashboard. Note If you have a Docker Build Cloud Business subscription, you can add and remove seats by working with your account executive, then assign your purchased seats in the Docker Build Cloud dashboard. The number of seats will be charged to your payment information on file, and are added immediately. The charge for the reduced seat count will be reflected on the next billing cycle. Optionally, you can cancel the seat downgrade any time before the next billing cycle. As an owner of the Docker Build Cloud team, you can invite members to access cloud builders. To invite team members to your team in Docker Build Cloud: Invitees receive an email with instructions on how they can accept the invite. After they accept, the seat will be marked as Allocated in the User management section in the Docker Build Cloud dashboard. For more information on the permissions granted to members, see Roles and permissions. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "compose-file.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "Docker recommends you use the Docker Official Images in your projects. These images have clear documentation, promote best practices, and are regularly updated. Docker Official Images support most common use cases, making them perfect for new Docker users. Advanced users can benefit from more specialized image variants as well as review Docker Official Images as part of your Dockerfile learning process. The repository description for each Docker Official Image contains a Supported tags and respective Dockerfile links section that lists all the current tags with links to the Dockerfiles that created the image with those tags. The purpose of this section is to show what image variants are available. Tags listed on the same line all refer to the same underlying image. Multiple tags can point to the same image. For example, in the previous screenshot taken from the ubuntu Docker Official Images repository, the tags 24.04, noble-20240225, noble, and devel all refer to the same image. The latest tag for a Docker Official Image is often optimized for ease of use and includes a wide variety of useful software, such as developer and build tools. By tagging an image as latest, the image maintainers are essentially suggesting that image be used as the default. In other words, if you do not know what tag to use or are unfamiliar with the underlying software, you should probably start with the latest image. As your understanding of the software and image variants advances, you may find other image variants better suit your needs. A number of language stacks such as Node.js, Python, and Ruby have slim tag variants designed to provide a lightweight, production-ready base image with fewer packages. A typical consumption pattern for slim images is as the base image for the final stage of a multi-staged build. For example, you build your application in the first stage of the build using the latest variant and then copy your application into the final stage based upon the slim variant. Here is an example Dockerfile. ``` FROM node:latest AS build WORKDIR /app COPY package.json package-lock.json" }, { "data": "RUN npm ci COPY . ./ FROM node:slim WORKDIR /app COPY --from=build /app /app CMD [\"node\", \"app.js\"]``` Many Docker Official Images repositories also offer alpine variants. These images are built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than slim variants. The main caveat to note is that Alpine Linux uses musl libc instead of glibc. Additionally, to minimize image size, it's uncommon for Alpine-based images to include tools such as Git or Bash by default. Depending on the depth of libc requirements or assumptions in your programs, you may find yourself running into issues due to missing libraries or tools. When you use Alpine images as a base, consider the following options in order to make your program compatible with Alpine Linux and musl: Refer to the alpine image description on Docker Hub for examples on how to install packages if you are unfamiliar. Tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as focal, jammy, and noble), indicate the codename of the Linux distribution they use as a base image. 
Debian release codenames are based on Toy Story characters, and Ubuntu's take the form of \"Adjective Animal\". For example, the codename for Ubuntu 24.04 is \"Noble Numbat\". Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye). Docker Official Images tags may contain other hints to the purpose of their image variant in addition to those described here. Often these tag variants are explained in the Docker Official Images repository documentation. Reading through the How to use this image and Image Variants sections will help you to understand how to use these variants. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
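One quick way to see the size trade-offs between the latest, slim, and alpine variants discussed above is to pull them and compare locally; the node tags here are just an example image family:

```
# Pull several variants of the same Docker Official Image
docker pull node:latest
docker pull node:slim
docker pull node:alpine

# Compare their sizes side by side
docker images node
```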
{ "category": "App Definition and Development", "file_name": "local-development.md", "project_name": "Depot", "subcategory": "Application Definition & Image Build" }
[ { "data": "How to install the depot CLI on all platforms, with links to CI configuration guides. For Mac, you can install the CLI with Homebrew: ``` brew install depot/tap/depot``` Or download the latest version from GitHub releases. Either install with our installation script: ``` curl -L https://depot.dev/install-cli.sh | sh curl -L https://depot.dev/install-cli.sh | sh -s 1.6.0``` Or download the latest version from GitHub releases." } ]
{ "category": "App Definition and Development", "file_name": "faq.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "Docker recommends you use the Docker Official Images in your projects. These images have clear documentation, promote best practices, and are regularly updated. Docker Official Images support most common use cases, making them perfect for new Docker users. Advanced users can benefit from more specialized image variants as well as review Docker Official Images as part of your Dockerfile learning process. The repository description for each Docker Official Image contains a Supported tags and respective Dockerfile links section that lists all the current tags with links to the Dockerfiles that created the image with those tags. The purpose of this section is to show what image variants are available. Tags listed on the same line all refer to the same underlying image. Multiple tags can point to the same image. For example, in the previous screenshot taken from the ubuntu Docker Official Images repository, the tags 24.04, noble-20240225, noble, and devel all refer to the same image. The latest tag for a Docker Official Image is often optimized for ease of use and includes a wide variety of useful software, such as developer and build tools. By tagging an image as latest, the image maintainers are essentially suggesting that image be used as the default. In other words, if you do not know what tag to use or are unfamiliar with the underlying software, you should probably start with the latest image. As your understanding of the software and image variants advances, you may find other image variants better suit your needs. A number of language stacks such as Node.js, Python, and Ruby have slim tag variants designed to provide a lightweight, production-ready base image with fewer packages. A typical consumption pattern for slim images is as the base image for the final stage of a multi-staged build. For example, you build your application in the first stage of the build using the latest variant and then copy your application into the final stage based upon the slim variant. Here is an example Dockerfile. ``` FROM node:latest AS build WORKDIR /app COPY package.json package-lock.json" }, { "data": "RUN npm ci COPY . ./ FROM node:slim WORKDIR /app COPY --from=build /app /app CMD [\"node\", \"app.js\"]``` Many Docker Official Images repositories also offer alpine variants. These images are built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than slim variants. The main caveat to note is that Alpine Linux uses musl libc instead of glibc. Additionally, to minimize image size, it's uncommon for Alpine-based images to include tools such as Git or Bash by default. Depending on the depth of libc requirements or assumptions in your programs, you may find yourself running into issues due to missing libraries or tools. When you use Alpine images as a base, consider the following options in order to make your program compatible with Alpine Linux and musl: Refer to the alpine image description on Docker Hub for examples on how to install packages if you are unfamiliar. Tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as focal, jammy, and noble), indicate the codename of the Linux distribution they use as a base image. 
Debian release codenames are based on Toy Story characters, and Ubuntu's take the form of \"Adjective Animal\". For example, the codename for Ubuntu 24.04 is \"Noble Numbat\". Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye). Docker Official Images tags may contain other hints to the purpose of their image variant in addition to those described here. Often these tag variants are explained in the Docker Official Images repository documentation. Reading through the How to use this image and Image Variants sections will help you to understand how to use these variants. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "release-notes.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "The Docker Official Images are a curated set of Docker repositories hosted on Docker Hub. Note Use of Docker Official Images is subject to Docker's Terms of Service. These images provide essential base repositories that serve as the starting point for the majority of users. These include operating systems such as Ubuntu and Alpine, programming language runtimes such as Python and Node, and other essential tools such as memcached and MySQL. The images are some of the most secure images on Docker Hub. This is particularly important as Docker Official Images are some of the most popular on Docker Hub. Typically, Docker Official images have few or no packages containing CVEs. The images exemplify Dockerfile best practices and provide clear documentation to serve as a reference for other Dockerfile authors. Images that are part of this program have a special badge on Docker Hub making it easier for you to identify projects that are part of Docker Official Images. Using Docker Official Images Contributing to Docker Official Images Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "new_template=doc_issue.yml&location=https%3a%2f%2fdocs.docker.com%2fcompose%2f&labels=status%2Ftriage.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "For more detailed information, see the release notes in the Compose repo. This release fixes a build issue with Docker Desktop for Windows introduced in Compose v2.24.0. Note The watch command is now generally available (GA). You can directly use it from the root command docker compose watch. For more information, see File watch. Note The format of docker compose ps and docker compose ps --format=json changed to better align with docker ps output. See compose#10918. For the full change log or additional information, check the Compose repository 2.12.2 release page. For the full change log or additional information, check the Compose repository 2.12.1 release page. CI update to the documentation repository path Upgraded to compose-go from 1.5.1 to 1.6.0 Updated to go 1.19.2 to address CVE-2022-2879, CVE-2022-2880, CVE-2022-41715 For the full change log or additional information, check the Compose repository 2.12.0 release page. Note For the full change log or additional information, check the Compose repository 2.11.2 release page. For the full change log or additional information, check the Compose repository 2.11.1 release page. For the full change log or additional information, check the Compose repository 2.11.0 release page. For the full change log or additional information, check the Compose repository 2.10.2 release page. For the full change log or additional information, check the Compose repository 2.10.1 release page. For the full change log, check the Compose repository 2.10.0 release page. Important Compose v2.9.0 contains changes to the environment variable's precedence that have since been reverted. We recommend using v2.10+ to avoid compatibility issues. Note This release reverts the breaking changes introduced in Compose v2.8.0 by compose-go v1.3.0. For the full change log or additional information, check the Compose repository 2.9.0 release page. Important This release introduced a breaking change via compose-go v1.3.0 and this PR. In this release, Docker Compose recreates new resources (networks, volumes, secrets, configs, etc.) with new names, using a - (dash) instead an _ (underscore) and tries to connect to or use these newly created resources instead of your existing ones! Please use Compose the v2.9.0 release instead. For the full change log or additional information, check the Compose repository 2.8.0 release page. For the full change log or additional information, check the Compose repository 2.7.0 release page. For the full change log or additional information, check the Compose repository 2.6.1 release page. For the full change log or additional information, check the Compose repository 2.6.0 release page. For the full change log or additional information, check the Compose repository 2.5.1 release page. For the full change log or additional information, check the Compose repository 2.5.0 release page. For the full change log or additional information, check the Compose repository 2.4.1 release page. For the full change log or additional information, check the Compose repository 2.4.0 release page. For the full change log or additional information, check the Compose repository 2.3.4 release page. (2022-03-8 to 2022-04-14) For the releases later than 1.29.2 and earlier than 2.3.4, please check the Compose repository release pages. (2021-05-10) Removed the prompt to use docker-compose in the up command. Bumped py to 1.10.0 in requirements-indirect.txt. (2021-04-13) Fixed invalid handler warning on Windows builds. 
Fixed config hash to trigger container re-creation on IPC mode updates. Fixed conversion map for placement.maxreplicasper_node. Removed extra scan suggestion on build. (2021-04-06) Added profile filter to docker-compose config. Added a depends_on condition to wait for successful service completion. Added an image scan message on build. Updated warning message for --no-ansi to mention --ansi never as alternative. Bumped docker-py to" }, { "data": "Bumped PyYAML to 5.4.1. Bumped python-dotenv to 0.17.0. (2021-03-23) Made --env-file relative to the current working directory. Environment file paths set with --env-file are now relative to the current working directory and override the default .env file located in the project directory. Fixed missing service property storage_opt by updating the Compose schema. Fixed build extra_hosts list format. Removed additional error message on exec. (2021-02-26) Fixed the OpenSSL version mismatch error when shelling out to the SSH client (via bump to docker-py 4.4.4 which contains the fix). Added missing build flags to the native builder: platform, isolation and extra_hosts. Removed info message on native build. Fixed the log fetching bug when service logging driver is set to 'none'. (2021-02-18) (2021-02-17) Fixed SSH hostname parsing when it contains a leading 's'/'h', and removed the quiet option that was hiding the error (via docker-py bump to 4.4.2). Fixed key error for --no-log-prefix option. Fixed incorrect CLI environment variable name for service profiles: COMPOSEPROFILES instead of COMPOSEPROFILE. Fixed the fish completion. Bumped cryptography to 3.3.2. Removed the log driver filter. For a list of PRs and issues fixed in this release, see Compose 1.28.3. (2021-01-26) Revert to Python 3.7 bump for Linux static builds Add bash completion for docker-compose logs|up --no-log-prefix (2021-01-20) Added support for NVIDIA GPUs through device requests. Added support for service profiles. Changed the SSH connection approach to the Docker CLI by shelling out to the local SSH client. Set the COMPOSEPARAMIKOSSH=1 environment variable to enable the old behavior. Added a flag to disable log prefix. Added a flag for ANSI output control. Docker Compose now uses the native Docker CLI's build command when building images. Set the COMPOSEDOCKERCLI_BUILD=0 environment variable to disable this feature. Made parallel_pull=True by default. Restored the warning for configs in non-swarm mode. Took --file into account when defining project_dir. Fixed a service attach bug on compose up. Added usage metrics. Synced schema with COMPOSE specification. Improved failure report for missing mandatory environment variables. Bumped attrs to 20.3.0. Bumped more_itertools to 8.6.0. Bumped cryptograhy to 3.2.1. Bumped cffi to 1.14.4. Bumped virtualenv to 20.2.2. Bumped bcrypt to 3.2.0. Bumped GitPython to 3.1.11. Bumped docker-py to 4.4.1. Bumped Python to 3.9. Linux: bumped Debian base image from stretch to buster (required for Python 3.9). macOS: Bumped OpenSSL 1.1.1g to 1.1.1h, and Python 3.7.7 to 3.9.0. Bumped PyInstaller to 4.1. Relaxed the restriction on base images to latest minor. Updated READMEs. (2020-09-24) Removed path checks for bind mounts. Fixed port rendering to output long form syntax for non-v1. Added protocol to the Docker socket address. (2020-09-16) Merged maxreplicasper_node on docker-compose config. Fixed depends_on serialization on docker-compose config. Fixed scaling when some containers are not running on docker-compose up. 
Enabled relative paths for driver_opts.device for local driver. Allowed strings for cpus fields. (2020-09-10) (2020-09-10) Fixed docker-compose run when service.scale is specified. Allowed the driver property for external networks as a temporary workaround for the Swarm network propagation issue. Pinned the new internal schema version to 3.9 as the default. Preserved the version number configured in the Compose file. (2020-09-07) Merged 2.x and 3.x Compose formats and aligned with COMPOSE_SPEC schema. Implemented service mode for ipc. Passed COMPOSEPROJECTNAME environment variable in container mode. Made run behave in the same way as up. Used docker build on docker-compose run when COMPOSEDOCKERCLI_BUILD environment variable is set. Used the docker-py default API version for engine queries (auto). Parsed network_mode on build. Ignored build context path validation when building is not" }, { "data": "Fixed float to bytes conversion via docker-py bump to 4.3.1. Fixed the scale bug when the deploy section is set. Fixed docker-py bump in setup.py. Fixed experimental build failure detection. Fixed context propagation to the Docker CLI. Bumped docker-py to 4.3.1. Bumped tox to 3.19.0. Bumped virtualenv to 20.0.30. Added script for Docs synchronization. (2020-07-02) (2020-06-30) Enforced docker-py 4.2.1 as minimum version when installing with pip. Fixed context load for non-docker endpoints. (2020-06-03) Added docker context support. Added missing test dependency ddt to setup.py. Added --attach-dependencies to command up for attaching to dependencies. Allowed compatibility option with COMPOSE_COMPATIBILITY environment variable. Bumped Pytest to 5.3.4 and add refactor compatibility with the new version. Bumped OpenSSL from 1.1.1f to 1.1.1g. Bumped certifi from 2019.11.28 to 2020.4.5.1. Bumped docker-py from 4.2.0 to 4.2.1. Properly escaped values coming from env_files. Synchronized compose-schemas with upstream (docker/cli). Removed None entries on exec command. Added distro package to get distro information. Added python-dotenv to delegate .env file processing. Stopped adjusting output on terminal width when piped into another command. Showed an error message when version attribute is malformed. Fixed HTTPS connection when DOCKER_HOST is remote. (2020-04-10) Bumped OpenSSL from 1.1.1d to 1.1.1f. Added Compose version 3.8. (2020-02-03) Fixed the CI script to enforce the minimal MacOS version to 10.11. Fixed docker-compose exec for keys with no value on environment files. (2020-01-23) Fixed the CI script to enforce the compilation with Python3. Updated the binary's sha256 on the release page. (2020-01-20) Fixed an issue that caused Docker Compose to crash when the version field was set to an invalid value. Docker Compose now displays an error message when invalid values are used in the version field. Fixed an issue that caused Docker Compose to render messages incorrectly when running commands outside a terminal. (2020-01-06) Decoded the APIError explanation to Unicode before using it to create and start a container. Docker Compose discards com.docker.compose.filepaths labels that have None as value. This usually occurs when labels originate from stdin. Added OS X binary as a directory to solve slow start up time issues caused by macOS Catalina binary scan. Passed the HOME environment variable in container mode when running with script/run/run.sh. Docker Compose now reports images that cannot be pulled, however, are required to be built. 
(2019-11-18) Set no-colors to true by changing CLICOLOR env variable to 0. Added working directory, config files, and env file to service labels. Added ARM build dependencies. Added BuildKit support (use DOCKERBUILDKIT=1 and COMPOSEDOCKERCLIBUILD=1). Raised Paramiko to version 2.6.0. Added the following tags: docker-compose:latest, docker-compose:<version>-alpine, and docker-compose:<version>-debian. Raised docker-py to version 4.1.0. Enhanced support for requests, up to version 2.22.0. Removed empty tag on build:cache_from. Dockerfile enhancement that provides for the generation of libmusl binaries for Alpine Linux. Pulling only of images that cannot be built. The scale attribute now accepts 0 as a value. Added a --quiet option and a --no-rm option to the docker-compose build command. Added a --no-interpolate option to the docker-compose config command. Raised OpenSSL for MacOS build from 1.1.0 to 1.1.1c. Added support for the docker-compose.yml file's credential_spec configuration option. Resolution of digests without having to pull the image. Upgraded pyyaml to version 4.2b1. Lowered the severity to warning for instances in which down attempts to remove a non-existent image. Mandated the use of improved API fields for project events, when possible. Updated setup.py for modern pypi/setuptools, and removed pandoc dependencies. Removed Dockerfile.armhf, which is no longer required. Made container service color deterministic, including the removal of the color red. Fixed non-ASCII character errors (Python 2" }, { "data": "Changed image sizing to decimal format, to align with Docker CLI. tty size acquired through Python POSIX support. Fixed same file extends optimization. Fixed stdin_open. Fixed the issue of --remove-orphans being ignored encountered during use with up --no-start option. Fixed docker-compose ps --all command. Fixed the depends_on dependency recreation behavior. Fixed bash completion for the docker-compose build --memory command. Fixed the misleading environmental variables warning that occurs when the docker-compose exec command is performed. Fixed the failure check in the parallelexecutewatch function. Fixed the race condition that occurs following the pulling of an image. Fixed error on duplicate mount points (a configuration error message now displays). Fixed the merge on networks section. Compose container is always connected to stdin by default. Fixed the presentation of failed services on the docker-compose start command when containers are not available. (2019-06-24) This release contains minor improvements and bug fixes. (2019-03-28) Added support for connecting to the Docker Engine using the ssh protocol. Added an --all flag to docker-compose ps to include stopped one-off containers in the command's output. Added bash completion for ps --all|-a. Added support for credential_spec. Added --parallel to docker build's options in bash and zsh completion. Fixed a bug where some valid credential helpers weren't properly handled by Compose when attempting to pull images from private registries. Fixed an issue where the output of docker-compose start before containers were created was misleading. Compose will no longer accept whitespace in variable names sourced from environment files. This matches the Docker CLI behavior. Compose will now report a configuration error if a service attempts to declare duplicate mount points in the volumes section. 
Fixed an issue with the containerized version of Compose that prevented users from writing to stdin during interactive sessions started by run or exec. One-off containers started by run no longer adopt the restart policy of the service, and are instead set to never restart. Fixed an issue that caused some container events to not appear in the output of the docker-compose events command. Missing images will no longer stop the execution of docker-compose down commands. A warning is now displayed instead. Force virtualenv version for macOS CI. Fixed merging of Compose files when network has None config. Fixed CTRL+C issues by enabling bootloaderignoresignals in pyinstaller. Bumped docker-py version to 3.7.2 to fix SSH and proxy configuration issues. Fixed release script and some typos on release documentation. (2018-11-28) Reverted a 1.23.0 change that appended random strings to container names created by docker-compose up, causing addressability issues. Note: Containers created by docker-compose run will continue to use randomly generated names to avoid collisions during parallel runs. Fixed an issue where some dockerfile paths would fail unexpectedly when attempting to build on Windows. Fixed a bug where build context URLs would fail to build on Windows. Fixed a bug that caused run and exec commands to fail for some otherwise accepted values of the --host parameter. Fixed an issue where overrides for the storage_opt and isolation keys in service definitions weren't properly applied. Fixed a bug where some invalid Compose files would raise an uncaught exception during validation. (2018-11-01) Fixed a bug where working with containers created with a version of Compose earlier than 1.23.0 would cause unexpected crashes. Fixed an issue where the behavior of the --project-directory flag would vary depending on which subcommand was used. (2018-10-30) The default naming scheme for containers created by Compose in this version has changed from <project><service><index> to <project><service><index>_<slug>, where <slug> is a randomly-generated hexadecimal" }, { "data": "Please make sure to update scripts relying on the old naming scheme accordingly before upgrading. Logs for containers restarting after a crash will now appear in the output of the up and logs commands. Added --hash option to the docker-compose config command, allowing users to print a hash string for each service's configuration to facilitate rolling updates. Added --parallel flag to the docker-compose build command, allowing Compose to build up to 5 images simultaneously. Output for the pull command now reports status / progress even when pulling multiple images in parallel. For images with multiple names, Compose will now attempt to match the one present in the service configuration in the output of the images command. Fixed an issue where parallel run commands for the same service would fail due to name collisions. Fixed an issue where paths longer than 260 characters on Windows clients would cause docker-compose build to fail. Fixed a bug where attempting to mount /var/run/docker.sock with Docker Desktop for Windows would result in failure. The --project-directory option is now used by Compose to determine where to look for the .env file. docker-compose build no longer fails when attempting to pull an image with credentials provided by the gcloud credential helper. Fixed the --exit-code-from option in docker-compose up to always report the actual exit code even when the watched container is not the cause of the exit. 
Fixed an issue that would prevent recreating a service in some cases where a volume would be mapped to the same mountpoint as a volume declared within the Dockerfile for that image. Fixed a bug that caused hash configuration with multiple networks to be inconsistent, causing some services to be unnecessarily restarted. Fixed a bug that would cause failures with variable substitution for services with a name containing one or more dot characters. Fixed a pipe handling issue when using the containerized version of Compose. Fixed a bug causing external: false entries in the Compose file to be printed as external: true in the output of docker-compose config. Fixed a bug where issuing a docker-compose pull command on services without a defined image key would cause Compose to crash. Volumes and binds are now mounted in the order they are declared in the service definition. (2018-07-17) Introduced version 3.7 of the docker-compose.yml specification. This version requires Docker Engine 18.06.0 or above. Added support for rollback_config in the deploy configuration Added support for the init parameter in service configurations Added support for extension fields in service, network, volume, secret, and config configurations Fixed a bug that prevented deployment with some Compose files when DOCKERDEFAULTPLATFORM was set Compose will no longer try to create containers or volumes with invalid starting characters Fixed several bugs that prevented Compose commands from working properly with containers created with an older version of Compose Fixed an issue with the output of docker-compose config with the --compatibility-mode flag enabled when the source file contains attachable networks Fixed a bug that prevented the gcloud credential store from working properly when used with the Compose binary on UNIX Fixed a bug that caused connection errors when trying to operate over a non-HTTPS TCP connection on Windows Fixed a bug that caused builds to fail on Windows if the Dockerfile was located in a subdirectory of the build context Fixed an issue that prevented proper parsing of UTF-8 BOM encoded Compose files on Windows Fixed an issue with handling of the double-wildcard () pattern in .dockerignore files when using docker-compose build Fixed a bug that caused auth values in legacy" }, { "data": "files to be ignored docker-compose build will no longer attempt to create image names starting with an invalid character (2018-05-03) (2018-04-27) In 1.21.0, we introduced a change to how project names are sanitized for internal use in resource names. This caused issues when manipulating an existing, deployed application whose name had changed as a result. This release properly detects resources using \"legacy\" naming conventions. Fixed an issue where specifying an in-context Dockerfile using an absolute path would fail despite being valid. Fixed a bug where IPAM option changes were incorrectly detected, preventing redeployments. Validation of v2 files now properly checks the structure of IPAM configs. Improved support for credentials stores on Windows to include binaries using extensions other than .exe. The list of valid extensions is determined by the contents of the PATHEXT environment variable. Fixed a bug where Compose would generate invalid binds containing duplicate elements with some v3.2 files, triggering errors at the Engine level during deployment. (2018-04-11) Introduced version 2.4 of the docker-compose.yml specification. This version requires Docker Engine 17.12.0 or above. 
Added support for the platform parameter in service definitions. If supplied, the parameter is also used when performing build for the service. Added support for the cpu_period parameter in service definitions (2.x only). Added support for the isolation parameter in service build configurations. Additionally, the isolation parameter in service definitions is used for builds as well if no build.isolation parameter is defined. (2.x only) Added support for the --workdir flag in docker-compose exec. Added support for the --compress flag in docker-compose build. docker-compose pull is now performed in parallel by default. You can opt out using the --no-parallel flag. The --parallel flag is now deprecated and will be removed in a future version. Dashes and underscores in project names are no longer stripped out. docker-compose build now supports the use of Dockerfile from outside the build context. Compose now checks that the volume's configuration matches the remote volume, and errors out if a mismatch is detected. Fixed a bug that caused Compose to raise unexpected errors when attempting to create several one-off containers in parallel. Fixed a bug with argument parsing when using docker-machine config to generate TLS flags for exec and run commands. Fixed a bug where variable substitution with an empty default value (e.g. ${VAR:-}) would print an incorrect warning. Improved resilience when encoding of the Compose file doesn't match the system's. Users are encouraged to use UTF-8 when possible. Fixed a bug where external overlay networks in Swarm would be incorrectly recognized as inexistent by Compose, interrupting otherwise valid operations. (2018-03-20) Introduced version 3.6 of the docker-compose.yml specification. This version must be used with Docker Engine 18.02.0 or above. Added support for the tmpfs.size property in volume mappings Added support for devicecgrouprules in service definitions Added support for the tmpfs.size property in long-form volume mappings The --build-arg option can now be used without specifying a service in docker-compose build Added a --log-level option to the top-level docker-compose command. Accepted values are debug, info, warning, error, critical. 
Default log level is info docker-compose run now allows users to unset the container's entrypoint Proxy configuration found in the" }, { "data": "file now populates environment and build args for containers created by Compose Added the --use-aliases flag to docker-compose run, indicating that network aliases declared in the service's config should be used for the running container Added the --include-deps flag to docker-compose pull docker-compose run now kills and removes the running container upon receiving SIGHUP docker-compose ps now shows the containers' health status if available Added the long-form --detach option to the exec, run and up commands Fixed .dockerignore handling, notably with regard to absolute paths and last-line precedence rules Fixed an issue where Compose would make costly DNS lookups when connecting to the Engine when using Docker For Mac Fixed a bug introduced in 1.19.0 which caused the default certificate path to not be honored by Compose Fixed a bug where Compose would incorrectly check whether a symlink's destination was accessible when part of a build context Fixed a bug where .dockerignore files containing lines of whitespace caused Compose to error out on Windows Fixed a bug where --tls* and --host options wouldn't be properly honored for interactive run and exec commands A seccomp:<filepath> entry in the security_opt config now correctly sends the contents of the file to the engine ANSI output for up and down operations should no longer affect the wrong lines Improved support for non-unicode locales Fixed a crash occurring on Windows when the user's home directory name contained non-ASCII characters Fixed a bug occurring during builds caused by files with a negative mtime values in the build context Fixed an encoding bug when streaming build progress (2018-02-07) Added --renew-anon-volumes (shorthand -V) to the up command, preventing Compose from recovering volume data from previous containers for anonymous volumes Added limit for number of simultaneous parallel operations, which should prevent accidental resource exhaustion of the server. Default is 64 and can be configured using the COMPOSEPARALLELLIMIT environment variable Added --always-recreate-deps flag to the up command to force recreating dependent services along with the dependency owner Added COMPOSEIGNOREORPHANS environment variable to forgo orphan container detection and suppress warnings Added COMPOSEFORCEWINDOWS_HOST environment variable to force Compose to parse volume definitions as if the Docker host was a Windows system, even if Compose itself is currently running on UNIX Bash completion should now be able to better differentiate between running, stopped and paused services Fixed a bug that would cause the build command to report a connection error when the build context contained unreadable files or FIFO objects. These file types will now be handled appropriately Fixed various issues around interactive run/exec sessions. 
Fixed a bug where setting TLS options with environment and CLI flags simultaneously would result in part of the configuration being ignored Fixed a bug where the DOCKER_TLS_VERIFY environment variable was being ignored by Compose Fixed a bug where the -d and --timeout flags in up were erroneously marked as incompatible Fixed a bug where the recreation of a service would break if the image associated with the previous container had been removed Fixed a bug where updating a mount's target would break Compose when trying to recreate the associated service Fixed a bug where tmpfs volumes declared using the extended syntax in Compose files using version 3.2 would be erroneously created as anonymous volumes instead Fixed a bug where type conversion errors would print a stacktrace instead of exiting gracefully Fixed some errors related to unicode handling Dependent services no longer get recreated along with the dependency owner if their configuration hasn't changed Added better validation of labels fields in Compose files. Label values containing scalar types (number, boolean) now get automatically converted to strings (2017-12-18) Introduced version 3.5 of the docker-compose.yml specification. This version requires Docker Engine 17.06.0 or above Added support for the shm_size parameter in build configurations Added support for the isolation parameter in service definitions Added support for custom names for network, secret and config definitions Added support for extra_hosts in build configuration Added support for the long syntax for volume entries, as previously introduced in the 3.2 format. Using this syntax will create mounts instead of volumes. Added support for the oom_kill_disable parameter in service definitions (2.x only) Added support for custom names for network definitions (2.x only) Values interpolated from the environment will now be converted to the proper type when used in non-string fields. Added support for --label in docker-compose run Added support for --timeout in docker-compose down Added support for --memory in docker-compose build Setting stop_grace_period in service definitions now also sets the container's stop_timeout Fixed an issue where Compose was still handling service hostname according to legacy engine behavior, causing hostnames containing dots to be cut up Fixed a bug where the X-Y:Z syntax for ports was considered invalid by Compose Fixed an issue with CLI logging causing duplicate messages and inelegant output to occur Fixed an issue that caused stop_grace_period to be ignored when using multiple Compose files Fixed a bug that caused docker-compose images to crash when using untagged images Fixed a bug where the valid ${VAR:-} syntax would cause Compose to error out Fixed a bug where env_file entries using an UTF-8 BOM were being read incorrectly Fixed a bug where missing secret files would generate an empty directory in their place Fixed character encoding issues in the CLI's error handlers Added validation for the test field in healthchecks Added validation for the subnet field in IPAM configurations Added validation for volumes properties when using the long syntax in service definitions The CLI now explicitly prevents using -d and --timeout together in docker-compose up (2017-11-01) Introduced version 3.4 of the docker-compose.yml specification. This version must be used with Docker Engine 17.06.0 or above.
Added support for cache_from, network and target options in build configurations Added support for the order parameter in the update_config section Added support for setting a custom name in volume definitions using the name parameter Fixed a bug where extra_hosts values would be overridden by extension files instead of merging together Fixed a bug where the validation for v3.2 files would prevent using the consistency field in service volume definitions Fixed a bug that would cause a crash when configuration fields expecting unique items would contain duplicates Fixed a bug where mount overrides with a different mode would create a duplicate entry instead of overriding the original entry Fixed a bug where build labels declared as a list wouldn't be properly parsed Fixed a bug where the output of docker-compose config would be invalid for some versions if the file contained custom-named external volumes Improved error handling when issuing a build command on Windows using an unsupported file version Fixed an issue where networks with identical names would sometimes be created when running up commands concurrently. (2017-08-31) Introduced version 2.3 of the docker-compose.yml specification. This version requires to be used with Docker Engine 17.06.0 or above. Added support for the target parameter in build configurations Added support for the start_period parameter in healthcheck configurations Added support for the blkio_config parameter in service definitions Added support for setting a custom name in volume definitions using the name parameter (not available for version 2.0) Fixed a bug where nested extends instructions weren't resolved properly, causing \"file not found\" errors Fixed several issues with" }, { "data": "parsing Fixed issues where logs of TTY-enabled services were being printed incorrectly and causing MemoryError exceptions Fixed a bug where printing application logs would sometimes be interrupted by a UnicodeEncodeError exception on Python 3 The $ character in the output of docker-compose config is now properly escaped Fixed a bug where running docker-compose top would sometimes fail with an uncaught exception Fixed a bug where docker-compose pull with the --parallel flag would return a 0 exit code when failing Fixed an issue where keys in deploy.resources were not being validated Fixed an issue where the logging options in the output of docker-compose config would be set to null, an invalid value Fixed the output of the docker-compose images command when an image would come from a private repository using an explicit port number Fixed the output of docker-compose config when a port definition used 0 as the value for the published port (2017-07-26) The pid option in a service's definition now supports a service:<name> value. Added support for the storage_opt parameter in service definitions. This option is not available for the v3 format Added --quiet flag to docker-compose pull, suppressing progress output Some improvements to CLI output Volumes specified through the --volume flag of docker-compose run now complement volumes declared in the service's definition instead of replacing them Fixed a bug where using multiple Compose files would unset the scale value defined inside the Compose file. 
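To make some of the build options (target, cache_from) and the healthcheck start_period parameter mentioned above concrete, here is a rough sketch; the stage name, image, and probe command are assumptions, not part of the release notes:
```
version: '3.4'
services:
  api:
    build:
      context: .
      target: runtime               # hypothetical multi-stage build target
      cache_from:
        - example/api:latest        # hypothetical image used as a layer cache
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost/health']
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s             # grace period before failures count
```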
Fixed an issue where the credHelpers entries in the config.json file were not being honored by Compose Fixed a bug where using multiple Compose files with port declarations would cause failures in Python 3 environments Fixed a bug where some proxy-related options present in the user's environment would prevent Compose from running Fixed an issue where the output of docker-compose config would be invalid if the original file used Y or N values Fixed an issue preventing up operations on a previously created stack on Windows Engine. (2017-06-19) Added shorthand -u for --user flag in docker-compose exec Differences in labels between the Compose file and remote network will now print a warning instead of preventing redeployment. Fixed a bug where service's dependencies were being rescaled to their default scale when running a docker-compose run command Fixed a bug where docker-compose rm with the --stop flag was not behaving properly when provided with a list of services to remove Fixed a bug where cache_from in the build section would be ignored when using more than one Compose file. Fixed a bug that prevented binding the same port to different IPs when using more than one Compose file. Fixed a bug where override files would not be picked up by Compose if they had the .yaml extension Fixed a bug on Windows Engine where networks would be incorrectly flagged for recreation Fixed a bug where services declaring ports would cause crashes on some versions of Python 3 Fixed a bug where the output of docker-compose config would sometimes contain invalid port definitions (2017-05-02) Introduced version 2.2 of the docker-compose.yml specification. This version requires to be used with Docker Engine 1.13.0 or above Added support for init in service definitions. Added support for scale in service definitions. 
The configuration's value can be overridden using the --scale flag in docker-compose" }, { "data": "The scale command is disabled for this file format Fixed a bug where paths provided to compose via the -f option were not being resolved properly Fixed a bug where the extip::targetport notation in the ports section was incorrectly marked as invalid Fixed an issue where the exec command would sometimes not return control to the terminal when using the -d flag Fixed a bug where secrets were missing from the output of the config command for v3.2 files Fixed an issue where docker-compose would hang if no internet connection was available Fixed an issue where paths containing unicode characters passed via the -f flag were causing Compose to crash Fixed an issue where the output of docker-compose config would be invalid if the Compose file contained external secrets Fixed a bug where using --exit-code-from with up would fail if Compose was installed in a Python 3 environment Fixed a bug where recreating containers using a combination of tmpfs and volumes would result in an invalid config state (2017-04-04) Introduced version 3.2 of the docker-compose.yml specification Added support for cache_from in the build section of services Added support for the new expanded ports syntax in service definitions Added support for the new expanded volumes syntax in service definitions Added --volumes option to docker-compose config that lists named volumes declared for that project Added support for mem_reservation in service definitions (2.x only) Added support for dns_opt in service definitions (2.x only) Added a new docker-compose images command that lists images used by the current project's containers Added a --stop (shorthand -s) option to docker-compose rm that stops the running containers before removing them Added a --resolve-image-digests option to docker-compose config that pins the image version for each service to a permanent digest Added a --exit-code-from SERVICE option to docker-compose up. When used, docker-compose will exit on any container's exit with the code corresponding to the specified service's exit code Added a --parallel option to docker-compose pull that enables images for multiple services to be pulled simultaneously Added a --build-arg option to docker-compose build Added a --volume <volume_mapping> (shorthand -v) option to docker-compose run to declare runtime volumes to be mounted Added a --project-directory PATH option to docker-compose that will affect path resolution for the project When using --abort-on-container-exit in docker-compose up, the exit code for the container that caused the abort will be the exit code of the docker-compose up command Users can now configure which path separator character they want to use to separate the COMPOSE_FILE environment value using the COMPOSEPATHSEPARATOR environment variable Added support for port range to a single port in port mappings, such as 8000-8010:80. 
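A short, illustrative example of the expanded ports syntax and the range-to-single-port mapping described above (the values are arbitrary):
```
version: '3.2'
services:
  web:
    image: nginx:alpine            # hypothetical
    ports:
      - target: 80                 # long syntax introduced in the 3.2 format
        published: 8080
        protocol: tcp
        mode: host
      - '8000-8010:80'             # host port range mapped to a single container port
```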
docker-compose run --rm now removes anonymous volumes after execution, matching the behavior of docker run" }, { "data": "Fixed a bug where override files containing port lists would cause a TypeError to be raised Fixed a bug where the deploy key would be missing from the output of docker-compose config Fixed a bug where scaling services up or down would sometimes re-use obsolete containers Fixed a bug where the output of docker-compose config would be invalid if the project declared anonymous volumes Variable interpolation now properly occurs in the secrets section of the Compose file The secrets section now properly appears in the output of docker-compose config Fixed a bug where changes to some networks properties would not be detected against previously created networks Fixed a bug where docker-compose would crash when trying to write into a closed pipe Fixed an issue where Compose would not pick up on the value of COMPOSETLSVERSION when used in combination with command-line TLS flags (2017-02-17) Fixed a bug that was preventing secrets configuration from being loaded properly Fixed a bug where the docker-compose config command would fail if the config file contained secrets definitions Fixed an issue where Compose on some linux distributions would pick up and load an outdated version of the requests library Fixed an issue where socket-type files inside a build folder would cause docker-compose to crash when trying to build that service Fixed an issue where recursive wildcard patterns were not being recognized in .dockerignore files. (2017-02-09) (2017-02-08) Fixed a bug where extending a service defining a healthcheck dictionary would cause docker-compose to error out. Fixed an issue where the pid entry in a service definition was being ignored when using multiple Compose files. (2017-02-01) Fixed an issue where the presence of older versions of the docker-py package would cause unexpected crashes while running Compose Fixed an issue where healthcheck dependencies would be lost when using multiple compose files for a project Fixed a few issues that made the output of the config command invalid Fixed an issue where adding volume labels to v3 Compose files would result in an error Fixed an issue on Windows where build context paths containing unicode characters were being improperly encoded Fixed a bug where Compose would occasionally crash while streaming logs when containers would stop or restart (2017-01-18) Healthcheck configuration can now be done in the service definition using the healthcheck parameter Containers dependencies can now be set up to wait on positive healthchecks when declared using depends_on. See the documentation for the updated syntax. Note: This feature will not be ported to version 3 Compose files. Added support for the sysctls parameter in service definitions Added support for the userns_mode parameter in service definitions Compose now adds identifying labels to networks and volumes it creates Colored output now works properly on Windows. Fixed a bug where docker-compose run would fail to set up link aliases in interactive mode on Windows. Networks created by Compose are now always made attachable (Compose files v2.1 and up). Fixed a bug where falsy values of COMPOSECONVERTWINDOWS_PATHS (0, false, empty value) were being interpreted as true. 
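To illustrate the healthcheck and depends_on behavior described above, a 2.1-format file can gate startup on a healthy dependency. This is only a sketch; the images and the probe command are assumptions:
```
version: '2.1'
services:
  db:
    image: postgres:9.6                             # hypothetical
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 10s
      timeout: 5s
      retries: 5
  web:
    image: example/web:latest                       # hypothetical
    depends_on:
      db:
        condition: service_healthy                  # wait for a passing healthcheck
```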
Fixed a bug where forward slashes in some .dockerignore patterns weren't being parsed correctly on Windows (2016-11-16) Breaking changes Interactive mode for docker-compose run and docker-compose exec is now supported on Windows platforms. The docker binary is required to be present on the system for this feature to work. Introduced version 2.1 of the docker-compose.yml specification. This version requires to be used with Docker Engine 1.12 or above. Added support for the groupadd and oomscore_adj parameters in service definitions. Added support for the internal and enable_ipv6 parameters in network definitions. Compose now defaults to using the npipe protocol on Windows. Overriding a logging configuration will now properly merge the options mappings if the driver values do not conflict. Fixed several bugs related to npipe protocol support on Windows. Fixed an issue with Windows paths being incorrectly converted when using Docker on Windows Server. Fixed a bug where an empty restart value would sometimes result in an exception being raised. Fixed an issue where service logs containing unicode characters would sometimes cause an error to occur. Fixed a bug where unicode values in environment variables would sometimes raise a unicode exception when retrieved. Fixed an issue where Compose would incorrectly detect a configuration mismatch for overlay networks. (2016-09-22) Fixed a bug where users using a credentials store were not able to access their private images. Fixed a bug where users using identity tokens to authenticate were not able to access their private images. Fixed a bug where an HttpHeaders entry in the docker configuration file would cause Compose to crash when trying to build an image. Fixed a few bugs related to the handling of Windows paths in volume binding" }, { "data": "Fixed a bug where Compose would sometimes crash while trying to read a streaming response from the engine. Fixed an issue where Compose would crash when encountering an API error while streaming container logs. Fixed an issue where Compose would erroneously try to output logs from drivers not handled by the Engine's API. Fixed a bug where options from the docker-machine config command would not be properly interpreted by Compose. Fixed a bug where the connection to the Docker Engine would sometimes fail when running a large number of services simultaneously. Fixed an issue where Compose would sometimes print a misleading suggestion message when running the bundle command. Fixed a bug where connection errors would not be handled properly by Compose during the project initialization phase. Fixed a bug where a misleading error would appear when encountering a connection timeout. (2016-06-14) As announced in 1.7.0, docker-compose rm now removes containers created by docker-compose run by default. Setting entrypoint on a service now empties out any default command that was set on the image (i.e. any CMD instruction in the Dockerfile used to build it). This makes it consistent with the --entrypoint flag to docker run. Added docker-compose bundle, a command that builds a bundle file to be consumed by the new Docker Stack commands in Docker 1.12. Added docker-compose push, a command that pushes service images to a registry. Compose now supports specifying a custom TLS version for interaction with the Docker Engine using the COMPOSETLSVERSION environment variable. Fixed a bug where Compose would erroneously try to read .env at the project's root when it is a directory. 
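A minimal sketch of the entrypoint change noted above (the image and paths are invented): overriding entrypoint also discards the image's default CMD, so any command must be restated explicitly:
```
version: '2'
services:
  worker:
    image: example/worker:latest       # hypothetical image that ships its own CMD
    entrypoint: /usr/local/bin/entry.sh
    command: ['--verbose']             # re-declared because the image CMD is cleared
```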
docker-compose run -e VAR now passes VAR through from the shell to the container, as with docker run -e VAR. Improved config merging when multiple compose files are involved for several service sub-keys. Fixed a bug where volume mappings containing Windows drives would sometimes be parsed incorrectly. Fixed a bug in Windows environment where volume mappings of the host's root directory would be parsed incorrectly. Fixed a bug where docker-compose config would output an invalid Compose file if external networks were specified. Fixed an issue where unset buildargs would be assigned a string containing 'None' instead of the expected empty value. Fixed a bug where yes/no prompts on Windows would not show before receiving input. Fixed a bug where trying to docker-compose exec on Windows without the -d option would exit with a stacktrace. This will still fail for the time being, but should do so gracefully. Fixed a bug where errors during docker-compose up would show an unrelated stacktrace at the end of the process. docker-compose create and docker-compose start show more descriptive error messages when something goes wrong. (2016-05-04) Fixed a bug where the output of docker-compose config for v1 files would be an invalid configuration file. Fixed a bug where docker-compose config would not check the validity of links. Fixed an issue where docker-compose help would not output a list of available commands and generic options as expected. Fixed an issue where filtering by service when using docker-compose logs would not apply for newly created services. Fixed a bug where unchanged services would sometimes be recreated in in the up phase when using Compose with Python 3. Fixed an issue where API errors encountered during the up phase would not be recognized as a failure state by Compose. Fixed a bug where Compose would raise a NameError because of an undefined exception name on non-Windows platforms. Fixed a bug where the wrong version of docker-py would sometimes be installed alongside" }, { "data": "Fixed a bug where the host value output by docker-machine config default would not be recognized as valid options by the docker-compose command line. Fixed an issue where Compose would sometimes exit unexpectedly while reading events broadcasted by a Swarm cluster. Corrected a statement in the docs about the location of the .env file, which is indeed read from the current directory, instead of in the same location as the Compose file. (2016-04-13) docker-compose logs no longer follows log output by default. It now matches the behavior of docker logs and exits after the current logs are printed. Use -f to get the old default behavior. Booleans are no longer allows as values for mappings in the Compose file (for keys environment, labels and extra_hosts). Previously this was a warning. Boolean values should be quoted so they become string values. Compose now looks for a .env file in the directory where it's run and reads any environment variables defined inside, if they're not already set in the shell environment. This lets you easily set defaults for variables used in the Compose file, or for any of the COMPOSE_* or DOCKER_* variables. Added a --remove-orphans flag to both docker-compose up and docker-compose down to remove containers for services that were removed from the Compose file. Added a --all flag to docker-compose rm to include containers created by docker-compose run. This will become the default behavior in the next version of Compose. 
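As a hedged sketch of the quoting rule and the .env defaults described above (the variable names are arbitrary):
```
version: '2'
services:
  app:
    image: busybox
    environment:
      DEBUG: 'true'             # quoted so it stays a string; bare booleans now fail validation
      RACK_ENV: ${RACK_ENV}     # can be defaulted from a .env file in the project directory
    labels:
      com.example.enabled: 'yes'
```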
Added support for all the same TLS configuration flags used by the docker client: --tls, --tlscert, --tlskey, etc. Compose files now support the tmpfs and shm_size options. Added the --workdir flag to docker-compose run docker-compose logs now shows logs for new containers that are created after it starts. The COMPOSE_FILE environment variable can now contain multiple files, separated by the host system's standard path separator (: on Mac/Linux, ; on Windows). You can now specify a static IP address when connecting a service to a network with the ipv4address and ipv6address options. Added --follow, --timestamp, and --tail flags to the docker-compose logs command. docker-compose up, and docker-compose start will now start containers in parallel where possible. docker-compose stop now stops containers in reverse dependency order instead of all at once. Added the --build flag to docker-compose up to force it to build a new image. It now shows a warning if an image is automatically built when the flag is not used. Added the docker-compose exec command for executing a process in a running container. docker-compose down now removes containers created by docker-compose run. A more appropriate error is shown when a timeout is hit during up when using a tty. Fixed a bug in docker-compose down where it would abort if some resources had already been removed. Fixed a bug where changes to network aliases would not trigger a service to be recreated. Fix a bug where a log message was printed about creating a new volume when it already existed. Fixed a bug where interrupting up would not always shut down containers. Fixed a bug where logopt and logdriver were not properly carried over when extending services in the v1 Compose file format. Fixed a bug where empty values for build args would cause file validation to fail. (2016-02-23) (2016-02-23) Fixed a bug where recreating a container multiple times would cause the new container to be started without the previous" }, { "data": "Fixed a bug where Compose would set the value of unset environment variables to an empty string, instead of a key without a value. Provide a better error message when Compose requires a more recent version of the Docker API. Add a missing config field network.aliases which allows setting a network scoped alias for a service. Fixed a bug where run would not start services listed in depends_on. Fixed a bug where networks and network_mode where not merged when using extends or multiple Compose files. Fixed a bug with service aliases where the short container id alias was only contained 10 characters, instead of the 12 characters used in previous versions. Added a missing log message when creating a new named volume. Fixed a bug where build.args was not merged when using extends or multiple Compose files. Fixed some bugs with config validation when null values or incorrect types were used instead of a mapping. Fixed a bug where a build section without a context would show a stack trace instead of a helpful validation message. Improved compatibility with swarm by only setting a container affinity to the previous instance of a services' container when the service uses an anonymous container volume. Previously the affinity was always set on all containers. Fixed the validation of some driver_opts would cause an error if a number was used instead of a string. Some improvements to the run.sh script used by the Compose container install option. 
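For the static IP options mentioned above, a rough example; the addresses and network name are made up:
```
version: '2'
services:
  app:
    image: busybox
    networks:
      app_net:
        ipv4_address: 172.16.238.10    # fixed address on the user-defined network
networks:
  app_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.238.0/24
```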
Fixed a bug with up --abort-on-container-exit where Compose would exit, but would not stop other containers. Corrected the warning message that is printed when a boolean value is used as a value in a mapping. (2016-01-15) Compose 1.6 introduces a new format for docker-compose.yml which lets you define networks and volumes in the Compose file as well as services. It also makes a few changes to the structure of some configuration options. You don't have to use it - your existing Compose files will run on Compose 1.6 exactly as they do today. Check the upgrade guide for full details. Support for networking has exited experimental status and is the recommended way to enable communication between containers. If you use the new file format, your app will use networking. If you aren't ready yet, just leave your Compose file as it is and it'll continue to work just the same. By default, you don't have to configure any networks. In fact, using networking with Compose involves even less configuration than using links. Consult the networking guide for how to use it. The experimental flags --x-networking and --x-network-driver, introduced in Compose 1.5, have been removed. You can now pass arguments to a build if you're using the new file format: ``` build: context: . args: buildno: 1 ``` You can now specify both a build and an image key if you're using the new file format. docker-compose build will build the image and tag it with the name you've specified, while docker-compose pull will attempt to pull it. There's a new events command for monitoring container events from the application, much like docker events. This is a good primitive for building tools on top of Compose for performing actions when particular things happen, such as containers starting and stopping. There's a new depends_on option for specifying dependencies between services. This enforces the order of startup, and ensures that when you run docker-compose up SERVICE on a service with dependencies, those are started as well. Added a new command config which validates and prints the Compose configuration after interpolating variables, resolving relative paths, and merging multiple files and" }, { "data": "Added a new command create for creating containers without starting them. Added a new command down to stop and remove all the resources created by up in a single command. Added support for the cpu_quota configuration option. Added support for the stop_signal configuration option. Commands start, restart, pause, and unpause now exit with an error status code if no containers were modified. Added a new --abort-on-container-exit flag to up which causes up to stop all container and exit once the first container exits. Removed support for FIGFILE, FIGPROJECT_NAME, and no longer reads fig.yml as a default Compose file location. Removed the migrate-to-labels command. Removed the --allow-insecure-ssl flag. Fixed a validation bug that prevented the use of a range of ports in the expose field. Fixed a validation bug that prevented the use of arrays in the entrypoint field if they contained duplicate entries. Fixed a bug that caused ulimits to be ignored when used with extends. Fixed a bug that prevented ipv6 addresses in extra_hosts. Fixed a bug that caused extends to be ignored when included from multiple Compose files. Fixed an incorrect warning when a container volume was defined in the Compose file. Fixed a bug that prevented the force shutdown behavior of up and logs. 
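Putting the 1.6 additions together, a small sketch of the new file format with top-level volumes and networks plus depends_on (the service names are illustrative):
```
version: '2'
services:
  web:
    build: .
    depends_on:
      - db                # started before web when running docker-compose up web
    networks:
      - back
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - back
volumes:
  dbdata: {}
networks:
  back: {}
```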
Fixed a bug that caused None to be printed as the network driver name when the default network driver was used. Fixed a bug where using the string form of dns or dns_search would cause an error. Fixed a bug where a container would be reported as \"Up\" when it was in the restarting state. Fixed a confusing error message when DOCKERCERTPATH was not set properly. Fixed a bug where attaching to a container would fail if it was using a non-standard logging driver (or none at all). (2015-12-03) Fixed a bug which broke the use of environment and env_file with extends, and caused environment keys without values to have a None value, instead of a value from the host environment. Fixed a regression in 1.5.1 that caused a warning about volumes to be raised incorrectly when containers were recreated. Fixed a bug which prevented building a Dockerfile that used ADD <url> Fixed a bug with docker-compose restart which prevented it from starting stopped containers. Fixed handling of SIGTERM and SIGINT to properly stop containers Add support for using a url as the value of build Improved the validation of the expose option (2015-11-12) Add the --force-rm option to build. Add the ulimit option for services in the Compose file. Fixed a bug where up would error with \"service needs to be built\" if a service changed from using image to using build. Fixed a bug that would cause incorrect output of parallel operations on some terminals. Fixed a bug that prevented a container from being recreated when the mode of a volumes_from was changed. Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause up or logs to crash. Fixed a regression in 1.5.0 where Compose would use a success exit status code when a command fails due to an HTTP timeout communicating with the docker daemon. Fixed a regression in 1.5.0 where name was being accepted as a valid service option which would override the actual name of the service. When using --x-networking Compose no longer sets the hostname to the container name. When using --x-networking Compose will only create the default network if at least one container is using the network. When printings logs during up or logs, flush the output buffer after each line to prevent buffering issues from hiding" }, { "data": "Recreate a container if one of its dependencies is being created. Previously a container was only recreated if it's dependencies already existed, but were being recreated as well. Add a warning when a volume in the Compose file is being ignored and masked by a container volume from a previous container. Improve the output of pull when run without a tty. When using multiple Compose files, validate each before attempting to merge them together. Previously invalid files would result in not helpful errors. Allow dashes in keys in the environment service option. Improve validation error messages by including the filename as part of the error message. (2015-11-03) With the introduction of variable substitution support in the Compose file, any Compose file that uses an environment variable ($VAR or ${VAR}) in the command: or entrypoint: field will break. Previously these values were interpolated inside the container, with a value from the container environment. In Compose 1.5.0, the values will be interpolated on the host, with a value from the host environment. To migrate a Compose file to 1.5.0, escape the variables with an extra $ (ex: $$VAR or $${VAR}). 
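To make the migration note above concrete, a v1-format sketch; the variable names are placeholders:
```
web:
  image: busybox
  environment:
    GREETING: ${GREETING}                        # interpolated on the host when the file is parsed
  command: sh -c 'echo running on $$HOSTNAME'    # $$ escapes, so the container shell sees $HOSTNAME
```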
See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution Compose is now available for Windows. Environment variables can be used in the Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution Multiple compose files can be specified, allowing you to override settings in the default Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/reference/docker-compose.md for more details. Compose now produces better error messages when a file contains invalid configuration. up now waits for all services to exit before shutting down, rather than shutting down as soon as one container exits. Experimental support for the new docker networking system can be enabled with the --x-networking flag. Read more here: https://github.com/docker/docker/blob/8fee1c20/docs/userguide/dockernetworks.md You can now optionally pass a mode to volumes_from. For example, volumes_from: [\"servicename:ro\"]. Since Docker now lets you create volumes with names, you can refer to those volumes by name in docker-compose.yml. For example, volumes: [\"mydatavolume:/data\"] will mount the volume named mydatavolume at the path /data inside the container. If the first component of an entry in volumes starts with a ., / or ~, it is treated as a path and expansion of relative paths is performed as necessary. Otherwise, it is treated as a volume name and passed straight through to Docker. Read more on named volumes and volume drivers here: https://github.com/docker/docker/blob/244d9c33/docs/userguide/dockervolumes.md docker-compose build --pull instructs Compose to pull the base image for each Dockerfile before building. docker-compose pull --ignore-pull-failures instructs Compose to continue if it fails to pull a single service's image, rather than aborting. You can now specify an IPC namespace in docker-compose.yml with the ipc option. Containers created by docker-compose run can now be named with the --name flag. If you install Compose with pip or use it as a library, it now works with Python 3. image now supports image digests (in addition to ids and tags). For example, image: \"busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d\" ports now supports ranges of ports. For example, ``` ports: \"3000-3005\" \"9000-9001:8000-8001\" ``` docker-compose run now supports a -p|--publish parameter, much like docker run -p, for publishing specific ports to the host. docker-compose pause and docker-compose unpause have been implemented, analogous to docker pause and docker unpause. When using extends to copy configuration from another service in the same Compose file, you can omit the file option. Compose can be installed and run as a Docker image. This is an experimental feature. All values for the log_driver option which are supported by the Docker daemon are now supported by" }, { "data": "docker-compose build can now be run successfully against a Swarm cluster. (2015-09-22) (2015-09-10) (2015-08-04) By default, docker-compose up now only recreates containers for services whose configuration has changed since they were created. This should result in a dramatic speed-up for many applications. The experimental --x-smart-recreate flag which introduced this feature in Compose 1.3.0 has been removed, and a --force-recreate flag has been added for when you want to recreate everything. 
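A brief sketch of the path-versus-volume-name rule described above (the paths and names are invented):
```
db:
  image: postgres
  volumes:
    - ./config:/etc/postgresql     # starts with '.', so it is treated as a host path
    - mydatavolume:/data           # bare name, passed through to Docker as a named volume
```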
Several of Compose's commands - scale, stop, kill and rm - now perform actions on multiple containers in parallel, rather than in sequence, which will run much faster on larger applications. You can now specify a custom name for a service's container with container_name. Because Docker container names must be unique, this means you can't scale the service beyond one container. You no longer have to specify a file option when using extends - it will default to the current file. Service names can now contain dots, dashes and underscores. Compose can now read YAML configuration from standard input, rather than from a file, by specifying - as the filename. This makes it easier to generate configuration dynamically: ``` $ echo 'redis: {\"image\": \"redis\"}' | docker-compose --file - up ``` There's a new docker-compose version command which prints extended information about Compose's bundled dependencies. docker-compose.yml now supports logopt as well as logdriver, allowing you to pass extra configuration to a service's logging driver. docker-compose.yml now supports memswap_limit, similar to docker run --memory-swap. When mounting volumes with the volumes option, you can now pass in any mode supported by the daemon, not just :ro or :rw. For example, SELinux users can pass :z or :Z. You can now specify a custom volume driver with the volume_driver option in docker-compose.yml, much like docker run --volume-driver. A bug has been fixed where Compose would fail to pull images from private registries serving plain (unsecured) HTTP. The --allow-insecure-ssl flag, which was previously used to work around this issue, has been deprecated and now has no effect. A bug has been fixed where docker-compose build would fail if the build depended on a private Hub image or an image from a private registry. A bug has been fixed where Compose would crash if there were containers which the Docker daemon had not finished removing. Two bugs have been fixed where Compose would sometimes fail with a \"Duplicate bind mount\" error, or fail to attach volumes to a container, if there was a volume path specified in docker-compose.yml with a trailing slash. Thanks @mnowster, @dnephin, @ekristen, @funkyfuture, @jeffk and @lukemarsden! (2015-07-15) (2015-07-14) Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens! (2015-06-21) (2015-06-18) This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app - see the upgrading section of the install docs for details. Compose now requires Docker 1.6.0 or later. Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools. Compose no longer uses \"intermediate containers\" when recreating containers for a service. This makes docker-compose up less complex and more resilient to failure. docker-compose up has an experimental new behavior: it will only recreate containers for services whose configuration has changed in docker-compose.yml. 
This will eventually become the default, but for now you can take it for a spin: ``` $ docker-compose up --x-smart-recreate ``` When invoked in a subdirectory of a project, docker-compose will now climb up through parent directories until it finds a" }, { "data": "Several new configuration keys have been added to docker-compose.yml: Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily! (2015-04-16) docker-compose.yml now supports an extends option, which enables a service to inherit configuration from another service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the documentation. When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use build. Repeated invocations of docker-compose up when using Compose with a Swarm cluster now work reliably. Directories passed to build, filenames passed to env_file and volume host paths passed to volumes are now treated as relative to the directory of the configuration file, not the directory that docker-compose is being run in. In the majority of cases, those are the same, but if you use the -f|--file argument to specify a configuration file in another directory, this is a breaking change. A service can now share another service's network namespace with net: container:<service>. volumes_from and net: container:<service> entries are taken into account when resolving dependencies, so docker-compose up <service> will correctly start all dependencies of <service>. docker-compose run now accepts a --user argument to specify a user to run the command as, just like docker run. The up, stop and restart commands now accept a --timeout (or -t) argument to specify how long to wait when attempting to gracefully stop containers, just like docker stop. docker-compose rm now accepts -f as a shorthand for --force, just like docker rm. Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc! (2015-02-25) Fig has been renamed to Docker Compose, or just Compose for short. This has several implications for you: Besides that, theres a lot of new stuff in this release: Weve made a few small changes to ensure that Compose will work with Swarm, Dockers new clustering tool ( https://github.com/docker/swarm). Eventually you'll be able to point Compose at a Swarm cluster instead of a standalone Docker host and itll run your containers on the cluster with no extra work from you. As Swarm is still developing, integration is rough and lots of Compose features don't work yet. docker-compose run now has a --service-ports flag for exposing ports on the given service. This is useful for running your webapp with an interactive debugger, for example. You can now link to containers outside your app with the external_links option in docker-compose.yml. You can now prevent docker-compose up from automatically building images with the --no-build option. This will make fewer API calls and run faster. 
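As a hedged illustration of the extends option described above (the file and service names are hypothetical), one service can inherit configuration from another file:
```
# common.yml
app:
  image: example/app        # hypothetical shared base service
  environment:
    LOG_LEVEL: info

# docker-compose.yml
web:
  extends:
    file: common.yml
    service: app
  ports:
    - '8000:8000'
```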
If you don't specify a tag when using the image key, Compose will default to the latest tag, rather than pulling all tags. docker-compose kill now supports the -s flag, allowing you to specify the exact signal you want to send to a service's containers. docker-compose.yml now has an env_file key, analogous to docker run --env-file, letting you specify multiple environment variables in a separate file. This is great if you have a lot of them, or if you want to keep sensitive information out of version control. docker-compose.yml now supports the dns_search, cap_add, cap_drop, cpu_shares and restart options, analogous to docker run's --dns-search, --cap-add, --cap-drop, --cpu-shares and --restart options. Compose now ships with Bash tab completion - see the installation and usage docs at https://github.com/docker/compose/blob/1.1.0/docs/completion.md A number of bugs have been fixed - see the milestone for details: https://github.com/docker/compose/issues?q=milestone%3A1.1.0+ Thanks @dnephin, @squebe, @jbalonso, @raulcd, @benlangfield, @albers, @ggtools, @bersace, @dtenenba, @petercv, @drewkett, @TFenby, @paulRbr, @Aigeruth and @salehe! (2014-11-04) (2014-10-16) The highlights: Fig has joined Docker. Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself. This means the GitHub repository has moved to https://github.com/docker/fig and our IRC channel is now #docker-fig on Freenode. Fig can be used with the official Docker OS X installer. Boot2Docker will mount the home directory from your host machine so volumes work as expected. Fig supports Docker 1.3. It is now possible to connect to the Docker daemon using TLS by using the DOCKER_CERT_PATH and DOCKER_TLS_VERIFY environment variables. There is a new fig port command which outputs the host port binding of a service, in a similar way to docker port. There is a new fig pull command which pulls the latest images for a service. There is a new fig restart command which restarts a service's containers. Fig creates multiple containers in a service by appending a number to the service name. For example, db_1, db_2. As a convenience, Fig will now give the first container an alias of the service name. For example, db. This link alias is also a valid hostname and added to /etc/hosts so you can connect to linked services using their hostname. For example, instead of resolving the environment variables DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT, you could just use the hostname db and port 5432 directly. Volume definitions now support ro mode, expanding ~ and expanding environment variables. .dockerignore is supported when building. The project name can be set with the FIG_PROJECT_NAME environment variable. The --env and --entrypoint options have been added to fig run. The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy. Other things: Thanks @dnephin, @d11wtq, @marksteve, @rubbish, @jbalonso, @timfreund, @alunduil, @mieciu, @shuron, @moss, @suzaku and @chmouel! Whew. (2014-07-28) Thanks @dnephin and @marksteve! (2014-07-11) Thanks @ryanbrainard and @d11wtq! (2014-07-11) Fig now starts links when you run fig run or fig up. For example, if you have a web service which depends on a db service, fig run web ... will start the db service. Environment variables can now be resolved from the environment that Fig is running in.
Just specify it as a blank variable in your fig.yml and, if set, it'll be resolved: ``` environment: RACK_ENV: development SESSION_SECRET:``` volumes_from is now supported in fig.yml. All of the volumes from the specified services and containers will be mounted: ``` volumes_from: service_name container_name``` A host address can now be specified in ports: ``` ports: \"0.0.0.0:8000:8000\" \"127.0.0.1:8001:8001\"``` The net and workdir options are now supported in fig.yml. The hostname option now works in the same way as the Docker CLI, splitting out into a domainname option. TTY behavior is far more robust, and resizes are supported correctly. Load YAML files safely. Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release! (2014-06-18) (2014-05-08) (2014-04-29) (2014-03-05) (2014-03-04) (2014-03-03) Thanks @marksteve, @Gazler and @teozkr! (2014-02-17) Thanks to @barnybug and @dustinlacewell for their work on this release. (2014-02-04) (2014-01-31) Big thanks to @cameronmaske, @mrchrisadams and @damianmoore for their help with this release. (2014-01-27) (2014-01-23) (2014-01-22) (2014-01-17) (2014-01-16) Big thanks to @tomstuart, @EnTeQuAk, @schickling, @aronasorman and @GeoffreyPlitt. (2014-01-02) (2013-12-20) Initial release. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "install.md", "project_name": "Docker Compose", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can enhance your teams' builds with a Build Cloud subscription. This page describes the features available for the different subscription tiers. To compare features available for each tier, see Docker Build Cloud pricing. If you have an existing Docker Core subscription, a base level of Build Cloud minutes and cache are included. The features available vary depending on your Docker Core subscription tier. You can buy Docker Build Cloud Team if you dont have a Docker Core subscription, or upgrade any Docker Core tier to enhance your developers' experience with the following features: The Docker Build Cloud Team subscription is tied to a Docker organization. To use the build minutes or shared cache of a Docker Build Cloud Team subscription, users must be a part of the organization associated with the subscription. See Manage seats and invites. To learn how to buy this subscription for your Docker organization, see Buy your subscription - existing account or organization. If you havent created a Docker organization yet and dont have an existing Docker Core subscription, see Buy your subscription - new organization. For organizations without a Docker Core subscription, this plan also includes 50 shared minutes in addition to the Docker Build Cloud Team minutes. For enterprise features such as paying via invoice and additional build minutes, contact sales. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "github-privacy-statement.md", "project_name": "Fabric8 Kubernetes Client", "subcategory": "Application Definition & Image Build" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Gefyra", "subcategory": "Application Definition & Image Build" }
[ { "data": "This guide describes the usage of Gefyra for the local development of a Kubernetes application running in Minikube. Important: This getting started guide for Minikube requires Gefyra in version >=2.0.0+. ``` minikube start``` Important: the following example does not fully work with --driver=qemu since minikube service is not currently implemented with the qemu2 driver. See https://github.com/kubernetes/minikube/issues/14146 for details. Tested drivers are: docker, kvm2, kvm, virtualbox. Others are potentially working, but are not tested. After some time of downloading the required resources, the cluster will be running. You may enable the required addons based on your requirements. The kubectl context is immediately set to this cluster. You can check if kubectl config current-context is set to minikube. ``` kubectl config current-context``` ``` kubectl apply -f https://raw.githubusercontent.com/gefyrahq/gefyra/main/testing/workloads/hello.yaml``` ``` kubectl expose deployment hello-nginxdemo --type=NodePort --port=80``` ``` minikube service hello-nginxdemo``` ``` gefyra up --minikube``` Important: The --minikube switch detects all required connection parameters from your local cluster. The connection won't work if this switch is missing when working with Minikube. File ./Dockerfile ``` FROM ubuntu# run a server on port 8000RUN apt update && apt install -y iproute2 iputils-ping python3 traceroute wget curlCOPY local.py local.pyCMD python3 local.py``` File ./local.py ``` import http.serverimport signalimport socketimport socketserverimport sysfrom datetime import datetimeif sys.argv[1:]: port = int(sys.argv[1])else: port = 8000class MyHttpRequestHandler(http.server.SimpleHTTPRequestHandler): def doGET(self): self.sendresponse(200) self.sendheader(\"Content-type\", \"text/html\") self.endheaders() hostname = socket.gethostname() now = datetime.utcnow() self.wfile.write( bytes( f\"<html><body><h1>Hello from Gefyra. It is {now} on\" f\" {hostname}.</h1></body></html>\".encode(\"utf-8\") ) )myhandler = MyHttpRequestHandlerserver = socketserver.ThreadingTCPServer((\"\", port), myhandler)def signalhandler(signal, frame): try: if server: server.serverclose() finally: sys.exit(0)signal.signal(signal.SIGINT, signalhandler)try: while True: sys.stdout.flush() server.serveforever()except KeyboardInterrupt: passserver.server_close()``` ``` gefyra run -d -i pyserver -N mypyserver -n default``` Important: gefyra run is just a wrapper for docker run (with additional flags), yet it also applies Gefyra's networking configuration to connect the container with Kubernetes. Check out the docs for gefyra run ``` docker exec -it mypyserver bash``` ``` wget -O- hello-nginx``` will print out the website of the cluster service hello-nginx from within the cluster. ``` gefyra bridge -N mypyserver -n default --ports 80:8000 --target deploy/hello-nginxdemo/hello-nginx``` Check out the locally running server serving the cluster by refreshing the address from: ``` minikube service hello-nginxdemo``` It shows you a different message: Hello from Gefyra. It is .... Yes, that is really coming from your local container! You can list all currently active bridges with: ``` gefyra list --bridges``` You will find all local containers that are currently linked into the cluster serving requests. ``` gefyra unbridge --all``` Check out the original response from: ``` minikube service hello-nginxdemo``` The cluster is now reset to its inital state again. 
Remove Gefyra's components from the cluster and your local Docker host with: ``` gefyra down``` Then delete the local Minikube cluster with: ``` minikube delete``` Did everything work as expected? How was the experience of using Gefyra? We'd appreciate it if you could take 2 minutes of your time to fill out our feedback form." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Glasskube", "subcategory": "Application Definition & Image Build" }
[ { "data": "Glasskube is available for macOS, Windows and Linux. Packages are available for Homebrew and most package managers popular on Linux systems. On macOS, you can use Homebrew to install and update Glasskube. ``` brew install glasskube/tap/glasskube``` You can install Glasskube using one of the package managers below. ``` dnf install https://releases.dl.glasskube.dev/glasskubev0.8.0amd64.rpm``` ``` curl -LO https://releases.dl.glasskube.dev/glasskubev0.8.0amd64.debsudo dpkg -i glasskubev0.8.0amd64.deb``` ``` curl -LO https://releases.dl.glasskube.dev/glasskubev0.8.0amd64.apkapk add glasskube.apk``` If you are using a distribution that does not use one of the package managers above, or require a 32-bit binary, check out additional download options attached to our latest release. Download the windows archive from our latest Release and unpack it using Windows Explorer. You can either use the package temporarily in a nix-shell: ``` nix-shell -p glasskube``` Or install it globally by adding pkgs.glasskube to your environment.systemPackages. After installing Glasskube on your local machine, make sure to install the necessary components in your Kubernetes cluster by running glasskube bootstrap. For more information, check out our bootstrap guide. Glasskube provides extensive autocomplete for many popular shells. To take full advantage of this feature, please follow the steps for your shell below. To install completions in the current shell, run: ``` source <(glasskube completion bash)``` For more information (including persistent completions), run: ``` glasskube help completion bash``` To install completions in the current shell, run: ``` source <(glasskube completion zsh)``` For more information (including persistent completions), run: ``` glasskube help completion zsh``` To install completions in the current shell, run: ``` glasskube completion fish | source``` For more information (including persistent completions), run: ``` glasskube help completion fish``` To install completions in the current shell, run: ``` glasskube completion powershell | Out-String | Invoke-Expression``` For more information (including persistent completions), run: ``` glasskube help completion powershell```" } ]
{ "category": "App Definition and Development", "file_name": "github-terms-of-service.md", "project_name": "Fabric8 Kubernetes Client", "subcategory": "Application Definition & Image Build" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal." } ]
{ "category": "App Definition and Development", "file_name": "docs.gradle.org.md", "project_name": "Gradle Build Tool", "subcategory": "Application Definition & Image Build" }
[ { "data": "Apache Maven is a build tool for Java and other JVM-based projects. It is typical to migrate an existing Maven build to Gradle. This guide will help with such a migration by explaining the differences and similarities between the two tools and providing steps that you can follow to ease the process. Converting a build can be scary, but you dont have to do it alone. You can search our documentation, post on our community forums, or reach out on our Slack channel if you get stuck. The primary differences between Gradle and Maven are flexibility, performance, user experience, and dependency management. A visual overview of these aspects is available in the Maven vs Gradle feature comparison. Since Gradle 3.0, Gradle has invested heavily in making Gradle builds much faster, with features such as build caching, compile avoidance, and an improved incremental Java compiler. Gradle is now 2-10x faster than Maven for the vast majority of projects, even without using a build cache. In-depth performance comparison and business cases for switching from Maven to Gradle can be found here. Gradle and Maven have fundamentally different views on how to build a project. Gradle provides a flexible and extensible build model that delegates the actual work to the execution of a graph of tasks. Maven uses a model of fixed, linear phases to which you can attach goals (the things that do the work). This may make migrating between the two seem intimidating, but migrations can be surprisingly easy because Gradle follows many of the same conventions as Mavensuch as the standard project structureand its dependency management works in a similar way. Here we lay out a series of steps for you to follow that will help facilitate the migration of any Maven build to Gradle: | 0 | 1 | |-:|:--| | nan | Keep the old Maven build and new Gradle build side by side. You know the Maven build works, so you should keep it until you are confident that the Gradle build produces all the same artifacts. This also means that users can try the Gradle build without creating a new copy of the source tree. | Create a build scan for the Maven build. A build scan will make it easier to visualize whats happening in your existing Maven build. For Maven builds, you will be able to see the project structure, what plugins are being used, a timeline of the build steps, and more. Keep this handy so you can compare it to the Gradle build scans while converting the project. Develop a mechanism to verify that the two builds produce the same artifacts. This is a vitally important step to ensure that your deployments and tests dont break. Even small changes, such as the contents of a manifest file in a JAR, can cause problems. If your Gradle build produces the same output as the Maven build, this will give you confidence in switching over and make it easier to implement the changes that will provide the greatest benefits. This doesnt mean that you need to verify every artifact at every stage, although doing so can help you quickly identify the source of a problem. You should focus on the critical output such as final reports and the artifacts that are published or deployed. You will need to factor in some inherent differences in the build output that Gradle produces compared to" }, { "data": "Generated POMs will contain only the information needed for consumption and they will use <compile> and <runtime> scopes correctly for that scenario. You might also see differences in the order of files in archives and of files on classpaths. 
Most differences will be minor, but its worth identifying them and verifying that they are acceptable. Run an automatic conversion. This will create all the Gradle build files you need, even for multi-module builds. For simpler Maven projects, the Gradle build will be ready to run! Create a build scan for the Gradle build. A build scan will make it easier to visualize whats happening in the build. For Gradle builds, youll be able to see the project structure, the dependencies (regular and inter-project ones), what plugins are being used and the console output of the build. Your build may fail at this point, but thats ok; the scan will still run. Compare the build scan for the Gradle build to the one for the Maven build and continue down this list to troubleshoot the failures. We recommend that you regularly generate build scans during the migration to help you identify and troubleshoot problems. If you want, you can also use a Gradle build scan to identify opportunities to improve the performance of the build. Verify your dependencies and fix any problems. Configure integration and functional tests. Many tests can simply be migrated by configuring an extra source set. If you are using a third-party library, such as FitNesse, look to see whether there is a suitable community plugin available on the Gradle Plugin Portal. Replace Maven plugins with Gradle equivalents. In the case of popular plugins, Gradle often has an equivalent plugin that you can use. You might also find that you can replace a plugin with built-in Gradle functionality. As a last resort, you may need to reimplement a Maven plugin via your own custom plugins and task types. The rest of this chapter looks in more detail at specific aspects of migrating a build from Maven to Gradle. Maven builds are based around the concept of build lifecycles that consist of a set of fixed phases. This can be a challenge for users migrating to Gradle because the build lifecycle is a new concept. Although its important to understand how Gradle builds fit into the structure of initialization, configuration, and execution phases, Gradle provides a helper feature that can mimic Mavens phases: lifecycle tasks. This feature allow you to define your own \"lifecycles\" by creating no-action tasks that simply depend on the tasks youre interested in. And to make the transition to Gradle easier for Maven users, the Base Pluginapplied by all the JVM language plugins like the Java Library Pluginprovides a set of lifecycle tasks that correspond to the main Maven phases. Here is a list of some of the main Maven phases and the Gradle tasks that they map to: Use the clean task provided by the Base Plugin. Use the classes task provided by the Java Plugin and other JVM language plugins. This compiles all classes for all source files of all languages and also performs resource filtering via the processResources task. Use the test task provided by the Java Plugin. It runs the unit tests, and more specifically, the tests that make up the test source set. Use the assemble task provided by the Base Plugin. This builds whatever is the appropriate package for the project; for example, a JAR for Java libraries or a WAR for traditional Java webapps. Use the check task provided by the Base" }, { "data": "This runs all verification tasks that are attached to it, which typically includes the unit tests, any static analysis taskssuch as Checkstyleand others. If you want to include integration tests, you will have to configure these manually. 
Use the publishToMavenLocal task provided by the Maven Publish Plugin. Note that Gradle builds dont require you to \"install\" artifacts as you have access to more appropriate features like inter-project dependencies and composite builds. You should only use publishToMavenLocal for interoperating with Maven builds. Gradle also allows you to resolve dependencies against the local Maven cache, as described in the Declaring repositories section. Use the publish task provided by the Maven Publish Pluginmaking sure you switch from the older Maven Plugin (ID: maven) if your build is using that one. This will publish your package to all configured publication repositories. There are also tasks that allow you to publish to a single repository even when multiple ones are defined. Note that the Maven Publish Plugin does not publish source and Javadoc JARs by default, but this can easily be activated as explained in the guide for building java projects. Gradles init task is typically used to create a new skeleton project, but you can also use it to convert an existing Maven build to Gradle automatically. Once Gradle is installed on your system, all you have to do is run the command ``` gradle init``` from the root project directory. This consists of parsing the existing POMs and generating the corresponding Gradle build scripts. Gradle will also create a settings script if youre migrating a multi-project build. Youll find that the new Gradle build includes the following: All the custom repositories that are specified in the POM Your external and inter-project dependencies The appropriate plugins to build the project (limited to one or more of the Maven Publish, Java and War Plugins) See the Build Init Plugin chapter for a complete list of the automatic conversion features. One thing to keep in mind is that assemblies are not automatically converted. This additional conversion will required some manual work. Options include: Using the Distribution Plugin Using the Java Library Distribution Plugin Using the Application Plugin Creating custom archive tasks Using a suitable community plugin from the Gradle Plugin Portal If your Maven build does not have many plugins or custom steps, you can simply run ``` gradle build``` once the migration has completed. This will run the tests and produce the required artifacts automatically. Gradles dependency management system is more flexible than Mavens, but it still supports the same concepts of repositories, declared dependencies, scopes (dependency configurations in Gradle), and transitive dependencies. In fact, Gradle works with Maven-compatible repositories which makes it easy to migrate your dependencies. | 0 | 1 | |-:|:--| | nan | One notable difference between the two tools is in how they manage version conflicts. Maven uses a \"closest\" match algorithm, whereas Gradle picks the newest. Dont worry though, you have a lot of control over which versions are selected, as documented in Managing Transitive Dependencies. | Over the following sections, we will show you how to migrate the most common elements of a Maven builds dependency management information. Gradle uses the same dependency identifier components as Maven: group ID, artifact ID and version. It also supports classifiers. All you need to do is substitute the identifier information for a dependency into Gradles syntax, which is described in the Declaring Dependencies chapter. 
For example, consider this Maven-style dependency on Log4J: ``` <dependencies> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId>" }, { "data": "</dependency> </dependencies>``` This dependency would look like the following in a Gradle build script: ``` dependencies { implementation(\"log4j:log4j:1.2.12\") (1) }``` ``` dependencies { implementation 'log4j:log4j:1.2.12' (1) }``` | 0 | 1 | |-:|:--| | 1 | Attaches version 1.2.12 of Log4J to the implementation configuration (scope) | The string identifier takes the Maven values of groupId, artifactId and version, although Gradle refers to them as group, module and version. The above example raises an obvious question: what is that implementation configuration? Its one of the standard dependency configurations provided by the Java Plugin and is often used as a substitute for Mavens default compile scope. Several of the differences between Mavens scopes and Gradles standard configurations come down to Gradle distinguishing between the dependencies required to build a module and the dependencies required to build a module that depends on it. Maven makes no such distinction, so published POMs typically include dependencies that consumers of a library dont actually need. Here are the main Maven dependency scopes and how you should deal with their migration: Gradle has two configurations that can be used in place of the compile scope: implementation and api. The former is available to any project that applies the Java Plugin, while api is only available to projects that specifically apply the Java Library Plugin. In most cases you should simply use the implementation configuration, particularly if youre building an application or webapp. But if youre building a library, you can learn about which dependencies should be declared using api in the section on Building Java libraries. Even more information on the differences between api and implementation is provided in the Java Library Plugin chapter linked above. Use the runtimeOnly configuration. Gradle distinguishes between those dependencies that are required to compile a projects tests and those that are only needed to run them. Dependencies required for test compilation should be declared against the testImplementation configuration. Those that are only required for running the tests should use testRuntimeOnly. Use the compileOnly configuration. Note that the War Plugin adds providedCompile and providedRuntime dependency configurations. These behave slightly differently from compileOnly and simply ensure that those dependencies arent packaged in the WAR file. However, the dependencies are included on runtime and test runtime classpaths, so use these configurations if thats the behavior you need. The import scope is mostly used within <dependencyManagement> blocks and applies solely to POM-only publications. Read the section on Using bills of materials to learn more about how to replicate this behavior. You can also specify a regular dependency on a POM-only publication. In this case, the dependencies declared in that POM are treated as normal transitive dependencies of the build. For example, imagine you want to use the groovy-all POM for your tests. Its a POM-only publication that has its own dependencies listed inside a <dependencies> block. 
The appropriate configuration in the Gradle build looks like this: ``` dependencies { testImplementation(\"org.codehaus.groovy:groovy-all:2.5.4\") }``` ``` dependencies { testImplementation 'org.codehaus.groovy:groovy-all:2.5.4' }``` The result of this will be that all compile and runtime scope dependencies in the groovy-all POM get added to the test runtime classpath, while only the compile scope dependencies get added to the test compilation classpath. Dependencies with other scopes will be ignored. Gradle allows you to retrieve declared dependencies from any Maven-compatible or Ivy-compatible repository. Unlike Maven, it has no default repository and so you have to declare at least" }, { "data": "In order to have the same behavior as your Maven build, just configure Maven Central in your Gradle build, like this: ``` repositories { mavenCentral() }``` ``` repositories { mavenCentral() }``` You can also use the repositories {} block to configure custom repositories, as described in the Repository Types chapter. Lastly, Gradle allows you to resolve dependencies against the local Maven cache/repository. This helps Gradle builds interoperate with Maven builds, but it shouldnt be a technique that you use if you dont need that interoperability. If you want to share published artifacts via the filesystem, consider configuring a custom Maven repository with a file:// URL. You might also be interested in learning about Gradles own dependency cache, which behaves more reliably than Mavens and can be used safely by multiple concurrent Gradle processes. The existence of transitive dependencies means that you can very easily end up with multiple versions of the same dependency in your dependency graph. By default, Gradle will pick the newest version of a dependency in the graph, but thats not always the right solution. Thats why it provides several mechanisms for controlling which version of a given dependency is resolved. On a per-project basis, you can use: Dependency constraints Bills of materials (Maven BOMs) Overriding transitive versions There are even more, specialized options listed in the controlling transitive dependencies chapter. If you want to ensure consistency of versions across all projects in a multi-project build, similar to how the <dependencyManagement> block in Maven works, you can use the Java Platform Plugin. This allows you declare a set of dependency constraints that can be applied to multiple projects. You can even publish the platform as a Maven BOM or using Gradles metadata format. See the plugin page for more information on how to do that, and in particular the section on Consuming platforms to see how you can apply a platform to other projects in the same build. Maven builds use exclusions to keep unwanted dependenciesor unwanted versions of dependenciesout of the dependency graph. You can do the same thing with Gradle, but thats not necessarily the right thing to do. Gradle provides other options that may be more appropriate for a given situation, so you really need to understand why an exclusion is in place to migrate it properly. If you want to exclude a dependency for reasons unrelated to versions, then check out the section on excluding transitive dependencies. It shows you how to attach an exclusion either to an entire configuration (often the most appropriate solution) or to a dependency. You can even easily apply an exclusion to all configurations. 
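For illustration, a per-dependency exclusion looks like this in the Kotlin DSL; the coordinates below are placeholders, not a recommendation of what to exclude:

```
dependencies {
    // Keep an unwanted transitive module out of this one dependency
    implementation("com.example:some-library:1.0") {
        exclude(group = "commons-logging", module = "commons-logging")
    }
}
```

A configuration-wide exclusion works the same way, applied inside a configurations.all { } block instead of on a single dependency.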
If youre more interested in controlling which version of a dependency is actually resolved, see the previous section. You are likely to encounter two situations regarding optional dependencies: Some of your transitive dependencies are declared as optional You want to declare some of your direct dependencies as optional in your projects published POM For the first scenario, Gradle behaves the same way as Maven and simply ignores any transitive dependencies that are declared as optional. They are not resolved and have no impact on the versions selected if the same dependencies appear elsewhere in the dependency graph as non-optional. As for publishing dependencies as optional, Gradle provides a richer model called feature variants, which will let you declare the \"optional features\" your library provides. Maven allows you to share dependency constraints by defining dependencies inside a <dependencyManagement> section of a POM file that has a packaging type of pom. This special type of POM (a BOM) can then be imported into other POMs so that you have consistent library versions across your projects. Gradle can use such BOMs for the same purpose, using a special dependency syntax based on platform() and enforcedPlatform()" }, { "data": "You simply declare the dependency in the normal way, but wrap the dependency identifier in the appropriate method, as shown in this example that \"imports\" the Spring Boot Dependencies BOM: ``` dependencies { implementation(platform(\"org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE\")) (1) implementation(\"com.google.code.gson:gson\") (2) implementation(\"dom4j:dom4j\") }``` ``` dependencies { implementation platform('org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE') (1) implementation 'com.google.code.gson:gson' (2) implementation 'dom4j:dom4j' }``` | 0 | 1 | |-:|:-| | 1 | Applies the Spring Boot Dependencies BOM | | 2 | Adds a dependency whose version is defined by that BOM | You can learn more about this feature and the difference between platform() and enforcedPlatform() in the section on importing version recommendations from a Maven BOM. | 0 | 1 | |-:|:--| | nan | You can use this feature to apply the <dependencyManagement> information from any dependencys POM to the Gradle build, even those that dont have a packaging type of pom. Both platform() and enforcedPlatform() will ignore any dependencies declared in the <dependencies> block. | Mavens multi-module builds map nicely to Gradles multi-project builds. Try the corresponding sample to see how a basic multi-project Gradle build is set up. To migrate a multi-module Maven build, simply follow these steps: Create a settings script that matches the <modules> block of the root POM. 
For example, this <modules> block: ``` <modules> <module>simple-weather</module> <module>simple-webapp</module> </modules>``` can be migrated by adding the following line to the settings script: ``` rootProject.name = \"simple-multi-module\" (1) include(\"simple-weather\", \"simple-webapp\") (2)``` ``` rootProject.name = 'simple-multi-module' (1) include 'simple-weather', 'simple-webapp' (2)``` | 0 | 1 | |-:|:-| | 1 | Sets the name of the overall project | | 2 | Configures two subprojects as part of this build | ``` gradle projects Root project 'simple-multi-module' Root project 'simple-multi-module' Project ':simple-weather' \\ Project ':simple-webapp' To see a list of the tasks of a project, run gradle <project-path>:tasks For example, try running gradle :simple-weather:tasks``` Replace cross-module dependencies with project dependencies. Replicate project inheritance with convention plugins. This basically involves creating a root project build script that injects shared configuration into the appropriate subprojects. If you want to replicate the Maven pattern of having dependency versions declared in the dependencyManagement section of the root POM file, the best approach is to leverage the java-platform plugin. You will need to add a dedicated project for this and consume it in the regular projects of your build. See the documentation for more details on this pattern. Maven allows you parameterize builds using properties of various sorts. Some are read-only properties of the project model, others are user-defined in the POM. It even allows you to treat system properties as project properties. Gradle has a similar system of project properties, although it differentiates between those and system properties. You can, for example, define properties in: the build script a gradle.properties file in the root project directory a gradle.properties file in the $HOME/.gradle directory Those arent the only options, so if you are interested in finding out more about how and where you can define properties, check out the Build Environment chapter. One important piece of behavior you need to be aware of is what happens when the same property is defined in both the build script and one of the external properties files: the build script value takes precedence. Always. Fortunately, you can mimic the concept of profiles to provide overridable default values. Which brings us to Maven profiles. These are a way to enable and disable different configurations based on environment, target platform, or any other similar factor. Logically, they are nothing more than limited if" }, { "data": "And since Gradle has much more powerful ways to declare conditions, it does not needto have formal support for profiles (except in the POMs of dependencies). You can easily get the same behavior by combining conditions with secondary build scripts, as youll see. Lets say you have different deployment settings depending on the environment: local development (the default), a test environment, and production. To add profile-like behavior, you first create build scripts for each environment in the project root: profile-default.gradle, profile-test.gradle, and profile-prod.gradle. You can then conditionally apply one of those profile scripts based on a project property of your own choice. The following example demonstrates the basic technique using a project property called buildProfile and profile scripts that simply initialize an extra project property called message: ``` val buildProfile: String? 
by project (1) apply(from = \"profile-${buildProfile ?: \"default\"}.gradle.kts\") (2) tasks.register(\"greeting\") { // Store the message into a variable, because referencing extras from the task action // is not compatible with the configuration cache. val message = project.extra[\"message\"] doLast { println(message) (3) } }``` ``` val message by extra(\"foobar\") (4)``` ``` val message by extra(\"testing 1 2 3\") (4)``` ``` val message by extra(\"Hello, world!\") (4)``` ``` if (!hasProperty('buildProfile')) ext.buildProfile = 'default' (1) apply from: \"profile-${buildProfile}.gradle\" (2) tasks.register('greeting') { // Store the message into a variable, because referencing extras from the task action // is not compatible with the configuration cache. def message = project.message doLast { println message (3) } }``` ``` ext.message = 'foobar' (4)``` ``` ext.message = 'testing 1 2 3' (4)``` ``` ext.message = 'Hello, world!' (4)``` | 0 | 1 | |-:|:| | 1 | Checks for the existence of (Groovy) or binds (Kotlin) the buildProfile project property | | 2 | Applies the appropriate profile script, using the value of buildProfile in the script filename | | 3 | Prints out the value of the message extra project property | | 4 | Initializes the message extra project property, whose value can then be used in the main build script | With this setup in place, you can activate one of the profiles by passing a value for the project property youre usingbuildProfile in this case: ``` gradle greeting foobar``` ``` gradle -PbuildProfile=test greeting testing 1 2 3``` Youre not limited to checking project properties. You could also check environment variables, the JDK version, the OS the build is running on, or anything else you can imagine. One thing to bear in mind is that high level condition statements make builds harder to understand and maintain, similar to the way they complicate object-oriented code. The same applies to profiles. Gradle offers you many better ways to avoid the extensive use of profiles that Maven often requires, for example by configuring multiple tasks that are variants of one another. See the publishPubNamePublicationToRepoNameRepository tasks created by the Maven Publish Plugin. For a lengthier discussion on working with Maven profiles in Gradle, look no further than this blog post. Maven has a phase called process-resources that has the goal resources:resources bound to it by default. This gives the build author an opportunity to perform variable substitution on various files, such as web resources, packaged properties files, etc. The Java plugin for Gradle provides a processResources task to do the same thing. This is a ProcessResources task that copies files from the configured resources directorysrc/main/resources by defaultto an output directory. And as with any ProcessResources or Copy task, you can configure it to perform file filtering, renaming, and content" }, { "data": "As an example, heres a configuration that treats the source files as Groovy SimpleTemplateEngine templates, providing version and buildNumber properties to those templates: ``` tasks { processResources { expand(\"version\" to version, \"buildNumber\" to currentBuildNumber) } }``` ``` processResources { expand(version: version, buildNumber: currentBuildNumber) }``` See the API docs for CopySpec to see all the options available to you. 
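To make the expand() example concrete, a filtered resource might be a small properties template (the file name and keys here are illustrative); after processResources runs, the copy under build/resources/main contains the literal values supplied to expand():

```
# src/main/resources/build-info.properties (template)
app.version=${version}
app.buildNumber=${buildNumber}
```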
Many Maven builds incorporate integration tests of some sort, which Maven supports through an extra set of phases: pre-integration-test, integration-test, post-integration-test, and verify. It also uses the Failsafe plugin in place of Surefire so that failed integration tests dont automatically fail the build (because you may need to clean up resources, such as a running application server). This behavior is easy to replicate in Gradle with source sets, as explained in our chapter on Testing in Java & JVM projects. You can then configure a clean-up task, such as one that shuts down a test server for example, to always run after the integration tests regardless of whether they succeed or fail using Task.finalizedBy(). If you really dont want your integration tests to fail the build, then you can use the Test.ignoreFailures setting described in the Test execution section of the Java testing chapter. Source sets also give you a lot of flexibility on where you place the source files for your integration tests. You can easily keep them in the same directory as the unit tests or, more preferably, in a separate source directory like src/integTest/java. To support other types of tests, simple add more source sets and Test tasks. Maven and Gradle share a common approach of extending the build through plugins. Although the plugin systems are very different beneath the surface, they share many feature-based plugins, such as: Shade/Shadow Jetty Checkstyle JaCoCo AntRun (see further down) Why does this matter? Because many plugins rely on standard Java conventions, migration is just a matter of replicating the configuration of the Maven plugin in Gradle. As an example, heres a simple Maven Checkstyle plugin configuration: ``` ... <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <version>2.17</version> <executions> <execution> <id>validate</id> <phase>validate</phase> <configuration> <configLocation>checkstyle.xml</configLocation> <encoding>UTF-8</encoding> <consoleOutput>true</consoleOutput> <failsOnError>true</failsOnError> <linkXRef>false</linkXRef> </configuration> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> ...``` Everything outside of the configuration block can safely be ignored when migrating to Gradle. In this case, the corresponding Gradle configuration is as follows: ``` checkstyle { config = resources.text.fromFile(\"checkstyle.xml\", \"UTF-8\") isShowViolations = true isIgnoreFailures = false }``` ``` checkstyle { config = resources.text.fromFile('checkstyle.xml', 'UTF-8') showViolations = true ignoreFailures = false }``` The Checkstyle tasks are automatically added as dependencies of the check task, which also includes test. If you want to ensure that Checkstyle runs before the tests, then just specify an ordering with the mustRunAfter() method: ``` tasks { test { mustRunAfter(checkstyleMain, checkstyleTest) } }``` ``` test.mustRunAfter checkstyleMain, checkstyleTest``` As you can see, the Gradle configuration is often much shorter than the Maven equivalent. You also have a much more flexible execution model since you are no longer constrained by Mavens fixed phases. While migrating a project from Maven, dont forget about source sets. These often provide a more elegant solution for handling integration tests or generated sources than Maven can provide, so you should factor them into your migration plans. 
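A minimal sketch of that source-set approach, assuming the Java plugin is applied and using an illustrative integTest source set name, might look like this in the Kotlin DSL:

```
sourceSets {
    create("integTest") {
        compileClasspath += sourceSets.main.get().output
        runtimeClasspath += sourceSets.main.get().output
    }
}

val integTestImplementation by configurations.getting {
    extendsFrom(configurations.implementation.get())
}

val integrationTest = tasks.register<Test>("integrationTest") {
    description = "Runs the integration tests."
    group = "verification"
    testClassesDirs = sourceSets["integTest"].output.classesDirs
    classpath = sourceSets["integTest"].runtimeClasspath
    shouldRunAfter("test")
}

tasks.check { dependsOn(integrationTest) }
```

A clean-up task, such as one that stops a test server, can then be attached with finalizedBy() so it always runs whether the tests pass or fail, as described above.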
Many Maven builds rely on the AntRun plugin to customize the build without the overhead of implementing a custom Maven plugin. Gradle has no equivalent plugin because Ant is a first-class citizen in Gradle builds, via the ant object. For example, you can use Ants Echo task like this: ``` tasks.register(\"sayHello\") { doLast { ant.withGroovyBuilder { \"echo\"(\"message\" to \"Hello!\") } } }``` ``` tasks.register('sayHello') { doLast {" }, { "data": "message: 'Hello!' } }``` Even Ant properties and filesets are supported natively. To learn more, see Using Ant from Gradle. | 0 | 1 | |-:|:| | nan | It may be simpler and cleaner to just create custom task types to replace the work that Ant is doing for you. You can then more readily benefit from incremental build and other useful Gradle features. | It may be simpler and cleaner to just create custom task types to replace the work that Ant is doing for you. You can then more readily benefit from incremental build and other useful Gradle features. Its worth remembering that Gradle builds are typically easier to extend and customize than Maven ones. In this context, that means you may not need a Gradle plugin to replace a Maven one. For example, the Maven Enforcer plugin allows you to control dependency versions and environmental factors, but these things can easily be configured in a normal Gradle build script. You may come across Maven plugins that have no counterpart in Gradle, particularly if you or someone in your organisation has written a custom plugin. Such cases rely on you understanding how Gradle (and potentially Maven) works, because you will usually have to write your own plugin. For the purposes of migration, there are two key types of Maven plugins: Those that use the Maven project object. Those that dont. Why is this important? Because if you use one of the latter, you can trivially reimplement it as a custom Gradle task type. Simply define task inputs and outputs that correspond to the mojo parameters and convert the execution logic into a task action. If a plugin depends on the Maven project, then you will have to rewrite it. Dont start by considering how the Maven plugin works, but look at what problem it is trying to solve. Then try to work out how to solve that problem in Gradle. You will probably find that the two build models are different enough that \"transcribing\" Maven plugin code into a Gradle plugin just wont be effective. On the plus side, the plugin is likely to be much easier to write than the original Maven one because Gradle has a much richer build model and API. If you do need to implement custom logic, either via build scripts or plugins, check out the Guides related to plugin development. Also be sure to familiarize yourself with Gradles Groovy DSL Reference, which provides comprehensive documentation on the API that youll be working with. It details the standard configuration blocks (and the objects that back them), the core types in the system (Project, Task, etc.), and the standard set of task types. The main entry point is the Project interface as thats the top-level object that backs the build scripts. This chapter has covered the major topics that are specific to migrating Maven builds to Gradle. 
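Before moving on, here is a sketch of the input/output-based task type mentioned above for replacing a simple mojo; the class name, property names, and the generated file are all made up for the example:

```
abstract class GenerateVersionFile : DefaultTask() {
    @get:Input
    abstract val versionText: Property<String>

    @get:OutputFile
    abstract val outputFile: RegularFileProperty

    @TaskAction
    fun generate() {
        // Write the declared input to the declared output; because both are
        // declared, Gradle can skip the task when neither has changed
        outputFile.get().asFile.writeText("version=" + versionText.get())
    }
}

tasks.register<GenerateVersionFile>("generateVersionFile") {
    versionText.set(project.version.toString())
    outputFile.set(layout.buildDirectory.file("generated/version.properties"))
}
```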
All that remain are a few other areas that may be useful during or after a migration: Learn how to configure Gradles build environment, including the JVM settings used to run it Learn how to structure your builds effectively Configure Gradles logging and use it from your builds As a final note, this guide has only touched on a few of Gradles features and we encourage you to learn about the rest from the other chapters of the user manual and from our step-by-step samples." } ]
{ "category": "App Definition and Development", "file_name": "docs.github.com.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
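Pulling several of the qualifiers above together, a single query can mix a bare term, a repository filter, language filters, boolean operators, and a path restriction; the repository and search term below are purely illustrative:

```
repo:github-linguist/linguist (language:ruby OR language:yaml) NOT path:"/test/" detect
```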
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "ECR_on_EKS.html.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can use your Amazon ECR images with Amazon EKS. When referencing an image from Amazon ECR, you must use the full registry/repository:tag naming for the image. For example, awsaccountid.dkr.ecr.region.amazonaws.com/my-repository:latest. If you have Amazon EKS workloads hosted on managed nodes, self-managed nodes, or AWS Fargate, review the following: Amazon EKS workloads hosted on managed or self-managed nodes: The Amazon EKS worker node IAM role (NodeInstanceRole) is required. The Amazon EKS worker node IAM role must contain the following IAM policy permissions for Amazon ECR. ``` { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:GetDownloadUrlForLayer\", \"ecr:GetAuthorizationToken\" ], \"Resource\": \"*\" } ] }``` If you used eksctl or the AWS CloudFormation templates in Getting Started with Amazon EKS to create your cluster and worker node groups, these IAM permissions are applied to your worker node IAM role by default. Amazon EKS workloads hosted on AWS Fargate: Use the Fargate pod execution role, which provides your pods permission to pull images from private Amazon ECR repositories. For more information, see Create a Fargate pod execution role. Javascript is disabled or is unavailable in your browser. To use the Amazon Web Services Documentation, Javascript must be enabled. Please refer to your browser's Help pages for instructions. Thanks for letting us know we're doing a good job! If you've got a moment, please tell us what we did right so we can do more of it. Thanks for letting us know this page needs work. We're sorry we let you down. If you've got a moment, please tell us how we can make the documentation better." } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "In Running Docker with HTTPS, you learned that, by default, Docker runs via a non-networked Unix socket and TLS must be enabled in order to have the Docker client and the daemon communicate securely over HTTPS. TLS ensures authenticity of the registry endpoint and that traffic to/from registry is encrypted. This article demonstrates how to ensure the traffic between the Docker registry server and the Docker daemon (a client of the registry server) is encrypted and properly authenticated using certificate-based client-server authentication. We show you how to install a Certificate Authority (CA) root certificate for the registry and how to set the client TLS certificate for verification. A custom certificate is configured by creating a directory under /etc/docker/certs.d using the same name as the registry's hostname, such as localhost. All *.crt files are added to this directory as CA roots. Note On Linux any root certificates authorities are merged with the system defaults, including the host's root CA set. If you are running Docker on Windows Server, or Docker Desktop for Windows with Windows containers, the system default certificates are only used when no custom root certificates are configured. The presence of one or more <filename>.key/cert pairs indicates to Docker that there are custom certificates required for access to the desired repository. Note If multiple certificates exist, each is tried in alphabetical order. If there is a 4xx-level or 5xx-level authentication error, Docker continues to try with the next certificate. The following illustrates a configuration with custom certificates: ``` /etc/docker/certs.d/ <-- Certificate directory localhost:5000 <-- Hostname:port client.cert <-- Client certificate client.key <-- Client key ca.crt <-- Root CA that signed the registry certificate, in PEM``` The preceding example is operating-system specific and is for illustrative purposes only. You should consult your operating system documentation for creating an os-provided bundled certificate chain. Use OpenSSL's genrsa and req commands to first generate an RSA key and then use the key to create the certificate. ``` $ openssl genrsa -out client.key 4096 $ openssl req -new -x509 -text -key client.key -out client.cert ``` Note These TLS commands only generate a working set of certificates on Linux. The version of OpenSSL in macOS is incompatible with the type of certificate Docker requires. The Docker daemon interprets .crt files as CA certificates and .cert files as client certificates. If a CA certificate is accidentally given the extension .cert instead of the correct .crt extension, the Docker daemon logs the following error message: ``` Missing key KEYNAME for client certificate CERTNAME. CA certificates should use the extension .crt.``` If the Docker registry is accessed without a port number, do not add the port to the directory name. The following shows the configuration for a registry on default port 443 which is accessed with docker login my-https.registry.example.com: ``` /etc/docker/certs.d/ my-https.registry.example.com <-- Hostname without port client.cert client.key ca.crt``` Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "github-privacy-statement.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:-|:-|:-|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | designproposals | designproposals | design_proposals | nan | nan | | images | images | images | nan | nan | | demo.gif | demo.gif | demo.gif | nan | nan | | designdoc.md | designdoc.md | designdoc.md | nan | nan | | testplan.md | testplan.md | testplan.md | nan | nan | | tutorial.md | tutorial.md | tutorial.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "App Definition and Development", "file_name": "pod-security-policies.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "This browser is no longer supported. Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. A connection string includes the authorization information required for your application to access data in an Azure Storage account at runtime using Shared Key authorization. You can configure connection strings to: To learn how to view your account access keys and copy a connection string, see Manage storage account access keys. Important For optimal security, Microsoft recommends using Microsoft Entra ID with managed identities to authorize requests against blob, queue, and table data, whenever possible. Authorization with Microsoft Entra ID and managed identities provides superior security and ease of use over Shared Key authorization. To learn more about managed identities, see What are managed identities for Azure resources. For an example of how to enable and use a managed identity for a .NET application, see Authenticating Azure-hosted apps to Azure resources with .NET. For resources hosted outside of Azure, such as on-premises applications, you can use managed identities through Azure Arc. For example, apps running on Azure Arc-enabled servers can use managed identities to connect to Azure services. To learn more, see Authenticate against Azure resources with Azure Arc-enabled servers. For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn about shared access signatures, see Grant limited access to data with shared access signatures. For an example of how to create and use a user delegation SAS with .NET, see Create a user delegation SAS for a blob with .NET. Storage account access keys provide full access to the configuration of a storage account, as well as the data. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. Access to the shared key grants a user full access to a storage accounts configuration and its data. Access to shared keys should be carefully limited and monitored. Use user delegation SAS tokens with limited scope of access in scenarios where Microsoft Entra ID based authorization can't be used. Avoid hard-coding access keys or saving them anywhere in plain text that is accessible to others. Rotate your keys if you believe they might have been compromised. Important To prevent users from accessing data in your storage account with Shared Key, you can disallow Shared Key authorization for the storage account. Granular access to data with least privileges necessary is recommended as a security best practice. Microsoft Entra ID based authorization using managed identities should be used for scenarios that support OAuth. Kerberos or SMTP should be used for Azure Files over SMB. For Azure Files over REST, SAS tokens can be used. Shared key access should be disabled if not required to prevent its inadvertent use. For more information, see Prevent Shared Key authorization for an Azure Storage account. To protect an Azure Storage account with Microsoft Entra Conditional Access policies, you must disallow Shared Key authorization for the storage" }, { "data": "If you have disabled shared key access and you are seeing Shared Key authorization reported in the diagnostic logs, this indicates that trusted access is being used to access storage. 
For more details, see Trusted access for resources registered in your Microsoft Entra tenant. Your application needs to access the connection string at runtime to authorize requests made to Azure Storage. You have several options for storing your account access keys or connection string: Warning Storing your account access keys or connection string in clear text presents a security risk and is not recommended. Store your account keys in an encrypted format, or migrate your applications to use Microsoft Entra authorization for access to your storage account. The emulator supports a single fixed account and a well-known authentication key for Shared Key authentication. This account and key are the only Shared Key credentials permitted for use with the emulator. They are: ``` Account name: devstoreaccount1 Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== ``` Note The authentication key supported by the emulator is intended only for testing the functionality of your client authentication code. It does not serve any security purpose. You cannot use your production storage account and key with the emulator. You should not use the development account with production data. The emulator supports connection via HTTP only. However, HTTPS is the recommended protocol for accessing resources in a production Azure storage account. The easiest way to connect to the emulator from your application is to configure a connection string in your application's configuration file that references the shortcut UseDevelopmentStorage=true. The shortcut is equivalent to the full connection string for the emulator, which specifies the account name, the account key, and the emulator endpoints for each of the Azure Storage services: ``` DefaultEndpointsProtocol=http;AccountName=devstoreaccount1; AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==; BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1; QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1; TableEndpoint=http://127.0.0.1:10002/devstoreaccount1; ``` The following .NET code snippet shows how you can use the shortcut from a method that takes a connection string. For example, the BlobContainerClient(String, String) constructor takes a connection string. ``` BlobContainerClient blobContainerClient = new BlobContainerClient(\"UseDevelopmentStorage=true\", \"sample-container\"); blobContainerClient.CreateIfNotExists(); ``` Make sure that the emulator is running before calling the code in the snippet. For more information about Azurite, see Use the Azurite emulator for local Azure Storage development. To create a connection string for your Azure storage account, use the following format. Indicate whether you want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName with the name of your storage account, and replace myAccountKey with your account access key: DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey For example, your connection string might look similar to: DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=<account-key> Although Azure Storage supports both HTTP and HTTPS in a connection string, HTTPS is highly recommended. Tip You can find your storage account's connection strings in the Azure portal. Navigate to Security + networking > Access keys in your storage account's settings to see connection strings for both primary and secondary access keys. 
If you possess a shared access signature (SAS) URL that grants you access to resources in a storage account, you can use the SAS in a connection string. Because the SAS contains the information required to authenticate the request, a connection string with a SAS provides the protocol, the service endpoint, and the necessary credentials to access the" }, { "data": "To create a connection string that includes a shared access signature, specify the string in the following format: ``` BlobEndpoint=myBlobEndpoint; QueueEndpoint=myQueueEndpoint; TableEndpoint=myTableEndpoint; FileEndpoint=myFileEndpoint; SharedAccessSignature=sasToken ``` Each service endpoint is optional, although the connection string must contain at least one. Note Using HTTPS with a SAS is recommended as a best practice. If you are specifying a SAS in a connection string in a configuration file, you may need to encode special characters in the URL. Here's an example of a connection string that includes a service SAS for Blob storage: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D ``` And here's an example of the same connection string with URL encoding: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; SharedAccessSignature=sv=2015-04-05&amp;sr=b&amp;si=tutorial-policy-635959936145100803&amp;sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D ``` Here's an example of a connection string that includes an account SAS for Blob and File storage. Note that endpoints for both services are specified: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; FileEndpoint=https://storagesample.file.core.windows.net; SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl ``` And here's an example of the same connection string with URL encoding: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; FileEndpoint=https://storagesample.file.core.windows.net; SharedAccessSignature=sv=2015-07-08&amp;sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&amp;spr=https&amp;st=2016-04-12T03%3A24%3A31Z&amp;se=2016-04-13T03%3A29%3A31Z&amp;srt=s&amp;ss=bf&amp;sp=rwl ``` You can specify explicit service endpoints in your connection string instead of using the default endpoints. To create a connection string that specifies an explicit endpoint, specify the complete service endpoint for each service, including the protocol specification (HTTPS (recommended) or HTTP), in the following format: ``` DefaultEndpointsProtocol=[http|https]; BlobEndpoint=myBlobEndpoint; FileEndpoint=myFileEndpoint; QueueEndpoint=myQueueEndpoint; TableEndpoint=myTableEndpoint; AccountName=myAccountName; AccountKey=myAccountKey ``` One scenario where you might wish to specify an explicit endpoint is when you've mapped your Blob storage endpoint to a custom domain. In that case, you can specify your custom endpoint for Blob storage in your connection string. You can optionally specify the default endpoints for the other services if your application uses them. 
Here is an example of a connection string that specifies an explicit endpoint for the Blob service: ``` DefaultEndpointsProtocol=https; BlobEndpoint=http://www.mydomain.com; AccountName=storagesample; AccountKey=<account-key> ``` This example specifies explicit endpoints for all services, including a custom domain for the Blob service: ``` DefaultEndpointsProtocol=https; BlobEndpoint=http://www.mydomain.com; FileEndpoint=https://myaccount.file.core.windows.net; QueueEndpoint=https://myaccount.queue.core.windows.net; TableEndpoint=https://myaccount.table.core.windows.net; AccountName=storagesample; AccountKey=<account-key> ``` The endpoint values in a connection string are used to construct the request URIs to the storage services, and dictate the form of any URIs that are returned to your code. If you've mapped a storage endpoint to a custom domain and omit that endpoint from a connection string, then you will not be able to use that connection string to access data in that service from your code. For more information about configuring a custom domain for Azure Storage, see Map a custom domain to an Azure Blob Storage endpoint. Important Service endpoint values in your connection strings must be well-formed URIs, including https:// (recommended) or http://. To create a connection string for a storage service in regions or instances with different endpoint suffixes, such as for Microsoft Azure operated by 21Vianet or Azure Government, use the following connection string format. Indicate whether you want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName with the name of your storage account, replace myAccountKey with your account access key, and replace mySuffix with the URI suffix: ``` DefaultEndpointsProtocol=[http|https]; AccountName=myAccountName; AccountKey=myAccountKey; EndpointSuffix=mySuffix; ``` Here's an example connection string for storage services in Azure operated by 21Vianet: ``` DefaultEndpointsProtocol=https; AccountName=storagesample; AccountKey=<account-key>; EndpointSuffix=core.chinacloudapi.cn; ``` To learn how to authorize access to Azure Storage with the account key or with a connection string, see one of the following articles: Coming soon: Throughout 2024 we will be phasing out GitHub Issues as the feedback mechanism for content and replacing it with a new feedback system. For more information see: https://aka.ms/ContentUserFeedback. Submit and view feedback for" } ]
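The connection-string formats above are illustrated with .NET snippets; as a rough illustration of the same pattern in another client library, the Python sketch below builds a Blob service client from a connection string read from the environment. The azure-storage-blob package, the AZURE_STORAGE_CONNECTION_STRING variable name, and the sample-container name are assumptions for illustration, not taken from the page itself.

```python
# Minimal sketch: build a Blob service client from a connection string read
# from the environment, rather than hard-coding the account key.
# Assumes the azure-storage-blob package (v12 SDK) is installed and that
# AZURE_STORAGE_CONNECTION_STRING holds a connection string in one of the
# formats shown above (account key, SAS, or explicit endpoints).
import os

from azure.storage.blob import BlobServiceClient

connection_string = os.environ["AZURE_STORAGE_CONNECTION_STRING"]

# Parses the connection string and derives the service endpoint URIs from it.
service_client = BlobServiceClient.from_connection_string(connection_string)

# "sample-container" is an illustrative name, mirroring the emulator example.
container_client = service_client.get_container_client("sample-container")
if not container_client.exists():
    container_client.create_container()

for blob in container_client.list_blobs():
    print(blob.name)
```

The same client can be pointed at the Azurite emulator by setting the environment variable to the UseDevelopmentStorage=true shortcut described earlier.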
{ "category": "App Definition and Development", "file_name": "linux_saas_runner.html#machine-types-available-for-private-projects-x86-64.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "Docker is an open platform for developing, shipping, and running applications. Docker allows you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Dockers methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production. You can download and install Docker on multiple platforms. Refer to the following section and choose the best installation path for you. Docker Desktop terms Commercial use of Docker Desktop in larger enterprises (more than 250 employees OR more than $10 million USD in annual revenue) requires a paid subscription. Note If you're looking for information on how to install Docker Engine, see Docker Engine installation overview. Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "predefined_variables.html.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "This browser is no longer supported. Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. A connection string includes the authorization information required for your application to access data in an Azure Storage account at runtime using Shared Key authorization. You can configure connection strings to: To learn how to view your account access keys and copy a connection string, see Manage storage account access keys. Important For optimal security, Microsoft recommends using Microsoft Entra ID with managed identities to authorize requests against blob, queue, and table data, whenever possible. Authorization with Microsoft Entra ID and managed identities provides superior security and ease of use over Shared Key authorization. To learn more about managed identities, see What are managed identities for Azure resources. For an example of how to enable and use a managed identity for a .NET application, see Authenticating Azure-hosted apps to Azure resources with .NET. For resources hosted outside of Azure, such as on-premises applications, you can use managed identities through Azure Arc. For example, apps running on Azure Arc-enabled servers can use managed identities to connect to Azure services. To learn more, see Authenticate against Azure resources with Azure Arc-enabled servers. For scenarios where shared access signatures (SAS) are used, Microsoft recommends using a user delegation SAS. A user delegation SAS is secured with Microsoft Entra credentials instead of the account key. To learn about shared access signatures, see Grant limited access to data with shared access signatures. For an example of how to create and use a user delegation SAS with .NET, see Create a user delegation SAS for a blob with .NET. Storage account access keys provide full access to the configuration of a storage account, as well as the data. Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. Access to the shared key grants a user full access to a storage accounts configuration and its data. Access to shared keys should be carefully limited and monitored. Use user delegation SAS tokens with limited scope of access in scenarios where Microsoft Entra ID based authorization can't be used. Avoid hard-coding access keys or saving them anywhere in plain text that is accessible to others. Rotate your keys if you believe they might have been compromised. Important To prevent users from accessing data in your storage account with Shared Key, you can disallow Shared Key authorization for the storage account. Granular access to data with least privileges necessary is recommended as a security best practice. Microsoft Entra ID based authorization using managed identities should be used for scenarios that support OAuth. Kerberos or SMTP should be used for Azure Files over SMB. For Azure Files over REST, SAS tokens can be used. Shared key access should be disabled if not required to prevent its inadvertent use. For more information, see Prevent Shared Key authorization for an Azure Storage account. To protect an Azure Storage account with Microsoft Entra Conditional Access policies, you must disallow Shared Key authorization for the storage" }, { "data": "If you have disabled shared key access and you are seeing Shared Key authorization reported in the diagnostic logs, this indicates that trusted access is being used to access storage. 
For more details, see Trusted access for resources registered in your Microsoft Entra tenant. Your application needs to access the connection string at runtime to authorize requests made to Azure Storage. You have several options for storing your account access keys or connection string: Warning Storing your account access keys or connection string in clear text presents a security risk and is not recommended. Store your account keys in an encrypted format, or migrate your applications to use Microsoft Entra authorization for access to your storage account. The emulator supports a single fixed account and a well-known authentication key for Shared Key authentication. This account and key are the only Shared Key credentials permitted for use with the emulator. They are: ``` Account name: devstoreaccount1 Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== ``` Note The authentication key supported by the emulator is intended only for testing the functionality of your client authentication code. It does not serve any security purpose. You cannot use your production storage account and key with the emulator. You should not use the development account with production data. The emulator supports connection via HTTP only. However, HTTPS is the recommended protocol for accessing resources in a production Azure storage account. The easiest way to connect to the emulator from your application is to configure a connection string in your application's configuration file that references the shortcut UseDevelopmentStorage=true. The shortcut is equivalent to the full connection string for the emulator, which specifies the account name, the account key, and the emulator endpoints for each of the Azure Storage services: ``` DefaultEndpointsProtocol=http;AccountName=devstoreaccount1; AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==; BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1; QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1; TableEndpoint=http://127.0.0.1:10002/devstoreaccount1; ``` The following .NET code snippet shows how you can use the shortcut from a method that takes a connection string. For example, the BlobContainerClient(String, String) constructor takes a connection string. ``` BlobContainerClient blobContainerClient = new BlobContainerClient(\"UseDevelopmentStorage=true\", \"sample-container\"); blobContainerClient.CreateIfNotExists(); ``` Make sure that the emulator is running before calling the code in the snippet. For more information about Azurite, see Use the Azurite emulator for local Azure Storage development. To create a connection string for your Azure storage account, use the following format. Indicate whether you want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName with the name of your storage account, and replace myAccountKey with your account access key: DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey For example, your connection string might look similar to: DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=<account-key> Although Azure Storage supports both HTTP and HTTPS in a connection string, HTTPS is highly recommended. Tip You can find your storage account's connection strings in the Azure portal. Navigate to Security + networking > Access keys in your storage account's settings to see connection strings for both primary and secondary access keys. 
If you possess a shared access signature (SAS) URL that grants you access to resources in a storage account, you can use the SAS in a connection string. Because the SAS contains the information required to authenticate the request, a connection string with a SAS provides the protocol, the service endpoint, and the necessary credentials to access the" }, { "data": "To create a connection string that includes a shared access signature, specify the string in the following format: ``` BlobEndpoint=myBlobEndpoint; QueueEndpoint=myQueueEndpoint; TableEndpoint=myTableEndpoint; FileEndpoint=myFileEndpoint; SharedAccessSignature=sasToken ``` Each service endpoint is optional, although the connection string must contain at least one. Note Using HTTPS with a SAS is recommended as a best practice. If you are specifying a SAS in a connection string in a configuration file, you may need to encode special characters in the URL. Here's an example of a connection string that includes a service SAS for Blob storage: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; SharedAccessSignature=sv=2015-04-05&sr=b&si=tutorial-policy-635959936145100803&sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D ``` And here's an example of the same connection string with URL encoding: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; SharedAccessSignature=sv=2015-04-05&amp;sr=b&amp;si=tutorial-policy-635959936145100803&amp;sig=9aCzs76n0E7y5BpEi2GvsSv433BZa22leDOZXX%2BXXIU%3D ``` Here's an example of a connection string that includes an account SAS for Blob and File storage. Note that endpoints for both services are specified: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; FileEndpoint=https://storagesample.file.core.windows.net; SharedAccessSignature=sv=2015-07-08&sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&spr=https&st=2016-04-12T03%3A24%3A31Z&se=2016-04-13T03%3A29%3A31Z&srt=s&ss=bf&sp=rwl ``` And here's an example of the same connection string with URL encoding: ``` BlobEndpoint=https://storagesample.blob.core.windows.net; FileEndpoint=https://storagesample.file.core.windows.net; SharedAccessSignature=sv=2015-07-08&amp;sig=iCvQmdZngZNW%2F4vw43j6%2BVz6fndHF5LI639QJba4r8o%3D&amp;spr=https&amp;st=2016-04-12T03%3A24%3A31Z&amp;se=2016-04-13T03%3A29%3A31Z&amp;srt=s&amp;ss=bf&amp;sp=rwl ``` You can specify explicit service endpoints in your connection string instead of using the default endpoints. To create a connection string that specifies an explicit endpoint, specify the complete service endpoint for each service, including the protocol specification (HTTPS (recommended) or HTTP), in the following format: ``` DefaultEndpointsProtocol=[http|https]; BlobEndpoint=myBlobEndpoint; FileEndpoint=myFileEndpoint; QueueEndpoint=myQueueEndpoint; TableEndpoint=myTableEndpoint; AccountName=myAccountName; AccountKey=myAccountKey ``` One scenario where you might wish to specify an explicit endpoint is when you've mapped your Blob storage endpoint to a custom domain. In that case, you can specify your custom endpoint for Blob storage in your connection string. You can optionally specify the default endpoints for the other services if your application uses them. 
Here is an example of a connection string that specifies an explicit endpoint for the Blob service: ``` DefaultEndpointsProtocol=https; BlobEndpoint=http://www.mydomain.com; AccountName=storagesample; AccountKey=<account-key> ``` This example specifies explicit endpoints for all services, including a custom domain for the Blob service: ``` DefaultEndpointsProtocol=https; BlobEndpoint=http://www.mydomain.com; FileEndpoint=https://myaccount.file.core.windows.net; QueueEndpoint=https://myaccount.queue.core.windows.net; TableEndpoint=https://myaccount.table.core.windows.net; AccountName=storagesample; AccountKey=<account-key> ``` The endpoint values in a connection string are used to construct the request URIs to the storage services, and dictate the form of any URIs that are returned to your code. If you've mapped a storage endpoint to a custom domain and omit that endpoint from a connection string, then you will not be able to use that connection string to access data in that service from your code. For more information about configuring a custom domain for Azure Storage, see Map a custom domain to an Azure Blob Storage endpoint. Important Service endpoint values in your connection strings must be well-formed URIs, including https:// (recommended) or http://. To create a connection string for a storage service in regions or instances with different endpoint suffixes, such as for Microsoft Azure operated by 21Vianet or Azure Government, use the following connection string format. Indicate whether you want to connect to the storage account through HTTPS (recommended) or HTTP, replace myAccountName with the name of your storage account, replace myAccountKey with your account access key, and replace mySuffix with the URI suffix: ``` DefaultEndpointsProtocol=[http|https]; AccountName=myAccountName; AccountKey=myAccountKey; EndpointSuffix=mySuffix; ``` Here's an example connection string for storage services in Azure operated by 21Vianet: ``` DefaultEndpointsProtocol=https; AccountName=storagesample; AccountKey=<account-key>; EndpointSuffix=core.chinacloudapi.cn; ``` To learn how to authorize access to Azure Storage with the account key or with a connection string, see one of the following articles: Coming soon: Throughout 2024 we will be phasing out GitHub Issues as the feedback mechanism for content and replacing it with a new feedback system. For more information see: https://aka.ms/ContentUserFeedback. Submit and view feedback for" } ]
{ "category": "App Definition and Development", "file_name": "github-terms-of-service.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "GitHub offers hosted virtual machines to run workflows. The virtual machine contains an environment of tools, packages, and settings available for GitHub Actions to use. Runners are the machines that execute jobs in a GitHub Actions workflow. For example, a runner can clone your repository locally, install testing software, and then run commands that evaluate your code. GitHub provides runners that you can use to run your jobs, or you can host your own runners. Each GitHub-hosted runner is a new virtual machine (VM) hosted by GitHub with the runner application and other tools preinstalled, and is available with Ubuntu Linux, Windows, or macOS operating systems. When you use a GitHub-hosted runner, machine maintenance and upgrades are taken care of for you. You can choose one of the standard GitHub-hosted runner options or, if you are on the GitHub Team or GitHub Enterprise Cloud plan, you can provision a runner with more cores, or a runner that's powered by a GPU or ARM processor. These machines are referred to as \"larger runner.\" For more information, see \"About larger runners.\" Using GitHub-hosted runners requires network access with at least 70 kilobits per second upload and download speeds. To use a GitHub-hosted runner, create a job and use runs-on to specify the type of runner that will process the job, such as ubuntu-latest, windows-latest, or macos-latest. For the full list of runner types, see \"About GitHub-hosted runners.\" If you have repo: write access to a repository, you can view a list of the runners available to use in workflows in the repository. For more information, see \"Viewing available runners for a repository.\" When the job begins, GitHub automatically provisions a new VM for that job. All steps in the job execute on the VM, allowing the steps in that job to share information using the runner's filesystem. You can run workflows directly on the VM or in a Docker container. When the job has finished, the VM is automatically decommissioned. The following diagram demonstrates how two jobs in a workflow are executed on two different GitHub-hosted runners. The following example workflow has two jobs, named Run-npm-on-Ubuntu and Run-PSScriptAnalyzer-on-Windows. When this workflow is triggered, GitHub provisions a new virtual machine for each job. 
``` name: Run commands on different operating systems on: push: branches: [ main ] pull_request: branches: [ main ] jobs: Run-npm-on-Ubuntu: name: Run npm on Ubuntu runs-on: ubuntu-latest steps: uses: actions/checkout@v4 uses: actions/setup-node@v4 with: node-version: '14' run: npm help Run-PSScriptAnalyzer-on-Windows: name: Run PSScriptAnalyzer on Windows runs-on: windows-latest steps: uses: actions/checkout@v4 name: Install PSScriptAnalyzer module shell: pwsh run: | Set-PSRepository PSGallery -InstallationPolicy Trusted Install-Module PSScriptAnalyzer -ErrorAction Stop name: Get list of rules shell: pwsh run: | Get-ScriptAnalyzerRule ``` ``` name: Run commands on different operating systems on: push: branches: [ main ] pull_request: branches: [ main ] jobs: Run-npm-on-Ubuntu: name: Run npm on Ubuntu runs-on: ubuntu-latest steps: uses: actions/checkout@v4 uses: actions/setup-node@v4 with: node-version: '14' run: npm help Run-PSScriptAnalyzer-on-Windows: name: Run PSScriptAnalyzer on Windows runs-on: windows-latest steps: uses: actions/checkout@v4 name: Install PSScriptAnalyzer module shell: pwsh run: | Set-PSRepository PSGallery -InstallationPolicy Trusted Install-Module PSScriptAnalyzer -ErrorAction Stop name: Get list of rules shell: pwsh run: | Get-ScriptAnalyzerRule ``` While the job runs, the logs and output can be viewed in the GitHub UI: The GitHub Actions runner application is open source. You can contribute and file issues in the runner repository. If you have repo: write access to a repository, you can view a list of the runners available to the" }, { "data": "On GitHub.com, navigate to the main page of the repository. Under your repository name, click Actions. In the left sidebar, under the \"Management\" section, click Runners. Review the list of available GitHub-hosted runners for the repository. Optionally, to copy a runner's label to use it in a workflow, click to the right of the runner, then click Copy label. Note: Enterprise and organization owners can create runners from this page. To create a new runner, click New runner at the top right of the list of runners to add runners to the repository. For more information, see \"Managing larger runners\" and \"Adding self-hosted runners.\" GitHub-hosted runners are available for use in both public and private repositories. GitHub-hosted Linux runners support hardware acceleration for Android SDK tools, which makes running Android tests much faster and consumes fewer minutes. For more information on Android hardware acceleration, see Configure hardware acceleration for the Android Emulator in the Android Developers documentation. Note The -latest runner images are the latest stable images that GitHub provides, and might not be the most recent version of the operating system available from the operating system vendor. Warning: Beta and Deprecated Images are provided \"as-is\", \"with all faults\" and \"as available\" and are excluded from the service level agreement and warranty. Beta Images may not be covered by customer support. For public repositories, jobs using the workflow labels shown in the table below will run on virtual machines with the associated specifications. The use of these runners on public repositories is free and unlimited. | Virtual Machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Workflow label | Notes | |:|:|:|:-|:|:-| | Linux | 4 | 16 GB | 14 GB | ubuntu-latest, ubuntu-24.04 [Beta], ubuntu-22.04, ubuntu-20.04 | The ubuntu-latest label currently uses the Ubuntu 22.04 runner image. 
| | Windows | 4 | 16 GB | 14 GB | windows-latest, windows-2022, windows-2019 | The windows-latest label currently uses the Windows 2022 runner image. | | macOS | 3 | 14 GB | 14 GB | macos-12 or macos-11 | The macos-11 label has been deprecated and will no longer be available after 28 June 2024. | | macOS | 4 | 14 GB | 14 GB | macos-13 | nan | | macOS | 3 (M1) | 7 GB | 14 GB | macos-latest or macos-14 | The macos-latest label currently uses the macOS 14 runner image. | For private repositories, jobs using the workflow labels shown in the table below will run on virtual machines with the associated specifications. These runners use your GitHub account's allotment of free minutes, and are then charged at the per minute rates. For more information, see \"About billing for GitHub Actions.\" | Virtual Machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Workflow label | Notes | |:|:|:|:-|:|:-| | Linux | 2 | 7 GB | 14 GB | ubuntu-latest, ubuntu-24.04 [Beta], ubuntu-22.04, ubuntu-20.04 | The ubuntu-latest label currently uses the Ubuntu 22.04 runner image. | | Windows | 2 | 7 GB | 14 GB | windows-latest, windows-2022, windows-2019 | The windows-latest label currently uses the Windows 2022 runner image. | | macOS | 3 | 14 GB | 14 GB | macos-12 or macos-11 | The macos-11 label has been deprecated and will no longer be available after 28 June" }, { "data": "| | macOS | 4 | 14 GB | 14 GB | macos-13 | nan | | macOS | 3 (M1) | 7 GB | 14 GB | macos-latest or macos-14 | The macos-latestlabel currently uses the macOS 14 runner image. | Workflow logs list the runner used to run a job. For more information, see \"Viewing workflow run history.\" Customers on GitHub Team and GitHub Enterprise Cloud plans can choose from a range of managed virtual machines that have more resources than the standard GitHub-hosted runners. These machines are referred to as \"larger runner.\" They offer the following advanced features: These larger runners are hosted by GitHub and have the runner application and other tools preinstalled. For more information, see \"About larger runners.\" The software tools included in GitHub-hosted runners are updated weekly. The update process takes several days, and the list of preinstalled software on the main branch is updated after the whole deployment ends. Workflow logs include a link to the preinstalled tools on the exact runner. To find this information in the workflow log, expand the Set up job section. Under that section, expand the Runner Image section. The link following Included Software will describe the preinstalled tools on the runner that ran the workflow. For more information, see \"Viewing workflow run history.\" For the overall list of included tools for each runner operating system, see the Available Images documentation the runner images repository. GitHub-hosted runners include the operating system's default built-in tools, in addition to the packages listed in the above references. For example, Ubuntu and macOS runners include grep, find, and which, among other default tools. You can also view a software bill of materials (SBOM) for each build of the Windows and Ubuntu runner images. For more information, see \"Security hardening for GitHub Actions.\" We recommend using actions to interact with the software installed on runners. This approach has several benefits: If there is a tool that you'd like to request, please open an issue at actions/runner-images. This repository also contains announcements about all major software updates on runners. 
You can install additional software on GitHub-hosted runners. For more information, see \"Customizing GitHub-hosted runners\". GitHub hosts Linux and Windows runners on virtual machines in Microsoft Azure with the GitHub Actions runner application installed. The GitHub-hosted runner application is a fork of the Azure Pipelines Agent. Inbound ICMP packets are blocked for all Azure virtual machines, so ping or traceroute commands might not work. GitHub hosts macOS runners in Azure data centers. For Linux and Windows runners, GitHub uses Dadsv5-series virtual machines. For more information, see Dasv5 and Dadsv5-series in the Microsoft Azure documentation. If GitHub Actions services are temporarily unavailable, then a workflow run is discarded if it has not been queued within 30 minutes of being triggered. For example, if a workflow is triggered and the GitHub Actions services are unavailable for 31 minutes or longer, then the workflow run will not be processed. In addition, if the workflow run has been successfully queued, but has not been processed by a GitHub-hosted runner within 45 minutes, then the queued workflow run is discarded. The Linux and macOS virtual machines both run using passwordless sudo. When you need to execute commands or install tools that require more privileges than the current user, you can use sudo without needing to provide a password. For more information, see the \"Sudo Manual.\" Windows virtual machines are configured to run as administrators with User Account Control (UAC) disabled. For more information, see \"How User Account Control works\" in the Windows" }, { "data": "To get a list of IP address ranges that GitHub Actions uses for GitHub-hosted runners, you can use the GitHub REST API. For more information, see the actions key in the response of the GET /meta endpoint. For more information, see \"REST API endpoints for meta data.\" Windows and Ubuntu runners are hosted in Azure and subsequently have the same IP address ranges as the Azure datacenters. macOS runners are hosted in GitHub's own macOS cloud. Since there are so many IP address ranges for GitHub-hosted runners, we do not recommend that you use these as allowlists for your internal resources. Instead, we recommend you use larger runners with a static IP address range, or self-hosted runners. For more information, see \"About larger runners\" or \"About self-hosted runners.\" The list of GitHub Actions IP addresses returned by the API is updated once a week. A GitHub-hosted runner must establish connections to GitHub-owned endpoints to perform essential communication operations. In addition, your runner may require access to additional networks that you specify or utilize within an action. To ensure proper communications for GitHub-hosted runners between networks within your configuration, ensure that the following communications are allowed. Note Some of the domains listed are configured using CNAME records. Some firewalls might require you to add rules recursively for all CNAME records. Note that the CNAME records might change in the future, and that only the domains listed will remain constant. 
Needed for essential operations: ``` github.com api.github.com *.actions.githubusercontent.com ``` ``` github.com api.github.com *.actions.githubusercontent.com ``` Needed for downloading actions: ``` codeload.github.com ghcr.io *.actions.githubusercontent.com ``` ``` codeload.github.com ghcr.io *.actions.githubusercontent.com ``` Needed for uploading/downloading job summaries, logs, workflow artifacts, and caches: ``` results-receiver.actions.githubusercontent.com *.blob.core.windows.net ``` ``` results-receiver.actions.githubusercontent.com *.blob.core.windows.net ``` Needed for runner version updates: ``` objects.githubusercontent.com objects-origin.githubusercontent.com github-releases.githubusercontent.com github-registry-files.githubusercontent.com ``` ``` objects.githubusercontent.com objects-origin.githubusercontent.com github-releases.githubusercontent.com github-registry-files.githubusercontent.com ``` Needed for retrieving OIDC tokens: ``` *.actions.githubusercontent.com ``` ``` *.actions.githubusercontent.com ``` Needed for downloading or publishing packages or containers to GitHub Packages: ``` *.pkg.github.com ghcr.io ``` ``` *.pkg.github.com ghcr.io ``` Needed for Git Large File Storage ``` github-cloud.githubusercontent.com github-cloud.s3.amazonaws.com ``` ``` github-cloud.githubusercontent.com github-cloud.s3.amazonaws.com ``` GitHub-hosted runners are provisioned with an etc/hosts file that blocks network access to various cryptocurrency mining pools and malicious sites. Hosts such as MiningMadness.com and cpu-pool.com are rerouted to localhost so that they do not present a significant security risk. GitHub executes actions and shell commands in specific directories on the virtual machine. The file paths on virtual machines are not static. Use the environment variables GitHub provides to construct file paths for the home, workspace, and workflow directories. | Directory | Environment variable | Description | |:--|:--|:--| | home | HOME | Contains user-related data. For example, this directory could contain credentials from a login attempt. | | workspace | GITHUB_WORKSPACE | Actions and shell commands execute in this directory. An action can modify the contents of this directory, which subsequent actions can access. | | workflow/event.json | GITHUBEVENTPATH | The POST payload of the webhook event that triggered the workflow. GitHub rewrites this each time an action executes to isolate file content between actions. | For a list of the environment variables GitHub creates for each workflow, see \"Variables.\" Actions that run in Docker containers have static directories under the /github path. However, we strongly recommend using the default environment variables to construct file paths in Docker containers. GitHub reserves the /github path prefix and creates three directories for actions. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
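The runner documentation above points to the GET /meta REST endpoint for the IP address ranges used by GitHub-hosted runners. As a hedged sketch (assuming the third-party requests package and an unauthenticated call, which is subject to the lower API rate limit), the snippet below fetches that endpoint and prints the ranges listed under the actions key.

```python
# Minimal sketch: fetch the IP address ranges that GitHub Actions uses for
# GitHub-hosted runners via the REST "meta" endpoint mentioned above.
import requests

response = requests.get(
    "https://api.github.com/meta",
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
response.raise_for_status()

meta = response.json()

# The "actions" key lists the CIDR ranges used by GitHub-hosted runners.
for cidr in meta.get("actions", []):
    print(cidr)
```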
{ "category": "App Definition and Development", "file_name": "workload-identity#authenticating_to.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "In Running Docker with HTTPS, you learned that, by default, Docker runs via a non-networked Unix socket and TLS must be enabled in order to have the Docker client and the daemon communicate securely over HTTPS. TLS ensures authenticity of the registry endpoint and that traffic to/from registry is encrypted. This article demonstrates how to ensure the traffic between the Docker registry server and the Docker daemon (a client of the registry server) is encrypted and properly authenticated using certificate-based client-server authentication. We show you how to install a Certificate Authority (CA) root certificate for the registry and how to set the client TLS certificate for verification. A custom certificate is configured by creating a directory under /etc/docker/certs.d using the same name as the registry's hostname, such as localhost. All *.crt files are added to this directory as CA roots. Note On Linux any root certificates authorities are merged with the system defaults, including the host's root CA set. If you are running Docker on Windows Server, or Docker Desktop for Windows with Windows containers, the system default certificates are only used when no custom root certificates are configured. The presence of one or more <filename>.key/cert pairs indicates to Docker that there are custom certificates required for access to the desired repository. Note If multiple certificates exist, each is tried in alphabetical order. If there is a 4xx-level or 5xx-level authentication error, Docker continues to try with the next certificate. The following illustrates a configuration with custom certificates: ``` /etc/docker/certs.d/ <-- Certificate directory localhost:5000 <-- Hostname:port client.cert <-- Client certificate client.key <-- Client key ca.crt <-- Root CA that signed the registry certificate, in PEM``` The preceding example is operating-system specific and is for illustrative purposes only. You should consult your operating system documentation for creating an os-provided bundled certificate chain. Use OpenSSL's genrsa and req commands to first generate an RSA key and then use the key to create the certificate. ``` $ openssl genrsa -out client.key 4096 $ openssl req -new -x509 -text -key client.key -out client.cert ``` Note These TLS commands only generate a working set of certificates on Linux. The version of OpenSSL in macOS is incompatible with the type of certificate Docker requires. The Docker daemon interprets .crt files as CA certificates and .cert files as client certificates. If a CA certificate is accidentally given the extension .cert instead of the correct .crt extension, the Docker daemon logs the following error message: ``` Missing key KEYNAME for client certificate CERTNAME. CA certificates should use the extension .crt.``` If the Docker registry is accessed without a port number, do not add the port to the directory name. The following shows the configuration for a registry on default port 443 which is accessed with docker login my-https.registry.example.com: ``` /etc/docker/certs.d/ my-https.registry.example.com <-- Hostname without port client.cert client.key ca.crt``` Edit this page Request changes Copyright 2013-2024 Docker Inc. All rights reserved." } ]
{ "category": "App Definition and Development", "file_name": "resources.md#surfacing-the-image-digest-built-in-a-task.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can host your own runners and customize the environment used to run jobs in your GitHub Actions workflows. A self-hosted runner is a system that you deploy and manage to execute jobs from GitHub Actions on GitHub.com. For more information about GitHub Actions, see \"Understanding GitHub Actions.\" Self-hosted runners offer more control of hardware, operating system, and software tools than GitHub-hosted runners provide. With self-hosted runners, you can create custom hardware configurations that meet your needs with processing power or memory to run larger jobs, install software available on your local network, and choose an operating system not offered by GitHub-hosted runners. Self-hosted runners can be physical, virtual, in a container, on-premises, or in a cloud. You can add self-hosted runners at various levels in the management hierarchy: Your runner machine connects to GitHub using the GitHub Actions self-hosted runner application. The GitHub Actions runner application is open source. You can contribute and file issues in the runner repository. When a new version is released, the runner application automatically updates itself when a job is assigned to the runner, or within a week of release if the runner hasn't been assigned any jobs. A self-hosted runner is automatically removed from GitHub if it has not connected to GitHub Actions for more than 14 days. An ephemeral self-hosted runner is automatically removed from GitHub if it has not connected to GitHub Actions for more than 1 day. For more information about installing and using self-hosted runners, see \"Adding self-hosted runners\" and \"Using self-hosted runners in a workflow.\" GitHub-hosted runners offer a quicker, simpler way to run your workflows, while self-hosted runners are a highly configurable way to run workflows in your own custom environment. GitHub-hosted runners: Self-hosted runners: You can use any machine as a self-hosted runner as long at it meets these requirements: You can automatically increase or decrease the number of self-hosted runners in your environment in response to the webhook events you receive. For more information, see \"Autoscaling with self-hosted runners.\" There are some limits on GitHub Actions usage when using self-hosted runners. These limits are subject to change. If GitHub Actions services are temporarily unavailable, then a workflow run is discarded if it has not been queued within 30 minutes of being triggered. For example, if a workflow is triggered and the GitHub Actions services are unavailable for 31 minutes or longer, then the workflow run will not be processed. The following operating systems are supported for the self-hosted runner application. The following processor architectures are supported for the self-hosted runner application. The self-hosted runner connects to GitHub to receive job assignments and to download new versions of the runner application. The self-hosted runner uses an HTTPS long poll that opens a connection to GitHub for 50 seconds, and if no response is received, it then times out and creates a new long poll. The application must be running on the machine to accept and run GitHub Actions jobs. The connection between self-hosted runners and GitHub is over HTTPS (port" }, { "data": "Since the self-hosted runner opens a connection to GitHub.com, you do not need to allow GitHub to make inbound connections to your self-hosted runner. 
You must ensure that the machine has the appropriate network access with at least 70 kilobits per second upload and download speed to communicate with the GitHub hosts listed below. Some hosts are required for essential runner operations, while other hosts are only required for certain functionality. You can use the REST API to get meta information about GitHub, including the IP addresses of GitHub services. For more information about the domains and IP addresses used, see \"REST API endpoints for meta data.\" Note Some of the domains listed are configured using CNAME records. Some firewalls might require you to add rules recursively for all CNAME records. Note that the CNAME records might change in the future, and that only the domains listed will remain constant. Needed for essential operations: ``` github.com api.github.com *.actions.githubusercontent.com ``` ``` github.com api.github.com *.actions.githubusercontent.com ``` Needed for downloading actions: ``` codeload.github.com ghcr.io *.actions.githubusercontent.com ``` ``` codeload.github.com ghcr.io *.actions.githubusercontent.com ``` Needed for uploading/downloading job summaries, logs, workflow artifacts, and caches: ``` results-receiver.actions.githubusercontent.com *.blob.core.windows.net ``` ``` results-receiver.actions.githubusercontent.com *.blob.core.windows.net ``` Needed for runner version updates: ``` objects.githubusercontent.com objects-origin.githubusercontent.com github-releases.githubusercontent.com github-registry-files.githubusercontent.com ``` ``` objects.githubusercontent.com objects-origin.githubusercontent.com github-releases.githubusercontent.com github-registry-files.githubusercontent.com ``` Needed for retrieving OIDC tokens: ``` *.actions.githubusercontent.com ``` ``` *.actions.githubusercontent.com ``` Needed for downloading or publishing packages or containers to GitHub Packages: ``` *.pkg.github.com ghcr.io ``` ``` *.pkg.github.com ghcr.io ``` Needed for Git Large File Storage ``` github-cloud.githubusercontent.com github-cloud.s3.amazonaws.com ``` ``` github-cloud.githubusercontent.com github-cloud.s3.amazonaws.com ``` In addition, your workflow may require access to other network resources. If you use an IP address allow list for your GitHub organization or enterprise account, you must add your self-hosted runner's IP address to the allow list. For more information, see \"Managing allowed IP addresses for your organization\" or \"Enforcing policies for security settings in your enterprise\" in the GitHub Enterprise Cloud documentation. You can also use self-hosted runners with a proxy server. For more information, see \"Using a proxy server with self-hosted runners.\" For more information about troubleshooting common network connectivity issues, see \"Monitoring and troubleshooting self-hosted runners.\" We recommend that you only use self-hosted runners with private repositories. This is because forks of your public repository can potentially run dangerous code on your self-hosted runner machine by creating a pull request that executes the code in a workflow. This is not an issue with GitHub-hosted runners because each GitHub-hosted runner is always a clean isolated virtual machine, and it is destroyed at the end of the job execution. Untrusted workflows running on your self-hosted runner pose significant security risks for your machine and network environment, especially if your machine persists its environment between jobs. 
Some of the risks include: For more information about security hardening for self-hosted runners, see \"Security hardening for GitHub Actions.\" Organization owners can choose which repositories are allowed to create repository-level self-hosted runners. . For more information, see \"Disabling or limiting GitHub Actions for your organization.\" All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
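Since a self-hosted runner only makes outbound HTTPS connections on port 443 to the hosts listed above, one quick way to sanity-check a prospective runner machine is to probe those endpoints before registering it. The sketch below (standard library only) checks the non-wildcard hosts from the essential and download groups; wildcard domains such as *.actions.githubusercontent.com are skipped because a concrete hostname is only known at runtime.

```python
# Minimal sketch: verify that a prospective self-hosted runner host can open
# TLS connections on port 443 to the non-wildcard GitHub endpoints listed above.
import socket
import ssl

HOSTS = ["github.com", "api.github.com", "codeload.github.com", "ghcr.io"]

context = ssl.create_default_context()

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: reachable, TLS {tls.version()}")
    except OSError as exc:
        print(f"{host}: FAILED ({exc})")
```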
{ "category": "App Definition and Development", "file_name": "workload-identity#migrate_applications_to.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "GitHub offers hosted virtual machines to run workflows. The virtual machine contains an environment of tools, packages, and settings available for GitHub Actions to use. Runners are the machines that execute jobs in a GitHub Actions workflow. For example, a runner can clone your repository locally, install testing software, and then run commands that evaluate your code. GitHub provides runners that you can use to run your jobs, or you can host your own runners. Each GitHub-hosted runner is a new virtual machine (VM) hosted by GitHub with the runner application and other tools preinstalled, and is available with Ubuntu Linux, Windows, or macOS operating systems. When you use a GitHub-hosted runner, machine maintenance and upgrades are taken care of for you. You can choose one of the standard GitHub-hosted runner options or, if you are on the GitHub Team or GitHub Enterprise Cloud plan, you can provision a runner with more cores, or a runner that's powered by a GPU or ARM processor. These machines are referred to as \"larger runner.\" For more information, see \"About larger runners.\" Using GitHub-hosted runners requires network access with at least 70 kilobits per second upload and download speeds. To use a GitHub-hosted runner, create a job and use runs-on to specify the type of runner that will process the job, such as ubuntu-latest, windows-latest, or macos-latest. For the full list of runner types, see \"About GitHub-hosted runners.\" If you have repo: write access to a repository, you can view a list of the runners available to use in workflows in the repository. For more information, see \"Viewing available runners for a repository.\" When the job begins, GitHub automatically provisions a new VM for that job. All steps in the job execute on the VM, allowing the steps in that job to share information using the runner's filesystem. You can run workflows directly on the VM or in a Docker container. When the job has finished, the VM is automatically decommissioned. The following diagram demonstrates how two jobs in a workflow are executed on two different GitHub-hosted runners. The following example workflow has two jobs, named Run-npm-on-Ubuntu and Run-PSScriptAnalyzer-on-Windows. When this workflow is triggered, GitHub provisions a new virtual machine for each job. 
``` name: Run commands on different operating systems on: push: branches: [ main ] pull_request: branches: [ main ] jobs: Run-npm-on-Ubuntu: name: Run npm on Ubuntu runs-on: ubuntu-latest steps: uses: actions/checkout@v4 uses: actions/setup-node@v4 with: node-version: '14' run: npm help Run-PSScriptAnalyzer-on-Windows: name: Run PSScriptAnalyzer on Windows runs-on: windows-latest steps: uses: actions/checkout@v4 name: Install PSScriptAnalyzer module shell: pwsh run: | Set-PSRepository PSGallery -InstallationPolicy Trusted Install-Module PSScriptAnalyzer -ErrorAction Stop name: Get list of rules shell: pwsh run: | Get-ScriptAnalyzerRule ``` ``` name: Run commands on different operating systems on: push: branches: [ main ] pull_request: branches: [ main ] jobs: Run-npm-on-Ubuntu: name: Run npm on Ubuntu runs-on: ubuntu-latest steps: uses: actions/checkout@v4 uses: actions/setup-node@v4 with: node-version: '14' run: npm help Run-PSScriptAnalyzer-on-Windows: name: Run PSScriptAnalyzer on Windows runs-on: windows-latest steps: uses: actions/checkout@v4 name: Install PSScriptAnalyzer module shell: pwsh run: | Set-PSRepository PSGallery -InstallationPolicy Trusted Install-Module PSScriptAnalyzer -ErrorAction Stop name: Get list of rules shell: pwsh run: | Get-ScriptAnalyzerRule ``` While the job runs, the logs and output can be viewed in the GitHub UI: The GitHub Actions runner application is open source. You can contribute and file issues in the runner repository. If you have repo: write access to a repository, you can view a list of the runners available to the" }, { "data": "On GitHub.com, navigate to the main page of the repository. Under your repository name, click Actions. In the left sidebar, under the \"Management\" section, click Runners. Review the list of available GitHub-hosted runners for the repository. Optionally, to copy a runner's label to use it in a workflow, click to the right of the runner, then click Copy label. Note: Enterprise and organization owners can create runners from this page. To create a new runner, click New runner at the top right of the list of runners to add runners to the repository. For more information, see \"Managing larger runners\" and \"Adding self-hosted runners.\" GitHub-hosted runners are available for use in both public and private repositories. GitHub-hosted Linux runners support hardware acceleration for Android SDK tools, which makes running Android tests much faster and consumes fewer minutes. For more information on Android hardware acceleration, see Configure hardware acceleration for the Android Emulator in the Android Developers documentation. Note The -latest runner images are the latest stable images that GitHub provides, and might not be the most recent version of the operating system available from the operating system vendor. Warning: Beta and Deprecated Images are provided \"as-is\", \"with all faults\" and \"as available\" and are excluded from the service level agreement and warranty. Beta Images may not be covered by customer support. For public repositories, jobs using the workflow labels shown in the table below will run on virtual machines with the associated specifications. The use of these runners on public repositories is free and unlimited. | Virtual Machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Workflow label | Notes | |:|:|:|:-|:|:-| | Linux | 4 | 16 GB | 14 GB | ubuntu-latest, ubuntu-24.04 [Beta], ubuntu-22.04, ubuntu-20.04 | The ubuntu-latest label currently uses the Ubuntu 22.04 runner image. 
| | Windows | 4 | 16 GB | 14 GB | windows-latest, windows-2022, windows-2019 | The windows-latest label currently uses the Windows 2022 runner image. | | macOS | 3 | 14 GB | 14 GB | macos-12 or macos-11 | The macos-11 label has been deprecated and will no longer be available after 28 June 2024. | | macOS | 4 | 14 GB | 14 GB | macos-13 | nan | | macOS | 3 (M1) | 7 GB | 14 GB | macos-latest or macos-14 | The macos-latest label currently uses the macOS 14 runner image. | For private repositories, jobs using the workflow labels shown in the table below will run on virtual machines with the associated specifications. These runners use your GitHub account's allotment of free minutes, and are then charged at the per minute rates. For more information, see \"About billing for GitHub Actions.\" | Virtual Machine | Processor (CPU) | Memory (RAM) | Storage (SSD) | Workflow label | Notes | |:|:|:|:-|:|:-| | Linux | 2 | 7 GB | 14 GB | ubuntu-latest, ubuntu-24.04 [Beta], ubuntu-22.04, ubuntu-20.04 | The ubuntu-latest label currently uses the Ubuntu 22.04 runner image. | | Windows | 2 | 7 GB | 14 GB | windows-latest, windows-2022, windows-2019 | The windows-latest label currently uses the Windows 2022 runner image. | | macOS | 3 | 14 GB | 14 GB | macos-12 or macos-11 | The macos-11 label has been deprecated and will no longer be available after 28 June" }, { "data": "| | macOS | 4 | 14 GB | 14 GB | macos-13 | nan | | macOS | 3 (M1) | 7 GB | 14 GB | macos-latest or macos-14 | The macos-latestlabel currently uses the macOS 14 runner image. | Workflow logs list the runner used to run a job. For more information, see \"Viewing workflow run history.\" Customers on GitHub Team and GitHub Enterprise Cloud plans can choose from a range of managed virtual machines that have more resources than the standard GitHub-hosted runners. These machines are referred to as \"larger runner.\" They offer the following advanced features: These larger runners are hosted by GitHub and have the runner application and other tools preinstalled. For more information, see \"About larger runners.\" The software tools included in GitHub-hosted runners are updated weekly. The update process takes several days, and the list of preinstalled software on the main branch is updated after the whole deployment ends. Workflow logs include a link to the preinstalled tools on the exact runner. To find this information in the workflow log, expand the Set up job section. Under that section, expand the Runner Image section. The link following Included Software will describe the preinstalled tools on the runner that ran the workflow. For more information, see \"Viewing workflow run history.\" For the overall list of included tools for each runner operating system, see the Available Images documentation the runner images repository. GitHub-hosted runners include the operating system's default built-in tools, in addition to the packages listed in the above references. For example, Ubuntu and macOS runners include grep, find, and which, among other default tools. You can also view a software bill of materials (SBOM) for each build of the Windows and Ubuntu runner images. For more information, see \"Security hardening for GitHub Actions.\" We recommend using actions to interact with the software installed on runners. This approach has several benefits: If there is a tool that you'd like to request, please open an issue at actions/runner-images. This repository also contains announcements about all major software updates on runners. 
You can install additional software on GitHub-hosted runners. For more information, see \"Customizing GitHub-hosted runners\". GitHub hosts Linux and Windows runners on virtual machines in Microsoft Azure with the GitHub Actions runner application installed. The GitHub-hosted runner application is a fork of the Azure Pipelines Agent. Inbound ICMP packets are blocked for all Azure virtual machines, so ping or traceroute commands might not work. GitHub hosts macOS runners in Azure data centers. For Linux and Windows runners, GitHub uses Dadsv5-series virtual machines. For more information, see Dasv5 and Dadsv5-series in the Microsoft Azure documentation. If GitHub Actions services are temporarily unavailable, then a workflow run is discarded if it has not been queued within 30 minutes of being triggered. For example, if a workflow is triggered and the GitHub Actions services are unavailable for 31 minutes or longer, then the workflow run will not be processed. In addition, if the workflow run has been successfully queued, but has not been processed by a GitHub-hosted runner within 45 minutes, then the queued workflow run is discarded. The Linux and macOS virtual machines both run using passwordless sudo. When you need to execute commands or install tools that require more privileges than the current user, you can use sudo without needing to provide a password. For more information, see the \"Sudo Manual.\" Windows virtual machines are configured to run as administrators with User Account Control (UAC) disabled. For more information, see \"How User Account Control works\" in the Windows" }, { "data": "To get a list of IP address ranges that GitHub Actions uses for GitHub-hosted runners, you can use the GitHub REST API. For more information, see the actions key in the response of the GET /meta endpoint. For more information, see \"REST API endpoints for meta data.\" Windows and Ubuntu runners are hosted in Azure and subsequently have the same IP address ranges as the Azure datacenters. macOS runners are hosted in GitHub's own macOS cloud. Since there are so many IP address ranges for GitHub-hosted runners, we do not recommend that you use these as allowlists for your internal resources. Instead, we recommend you use larger runners with a static IP address range, or self-hosted runners. For more information, see \"About larger runners\" or \"About self-hosted runners.\" The list of GitHub Actions IP addresses returned by the API is updated once a week. A GitHub-hosted runner must establish connections to GitHub-owned endpoints to perform essential communication operations. In addition, your runner may require access to additional networks that you specify or utilize within an action. To ensure proper communications for GitHub-hosted runners between networks within your configuration, ensure that the following communications are allowed. Note Some of the domains listed are configured using CNAME records. Some firewalls might require you to add rules recursively for all CNAME records. Note that the CNAME records might change in the future, and that only the domains listed will remain constant. 
Needed for essential operations: ``` github.com api.github.com *.actions.githubusercontent.com ``` Needed for downloading actions: ``` codeload.github.com ghcr.io *.actions.githubusercontent.com ``` Needed for uploading/downloading job summaries, logs, workflow artifacts, and caches: ``` results-receiver.actions.githubusercontent.com *.blob.core.windows.net ``` Needed for runner version updates: ``` objects.githubusercontent.com objects-origin.githubusercontent.com github-releases.githubusercontent.com github-registry-files.githubusercontent.com ``` Needed for retrieving OIDC tokens: ``` *.actions.githubusercontent.com ``` Needed for downloading or publishing packages or containers to GitHub Packages: ``` *.pkg.github.com ghcr.io ``` Needed for Git Large File Storage: ``` github-cloud.githubusercontent.com github-cloud.s3.amazonaws.com ``` GitHub-hosted runners are provisioned with an etc/hosts file that blocks network access to various cryptocurrency mining pools and malicious sites. Hosts such as MiningMadness.com and cpu-pool.com are rerouted to localhost so that they do not present a significant security risk. GitHub executes actions and shell commands in specific directories on the virtual machine. The file paths on virtual machines are not static. Use the environment variables GitHub provides to construct file paths for the home, workspace, and workflow directories. | Directory | Environment variable | Description | |:--|:--|:--| | home | HOME | Contains user-related data. For example, this directory could contain credentials from a login attempt. | | workspace | GITHUB_WORKSPACE | Actions and shell commands execute in this directory. An action can modify the contents of this directory, which subsequent actions can access. | | workflow/event.json | GITHUB_EVENT_PATH | The POST payload of the webhook event that triggered the workflow. GitHub rewrites this each time an action executes to isolate file content between actions. | For a list of the environment variables GitHub creates for each workflow, see \"Variables.\" Actions that run in Docker containers have static directories under the /github path. However, we strongly recommend using the default environment variables to construct file paths in Docker containers. GitHub reserves the /github path prefix and creates three directories for actions.
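To illustrate the guidance above about building file paths from the provided environment variables, here is a minimal sketch; the workflow name and step contents are assumptions, not part of the documentation above:

```yaml
# Illustrative workflow; the name, trigger, and step contents are assumptions.
name: runner-paths-example
on: push

jobs:
  show-paths:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build paths from the provided variables
        run: |
          # Runner file paths are not static, so derive them from the
          # variables GitHub sets rather than hard-coding locations.
          echo "Home directory:      $HOME"
          echo "Workspace directory: $GITHUB_WORKSPACE"
          echo "Event payload file:  $GITHUB_EVENT_PATH"
          head -c 200 "$GITHUB_EVENT_PATH"
```

Deriving paths from HOME, GITHUB_WORKSPACE, and GITHUB_EVENT_PATH this way avoids hard-coding runner-specific locations that can differ between virtual machines.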
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "KOTS", "subcategory": "Application Definition & Image Build" }
[ { "data": "What's New? For EKS clusters created with Compatibility Matrix, provision S3-compatible object store buckets and AWS RDS Postgres databases using the new object-store and postgres cloud add-ons. Did You Know? For KOTS releases that contain one or more Helm charts, the KOTS HelmChart builder key is required to render the chart templates when building the air gap bundle for the release. Getting Started with Replicated Onboarding workflows, tutorials, and labs to help you get started with Replicated quickly. Vendor Platform Create and manage your account and team. Compatibility Matrix Rapidly create Kubernetes clusters, including OpenShift. Helm Charts Distribute Helm charts with Replicated. Replicated KOTS A kubectl plugin and in-cluster Admin Console that installs applications in customer-controlled environments. Embedded Kubernetes Embed Kubernetes with your application to support installations on VMs or bare metal servers. Insights and Telemetry Get insights on installed instances of your application. Channels and Releases Manage application releases with the vendor platform. Customer Licensing Create, customize, and issue customer licenses. Preflight Checks Define and verify installation environment requirements. Support Bundles Gather information about customer environments for troubleshooting. Developer Tools APIs, CLIs, and an SDK for interacting with the Replicated platform." } ]
{ "category": "App Definition and Development", "file_name": "workload-identity#enable_on_cluster.md", "project_name": "kaniko", "subcategory": "Application Definition & Image Build" }
[ { "data": "This document lists the configuration options for the GitLab .gitlab-ci.yml file. This file is where you define the CI/CD jobs that make up your pipeline. When you are editing your .gitlab-ci.yml file, you can validate it with the CI Lint tool. If you are editing content on this page, follow the instructions for documenting keywords. A GitLab CI/CD pipeline configuration includes: Global keywords that configure pipeline behavior: | Keyword | Description | |:-|:-| | default | Custom default values for job keywords. | | include | Import configuration from other YAML files. | | stages | The names and order of the pipeline stages. | | variables | Define CI/CD variables for all job in the pipeline. | | workflow | Control what types of pipeline run. | Header keywords | Keyword | Description | |:-|:--| | spec | Define specifications for external configuration files. | Jobs configured with job keywords: | Keyword | Description | |:--|:| | after_script | Override a set of commands that are executed after job. | | allow_failure | Allow job to fail. A failed job does not cause the pipeline to fail. | | artifacts | List of files and directories to attach to a job on success. | | before_script | Override a set of commands that are executed before job. | | cache | List of files that should be cached between subsequent runs. | | coverage | Code coverage settings for a given job. | | dast_configuration | Use configuration from DAST profiles on a job level. | | dependencies | Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from. | | environment | Name of an environment to which the job deploys. | | extends | Configuration entries that this job inherits from. | | identity | Authenticate with third party services using identity federation. | | image | Use Docker images. | | inherit | Select which global defaults all jobs inherit. | | interruptible | Defines if a job can be canceled when made redundant by a newer run. | | manual_confirmation | Define a custom confirmation message for a manual job. | | needs | Execute jobs earlier than the stage ordering. | | pages | Upload the result of a job to use with GitLab Pages. | | parallel | How many instances of a job should be run in parallel. | | release | Instructs the runner to generate a release object. | | resource_group | Limit job concurrency. | | retry | When and how many times a job can be auto-retried in case of a failure. | | rules | List of conditions to evaluate and determine selected attributes of a job, and whether or not its created. | | script | Shell script that is executed by a runner. | | secrets | The CI/CD secrets the job needs. | | services | Use Docker services images. | | stage | Defines a job stage. | | tags | List of tags that are used to select a runner. | | timeout | Define a custom job-level timeout that takes precedence over the project-wide setting. | | trigger | Defines a downstream pipeline trigger. | | variables | Define job variables on a job level. | | when | When to run job. | Some keywords are not defined in a job. These keywords control pipeline behavior or import additional pipeline configuration. You can set global defaults for some" }, { "data": "Each default keyword is copied to every job that doesnt already have it defined. If the job already has a keyword defined, that default is not used. Keyword type: Global keyword. 
Possible inputs: These keywords can have custom defaults: Example of default: ``` default: image: ruby:3.0 retry: 2 rspec: script: bundle exec rspec rspec 2.7: image: ruby:2.7 script: bundle exec rspec ``` In this example: Additional details: Use include to include external YAML files in your CI/CD configuration. You can split one long .gitlab-ci.yml file into multiple files to increase readability, or reduce duplication of the same configuration in multiple places. You can also store template files in a central repository and include them in projects. The include files are: The time limit to resolve all files is 30 seconds. Keyword type: Global keyword. Possible inputs: The include subkeys: And optionally: Additional details: Related topics: Use include:component to add a CI/CD component to the pipeline configuration. Keyword type: Global keyword. Possible inputs: The full address of the CI/CD component, formatted as <fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>. Example of include:component: ``` include: component: $CISERVERFQDN/my-org/security-components/secret-detection@1.0 ``` Related topics: Use include:local to include a file that is in the same repository and branch as the configuration file containing the include keyword. Use include:local instead of symbolic links. Keyword type: Global keyword. Possible inputs: A full path relative to the root directory (/): Example of include:local: ``` include: local: '/templates/.gitlab-ci-template.yml' ``` You can also use shorter syntax to define the path: ``` include: '.gitlab-ci-production.yml' ``` Additional details: To include files from another private project on the same GitLab instance, use include:project and include:file. Keyword type: Global keyword. Possible inputs: Example of include:project: ``` include: project: 'my-group/my-project' file: '/templates/.gitlab-ci-template.yml' project: 'my-group/my-subgroup/my-project-2' file: '/templates/.builds.yml' '/templates/.tests.yml' ``` You can also specify a ref: ``` include: project: 'my-group/my-project' ref: main # Git branch file: '/templates/.gitlab-ci-template.yml' project: 'my-group/my-project' ref: v1.0.0 # Git Tag file: '/templates/.gitlab-ci-template.yml' project: 'my-group/my-project' ref: 787123b47f14b552955ca2786bc9542ae66fee5b # Git SHA file: '/templates/.gitlab-ci-template.yml' ``` Additional details: Use include:remote with a full URL to include a file from a different location. Keyword type: Global keyword. Possible inputs: A public URL accessible by an HTTP/HTTPS GET request: Example of include:remote: ``` include: remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml' ``` Additional details: Use include:template to include .gitlab-ci.yml templates. Keyword type: Global keyword. Possible inputs: A CI/CD template: Example of include:template: ``` include: template: Auto-DevOps.gitlab-ci.yml ``` Multiple include:template files: ``` include: template: Android-Fastlane.gitlab-ci.yml template: Auto-DevOps.gitlab-ci.yml ``` Additional details: Use include:inputs to set the values for input parameters when the included configuration uses spec:inputs and is added to the pipeline. Keyword type: Global keyword. Possible inputs: A string, numeric value, or boolean. Example of include:inputs: ``` include: local: 'custom_configuration.yml' inputs: website: \"My website\" ``` In this example: Additional details: Related topics: Use stages to define stages that contain groups of jobs. 
Use stage in a job to configure the job to run in a specific stage. If stages is not defined in the .gitlab-ci.yml file, the default pipeline stages are: The order of the items in stages defines the execution order for jobs: If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage. Keyword type: Global keyword. Example of stages: ``` stages: build test deploy ``` In this example: If any job fails, the pipeline is marked as failed and jobs in later stages do not start. Jobs in the current stage are not stopped and continue to run. Additional details: Related topics: Use workflow to control pipeline" }, { "data": "You can use some predefined CI/CD variables in workflow configuration, but not variables that are only defined when jobs start. Related topics: Use workflow:autocancel:onnew_commit to configure the behavior of the auto-cancel redundant pipelines feature. Possible inputs: Example of workflow:autocancel:onnew_commit: ``` workflow: auto_cancel: onnewcommit: interruptible job1: interruptible: true script: sleep 60 job2: interruptible: false # Default when not defined. script: sleep 60 ``` In this example: Use workflow:autocancel:onjob_failure to configure which jobs should be cancelled as soon as one job fails. Possible inputs: Example of workflow:autocancel:onjob_failure: ``` stages: [stagea, stageb] workflow: auto_cancel: onjobfailure: all job1: stage: stage_a script: sleep 60 job2: stage: stage_a script: sleep 30 exit 1 job3: stage: stage_b script: sleep 30 ``` In this example, if job2 fails, job1 is cancelled if it is still running and job3 does not start. Related topics: You can use name in workflow: to define a name for pipelines. All pipelines are assigned the defined name. Any leading or trailing spaces in the name are removed. Possible inputs: Examples of workflow:name: A simple pipeline name with a predefined variable: ``` workflow: name: 'Pipeline for branch: $CICOMMITBRANCH' ``` A configuration with different pipeline names depending on the pipeline conditions: ``` variables: PROJECT1PIPELINENAME: 'Default pipeline name' # A default is not required. workflow: name: '$PROJECT1PIPELINENAME' rules: if: '$CIPIPELINESOURCE == \"mergerequestevent\"' variables: PROJECT1PIPELINENAME: 'MR pipeline: $CIMERGEREQUESTSOURCEBRANCH_NAME' if: '$CIMERGEREQUEST_LABELS =~ /pipeline:run-in-ruby3/' variables: PROJECT1PIPELINENAME: 'Ruby 3 pipeline' when: always # Other pipelines can run, but use the default name ``` Additional details: The rules keyword in workflow is similar to rules defined in jobs, but controls whether or not a whole pipeline is created. When no rules evaluate to true, the pipeline does not run. Possible inputs: You can use some of the same keywords as job-level rules: Example of workflow:rules: ``` workflow: rules: if: $CICOMMITTITLE =~ /-draft$/ when: never if: $CIPIPELINESOURCE == \"mergerequestevent\" if: $CICOMMITBRANCH == $CIDEFAULTBRANCH ``` In this example, pipelines run if the commit title (first line of the commit message) does not end with -draft and the pipeline is for either: Additional details: Related topics: You can use variables in workflow:rules to define variables for specific pipeline conditions. When the condition matches, the variable is created and can be used by all jobs in the pipeline. If the variable is already defined at the global level, the workflow variable takes precedence and overrides the global variable. Keyword type: Global keyword. 
Possible inputs: Variable name and value pairs: Example of workflow:rules:variables: ``` variables: DEPLOY_VARIABLE: \"default-deploy\" workflow: rules: if: $CICOMMITREFNAME == $CIDEFAULT_BRANCH variables: DEPLOYVARIABLE: \"deploy-production\" # Override globally-defined DEPLOYVARIABLE if: $CICOMMITREF_NAME =~ /feature/ variables: ISAFEATURE: \"true\" # Define a new variable. when: always # Run the pipeline in other cases job1: variables: DEPLOY_VARIABLE: \"job1-default-deploy\" rules: if: $CICOMMITREFNAME == $CIDEFAULT_BRANCH variables: # Override DEPLOY_VARIABLE defined DEPLOY_VARIABLE: \"job1-deploy-production\" # at the job level. when: on_success # Run the job in other cases script: echo \"Run script with $DEPLOY_VARIABLE as an argument\" echo \"Run another script if $ISAFEATURE exists\" job2: script: echo \"Run script with $DEPLOY_VARIABLE as an argument\" echo \"Run another script if $ISAFEATURE exists\" ``` When the branch is the default branch: When the branch is feature: When the branch is something else: Additional details: Use workflow:rules:auto_cancel to configure the behavior of the workflow:autocancel:onnew_commit or the workflow:autocancel:onjob_failure features. Possible inputs: Example of workflow:rules:auto_cancel: ``` workflow: auto_cancel: onnewcommit: interruptible onjobfailure: all rules: if: $CICOMMITREF_PROTECTED == 'true' auto_cancel: onnewcommit: none onjobfailure: none when: always # Run the pipeline in other cases test-job1: script: sleep 10 interruptible: false test-job2: script: sleep 10 interruptible: true ``` In this example, workflow:autocancel:onnew_commit is set to interruptible and workflow:autocancel:onjob_failure is set to all for all jobs by" }, { "data": "But if a pipeline runs for a protected branch, the rule overrides the default with onnewcommit: none and onjobfailure: none. For example, if a pipeline is running for: Some keywords must be defined in a header section of a YAML configuration file. The header must be at the top of the file, separated from the rest of the configuration with . Add a spec section to the header of a YAML file to configure the behavior of a pipeline when a configuration is added to the pipeline with the include keyword. You can use spec:inputs to define input parameters for the CI/CD configuration you intend to add to a pipeline with include. Use include:inputs to define the values to use when the pipeline runs. Use the inputs to customize the behavior of the configuration when included in CI/CD configuration. Use the interpolation format $[[ input.input-id ]] to reference the values outside of the header section. Inputs are evaluated and interpolated when the configuration is fetched during pipeline creation, but before the configuration is merged with the contents of the .gitlab-ci.yml file. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: A hash of strings representing the expected inputs. Example of spec:inputs: ``` spec: inputs: environment: job-stage: scan-website: stage: $[[ inputs.job-stage ]] script: ./scan-website $[[ inputs.environment ]] ``` Additional details: Related topics: Inputs are mandatory when included, unless you set a default value with spec:inputs:default. Use default: '' to have no default value. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: A string representing the default value, or ''. 
Example of spec:inputs:default: ``` spec: inputs: website: user: default: 'test-user' flags: default: '' ``` In this example: Additional details: Use description to give a description to a specific input. The description does not affect the behavior of the input and is only used to help users of the file understand the input. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: A string representing the description. Example of spec:inputs:description: ``` spec: inputs: flags: description: 'Sample description of the `flags` input details.' ``` Inputs can use options to specify a list of allowed values for an input. The limit is 50 options per input. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: An array of input options. Example of spec:inputs:options: ``` spec: inputs: environment: options: development staging production ``` In this example: Additional details: Use spec:inputs:regex to specify a regular expression that the input must match. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: Must be a regular expression. Example of spec:inputs:regex: ``` spec: inputs: version: regex: ^v\\d\\.\\d+(\\.\\d+)$ ``` In this example, inputs of v1.0 or v1.2.3 match the regular expression and pass validation. An input of v1.A.B does not match the regular expression and fails validation. Additional details: By default, inputs expect strings. Use spec:inputs:type to set a different required type for inputs. Keyword type: Header keyword. spec must be declared at the top of the configuration file, in a header section. Possible inputs: Can be one of: Example of spec:inputs:type: ``` spec: inputs: job_name: website: type: string port: type: number available: type: boolean array_input: type: array ``` The following topics explain how to use keywords to configure CI/CD" }, { "data": "Use afterscript to define an array of commands to run last, after a jobs beforescript and script sections complete. after_script commands also run when: Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: An array including: CI/CD variables are supported. Example of after_script: ``` job: script: echo \"An example script section.\" after_script: echo \"Execute this command after the `script` section completes.\" ``` Additional details: Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. As a result, they: If a job times out, the after_script commands do not execute. An issue exists to add support for executing after_script commands for timed-out jobs. Related topics: Use allow_failure to determine whether a pipeline should continue running when a job fails. When jobs are allowed to fail (allow_failure: true) an orange warning () indicates that a job failed. However, the pipeline is successful and the associated commit is marked as passed with no warnings. This same warning is displayed when: The default value for allow_failure is: Keyword type: Job keyword. You can use it only as part of a job. 
Possible inputs: Example of allow_failure: ``` job1: stage: test script: executescript1 job2: stage: test script: executescript2 allow_failure: true job3: stage: deploy script: deploytostaging environment: staging ``` In this example, job1 and job2 run in parallel: Additional details: Use allowfailure:exitcodes to control when a job should be allowed to fail. The job is allow_failure: true for any of the listed exit codes, and allow_failure false for any other exit code. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of allow_failure: ``` testjob1: script: echo \"Run a script that results in exit code 1. This job fails.\" exit 1 allow_failure: exit_codes: 137 testjob2: script: echo \"Run a script that results in exit code 137. This job is allowed to fail.\" exit 137 allow_failure: exit_codes: 137 255 ``` Use artifacts to specify which files to save as job artifacts. Job artifacts are a list of files and directories that are attached to the job when it succeeds, fails, or always. The artifacts are sent to GitLab after the job finishes. They are available for download in the GitLab UI if the size is smaller than the maximum artifact size. By default, jobs in later stages automatically download all the artifacts created by jobs in earlier stages. You can control artifact download behavior in jobs with dependencies. When using the needs keyword, jobs can only download artifacts from the jobs defined in the needs configuration. Job artifacts are only collected for successful jobs by default, and artifacts are restored after caches. Read more about artifacts. Paths are relative to the project directory ($CIPROJECTDIR) and cant directly link outside it. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: CI/CD variables are supported. Example of artifacts:paths: ``` job: artifacts: paths: binaries/ .config ``` This example creates an artifact with .config and all the files in the binaries directory. Additional details: Related topics: Use artifacts:exclude to prevent files from being added to an artifacts archive. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:exclude: ``` artifacts: paths: binaries/ exclude: binaries//*.o ``` This example stores all files in binaries/, but not *.o files located in subdirectories of" }, { "data": "Additional details: Related topics: Use expire_in to specify how long job artifacts are stored before they expire and are deleted. The expire_in setting does not affect: After their expiry, artifacts are deleted hourly by default (using a cron job), and are not accessible anymore. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: The expiry time. If no unit is provided, the time is in seconds. Valid values include: Example of artifacts:expire_in: ``` job: artifacts: expire_in: 1 week ``` Additional details: Use the artifacts:expose_as keyword to expose job artifacts in the merge request UI. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:expose_as: ``` test: script: [\"echo 'test' > file.txt\"] artifacts: expose_as: 'artifact 1' paths: ['file.txt'] ``` Additional details: Related topics: Use the artifacts:name keyword to define the name of the created artifacts archive. You can specify a unique name for every archive. 
If not defined, the default name is artifacts, which becomes artifacts.zip when downloaded. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:name: To create an archive with a name of the current job: ``` job: artifacts: name: \"job1-artifacts-file\" paths: binaries/ ``` Related topics: Use artifacts:public to determine whether the job artifacts should be publicly available. When artifacts:public is true (default), the artifacts in public pipelines are available for download by anonymous, guest, and reporter users. To deny read access to artifacts in public pipelines for anonymous, guest, and reporter users, set artifacts:public to false: Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:public: ``` job: artifacts: public: false ``` Use artifacts:access to determine who can access the job artifacts. You cannot use artifacts:public and artifacts:access in the same job. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of artifacts:access: ``` job: artifacts: access: 'developer' ``` Additional details: Use artifacts:reports to collect artifacts generated by included templates in jobs. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:reports: ``` rspec: stage: test script: bundle install rspec --format RspecJunitFormatter --out rspec.xml artifacts: reports: junit: rspec.xml ``` Additional details: Use artifacts:untracked to add all Git untracked files as artifacts (along with the paths defined in artifacts:paths). artifacts:untracked ignores configuration in the repositorys .gitignore, so matching artifacts in .gitignore are included. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:untracked: Save all Git untracked files: ``` job: artifacts: untracked: true ``` Related topics: Use artifacts:when to upload artifacts on job failure or despite the failure. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of artifacts:when: ``` job: artifacts: when: on_failure ``` Additional details: Use before_script to define an array of commands that should run before each jobs script commands, but after artifacts are restored. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: An array including: CI/CD variables are supported. Example of before_script: ``` job: before_script: echo \"Execute this command before any 'script:' commands.\" script: echo \"This command executes after the job's 'before_script'" }, { "data": "``` Additional details: Related topics: Use cache to specify a list of files and directories to cache between jobs. You can only use paths that are in the local working copy. Caches are: You can disable caching for specific jobs, for example to override: For more information about caches, see Caching in GitLab CI/CD. Use the cache:paths keyword to choose which files or directories to cache. Keyword type: Job keyword. You can use it only as part of a job or in the default section. 
Possible inputs: Example of cache:paths: Cache all files in binaries that end in .apk and the .config file: ``` rspec: script: echo \"This job uses a cache.\" cache: key: binaries-cache paths: binaries/*.apk .config ``` Additional details: Related topics: Use the cache:key keyword to give each cache a unique identifying key. All jobs that use the same cache key use the same cache, including in different pipelines. If not set, the default key is default. All jobs with the cache keyword but no cache:key share the default cache. Must be used with cache: paths, or nothing is cached. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:key: ``` cache-job: script: echo \"This job uses a cache.\" cache: key: binaries-cache-$CICOMMITREF_SLUG paths: binaries/ ``` Additional details: The cache:key value cant contain: Related topics: Use the cache:key:files keyword to generate a new key when one or two specific files change. cache:key:files lets you reuse some caches, and rebuild them less often, which speeds up subsequent pipeline runs. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: CI/CD variables are not supported. Example of cache:key:files: ``` cache-job: script: echo \"This job uses a cache.\" cache: key: files: Gemfile.lock package.json paths: vendor/ruby node_modules ``` This example creates a cache for Ruby and Node.js dependencies. The cache is tied to the current versions of the Gemfile.lock and package.json files. When one of these files changes, a new cache key is computed and a new cache is created. Any future job runs that use the same Gemfile.lock and package.json with cache:key:files use the new cache, instead of rebuilding the dependencies. Additional details: Use cache:key:prefix to combine a prefix with the SHA computed for cache:key:files. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:key:prefix: ``` rspec: script: echo \"This rspec job uses a cache.\" cache: key: files: Gemfile.lock prefix: $CIJOBNAME paths: vendor/ruby ``` For example, adding a prefix of $CIJOBNAME causes the key to look like rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5. If a branch changes Gemfile.lock, that branch has a new SHA checksum for cache:key:files. A new cache key is generated, and a new cache is created for that key. If Gemfile.lock is not found, the prefix is added to default, so the key in the example would be rspec-default. Additional details: Use untracked: true to cache all files that are untracked in your Git repository. Untracked files include files that are: Caching untracked files can create unexpectedly large caches if the job downloads: Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:untracked: ``` rspec: script: test cache: untracked: true ``` Additional details: You can combine cache:untracked with cache:paths to cache all untracked files, as well as files in the configured paths. Use cache:paths to cache any specific files, including tracked files, or files that are outside of the working directory, and use cache: untracked to also cache all untracked" }, { "data": "For example: ``` rspec: script: test cache: untracked: true paths: binaries/ ``` In this example, the job caches all untracked files in the repository, as well as all the files in binaries/. 
If there are untracked files in binaries/, they are covered by both keywords. Use cache:unprotect to set a cache to be shared between protected and unprotected branches. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:unprotect: ``` rspec: script: test cache: unprotect: true ``` Use cache:when to define when to save the cache, based on the status of the job. Must be used with cache: paths, or nothing is cached. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:when: ``` rspec: script: rspec cache: paths: rspec/ when: 'always' ``` This example stores the cache whether or not the job fails or succeeds. To change the upload and download behavior of a cache, use the cache:policy keyword. By default, the job downloads the cache when the job starts, and uploads changes to the cache when the job ends. This caching style is the pull-push policy (default). To set a job to only download the cache when the job starts, but never upload changes when the job finishes, use cache:policy:pull. To set a job to only upload a cache when the job finishes, but never download the cache when the job starts, use cache:policy:push. Use the pull policy when you have many jobs executing in parallel that use the same cache. This policy speeds up job execution and reduces load on the cache server. You can use a job with the push policy to build the cache. Must be used with cache: paths, or nothing is cached. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:policy: ``` prepare-dependencies-job: stage: build cache: key: gems paths: vendor/bundle policy: push script: echo \"This job only downloads dependencies and builds the cache.\" echo \"Downloading dependencies...\" faster-test-job: stage: test cache: key: gems paths: vendor/bundle policy: pull script: echo \"This job script uses the cache, but does not update it.\" echo \"Running tests...\" ``` Related topics: Use cache:fallback_keys to specify a list of keys to try to restore cache from if there is no cache found for the cache:key. Caches are retrieved in the order specified in the fallback_keys section. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of cache:fallback_keys: ``` rspec: script: rspec cache: key: gems-$CICOMMITREF_SLUG paths: rspec/ fallback_keys: gems when: 'always' ``` Use coverage with a custom regular expression to configure how code coverage is extracted from the job output. The coverage is shown in the UI if at least one line in the job output matches the regular expression. To extract the code coverage value from the match, GitLab uses this smaller regular expression: \\d+(?:\\.\\d+)?. Possible inputs: Example of coverage: ``` job1: script: rspec coverage: '/Code coverage: \\d+(?:\\.\\d+)?/' ``` In this example: Additional details: Use the dast_configuration keyword to specify a site profile and scanner profile to be used in a CI/CD configuration. Both profiles must first have been created in the project. The jobs stage must be dast. Keyword type: Job keyword. You can use only as part of a job. 
Possible inputs: One each of siteprofile and" }, { "data": "Example of dast_configuration: ``` stages: build dast include: template: DAST.gitlab-ci.yml dast: dast_configuration: site_profile: \"Example Co\" scanner_profile: \"Quick Passive Test\" ``` In this example, the dast job extends the dast configuration added with the include keyword to select a specific site profile and scanner profile. Additional details: Related topics: Use the dependencies keyword to define a list of specific jobs to fetch artifacts from. The specified jobs must all be in earlier stages. You can also set a job to download no artifacts at all. When dependencies is not defined in a job, all jobs in earlier stages are considered dependent and the job fetches all artifacts from those jobs. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of dependencies: ``` build osx: stage: build script: make build:osx artifacts: paths: binaries/ build linux: stage: build script: make build:linux artifacts: paths: binaries/ test osx: stage: test script: make test:osx dependencies: build osx test linux: stage: test script: make test:linux dependencies: build linux deploy: stage: deploy script: make deploy environment: production ``` In this example, two jobs have artifacts: build osx and build linux. When test osx is executed, the artifacts from build osx are downloaded and extracted in the context of the build. The same thing happens for test linux and artifacts from build linux. The deploy job downloads artifacts from all previous jobs because of the stage precedence. Additional details: Use environment to define the environment that a job deploys to. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: The name of the environment the job deploys to, in one of these formats: Example of environment: ``` deploy to production: stage: deploy script: git push production HEAD:main environment: production ``` Additional details: Set a name for an environment. Common environment names are qa, staging, and production, but you can use any name. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: The name of the environment the job deploys to, in one of these formats: Example of environment:name: ``` deploy to production: stage: deploy script: git push production HEAD:main environment: name: production ``` Set a URL for an environment. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: A single URL, in one of these formats: Example of environment:url: ``` deploy to production: stage: deploy script: git push production HEAD:main environment: name: production url: https://prod.example.com ``` Additional details: Closing (stopping) environments can be achieved with the on_stop keyword defined under environment. It declares a different job that runs to close the environment. Keyword type: Job keyword. You can use it only as part of a job. Additional details: Use the action keyword to specify how the job interacts with the environment. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: One of the following keywords: | Value | Description | |:--|:--| | start | Default value. Indicates that the job starts the environment. The deployment is created after the job starts. | | prepare | Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments. | | stop | Indicates that the job stops an environment. 
Read more about stopping an environment. | | verify | Indicates that the job is only verifying the environment. It does not trigger deployments. Read more about verifying environments. | | access | Indicates that the job is only accessing the environment. It does not trigger" }, { "data": "Read more about accessing environments. | Example of environment:action: ``` stopreviewapp: stage: deploy variables: GIT_STRATEGY: none script: make delete-app when: manual environment: name: review/$CICOMMITREF_SLUG action: stop ``` The autostopin keyword specifies the lifetime of the environment. When an environment expires, GitLab automatically stops it. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: A period of time written in natural language. For example, these are all equivalent: CI/CD variables are supported. Example of environment:autostopin: ``` review_app: script: deploy-review-app environment: name: review/$CICOMMITREF_SLUG autostopin: 1 day ``` When the environment for review_app is created, the environments lifetime is set to 1 day. Every time the review app is deployed, that lifetime is also reset to 1 day. Related topics: Use the kubernetes keyword to configure deployments to a Kubernetes cluster that is associated with your project. Keyword type: Job keyword. You can use it only as part of a job. Example of environment:kubernetes: ``` deploy: stage: deploy script: make deploy-app environment: name: production kubernetes: namespace: production ``` This configuration sets up the deploy job to deploy to the production environment, using the production Kubernetes namespace. Additional details: Related topics: Use the deployment_tier keyword to specify the tier of the deployment environment. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: One of the following: Example of environment:deployment_tier: ``` deploy: script: echo environment: name: customer-portal deployment_tier: production ``` Additional details: Related topics: Use CI/CD variables to dynamically name environments. For example: ``` deploy as review app: stage: deploy script: make deploy environment: name: review/$CICOMMITREF_SLUG url: https://$CIENVIRONMENTSLUG.example.com/ ``` The deploy as review app job is marked as a deployment to dynamically create the review/$CICOMMITREFSLUG environment. $CICOMMITREFSLUG is a CI/CD variable set by the runner. The $CIENVIRONMENTSLUG variable is based on the environment name, but suitable for inclusion in URLs. If the deploy as review app job runs in a branch named pow, this environment would be accessible with a URL like https://review-pow.example.com/. The common use case is to create dynamic environments for branches and use them as review apps. You can see an example that uses review apps at https://gitlab.com/gitlab-examples/review-apps-nginx/. Use extends to reuse configuration sections. Its an alternative to YAML anchors and is a little more flexible and readable. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of extends: ``` .tests: script: rake test stage: test only: refs: branches rspec: extends: .tests script: rake rspec only: variables: $RSPEC ``` In this example, the rspec job uses the configuration from the .tests template job. 
When creating the pipeline, GitLab: The result is this rspec job: ``` rspec: script: rake rspec stage: test only: refs: branches variables: $RSPEC ``` Additional details: Related topics: Use hooks to specify lists of commands to execute on the runner at certain stages of job execution, like before retrieving the Git repository. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Use hooks:pregetsources_script to specify a list of commands to execute on the runner before cloning the Git repository and any submodules. You can use it for example to: Possible inputs: An array including: CI/CD variables are supported. Example of hooks:pregetsources_script: ``` job1: hooks: pregetsources_script: echo 'hello job1 pregetsources_script' script: echo 'hello job1 script' ``` Related topics: This feature is in beta. Use identity to authenticate with third party services using identity federation. Keyword type: Job keyword. You can use it only as part of a job or in the default:" }, { "data": "Possible inputs: An identifier. Supported providers: Example of identity: ``` jobwithworkload_identity: identity: google_cloud script: gcloud compute instances list ``` Related topics: Use id_tokens to create JSON web tokens (JWT) to authenticate with third party services. All JWTs created this way support OIDC authentication. The required aud sub-keyword is used to configure the aud claim for the JWT. Possible inputs: Example of id_tokens: ``` jobwithid_tokens: id_tokens: IDTOKEN1: aud: https://vault.example.com IDTOKEN2: aud: https://gcp.com https://aws.com SIGSTOREIDTOKEN: aud: sigstore script: commandtoauthenticatewithvault $IDTOKEN1 commandtoauthenticatewithaws $IDTOKEN2 commandtoauthenticatewithgcp $IDTOKEN2 ``` Related topics: Use image to specify a Docker image that the job runs in. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: The name of the image, including the registry path if needed, in one of these formats: CI/CD variables are supported. Example of image: ``` default: image: ruby:3.0 rspec: script: bundle exec rspec rspec 2.7: image: registry.example.com/my-group/my-project/ruby:2.7 script: bundle exec rspec ``` In this example, the ruby:3.0 image is the default for all jobs in the pipeline. The rspec 2.7 job does not use the default, because it overrides the default with a job-specific image section. Related topics: The name of the Docker image that the job runs in. Similar to image used by itself. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: The name of the image, including the registry path if needed, in one of these formats: CI/CD variables are supported. Example of image:name: ``` test-job: image: name: \"registry.example.com/my/image:latest\" script: echo \"Hello world\" ``` Related topics: Command or script to execute as the containers entry point. When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option. The syntax is similar to the Dockerfile ENTRYPOINT directive, where each shell token is a separate string in the array. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of image:entrypoint: ``` test-job: image: name: super/sql:experimental entrypoint: [\"\"] script: echo \"Hello world\" ``` Related topics: Use image:docker to pass options to the Docker executor of a GitLab Runner. 
Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: A hash of options for the Docker executor, which can include: Example of image:docker: ``` arm-sql-job: script: echo \"Run sql tests\" image: name: super/sql:experimental docker: platform: arm64/v8 user: dave ``` Additional details: The pull policy that the runner uses to fetch the Docker image. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Examples of image:pull_policy: ``` job1: script: echo \"A single pull policy.\" image: name: ruby:3.0 pull_policy: if-not-present job2: script: echo \"Multiple pull policies.\" image: name: ruby:3.0 pull_policy: [always, if-not-present] ``` Additional details: Related topics: Use inherit to control inheritance of default keywords and variables. Use inherit:default to control the inheritance of default keywords. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of inherit:default: ``` default: retry: 2 image: ruby:3.0 interruptible: true job1: script: echo \"This job does not inherit any default keywords.\" inherit: default: false job2: script: echo \"This job inherits only the two listed default keywords. It does not inherit 'interruptible'.\" inherit: default: retry image ``` Additional details: Use inherit:variables to control the inheritance of global variables keywords. Keyword type: Job keyword. You can use it only as part of a" }, { "data": "Possible inputs: Example of inherit:variables: ``` variables: VARIABLE1: \"This is variable 1\" VARIABLE2: \"This is variable 2\" VARIABLE3: \"This is variable 3\" job1: script: echo \"This job does not inherit any global variables.\" inherit: variables: false job2: script: echo \"This job inherits only the two listed global variables. It does not inherit 'VARIABLE3'.\" inherit: variables: VARIABLE1 VARIABLE2 ``` Additional details: Use interruptible to configure the auto-cancel redundant pipelines feature to cancel a job before it completes if a new pipeline on the same ref starts for a newer commit. If the feature is disabled, the keyword has no effect. The new pipeline must be for a commit with new changes. For example, the Auto-cancel redundant pipelines feature has no effect if you select Run pipeline in the UI to run a pipeline for the same commit. The behavior of the Auto-cancel redundant pipelines feature can be controlled by the workflow:autocancel:onnew_commit setting. Keyword type: Job keyword. You can use it only as part of a job or in the default section. 
Possible inputs: Example of interruptible with the default behavior: ``` workflow: auto_cancel: onnewcommit: conservative # the default behavior stages: stage1 stage2 stage3 step-1: stage: stage1 script: echo \"Can be canceled.\" interruptible: true step-2: stage: stage2 script: echo \"Can not be canceled.\" step-3: stage: stage3 script: echo \"Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible.\" interruptible: true ``` In this example, a new pipeline causes a running pipeline to be: Example of interruptible with the autocancel:onnew_commit:interruptible setting: ``` workflow: auto_cancel: onnewcommit: interruptible stages: stage1 stage2 stage3 step-1: stage: stage1 script: echo \"Can be canceled.\" interruptible: true step-2: stage: stage2 script: echo \"Can not be canceled.\" step-3: stage: stage3 script: echo \"Can be canceled.\" interruptible: true ``` In this example, a new pipeline causes a running pipeline to cancel step-1 and step-3 if they are running or pending. Additional details: Use needs to execute jobs out-of-order. Relationships between jobs that use needs can be visualized as a directed acyclic graph. You can ignore stage ordering and run some jobs without waiting for others to complete. Jobs in multiple stages can run concurrently. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of needs: ``` linux:build: stage: build script: echo \"Building linux...\" mac:build: stage: build script: echo \"Building mac...\" lint: stage: test needs: [] script: echo \"Linting...\" linux:rspec: stage: test needs: [\"linux:build\"] script: echo \"Running rspec on linux...\" mac:rspec: stage: test needs: [\"mac:build\"] script: echo \"Running rspec on mac...\" production: stage: deploy script: echo \"Running production...\" environment: production ``` This example creates four paths of execution: Additional details: When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs configuration. Use artifacts: true (default) or artifacts: false to control when artifacts are downloaded in jobs that use needs. Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job. Possible inputs: Example of needs:artifacts: ``` test-job1: stage: test needs: job: build_job1 artifacts: true test-job2: stage: test needs: job: build_job2 artifacts: false test-job3: needs: job: build_job1 artifacts: true job: build_job2 build_job3 ``` In this example: Additional details: Use needs:project to download artifacts from up to five jobs in other pipelines. The artifacts are downloaded from the latest successful specified job for the specified" }, { "data": "To specify multiple jobs, add each as separate array items under the needs keyword. If there is a pipeline running for the ref, a job with needs:project does not wait for the pipeline to complete. Instead, the artifacts are downloaded from the latest successful run of the specified job. needs:project must be used with job, ref, and artifacts. Keyword type: Job keyword. You can use it only as part of a job. 
Possible inputs: Examples of needs:project: ``` build_job: stage: build script: ls -lhR needs: project: namespace/group/project-name job: build-1 ref: main artifacts: true project: namespace/group/project-name-2 job: build-2 ref: main artifacts: true ``` In this example, build_job downloads the artifacts from the latest successful build-1 and build-2 jobs on the main branches in the group/project-name and group/project-name-2 projects. You can use CI/CD variables in needs:project, for example: ``` build_job: stage: build script: ls -lhR needs: project: $CIPROJECTPATH job: $DEPENDENCYJOBNAME ref: $ARTIFACTSDOWNLOADREF artifacts: true ``` Additional details: Related topics: A child pipeline can download artifacts from a job in its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of needs:pipeline:job: Parent pipeline (.gitlab-ci.yml): ``` create-artifact: stage: build script: echo \"sample artifact\" > artifact.txt artifacts: paths: [artifact.txt] child-pipeline: stage: test trigger: include: child.yml strategy: depend variables: PARENTPIPELINEID: $CIPIPELINEID ``` Child pipeline (child.yml): ``` use-artifact: script: cat artifact.txt needs: pipeline: $PARENTPIPELINEID job: create-artifact ``` In this example, the create-artifact job in the parent pipeline creates some artifacts. The child-pipeline job triggers a child pipeline, and passes the CIPIPELINEID variable to the child pipeline as a new PARENTPIPELINEID variable. The child pipeline can use that variable in needs:pipeline to download artifacts from the parent pipeline. Additional details: To need a job that sometimes does not exist in the pipeline, add optional: true to the needs configuration. If not defined, optional: false is the default. Jobs that use rules, only, or except and that are added with include might not always be added to a pipeline. GitLab checks the needs relationships before starting a pipeline: Keyword type: Job keyword. You can use it only as part of a job. Example of needs:optional: ``` build-job: stage: build test-job1: stage: test test-job2: stage: test rules: if: $CICOMMITBRANCH == $CIDEFAULTBRANCH deploy-job: stage: deploy needs: job: test-job2 optional: true job: test-job1 environment: production review-job: stage: deploy needs: job: test-job2 optional: true environment: review ``` In this example: You can mirror the pipeline status from an upstream pipeline to a job by using the needs:pipeline keyword. The latest pipeline status from the default branch is replicated to the job. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of needs:pipeline: ``` upstream_status: stage: test needs: pipeline: other/project ``` Additional details: Jobs can use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. Use needs:parallel:matrix to execute jobs out-of-order depending on parallelized jobs. Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job. 
Possible inputs: An array of hashes of variables: Example of needs:parallel:matrix: ``` linux:build: stage: build script: echo \"Building linux...\" parallel: matrix: PROVIDER: aws STACK: monitoring app1 app2 linux:rspec: stage: test needs: job: linux:build parallel: matrix: PROVIDER: aws STACK: app1 script: echo \"Running rspec on linux...\" ``` The above example generates the following jobs: ``` linux:build: [aws, monitoring] linux:build: [aws, app1] linux:build: [aws, app2] linux:rspec ``` The linux:rspec job runs as soon as the linux:build: [aws, app1] job" }, { "data": "Related topics: Additional details: The order of the matrix variables in needs:parallel:matrix must match the order of the matrix variables in the needed job. For example, reversing the order of the variables in the linux:rspec job in the earlier example above would be invalid: ``` linux:rspec: stage: test needs: job: linux:build parallel: matrix: STACK: app1 # The variable order does not match `linux:build` and is invalid. PROVIDER: aws script: echo \"Running rspec on linux...\" ``` Use pages to define a GitLab Pages job that uploads static content to GitLab. The content is then published as a website. You must: Keyword type: Job name. Example of pages: ``` pages: stage: deploy script: mv my-html-content public artifacts: paths: public rules: if: $CICOMMITBRANCH == $CIDEFAULTBRANCH environment: production ``` This example renames the my-html-content/ directory to public/. This directory is exported as an artifact and published with GitLab Pages. Use publish to configure the content directory of a pages job. Keyword type: Job keyword. You can use it only as part of a pages job. Possible inputs: A path to a directory containing the Pages content. Example of publish: ``` pages: stage: deploy script: npx @11ty/eleventy --input=path/to/eleventy/root --output=dist artifacts: paths: dist publish: dist rules: if: $CICOMMITBRANCH == $CIDEFAULTBRANCH environment: production ``` This example uses Eleventy to generate a static website and output the generated HTML files into a the dist/ directory. This directory is exported as an artifact and published with GitLab Pages. Use pages.path_prefix to configure a path prefix for multiple deployments of GitLab Pages. Keyword type: Job keyword. You can use it only as part of a pages job. Possible inputs: A string, a CI/CD variables, or a combination of both. The given value is converted to lowercase, shortened to 63 bytes, and everything except alphanumeric characters is replaced with a hyphen. Leading and trailing hyphens are not permitted. Example of pages.path_prefix: ``` pages: stage: deploy script: echo \"Pages accessible through ${CIPAGESURL}/${CICOMMITBRANCH}\" pages: pathprefix: \"$CICOMMIT_BRANCH\" artifacts: paths: public ``` In this example, a different pages deployment is created for each branch. Use parallel to run a job multiple times in parallel in a single pipeline. Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently. Parallel jobs are named sequentially from jobname 1/N to jobname N/N. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of parallel: ``` test: script: rspec parallel: 5 ``` This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5. Additional details: Related topics: Use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. 
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: An array of hashes of variables: Example of parallel:matrix: ``` deploystacks: stage: deploy script: bin/deploy parallel: matrix: PROVIDER: aws STACK: monitoring app1 app2 PROVIDER: ovh STACK: [monitoring, backup, app] PROVIDER: [gcp, vultr] STACK: [data, processing] environment: $PROVIDER/$STACK ``` The example generates 10 parallel deploystacks jobs, each with different values for PROVIDER and STACK: ``` deploystacks: [aws, monitoring] deploystacks: [aws, app1] deploystacks: [aws, app2] deploystacks: [ovh, monitoring] deploystacks: [ovh, backup] deploystacks: [ovh, app] deploystacks: [gcp, data] deploystacks: [gcp, processing] deploystacks: [vultr, data] deploystacks: [vultr, processing] ``` Additional details: You cannot create multiple matrix configurations with the same variable values but different variable" }, { "data": "Job names are generated from the variable values, not the variable names, so matrix entries with identical values generate identical job names that overwrite each other. For example, this test configuration would try to create two series of identical jobs, but the OS2 versions overwrite the OS versions: ``` test: parallel: matrix: OS: [ubuntu] PROVIDER: [aws, gcp] OS2: [ubuntu] PROVIDER: [aws, gcp] ``` Related topics: Use release to create a release. The release job must have access to the release-cli, which must be in the $PATH. If you use the Docker executor, you can use this image from the GitLab container registry: registry.gitlab.com/gitlab-org/release-cli:latest If you use the Shell executor or similar, install release-cli on the server where the runner is registered. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: The release subkeys: Example of release keyword: ``` release_job: stage: release image: registry.gitlab.com/gitlab-org/release-cli:latest rules: if: $CICOMMITTAG # Run this job when a tag is created manually script: echo \"Running the release job.\" release: tagname: $CICOMMIT_TAG name: 'Release $CICOMMITTAG' description: 'Release created using the release-cli.' ``` This example creates a release: Additional details: All release jobs, except trigger jobs, must include the script keyword. A release job can use the output from script commands. If you dont need the script, you can use a placeholder: ``` script: echo \"release job\" ``` An issue exists to remove this requirement. Related topics: Required. The Git tag for the release. If the tag does not exist in the project yet, it is created at the same time as the release. New tags use the SHA associated with the pipeline. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: CI/CD variables are supported. Example of release:tag_name: To create a release when a new tag is added to the project: ``` job: script: echo \"Running the release job for the new tag.\" release: tagname: $CICOMMIT_TAG description: 'Release description' rules: if: $CICOMMITTAG ``` To create a release and a new tag at the same time, your rules should not configure the job to run only for new tags. 
A semantic versioning example: ``` job: script: echo \"Running the release job and creating a new tag.\" release: tagname: ${MAJOR}${MINOR}_${REVISION} description: 'Release description' rules: if: $CIPIPELINESOURCE == \"schedule\" ``` If the tag does not exist, the newly created tag is annotated with the message specified by tag_message. If omitted, a lightweight tag is created. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of release:tag_message: ``` release_job: stage: release release: tagname: $CICOMMIT_TAG description: 'Release description' tag_message: 'Annotated tag message' ``` The release name. If omitted, it is populated with the value of release: tag_name. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of release:name: ``` release_job: stage: release release: name: 'Release $CICOMMITTAG' ``` The long description of the release. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of release:description: ``` job: release: tagname: ${MAJOR}${MINOR}_${REVISION} description: './path/to/CHANGELOG.md' ``` Additional details: The ref for the release, if the release: tag_name doesnt exist yet. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: The title of each milestone the release is associated with. The date and time when the release is ready. Possible inputs: Example of release:released_at: ``` released_at: '2021-03-15T08:00:00Z' ``` Additional details: Use release:assets:links to include asset links in the release. Requires release-cli version v0.4.0 or later. Example of release:assets:links: ``` assets: links: name: 'asset1' url: 'https://example.com/assets/1' name: 'asset2' url:" }, { "data": "filepath: '/pretty/url/1' # optional link_type: 'other' # optional ``` Use resource_group to create a resource group that ensures a job is mutually exclusive across different pipelines for the same project. For example, if multiple jobs that belong to the same resource group are queued simultaneously, only one of the jobs starts. The other jobs wait until the resource_group is free. Resource groups behave similar to semaphores in other programming languages. You can choose a process mode to strategically control the job concurrency for your deployment preferences. The default process mode is unordered. To change the process mode of a resource group, use the API to send a request to edit an existing resource group. You can define multiple resource groups per environment. For example, when deploying to physical devices, you might have multiple physical devices. Each device can be deployed to, but only one deployment can occur per device at any given time. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of resource_group: ``` deploy-to-production: script: deploy resource_group: production ``` In this example, two deploy-to-production jobs in two separate pipelines can never run at the same time. As a result, you can ensure that concurrent deployments never happen to the production environment. Related topics: Use retry to configure how many times a job is retried if it fails. If not defined, defaults to 0 and jobs do not retry. When a job fails, the job is processed up to two more times, until it succeeds or reaches the maximum number of retries. By default, all failure types cause the job to be retried. Use retry:when or retry:exit_codes to select which failures to retry on. 
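Because retry is also accepted in the default section, a single pipeline-wide retry policy can be shared by every job. A minimal sketch, assuming two ordinary jobs and retrying only on runner system failures:
```
default:
  retry:
    max: 2
    when: runner_system_failure

unit-tests:
  script: bundle exec rspec

lint:
  script: bundle exec rubocop
```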
Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of retry: ``` test: script: rspec retry: 2 test_advanced: script: echo \"Run a script that results in exit code 137.\" exit 137 retry: max: 2 when: runnersystemfailure exit_codes: 137 ``` test_advanced will be retried up to 2 times if the exit code is 137 or if it had a runner system failure. Use retry:when with retry:max to retry jobs for only specific failure cases. retry:max is the maximum number of retries, like retry, and can be 0, 1, or 2. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of retry:when (single failure type): ``` test: script: rspec retry: max: 2 when: runnersystemfailure ``` If there is a failure other than a runner system failure, the job is not retried. Example of retry:when (array of failure types): ``` test: script: rspec retry: max: 2 when: runnersystemfailure stuckortimeout_failure ``` Use retry:exit_codes with retry:max to retry jobs for only specific failure cases. retry:max is the maximum number of retries, like retry, and can be 0, 1, or 2. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of retry:exit_codes: ``` testjob1: script: echo \"Run a script that results in exit code 1. This job isn't retried.\" exit 1 retry: max: 2 exit_codes: 137 testjob2: script: echo \"Run a script that results in exit code 137. This job will be retried.\" exit 137 retry: max: 1 exit_codes: 255 137 ``` Related topics: You can specify the number of retry attempts for certain stages of job execution using variables. Use rules to include or exclude jobs in" }, { "data": "Rules are evaluated when the pipeline is created, and evaluated in order until the first match. When a match is found, the job is either included or excluded from the pipeline, depending on the configuration. rules accepts an array of rules. Each rules must have at least one of: Rules can also optionally be combined with: You can combine multiple keywords together for complex rules. The job is added to the pipeline: The job is not added to the pipeline: For additional examples, see Specify when jobs run with rules. Use rules:if clauses to specify when to add a job to a pipeline: if clauses are evaluated based on the values of CI/CD variables or predefined CI/CD variables, with some exceptions. Keyword type: Job-specific and pipeline-specific. You can use it as part of a job to configure the job behavior, or with workflow to configure the pipeline behavior. Possible inputs: Example of rules:if: ``` job: script: echo \"Hello, Rules!\" rules: if: $CIMERGEREQUESTSOURCEBRANCHNAME =~ /^feature/ && $CIMERGEREQUESTTARGETBRANCHNAME != $CIDEFAULTBRANCH when: never if: $CIMERGEREQUESTSOURCEBRANCH_NAME =~ /^feature/ when: manual allow_failure: true if: $CIMERGEREQUESTSOURCEBRANCH_NAME ``` Additional details: Related topics: Use rules:changes to specify when to add a job to a pipeline by checking for changes to specific files. For new branch pipelines or when there is no Git push event, rules: changes always evaluates to true and the job always runs. Pipelines like tag pipelines, scheduled pipelines, and manual pipelines, all do not have a Git push event associated with them. To cover these cases, use rules: changes: compare_to to specify the branch to compare against the pipeline ref. 
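As an illustration of that case, a scheduled pipeline could compare against a fixed ref. This is only a sketch — the job name, file paths, and the main branch ref are assumptions:
```
build-docs:
  script: make docs
  rules:
    - if: $CI_PIPELINE_SOURCE == \"schedule\"
      changes:
        paths:
          - docs/**/*
        compare_to: 'refs/heads/main'
```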
If you do not use compare_to, you should use rules: changes only with branch pipelines or merge request pipelines, though rules: changes still evaluates to true when creating a new branch. With: Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: An array including any number of: Example of rules:changes: ``` docker build: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" changes: Dockerfile when: manual allow_failure: true ``` Additional details: Related topics: Use rules:changes to specify that a job only be added to a pipeline when specific files are changed, and use rules:changes:paths to specify the files. rules:changes:paths is the same as using rules:changes without any subkeys. All additional details and related topics are the same. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of rules:changes:paths: ``` docker-build-1: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" changes: Dockerfile docker-build-2: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" changes: paths: Dockerfile ``` In this example, both jobs have the same behavior. Use rules:changes:compare_to to specify which ref to compare against for changes to the files listed under rules:changes:paths. Keyword type: Job keyword. You can use it only as part of a job, and it must be combined with rules:changes:paths. Possible inputs: Example of rules:changes:compare_to: ``` docker build: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" changes: paths: Dockerfile compare_to: 'refs/heads/branch1' ``` In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event. Related topics: Use exists to run a job when certain files exist in the repository. Keyword type: Job keyword. You can use it as part of a job or an include. Possible inputs: Example of rules:exists: ``` job: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: exists: Dockerfile ``` job runs if a Dockerfile exists anywhere in the" }, { "data": "Additional details: exists resolves to true if any of the listed files are found (an OR operation). rules:exists:paths is the same as using rules:exists without any subkeys. All additional details are the same. Keyword type: Job keyword. You can use it as part of a job or an include. Possible inputs: Example of rules:exists:paths: ``` docker-build-1: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" exists: Dockerfile docker-build-2: script: docker build -t my-image:$CICOMMITREF_SLUG . rules: if: $CIPIPELINESOURCE == \"mergerequestevent\" exists: paths: Dockerfile ``` In this example, both jobs have the same behavior. Use rules:exists:project to specify the location in which to search for the files listed under rules:exists:paths. Must be used with rules:exists:paths. Keyword type: Job keyword. You can use it as part of a job or an include, and it must be combined with rules:exists:paths. Possible inputs: Example of rules:exists:project: ``` docker build: script: docker build -t my-image:$CICOMMITREF_SLUG . 
rules: exists: paths: Dockerfile project: my-group/my-project ref: v1.0.0 ``` In this example, the docker build job is only included when the Dockerfile exists in the project my-group/my-project on the commit tagged with v1.0.0. Use rules:when alone or as part of another rule to control conditions for adding a job to a pipeline. rules:when is similar to when, but with slightly different input options. If a rules:when rule is not combined with if, changes, or exists, it always matches if reached when evaluating a jobs rules. Keyword type: Job-specific. You can use it only as part of a job. Possible inputs: Example of rules:when: ``` job1: rules: if: $CICOMMITREFNAME == $CIDEFAULT_BRANCH if: $CICOMMITREF_NAME =~ /feature/ when: delayed when: manual script: echo ``` In this example, job1 is added to pipelines: Use allow_failure: true in rules to allow a job to fail without stopping the pipeline. You can also use allow_failure: true with a manual job. The pipeline continues running without waiting for the result of the manual job. allow_failure: false combined with when: manual in rules causes the pipeline to wait for the manual job to run before continuing. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of rules:allow_failure: ``` job: script: echo \"Hello, Rules!\" rules: if: $CIMERGEREQUESTTARGETBRANCHNAME == $CIDEFAULT_BRANCH when: manual allow_failure: true ``` If the rule matches, then the job is a manual job with allow_failure: true. Additional details: Use needs in rules to update a jobs needs for specific conditions. When a condition matches a rule, the jobs needs configuration is completely replaced with the needs in the rule. Keyword type: Job-specific. You can use it only as part of a job. Possible inputs: Example of rules:needs: ``` build-dev: stage: build rules: if: $CICOMMITBRANCH != $CIDEFAULTBRANCH script: echo \"Feature branch, so building dev version...\" build-prod: stage: build rules: if: $CICOMMITBRANCH == $CIDEFAULTBRANCH script: echo \"Default branch, so building prod version...\" tests: stage: test rules: if: $CICOMMITBRANCH != $CIDEFAULTBRANCH needs: ['build-dev'] if: $CICOMMITBRANCH == $CIDEFAULTBRANCH needs: ['build-prod'] script: echo \"Running dev specs by default, or prod specs when default branch...\" ``` In this example: Additional details: Use variables in rules to define variables for specific conditions. Keyword type: Job-specific. You can use it only as part of a job. Possible inputs: Example of rules:variables: ``` job: variables: DEPLOY_VARIABLE: \"default-deploy\" rules: if: $CICOMMITREFNAME == $CIDEFAULT_BRANCH variables: # Override DEPLOY_VARIABLE defined DEPLOY_VARIABLE: \"deploy-production\" # at the job level. if: $CICOMMITREF_NAME =~ /feature/ variables: ISAFEATURE: \"true\" # Define a new variable. script: echo \"Run script with $DEPLOY_VARIABLE as an argument\" echo \"Run another script if $ISAFEATURE exists\" ``` Use interruptible in rules to update a jobs interruptible value for specific" }, { "data": "Keyword type: Job-specific. You can use it only as part of a job. Possible inputs: Example of rules:interruptible: ``` job: script: echo \"Hello, Rules!\" interruptible: true rules: if: $CICOMMITREFNAME == $CIDEFAULT_BRANCH interruptible: false # Override interruptible defined at the job level. when: on_success ``` Additional details: Use script to specify commands for the runner to execute. All jobs except trigger jobs require a script keyword. Keyword type: Job keyword. 
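Longer shell snippets are often easier to read as YAML block scalars inside script. A small sketch with a hypothetical job name — the standard examples for this keyword follow below:
```
job3:
  script:
    - |
      if [ \"$CI_COMMIT_BRANCH\" = \"main\" ]; then
        echo \"Running against the default branch\"
      fi
    - echo \"Done\"
```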
You can use it only as part of a job. Possible inputs: An array including: CI/CD variables are supported. Example of script: ``` job1: script: \"bundle exec rspec\" job2: script: uname -a bundle exec rspec ``` Additional details: Related topics: Use secrets to specify CI/CD secrets to: Use secrets:vault to specify secrets provided by a HashiCorp Vault. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of secrets:vault: To specify all details explicitly and use the KV-V2 secrets engine: ``` job: secrets: DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable vault: # Translates to secret: `ops/data/production/db`, field: `password` engine: name: kv-v2 path: ops path: production/db field: password ``` You can shorten this syntax. With the short syntax, engine:name and engine:path both default to kv-v2: ``` job: secrets: DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable vault: production/db/password # Translates to secret: `kv-v2/data/production/db`, field: `password` ``` To specify a custom secrets engine path in the short syntax, add a suffix that starts with @: ``` job: secrets: DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable vault: production/db/password@ops # Translates to secret: `ops/data/production/db`, field: `password` ``` Use secrets:gcpsecretmanager to specify secrets provided by GCP Secret Manager. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of secrets:gcpsecretmanager: ``` job: secrets: DATABASE_PASSWORD: gcpsecretmanager: name: 'test' version: 2 ``` Related topics: Use secrets:azurekeyvault to specify secrets provided by a Azure Key Vault. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of secrets:azurekeyvault: ``` job: secrets: DATABASE_PASSWORD: azurekeyvault: name: 'test' version: 'test' ``` Related topics: Use secrets:file to configure the secret to be stored as either a file or variable type CI/CD variable By default, the secret is passed to the job as a file type CI/CD variable. The value of the secret is stored in the file and the variable contains the path to the file. If your software cant use file type CI/CD variables, set file: false to store the secret value directly in the variable. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of secrets:file: ``` job: secrets: DATABASE_PASSWORD: vault: production/db/password@ops file: false ``` Additional details: Use secrets:token to explicitly select a token to use when authenticating with Vault by referencing the tokens CI/CD variable. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of secrets:token: ``` job: id_tokens: AWS_TOKEN: aud: https://aws.example.com VAULT_TOKEN: aud: https://vault.example.com secrets: DB_PASSWORD: vault: gitlab/production/db token: $VAULT_TOKEN ``` Additional details: Use services to specify any additional Docker images that your scripts require to run successfully. The services image is linked to the image specified in the image keyword. Keyword type: Job keyword. You can use it only as part of a job or in the default section. 
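A very common pattern is to configure the service container through job variables — for example, a PostgreSQL service picking up POSTGRES_* settings. The image tags, credentials, and job name below are illustrative only, and the canonical example for this keyword follows:
```
test:
  image: ruby:3.2
  services:
    - postgres:15
  variables:
    POSTGRES_DB: test_db
    POSTGRES_PASSWORD: example-password
  script:
    - bundle exec rspec
```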
Possible inputs: The name of the services image, including the registry path if needed, in one of these formats: CI/CD variables are supported, but not for" }, { "data": "Example of services: ``` default: image: name: ruby:2.6 entrypoint: [\"/bin/bash\"] services: name: my-postgres:11.7 alias: db-postgres entrypoint: [\"/usr/local/bin/db-postgres\"] command: [\"start\"] before_script: bundle install test: script: bundle exec rake spec ``` In this example, GitLab launches two containers for the job: Related topics: Use services:docker to pass options to the Docker executor of a GitLab Runner. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: A hash of options for the Docker executor, which can include: Example of services:docker: ``` arm-sql-job: script: echo \"Run sql tests in service container\" image: ruby:2.6 services: name: super/sql:experimental docker: platform: arm64/v8 user: dave ``` Additional details: The pull policy that the runner uses to fetch the Docker image. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Examples of services:pull_policy: ``` job1: script: echo \"A single pull policy.\" services: name: postgres:11.6 pull_policy: if-not-present job2: script: echo \"Multiple pull policies.\" services: name: postgres:11.6 pull_policy: [always, if-not-present] ``` Additional details: Related topics: Use stage to define which stage a job runs in. Jobs in the same stage can execute in parallel (see Additional details). If stage is not defined, the job uses the test stage by default. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: A string, which can be a: Example of stage: ``` stages: build test deploy job1: stage: build script: echo \"This job compiles code.\" job2: stage: test script: echo \"This job tests the compiled code. It runs when the build stage completes.\" job3: script: echo \"This job also runs in the test stage\". job4: stage: deploy script: echo \"This job deploys the code. It runs when the test stage completes.\" environment: production ``` Additional details: Use the .pre stage to make a job run at the start of a pipeline. .pre is always the first stage in a pipeline. User-defined stages execute after .pre. You do not have to define .pre in stages. If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage. Keyword type: You can only use it with a jobs stage keyword. Example of stage: .pre: ``` stages: build test job1: stage: build script: echo \"This job runs in the build stage.\" first-job: stage: .pre script: echo \"This job runs in the .pre stage, before all other stages.\" job2: stage: test script: echo \"This job runs in the test stage.\" ``` Use the .post stage to make a job run at the end of a pipeline. .post is always the last stage in a pipeline. User-defined stages execute before .post. You do not have to define .post in stages. If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage. Keyword type: You can only use it with a jobs stage keyword. 
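A .post job is often paired with when to act on the overall pipeline result, for example to send an alert only when something failed. The script path and message below are placeholders; the canonical example follows:
```
notify-failure:
  stage: .post
  when: on_failure
  script:
    - ./scripts/send-alert.sh \"Pipeline $CI_PIPELINE_ID failed\"
```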
Example of stage: .post: ``` stages: build test job1: stage: build script: echo \"This job runs in the build stage.\" last-job: stage: .post script: echo \"This job runs in the .post stage, after all other stages.\" job2: stage: test script: echo \"This job runs in the test stage.\" ``` Additional details: Use tags to select a specific runner from the list of all runners that are available for the project. When you register a runner, you can specify the runners tags, for example ruby, postgres, or" }, { "data": "To pick up and run a job, a runner must be assigned every tag listed in the job. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: Example of tags: ``` job: tags: ruby postgres ``` In this example, only runners with both the ruby and postgres tags can run the job. Additional details: Related topics: Use timeout to configure a timeout for a specific job. If the job runs for longer than the timeout, the job fails. The job-level timeout can be longer than the project-level timeout, but cant be longer than the runners timeout. Keyword type: Job keyword. You can use it only as part of a job or in the default section. Possible inputs: A period of time written in natural language. For example, these are all equivalent: Example of timeout: ``` build: script: build.sh timeout: 3 hours 30 minutes test: script: rspec timeout: 3h 30m ``` Use trigger to declare that a job is a trigger job which starts a downstream pipeline that is either: Trigger jobs can use only a limited set of GitLab CI/CD configuration keywords. The keywords available for use in trigger jobs are: Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of trigger: ``` trigger-multi-project-pipeline: trigger: my-group/my-project ``` Additional details: Related topics: Use trigger:include to declare that a job is a trigger job which starts a child pipeline. Use trigger:include:artifact to trigger a dynamic child pipeline. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of trigger:include: ``` trigger-child-pipeline: trigger: include: path/to/child-pipeline.gitlab-ci.yml ``` Related topics: Use trigger:project to declare that a job is a trigger job which starts a multi-project pipeline. By default, the multi-project pipeline triggers for the default branch. Use trigger:branch to specify a different branch. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of trigger:project: ``` trigger-multi-project-pipeline: trigger: project: my-group/my-project ``` Example of trigger:project for a different branch: ``` trigger-multi-project-pipeline: trigger: project: my-group/my-project branch: development ``` Related topics: Use trigger:strategy to force the trigger job to wait for the downstream pipeline to complete before it is marked as success. This behavior is different than the default, which is for the trigger job to be marked as success as soon as the downstream pipeline is created. This setting makes your pipeline execution linear rather than parallel. Example of trigger:strategy: ``` trigger_job: trigger: include: path/to/child-pipeline.yml strategy: depend ``` In this example, jobs from subsequent stages wait for the triggered pipeline to successfully complete before starting. Additional details: Use trigger:forward to specify what to forward to the downstream pipeline. 
You can control what is forwarded to both parent-child pipelines and multi-project pipelines. Forwarded variables do not get forwarded again in nested downstream pipelines by default, unless the nested downstream trigger job also uses trigger:forward. Possible inputs: Example of trigger:forward: Run this pipeline manually, with the CI/CD variable MYVAR = my value: ``` variables: # default variables for each job VAR: value child1: trigger: include: .child-pipeline.yml child2: trigger: include: .child-pipeline.yml forward: pipeline_variables: true child3: trigger: include: .child-pipeline.yml forward: yaml_variables: false ``` Additional details: Use variables to define CI/CD variables for jobs. Keyword type: Global and job keyword. You can use it at the global level, and also at the job level. If you define variables as a global keyword, it behaves like default variables for all jobs. Each variable is copied to every job configuration when the pipeline is" }, { "data": "If the job already has that variable defined, the job-level variable takes precedence. Variables defined at the global-level cannot be used as inputs for other global keywords like include. These variables can only be used at the job-level, in script, beforescript, or afterscript sections, and in some job keywords like rules. Possible inputs: Variable name and value pairs: CI/CD variables are supported. Examples of variables: ``` variables: DEPLOY_SITE: \"https://example.com/\" deploy_job: stage: deploy script: deploy-script --url $DEPLOY_SITE --path \"/\" environment: production deployreviewjob: stage: deploy variables: REVIEW_PATH: \"/review\" script: deploy-review-script --url $DEPLOYSITE --path $REVIEWPATH environment: production ``` Additional details: Related topics: Use the description keyword to define a description for a pipeline-level (global) variable. The description displays with the prefilled variable name when running a pipeline manually. Keyword type: Global keyword. You cannot use it for job-level variables. Possible inputs: Example of variables:description: ``` variables: DEPLOY_NOTE: description: \"The deployment note. Explain the reason for this deployment.\" ``` Additional details: Use the value keyword to define a pipeline-level (global) variables value. When used with variables: description, the variable value is prefilled when running a pipeline manually. Keyword type: Global keyword. You cannot use it for job-level variables. Possible inputs: Example of variables:value: ``` variables: DEPLOY_ENVIRONMENT: value: \"staging\" description: \"The deployment target. Change this variable to 'canary' or 'production' if needed.\" ``` Additional details: Use variables:options to define an array of values that are selectable in the UI when running a pipeline manually. Must be used with variables: value, and the string defined for value: If there is no description, this keyword has no effect. Keyword type: Global keyword. You cannot use it for job-level variables. Possible inputs: Example of variables:options: ``` variables: DEPLOY_ENVIRONMENT: value: \"staging\" options: \"production\" \"staging\" \"canary\" description: \"The deployment target. Set to 'staging' by default.\" ``` Use the expand keyword to configure a variable to be expandable or not. Keyword type: Global and job keyword. You can use it at the global level, and also at the job level. 
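At the job level, expand: false can keep a value that contains a dollar sign from being expanded. A small sketch — the variable names and paths are made up:
```
job:
  variables:
    BASE_DIR: /builds/output
    LITERAL_PATH:
      value: '$BASE_DIR/artifacts'
      expand: false
  script:
    - echo \"$LITERAL_PATH\"   # prints the literal string $BASE_DIR/artifacts
```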
Possible inputs: Example of variables:expand: ``` variables: VAR1: value1 VAR2: value2 $VAR1 VAR3: value: value3 $VAR1 expand: false ``` Additional details: Use when to configure the conditions for when jobs run. If not defined in a job, the default value is when: on_success. Keyword type: Job keyword. You can use it as part of a job. when: always and when: never can also be used in workflow:rules. Possible inputs: Example of when: ``` stages: build cleanup_build test deploy cleanup build_job: stage: build script: make build cleanupbuildjob: stage: cleanup_build script: cleanup build when failed when: on_failure test_job: stage: test script: make test deploy_job: stage: deploy script: make deploy when: manual environment: production cleanup_job: stage: cleanup script: cleanup after jobs when: always ``` In this example, the script: Additional details: Related topics: Use manual_confirmation with when: manual to define a custom confirmation message for manual jobs. If there is no manual job defined with when: manual, this keyword has no effect. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of manual_confirmation: ``` delete_job: stage: post-deployment script: make delete when: manual manualconfirmation: 'Are you sure you want to delete $CIENVIRONMENT_SLUG?' ``` The following keywords are deprecated. Defining image, services, cache, beforescript, and afterscript globally is deprecated. Using these keywords at the top level is still possible to ensure backwards compatibility, but could be scheduled for removal in a future milestone. Use default instead. For example: ``` default: image:" }, { "data": "services: docker:dind cache: paths: [vendor/] before_script: bundle config set path vendor/bundle bundle install after_script: rm -rf tmp/ ``` You can use only and except to control when to add jobs to pipelines. You can use the only:refs and except:refs keywords to control when to add jobs to a pipeline based on branch names or pipeline types. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: An array including any number of: The following keywords: | Value | Description | |:--|:--| | api | For pipelines triggered by the pipelines API. | | branches | When the Git reference for a pipeline is a branch. | | chat | For pipelines created by using a GitLab ChatOps command. | | external | When you use CI services other than GitLab. | | externalpullrequests | When an external pull request on GitHub is created or updated (See Pipelines for external pull requests). | | merge_requests | For pipelines created when a merge request is created or updated. Enables merge request pipelines, merged results pipelines, and merge trains. | | pipelines | For multi-project pipelines created by using the API with CIJOBTOKEN, or the trigger keyword. | | pushes | For pipelines triggered by a git push event, including for branches and tags. | | schedules | For scheduled pipelines. | | tags | When the Git reference for a pipeline is a tag. | | triggers | For pipelines created by using a trigger token. | | web | For pipelines created by selecting Run pipeline in the GitLab UI, from the projects Build > Pipelines section. | Example of only:refs and except:refs: ``` job1: script: echo only: main /^issue-.*$/ merge_requests job2: script: echo except: main /^stable-branch.*$/ schedules ``` Additional details: only or except used without any other keywords are equivalent to only: refs or except: refs. 
For example, the following two job configurations have the same behavior: ``` job1: script: echo only: branches job2: script: echo only: refs: branches ``` If a job does not use only, except, or rules, then only is set to branches and tags by default. For example, job1 and job2 are equivalent: ``` job1: script: echo \"test\" job2: script: echo \"test\" only: branches tags ``` You can use the only:variables or except:variables keywords to control when to add jobs to a pipeline, based on the status of CI/CD variables. Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: Example of only:variables: ``` deploy: script: cap staging deploy only: variables: $RELEASE == \"staging\" $STAGING ``` Related topics: only:variables and except:variables examples. Use the changes keyword with only to run a job, or with except to skip a job, when a Git push event modifies a file. Use changes in pipelines with the following refs: Keyword type: Job keyword. You can use it only as part of a job. Possible inputs: An array including any number of: Example of only:changes: ``` docker build: script: docker build -t my-image:$CICOMMITREF_SLUG . only: refs: branches changes: Dockerfile docker/scripts/* dockerfiles/**/* more_scripts/*.{rb,py,sh} \"**/*.json\" ``` Additional details: Related topics: Use only:kubernetes or except:kubernetes to control if jobs are added to the pipeline when the Kubernetes service is active in the project. Keyword type: Job-specific. You can use it only as part of a job. Possible inputs: Example of only:kubernetes: ``` deploy: only: kubernetes: active ``` In this example, the deploy job runs only when the Kubernetes service is active in the project.
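When moving off these deprecated keywords, the usual replacement is rules. As a rough, non-authoritative sketch, the only:variables example above could be expressed like this (the first matching rule wins, and the default when behavior may differ slightly):
```
deploy:
  script: cap staging deploy
  rules:
    - if: $RELEASE == \"staging\"
    - if: $STAGING
```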
{ "category": "App Definition and Development", "file_name": "kots-cli-getting-started.md", "project_name": "KOTS", "subcategory": "Application Definition & Image Build" }
[ { "data": "Users can interact with the Replicated KOTS CLI to install and manage applications with Replicated KOTS. The KOTS CLI is a kubectl plugin that runs locally on any computer. Install kubectl, the Kubernetes command-line tool. See Install Tools in the Kubernetes documentation. If you are using a cluster created with Replicated kURL, kURL already installed both kubectl and the KOTS CLI when provisioning the cluster. For more information, see Online Installation with kURL and Air Gap Installation with kURL. To install the latest version of the KOTS CLI to /usr/local/bin, run: ``` curl https://kots.io/install | bash``` To install to a directory other than /usr/local/bin, run: ``` curl https://kots.io/install | REPLINSTALLPATH=/path/to/cli bash``` To install a specific version of the KOTS CLI, run: ``` curl https://kots.io/install/<version> | bash``` To verify your installation, run: ``` kubectl kots --help``` You can install the KOTS CLI on computers without root access or computers that cannot write to the /usr/local/bin directory. To install the KOTS CLI without root access, you can do any of the following: You can set the REPLINSTALLPATH environment variable to install the KOTS CLI to a directory other than /usr/local/bin that does not require elevated permissions. Example: In the following example, the installation script installs the KOTS CLI to ~/bin in the local directory. You can use the user home symbol ~ in the REPLINSTALLPATH environment variable. The script expands ~ to $HOME. ``` curl -L https://kots.io/install | REPLINSTALLPATH=~/bin bash``` If you have sudo access to the directory where you want to install the KOTS CLI, you can set the REPLUSESUDO environment variable so that the installation script prompts you for your sudo password. When you set the REPLUSESUDO environment variable to any value, the installation script uses sudo to create and write to the installation directory as needed. The script prompts for a sudo password if it is required for the user executing the script in the specified directory. Example: In the following example, the script uses sudo to install the KOTS CLI to the default /usr/local/bin directory. ``` curl -L" }, { "data": "| REPLUSESUDO=y bash``` Example: In the following example, the script uses sudo to install the KOTS CLI to the /replicated/bin directory. ``` curl -L https://kots.io/install | REPLINSTALLPATH=/replicated/bin REPLUSESUDO=y bash``` You can manually download and install the KOTS CLI binary to install without root access, rather than using the installation script. Users in air gap environments can also follow this procedure to install the KOTS CLI. To manually download and install the kots CLI: Download the KOTS CLI release for your operating system from the Releases page in the KOTS GitHub repository: MacOS (AMD and ARM): ``` curl -L https://github.com/replicatedhq/kots/releases/latest/download/kotsdarwinall.tar.gz``` Linux (AMD): ``` curl -L https://github.com/replicatedhq/kots/releases/latest/download/kotslinuxamd64.tar.gz``` Linux (ARM): ``` curl -L https://github.com/replicatedhq/kots/releases/latest/download/kotslinuxarm64.tar.gz``` For air gap environments, download the KOTS CLI release from the download portal provided by your vendor. 
Unarchive the .tar.gz file that you downloaded: MacOS (AMD and ARM): ``` tar xvf kotsdarwinall.tar.gz``` Linux (AMD): ``` tar xvf kotslinuxamd64.tar.gz``` Linux (ARM): ``` tar xvf kotslinuxarm64.tar.gz``` Rename the kots executable to kubectl-kots and move it to one of the directories that is in your PATH environment variable. This ensures that the system can access the executable when you run KOTS CLI commands. You can run echo $PATH to view the list of directories in your PATH. Run one of the following commands, depending on if you have write access to the target directory: You have write access to the directory: ``` mv kots /PATHTOTARGET_DIRECTORY/kubectl-kots``` Replace PATHTOTARGET_DIRECTORY with the path to a directory that is in your PATH environment variable. For example, /usr/local/bin. You do not have write access to the directory: ``` sudo mv kots /PATHTOTARGET_DIRECTORY/kubectl-kots``` Replace PATHTOTARGET_DIRECTORY with the path to a directory that is in your PATH environment variable. For example, /usr/local/bin. Verify the installation: ``` kubectl kots --help``` The KOTS CLI is a plugin for the Kubernetes kubectl command line tool. The KOTS CLI plugin is named kubectl-kots. For more information about working with kubectl, see Command line tool (kubectl) in the Kubernetes documentation. To uninstall the KOTS CLI: Find the location where the kubectl-kots plugin is installed on your PATH: ``` kubectl plugin list kubectl-kots cli``` Delete kubectl-kots: ``` sudo rm PATHTOKOTS``` Replace PATHTOKOTS with the location where kubectl-kots is installed. Example: ``` sudo rm /usr/local/bin/kubectl-kots```" } ]
{ "category": "App Definition and Development", "file_name": "%5E0.1.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Stream utilities for Tokio. A Stream is an asynchronous sequence of values. It can be thought of as an asynchronous version of the standard library's Iterator trait. This crate provides helpers to work with them. For examples of usage and a more in-depth description of streams you can also refer to the streams tutorial on the tokio website. Due to similarities with the standard library's Iterator trait, some new users may assume that they can use for in syntax to iterate over a Stream, but this is unfortunately not possible. Instead, you can use a while let loop as follows: ``` use tokio_stream::{self as stream, StreamExt}; async fn main() { let mut stream = stream::iter(vec![0, 1, 2]); while let Some(value) = stream.next().await { println!(\"Got {}\", value); } } ``` A common way to stream values from a function is to pass in the sender half of a channel and use the receiver as the stream. This requires awaiting both futures to ensure progress is made. Another alternative is the async-stream crate, which contains macros that provide a yield keyword and allow you to return an impl Stream. It is often desirable to convert a Stream into an AsyncRead, especially when dealing with plaintext formats streamed over the network. The opposite conversion from an AsyncRead into a Stream is also another commonly required feature. To enable these conversions, tokio-util provides the StreamReader and ReaderStream types when the io feature is enabled." } ]
{ "category": "App Definition and Development", "file_name": "#docs.rs.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "The Rust Foundation is a US non-profit organization. This privacy notice explains what we do with personal information. We commit to upholding the key data protection principles and data subject rights described in the GDPR and local data protection regulations in the countries in which we operate. Some of the services we are responsible for were originally hosted by Mozilla Corporation. The services and all corresponding data (including personal data of users) were transferred to the Rust Foundation upon its formation. For personal data under the Rust Foundations control, we rely on the following legal bases to obtain and process personal information: Like many websites, the services may use cookies to obtain certain types of information when your web browser accesses our site. Cookies are used most commonly to do things like tracking page views, identifying repeat users and utilizing login tokens for a session. The services use session cookies to anonymously track a users session on the services to deliver a better experience. You can block or delete these cookies through your browsers settings. You can set or amend your web browser controls to accept or refuse cookies. If you choose to reject cookies, you may still use our services though your access to some functionality may be restricted. As the means by which you can refuse cookies through your web browser controls vary from browser-to-browser, you should visit your browsers help menu for more information. Rust Foundation is based in the United States, processes and stores data in the United States, and makes its services available around the world. The United States, Member States of the European Economic Area, and other countries are governed by different laws. When your data is moved from its home country to another country, the laws and rules that protect your personal information in the country to which your information is transferred may be different from those in the country where you reside. For example, the legal requirements for law enforcement to gain access to personal information may vary between countries. If your personal data is in the United States, it may be accessed by government authorities in accordance with United States law. Use of the services is voluntary and users may choose whether or not they wish to use them. Because we offer our services to people in different countries and use technical infrastructure based in the United States, we may need to transfer your personal information across borders in order to deliver our services. We maintain administrative, technical, and physical safeguards designed to protect the privacy and security of the information we maintain about you. The connection between your computer and our servers is encrypted using Secure Sockets Layer (SSL) software that encrypts that information. We use a digital certificate and secure pages will be identified by a padlock sign and https:// in the address bar. However, no method of transmission or storage is 100% secure. As a result, while we strive to protect your personal information, you acknowledge that: (a) there are security and privacy limitations inherent to the Internet which are beyond our control; and (b) security, integrity, and privacy of any and all information and data exchanged between you and us cannot be guaranteed. 
Upon request, Rust Foundation will provide users with information about whether we hold any of their personal" }, { "data": "In certain cases, subject to relevant legal rights, users have the right to object to the processing of their personal information, to request changes, corrections, or the deletion of their personal information, and to obtain a copy of their personal information in an easily accessible format. In order to do this, users can contact us using the contact information set out at the bottom of this policy. We will respond to every request within a reasonable timeframe and may need to take reasonable steps to confirm identity before proceeding. You can also withdraw your consent to our processing of your information and the use of our services, and/or delete your user account at any time, by using the contact information below to request that your personal information be deleted. If you are an EU resident and believe that our processing of your personal data is contrary to the EU General Data Protection Regulation, you have the right to lodge a complaint with the appropriate supervisory authority. If you withdraw your consent to the use or sharing of your personal information for the purposes set out in this policy, we may not be able to provide you with our services. Please note that in certain cases we may continue to process your information after you have withdrawn consent and requested that we delete your information if we have a legal basis/need to do so. For personal data under its control, Rust Foundation will retain such data only for as long as is necessary for the purposes set out in this policy, or as needed to provide users with our services. If a user no longer wishes to use our services then it may request deletion of its data at any time. Notwithstanding the above, Rust Foundation will retain and use user information to the extent necessary to comply with our legal obligations (for example, if we are required to retain your information to comply with applicable tax/revenue laws), resolve disputes, and enforce our agreements. We may also retain log files for the purpose of internal analysis, for site safety, security and fraud prevention, to improve site functionality, or where we are legally required to retain them for longer time periods. The services are not directed to children and we do not knowingly collect personal information from anyone under the age of sixteen. If you are under the age of sixteen, your parent or guardian must provide their consent for you to use the services. If you contact us via email, your email address and message will be accessible to our small team of staff. We use Google Workspace internally. The data is not owned or controlled by Google; they will not share it with third parties or use it for advertising, and neither will we. Grant applicants will be asked to give their consent to our processing and storage of the following personal data: All applicants: Successful applicants (in addition to the above): Data for all applicants will be retained for three years. Data for successful applicants will be retained for seven years. We may ask individuals outside of the Rust Foundation to assist with the grant assessment process. The only personal data that will be shared with such individuals will be the applicants GitHub username. rust-lang.org is managed by members of the Core team and the Community team. When you visit rust-lang.org, we receive your IP address as part of our standard server logs. We store these logs for 1 year. 
When you (or tooling, such as Rustup) visits static.rust-lang.org or dev-static.rust-lang.org, we receive your IP address, user-agent header, referer header, and request path as part of our standard server logs. We store these logs for 1 year. Crates.io is managed by members of the Core team and the Crates.io team." }, { "data": "requires users to have a GitHub account in order to log in and use the service. When you log in to Crates.io using a GitHub account, we receive your GitHub username and avatar. If you share a display name or public email address in your GitHub public profile, we also receive that information. You must have a verified email address to publish a crate. We receive any public email address associated with your GitHub account. You can also choose to submit a different address to associate with your Crates.io activity. We will only use your email address to contact you about your account. When you visit Crates.io, we receive your IP address, user-agent header, and request path as part of our standard server logs. We store these logs for 1 year. When you (or tooling, such as Cargo) visits static.crates.io, we receive your IP address, user-agent header, referer header, and request path as part of our standard server logs. We store these logs for 1 year. All crates on Crates.io are public, including the list of crate owners user names and the crate upload date. Anyone may view or download a crates contents. Because of the public nature of Crates.io, any personal data you might include in a Cargo.toml file uploaded to a crate will be publicly available. For example, if an email address is in the authors field in the Cargo.toml file, that email address will also be public. Due to its public nature, be aware if you include any private information in a crate, that information may be indexed by search engines or used by third parties. Sensitive information should not be included in a crate file. Crates.io uses Sentry, an error monitoring service, to help the Rust team discover and fix the performance of the code. When there is an error, Sentry receives basic information about how you interacted with the website and the actions that led to the error. Additionally, your IP address may be disclosed to Sentry as part of the error reporting process but weve configured Sentry to delete it as soon as its received. Read Sentrys Privacy Policy here. Docs.rs is managed by the members of the Core team and the Dev Tools docs.rs sub-team. When you visit docs.rs, we receive your IP address and user-agent header as part of our standard server logs. We store these logs for 1 year. The Community team administers the Users Forum and the Internals Forum. Posts on these forums are public. If you sign up to participate in these forums, we collect your email address and name. As administrators of the forum, we have access to usage information regarding your interactions with it, such as posts published and read, and time spent on the site. We use Heroku and AWS to host the services, on servers located in the US. AWSs privacy notice is here. Heroku is part of Salesforce, whose privacy policy is here. The Users Forum and Internals Forum on rust-lang.org are hosted by Discourse and use its open source discussion platform. Discourses privacy policy is available here. We use Mailgun to send email. Mailguns privacy policy is available here. We use ZenDesk to manage, track, and respond to support requests, including for the Crates.io mailing list. ZenDesks privacy policy is available here. 
GitHub login is used for authentication in Crates.io and (optionally) in the forums. GitHubs Privacy Statement can be found here. Some Rust team members use the Zulip and Discord platforms for community collaboration. Zulips privacy notice is available here. Discords privacy notice is available here. For data subject access requests, or" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Kapeta", "subcategory": "Application Definition & Image Build" }
[ { "data": "Here you'll find articles that explain and guide you through creating software using Kapeta. Understand the fundamentals of Kapeta Learn how to code and build software with Kapeta Learn how to deploy your software with Kapeta Installation and other general help Kapeta Inc. 548 Market Street Suite 44932 San Francisco, CA 94104" } ]
{ "category": "App Definition and Development", "file_name": "%5E0.11.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Utilities for adding OpenTelemetry interoperability to tracing. Documentation | Chat tracing is a framework for instrumenting Rust programs to collect structured, event-based diagnostic information. This crate provides a layer that connects spans from multiple systems into a trace and emits them to OpenTelemetry-compatible distributed tracing systems for processing and visualization. The crate provides the following types: Compiler support: requires rustc 1.42+ ``` use opentelemetry::exporter::trace::stdout; use tracing::{error, span}; use tracing_subscriber::layer::SubscriberExt; use tracing_subscriber::Registry; fn main() { // Install a new OpenTelemetry trace pipeline let (tracer, uninstall) = stdout::newpipeline().install(); // Create a tracing layer with the configured tracer let telemetry = tracingopentelemetry::layer().withtracer(tracer); // Use the tracing subscriber `Registry`, or any other subscriber // that impls `LookupSpan` let subscriber = Registry::default().with(telemetry); // Trace executed code tracing::subscriber::with_default(subscriber, || { // Spans will be sent to the configured OpenTelemetry exporter let root = span!(tracing::Level::TRACE, \"appstart\", workunits = 2); let _enter = root.enter(); error!(\"This event will be logged in the root span.\"); }); } ``` ``` $ docker run -d -p6831:6831/udp -p6832:6832/udp -p16686:16686 jaegertracing/all-in-one:latest $ cargo run --example opentelemetry $ firefox http://localhost:16686/ ``` Tracing is built against the latest stable release. The minimum supported version is 1.42. The current Tracing version is not guaranteed to build on Rust versions earlier than the minimum supported version. Tracing follows the same compiler support policies as the rest of the Tokio project. The current stable Rust compiler and the three most recent minor versions before it will always be supported. For example, if the current stable compiler version is 1.45, the minimum supported version will not be increased past 1.42, three minor versions prior. Increasing the minimum supported compiler version is not considered a semver breaking change as long as doing so complies with this policy. This project is licensed under the MIT license. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tracing by you, shall be licensed as MIT, without any additional terms or conditions." } ]
{ "category": "App Definition and Development", "file_name": "%5E0.2.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Utilities for implementing and composing tracing subscribers. Documentation | Chat Compiler support: requires rustc 1.42+ Tracing is built against the latest stable release. The minimum supported version is 1.42. The current Tracing version is not guaranteed to build on Rust versions earlier than the minimum supported version. Tracing follows the same compiler support policies as the rest of the Tokio project. The current stable Rust compiler and the three most recent minor versions before it will always be supported. For example, if the current stable compiler version is 1.45, the minimum supported version will not be increased past 1.42, three minor versions prior. Increasing the minimum supported compiler version is not considered a semver breaking change as long as doing so complies with this policy. This project is licensed under the MIT license. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tracing by you, shall be licensed as MIT, without any additional terms or conditions." } ]
{ "category": "App Definition and Development", "file_name": "%5E0.3.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Utilities for instrumenting futures-based code with tracing. Documentation | Chat tracing is a framework for instrumenting Rust programs to collect structured, event-based diagnostic information. This crate provides utilities for using tracing to instrument asynchronous code written using futures and async/await. The crate provides the following traits: Instrument allows a tracing span to be attached to a future, sink, stream, or executor. WithSubscriber allows a tracing Subscriber to be attached to a future, sink, stream, or executor. Compiler support: requires rustc 1.42+ Tracing is built against the latest stable release. The minimum supported version is 1.42. The current Tracing version is not guaranteed to build on Rust versions earlier than the minimum supported version. Tracing follows the same compiler support policies as the rest of the Tokio project. The current stable Rust compiler and the three most recent minor versions before it will always be supported. For example, if the current stable compiler version is 1.45, the minimum supported version will not be increased past 1.42, three minor versions prior. Increasing the minimum supported compiler version is not considered a semver breaking change as long as doing so complies with this policy. This project is licensed under the MIT license. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tracing by you, shall be licensed as MIT, without any additional terms or conditions." } ]
{ "category": "App Definition and Development", "file_name": "%5E0.13.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Application-level tracing for Rust. Documentation | Chat tracing is a framework for instrumenting Rust programs to collect structured, event-based diagnostic information. In asynchronous systems like Tokio, interpreting traditional log messages can often be quite challenging. Since individual tasks are multiplexed on the same thread, associated events and log lines are intermixed making it difficult to trace the logic flow. tracing expands upon logging-style diagnostics by allowing libraries and applications to record structured events with additional information about temporality and causality unlike a log message, a span in tracing has a beginning and end time, may be entered and exited by the flow of execution, and may exist within a nested tree of similar spans. In addition, tracing spans are structured, with the ability to record typed data as well as textual messages. The tracing crate provides the APIs necessary for instrumenting libraries and applications to emit trace data. Compiler support: requires rustc 1.56+ (The examples below are borrowed from the log crate's yak-shaving example, modified to idiomatic tracing.) In order to record trace events, executables have to use a Subscriber implementation compatible with tracing. A Subscriber implements a way of collecting trace data, such as by logging it to standard output. tracing_subscriber's fmt module provides reasonable defaults. Additionally, tracing-subscriber is able to consume messages emitted by log-instrumented libraries and modules. The simplest way to use a subscriber is to call the setglobaldefault function. ``` use tracing::{info, Level}; use tracing_subscriber::FmtSubscriber; fn main() { // a builder for `FmtSubscriber`. let subscriber = FmtSubscriber::builder() // all spans/events with a level higher than TRACE (e.g, debug, info, warn, etc.) // will be written to stdout. .withmaxlevel(Level::TRACE) // completes the builder. .finish(); tracing::subscriber::setglobaldefault(subscriber) .expect(\"setting default subscriber failed\"); let numberofyaks = 3; // this creates a new event, outside of any spans. info!(numberofyaks, \"preparing to shave yaks\"); let numbershaved = yakshave::shaveall(numberof_yaks); info!( allyaksshaved = numbershaved == numberof_yaks, \"yak shaving completed.\" ); } ``` ``` [dependencies] tracing = \"0.1\" tracing-subscriber = \"0.3.0\" ``` This subscriber will be used as the default in all threads for the remainder of the duration of the program, similar to how loggers work in the log crate. In addition, you can locally override the default subscriber. For example: ``` use tracing::{info, Level}; use tracing_subscriber::FmtSubscriber; fn main() { let subscriber = tracing_subscriber::FmtSubscriber::builder() // all spans/events with a level higher than TRACE (e.g, debug, info, warn, etc.) // will be written to stdout. .withmaxlevel(Level::TRACE) // builds the subscriber. .finish(); tracing::subscriber::with_default(subscriber, || { info!(\"This will be logged to stdout\"); }); info!(\"This will not be logged to stdout\"); } ``` This approach allows trace data to be collected by multiple subscribers within different contexts in the program. Note that the override only applies to the currently executing thread; other threads will not see the change from with_default. Any trace events generated outside the context of a subscriber will not be collected. Once a subscriber has been set, instrumentation points may be added to the executable using the tracing crate's macros. 
Libraries should only rely on the tracing crate and use the provided macros and types to collect whatever information might be useful to downstream consumers. ``` use std::{error::Error, io}; use tracing::{debug, error, info, span, warn, Level}; // the `#[tracing::instrument]` attribute creates and enters a span // every time the instrumented function is called. The span is named after the // the function or method. Paramaters passed to the function are recorded as" }, { "data": "pub fn shave(yak: usize) -> Result<(), Box<dyn Error + 'static>> { // this creates an event at the DEBUG level with two fields: // - `excitement`, with the key \"excitement\" and the value \"yay!\" // - `message`, with the key \"message\" and the value \"hello! I'm gonna shave a yak.\" // // unlike other fields, `message`'s shorthand initialization is just the string itself. debug!(excitement = \"yay!\", \"hello! I'm gonna shave a yak.\"); if yak == 3 { warn!(\"could not locate yak!\"); // note that this is intended to demonstrate `tracing`'s features, not idiomatic // error handling! in a library or application, you should consider returning // a dedicated `YakError`. libraries like snafu or thiserror make this easy. return Err(io::Error::new(io::ErrorKind::Other, \"shaving yak failed!\").into()); } else { debug!(\"yak shaved successfully\"); } Ok(()) } pub fn shave_all(yaks: usize) -> usize { // Constructs a new span named \"shaving_yaks\" at the TRACE level, // and a field whose key is \"yaks\". This is equivalent to writing: // // let span = span!(Level::TRACE, \"shaving_yaks\", yaks = yaks); // // local variables (`yaks`) can be used as field values // without an assignment, similar to struct initializers. let span = span!(Level::TRACE, \"shaving_yaks\", yaks).entered(); info!(\"shaving yaks\"); let mut yaks_shaved = 0; for yak in 1..=yaks { let res = shave(yak); debug!(yak, shaved = res.is_ok()); if let Err(ref error) = res { // Like spans, events can also use the field initialization shorthand. // In this instance, `yak` is the field being initalized. error!(yak, error = error.as_ref(), \"failed to shave yak!\"); } else { yaks_shaved += 1; } debug!(yaks_shaved); } yaks_shaved } ``` ``` [dependencies] tracing = \"0.1\" ``` Note: Libraries should NOT call setglobaldefault(), as this will cause conflicts when executables try to set the default later. If you are instrumenting code that make use of std::future::Future or async/await, avoid using the Span::enter method. The following example will not work: ``` async { let _s = span.enter(); // ... } ``` ``` async { let _s = tracing::span!(...).entered(); // ... } ``` The span guard _s will not exit until the future generated by the async block is complete. Since futures and spans can be entered and exited multiple times without them completing, the span remains entered for as long as the future exists, rather than being entered only when it is polled, leading to very confusing and incorrect output. For more details, see the documentation on closing spans. There are two ways to instrument asynchronous code. The first is through the Future::instrument combinator: ``` use tracing::Instrument; let my_future = async { // ... }; my_future .instrument(tracing::infospan!(\"myfuture\")) .await ``` Future::instrument attaches a span to the future, ensuring that the span's lifetime is as long as the future's. 
The second, and preferred, option is through the attribute: ``` use tracing::{info, instrument}; use tokio::{io::AsyncWriteExt, net::TcpStream}; use std::io; async fn write(stream: &mut TcpStream) -> io::Result<usize> { let result = stream.write(b\"hello world\\n\").await; info!(\"wrote to stream; success={:?}\", result.is_ok()); result } ``` Under the hood, the #[instrument] macro performs the same explicit span attachment that Future::instrument does. This crate provides macros for creating Spans and Events, which represent periods of time and momentary events within the execution of a program, respectively. As a rule of thumb, spans should be used to represent discrete units of work (e.g., a given request's lifetime in a server) or periods of time spent in a given context (e.g., time spent interacting with an instance of an external system, such as a" }, { "data": "In contrast, events should be used to represent points in time within a span a request returned with a given status code, n new items were taken from a queue, and so on. Spans are constructed using the span! macro, and then entered to indicate that some code takes place within the context of that Span: ``` use tracing::{span, Level}; // Construct a new span named \"my span\". let mut span = span!(Level::INFO, \"my span\"); span.in_scope(|| { // Any trace events in this closure or code called by it will occur within // the span. }); // Dropping the span will close it, indicating that it has ended. ``` The #[instrument] attribute macro can reduce some of this boilerplate: ``` use tracing::{instrument}; pub fn myfunction(myarg: usize) { // This event will be recorded inside a span named `my_function` with the // field `my_arg`. tracing::info!(\"inside my_function!\"); // ... } ``` The Event type represent an event that occurs instantaneously, and is essentially a Span that cannot be entered. They are created using the event! macro: ``` use tracing::{event, Level}; event!(Level::INFO, \"something has happened!\"); ``` Users of the log crate should note that tracing exposes a set of macros for creating Events (trace!, debug!, info!, warn!, and error!) which may be invoked with the same syntax as the similarly-named macros from the log crate. Often, the process of converting a project to use tracing can begin with a simple drop-in replacement. Tracing is built against the latest stable release. The minimum supported version is 1.42. The current Tracing version is not guaranteed to build on Rust versions earlier than the minimum supported version. Tracing follows the same compiler support policies as the rest of the Tokio project. The current stable Rust compiler and the three most recent minor versions before it will always be supported. For example, if the current stable compiler version is 1.45, the minimum supported version will not be increased past 1.42, three minor versions prior. Increasing the minimum supported compiler version is not considered a semver breaking change as long as doing so complies with this policy. In addition to tracing and tracing-core, the tokio-rs/tracing repository contains several additional crates designed to be used with the tracing ecosystem. This includes a collection of Subscriber implementations, as well as utility and adapter crates to assist in writing Subscribers and instrumenting applications. In particular, the following crates are likely to be of interest: Additionally, there are also several third-party crates which are not maintained by the tokio project. 
These include: If you're the maintainer of a tracing ecosystem crate not listed above, please let us know! We'd love to add your project to the list! Note: that some of the ecosystem crates are currently unreleased and undergoing active development. They may be less stable than tracing and tracing-core. Tracing is built against the latest stable release. The minimum supported version is 1.56. The current Tracing version is not guaranteed to build on Rust versions earlier than the minimum supported version. Tracing follows the same compiler support policies as the rest of the Tokio project. The current stable Rust compiler and the three most recent minor versions before it will always be supported. For example, if the current stable compiler version is 1.69, the minimum supported version will not be increased past 1.66, three minor versions prior. Increasing the minimum supported compiler version is not considered a semver breaking change as long as doing so complies with this policy. This project is licensed under the MIT license. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Tokio by" } ]
{ "category": "App Definition and Development", "file_name": "%5E0.4.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Chrono aims to provide all functionality needed to do correct operations on dates and times in the proleptic Gregorian calendar: Timezone data is not shipped with chrono by default to limit binary sizes. Use the companion crate Chrono-TZ or tzfile for full timezone support. See docs.rs for the API reference. Default features: Optional features: Note: The rkyv{,-16,-32,-64} features are mutually exclusive. The Minimum Supported Rust Version (MSRV) is currently Rust 1.61.0. The MSRV is explicitly tested in CI. It may be bumped in minor releases, but this is not done lightly. This project is licensed under either of at your option." } ]
{ "category": "App Definition and Development", "file_name": "%5E0.7.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "This project is an attempt at extracting the compiletest utility from the Rust compiler. The compiletest utility is useful for library and plugin developers, who want to include test programs that should fail to compile, issue warnings or otherwise produce compile-time output. To use compiletest-rs in your application, add the following to Cargo.toml ``` [dev-dependencies] compiletest_rs = \"0.7\" ``` By default, compiletest-rs should be able to run on both stable, beta and nightly channels of rust. We use the tester fork of Rust's builtin test crate, so that we don't have require nightly. If you are running nightly and want to use Rust's test crate directly, you need to have the rustc development libraries install (which you can get by running rustup component add rustc-dev --toolchain nightly). Once you have the rustc development libraries installed, you can use the rustc feature to make compiletest use them instead of the tester crate. ``` [dev-dependencies] compiletest_rs = { version = \"0.7\", features = [ \"rustc\" ] } ``` Create a tests folder in the root folder of your project. Create a test file with something like the following: ``` extern crate compiletest_rs as compiletest; use std::path::PathBuf; fn run_mode(mode: &'static str) { let mut config = compiletest::Config::default(); config.mode = mode.parse().expect(\"Invalid mode\"); config.src_base = PathBuf::from(format!(\"tests/{}\", mode)); config.linkdeps(); // Populate config.targetrustcflags with dependencies on the path config.clean_rmeta(); // If your tests import the parent crate, this helps with E0464 compiletest::run_tests(&config); } fn compile_test() { run_mode(\"compile-fail\"); run_mode(\"run-pass\"); } ``` Each mode corresponds to a folder with the same name in the tests folder. That is for the compile-fail mode the test runner looks for the tests/compile-fail folder. Adding flags to the Rust compiler is a matter of assigning the correct field in the config. The most common flag to populate is the target_rustcflags to include the link dependencies on the path. ``` // NOTE! This is the manual way of adding flags config.targetrustcflags = Some(\"-L target/debug\".tostring()); ``` This is useful (and necessary) for library development. Note that other secondary library dependencies may have their build artifacts placed in different (non-obvious) locations and these locations must also be added. For convenience, Config provides a link_deps() method that populates target_rustcflags with all the dependencies found in the PATH variable (which is OS specific). For most cases, it should be sufficient to do: ``` let mut config = compiletest::Config::default(); config.link_deps(); ``` Note that link_deps() should not be used if any of the added paths contain spaces, as these are currently not handled correctly. See the test-project folder for a complete working example using the compiletest-rs utility. Simply cd test-project and cargo test to see the tests run. The annotation syntax is documented in the rustc-guide. Thank you for your interest in improving this utility! Please consider submitting your patch to the upstream source instead, as it will be incorporated into this repo in due time. Still, there are some supporting files that are specific to this repo, for example: If you are unsure, open a pull request anyway and we would be glad to help!" } ]
{ "category": "App Definition and Development", "file_name": "%5E0.8.9.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Common components for building Kubernetes operators This crate contains the core building blocks to allow users to build controllers/operators/watchers that need to synchronize/reconcile kubernetes state. Newcomers are recommended to start with the [Controller] builder, which gives an opinionated starting point that should be appropriate for simple operators, but all components are designed to be usable la carte if your operator doesn't quite fit that mold." } ]
{ "category": "App Definition and Development", "file_name": "%5E0.8.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Serde is a framework for serializing and deserializing Rust data structures efficiently and generically. ``` [dependencies] serde_json = \"1.0\" ``` You may be looking for: JSON is a ubiquitous open-standard format that uses human-readable text to transmit data objects consisting of key-value pairs. ``` { \"name\": \"John Doe\", \"age\": 43, \"address\": { \"street\": \"10 Downing Street\", \"city\": \"London\" }, \"phones\": [ \"+44 1234567\", \"+44 2345678\" ] } ``` There are three common ways that you might find yourself needing to work with JSON data in Rust. Serde JSON provides efficient, flexible, safe ways of converting data between each of these representations. Any valid JSON data can be manipulated in the following recursive enum representation. This data structure is serde_json::Value. ``` enum Value { Null, Bool(bool), Number(Number), String(String), Array(Vec<Value>), Object(Map<String, Value>), } ``` A string of JSON data can be parsed into a serde_json::Value by the serdejson::fromstr function. There is also from_slice for parsing from a byte slice &[u8] and from_reader for parsing from any io::Read like a File or a TCP stream. ``` use serde_json::{Result, Value}; fn untyped_example() -> Result<()> { // Some JSON input data as a &str. Maybe this comes from the user. let data = r#\" { \"name\": \"John Doe\", \"age\": 43, \"phones\": [ \"+44 1234567\", \"+44 2345678\" ] }\"#; // Parse the string of data into serde_json::Value. let v: Value = serdejson::fromstr(data)?; // Access parts of the data by indexing with square brackets. println!(\"Please call {} at the number {}\", v); Ok(()) } ``` The result of square bracket indexing like v[\"name\"] is a borrow of the data at that index, so the type is &Value. A JSON map can be indexed with string keys, while a JSON array can be indexed with integer keys. If the type of the data is not right for the type with which it is being indexed, or if a map does not contain the key being indexed, or if the index into a vector is out of bounds, the returned element is Value::Null. When a Value is printed, it is printed as a JSON string. So in the code above, the output looks like Please call \"John Doe\" at the number \"+44 1234567\". The quotation marks appear because v[\"name\"] is a &Value containing a JSON string and its JSON representation is \"John Doe\". Printing as a plain string without quotation marks involves converting from a JSON string to a Rust string with as_str() or avoiding the use of Value as described in the following section. The Value representation is sufficient for very basic tasks but can be tedious to work with for anything more" }, { "data": "Error handling is verbose to implement correctly, for example imagine trying to detect the presence of unrecognized fields in the input data. The compiler is powerless to help you when you make a mistake, for example imagine typoing v[\"name\"] as v[\"nmae\"] in one of the dozens of places it is used in your code. Serde provides a powerful way of mapping JSON data into Rust data structures largely automatically. ``` use serde::{Deserialize, Serialize}; use serde_json::Result; struct Person { name: String, age: u8, phones: Vec<String>, } fn typed_example() -> Result<()> { // Some JSON input data as a &str. Maybe this comes from the user. let data = r#\" { \"name\": \"John Doe\", \"age\": 43, \"phones\": [ \"+44 1234567\", \"+44 2345678\" ] }\"#; // Parse the string of data into a Person object. 
This is exactly the // same function as the one that produced serde_json::Value above, but // now we are asking it for a Person as output. let p: Person = serdejson::fromstr(data)?; // Do things just like with any other Rust data structure. println!(\"Please call {} at the number {}\", p.name, p.phones[0]); Ok(()) } ``` This is the same serdejson::fromstr function as before, but this time we assign the return value to a variable of type Person so Serde will automatically interpret the input data as a Person and produce informative error messages if the layout does not conform to what a Person is expected to look like. Any type that implements Serde's Deserialize trait can be deserialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with Once we have p of type Person, our IDE and the Rust compiler can help us use it correctly like they do for any other Rust code. The IDE can autocomplete field names to prevent typos, which was impossible in the serde_json::Value representation. And the Rust compiler can check that when we write p.phones[0], then p.phones is guaranteed to be a Vec<String> so indexing into it makes sense and produces a String. The necessary setup for using Serde's derive macros is explained on the Using derive page of the Serde site. Serde JSON provides a json! macro to build serde_json::Value objects with very natural JSON syntax. ``` use serde_json::json; fn main() { // The type of `john` is `serde_json::Value` let john = json!({ \"name\": \"John Doe\", \"age\": 43, \"phones\": [ \"+44 1234567\", \"+44 2345678\" ] }); println!(\"first phone number: {}\", john); // Convert to a string of JSON and print it out println!(\"{}\", john.to_string()); } ``` The Value::tostring() function converts a serdejson::Value into a String of JSON text. One neat thing about the json! macro is that variables and expressions can be interpolated directly into the JSON value as you are building" }, { "data": "Serde will check at compile time that the value you are interpolating is able to be represented as JSON. ``` let full_name = \"John Doe\"; let agelastyear = 42; // The type of `john` is `serde_json::Value` let john = json!({ \"name\": full_name, \"age\": agelastyear + 1, \"phones\": [ format!(\"+44 {}\", random_phone()) ] }); ``` This is amazingly convenient, but we have the problem we had before with Value: the IDE and Rust compiler cannot help us if we get it wrong. Serde JSON provides a better way of serializing strongly-typed data structures into JSON text. A data structure can be converted to a JSON string by serdejson::tostring. There is also serdejson::tovec which serializes to a Vec<u8> and serdejson::towriter which serializes to any io::Write such as a File or a TCP stream. ``` use serde::{Deserialize, Serialize}; use serde_json::Result; struct Address { street: String, city: String, } fn printanaddress() -> Result<()> { // Some data structure. let address = Address { street: \"10 Downing Street\".to_owned(), city: \"London\".to_owned(), }; // Serialize it to a JSON string. let j = serdejson::tostring(&address)?; // Print, write to a file, or send to an HTTP server. println!(\"{}\", j); Ok(()) } ``` Any type that implements Serde's Serialize trait can be serialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Serialize)]. It is fast. 
You should expect in the ballpark of 500 to 1000 megabytes per second deserialization and 600 to 900 megabytes per second serialization, depending on the characteristics of your data. This is competitive with the fastest C and C++ JSON libraries or even 30% faster for many use cases. Benchmarks live in the serde-rs/json-benchmark repo. Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage channel of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous questions, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time. As long as there is a memory allocator, it is possible to use serde_json without the rest of the Rust standard library. Disable the default \"std\" feature and enable the \"alloc\" feature: ``` [dependencies] serde_json = { version = \"1.0\", default-features = false, features = [\"alloc\"] } ``` For JSON support in Serde without a memory allocator, please see the serde-json-core crate." } ]
{ "category": "App Definition and Development", "file_name": "%5E1.0.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Kube is an umbrella-crate for interacting with Kubernetes in Rust. Kube contains a Kubernetes client, a controller runtime, a custom resource derive, and various tooling required for building applications or controllers that interact with Kubernetes. The main modules are: You can use each of these as you need with the help of the exported features. ``` use futures::{StreamExt, TryStreamExt}; use kube::{Client, api::{Api, ResourceExt, ListParams, PostParams}}; use k8s_openapi::api::core::v1::Pod; async fn main() -> Result<(), Box<dyn std::error::Error>> { // Infer the runtime environment and try to create a Kubernetes Client let client = Client::try_default().await?; // Read pods in the configured namespace into the typed interface from k8s-openapi let pods: Api<Pod> = Api::default_namespaced(client); for p in pods.list(&ListParams::default()).await? { println!(\"found pod {}\", p.name()); } Ok(()) } ``` For details, see: ``` use schemars::JsonSchema; use serde::{Deserialize, Serialize}; use serde_json::json; use validator::Validate; use futures::{StreamExt, TryStreamExt}; use k8sopenapi::apiextensionsapiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition; use kube::{ api::{Api, DeleteParams, ListParams, PatchParams, Patch, ResourceExt}, core::CustomResourceExt, Client, CustomResource, runtime::{watcher, utils::tryflattenapplied, wait::{conditions, await_condition}}, }; // Our custom resource pub struct FooSpec { info: String, name: String, replicas: i32, } async fn main() -> Result<(), Box<dyn std::error::Error>> { let client = Client::try_default().await?; let crds: Api<CustomResourceDefinition> = Api::all(client.clone()); // Apply the CRD so users can create Foo instances in Kubernetes crds.patch(\"foos.clux.dev\", &PatchParams::apply(\"my_manager\"), &Patch::Apply(Foo::crd()) ).await?; // Wait for the CRD to be ready tokio::time::timeout( std::time::Duration::from_secs(10), awaitcondition(crds, \"foos.clux.dev\", conditions::iscrd_established()) ).await?; // Watch for changes to foos in the configured namespace let foos: Api<Foo> = Api::default_namespaced(client.clone()); let lp = ListParams::default(); let mut applystream = tryflatten_applied(watcher(foos, lp)).boxed(); while let Some(f) = applystream.trynext().await? { println!(\"saw apply to {}\", f.name()); } Ok(()) } ``` For details, see: A large list of complete, runnable examples with explainations are available in the examples folder." } ]
{ "category": "App Definition and Development", "file_name": "0.4.0.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "All the builds on docs.rs are executed inside a sandbox with limited resources. The limits for this crate are the following: | 0 | 1 | |:--|:--| | Available RAM | 6 GB | | Maximum rustdoc execution time | 15 minutes | | Maximum size of a build log | 100 kB | | Network access | blocked | | Maximum number of build targets | 10 | If a build fails because it hit one of those limits please open an issue to get them increased." } ]
{ "category": "App Definition and Development", "file_name": "about.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "futures-rs is a library providing the foundations for asynchronous programming in Rust. It includes key trait definitions like Stream, as well as utilities like join!, select!, and various futures combinator methods which enable expressive asynchronous control flow. Add this to your Cargo.toml: ``` [dependencies] futures = \"0.3\" ``` The current futures requires Rust 1.56 or later. Futures-rs works without the standard library, such as in bare metal environments. However, it has a significantly reduced API surface. To use futures-rs in a #[no_std] environment, use: ``` [dependencies] futures = { version = \"0.3\", default-features = false } ``` Licensed under either of Apache License, Version 2.0 or MIT license at your option. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions." } ]
{ "category": "App Definition and Development", "file_name": "choose.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "Generate JSON Schema documents from Rust code If you don't really care about the specifics, the easiest way to generate a JSON schema for your types is to #[derive(JsonSchema)] and use the schema_for! macro. All fields of the type must also implement JsonSchema - Schemars implements this for many standard library types. ``` use schemars::{schema_for, JsonSchema}; pub struct MyStruct { pub my_int: i32, pub my_bool: bool, pub mynullableenum: Option<MyEnum>, } pub enum MyEnum { StringNewType(String), StructVariant { floats: Vec<f32> }, } let schema = schema_for!(MyStruct); println!(\"{}\", serdejson::tostring_pretty(&schema).unwrap()); ``` ``` { \"$schema\": \"http://json-schema.org/draft-07/schema#\", \"title\": \"MyStruct\", \"type\": \"object\", \"required\": [\"mybool\", \"myint\"], \"properties\": { \"my_bool\": { \"type\": \"boolean\" }, \"my_int\": { \"type\": \"integer\", \"format\": \"int32\" }, \"mynullableenum\": { \"anyOf\": [ { \"$ref\": \"#/definitions/MyEnum\" }, { \"type\": \"null\" } ] } }, \"definitions\": { \"MyEnum\": { \"anyOf\": [ { \"type\": \"object\", \"required\": [\"StringNewType\"], \"properties\": { \"StringNewType\": { \"type\": \"string\" } }, \"additionalProperties\": false }, { \"type\": \"object\", \"required\": [\"StructVariant\"], \"properties\": { \"StructVariant\": { \"type\": \"object\", \"required\": [\"floats\"], \"properties\": { \"floats\": { \"type\": \"array\", \"items\": { \"type\": \"number\", \"format\": \"float\" } } } } }, \"additionalProperties\": false } ] } } } ``` One of the main aims of this library is compatibility with Serde. Any generated schema should match how serde_json would serialize/deserialize to/from JSON. To support this, Schemars will check for any #[serde(...)] attributes on types that derive JsonSchema, and adjust the generated schema accordingly. ``` use schemars::{schema_for, JsonSchema}; use serde::{Deserialize, Serialize}; pub struct MyStruct { pub my_int: i32, pub my_bool: bool, pub mynullableenum: Option<MyEnum>, } pub enum MyEnum { StringNewType(String), StructVariant { floats: Vec<f32> }, } let schema = schema_for!(MyStruct); println!(\"{}\", serdejson::tostring_pretty(&schema).unwrap()); ``` ``` { \"$schema\": \"http://json-schema.org/draft-07/schema#\", \"title\": \"MyStruct\", \"type\": \"object\", \"required\": [\"myBool\", \"myNumber\"], \"properties\": { \"myBool\": { \"type\": \"boolean\" }, \"myNullableEnum\": { \"default\": null, \"anyOf\": [ { \"$ref\": \"#/definitions/MyEnum\" }, { \"type\": \"null\" } ] }, \"myNumber\": { \"type\": \"integer\", \"format\": \"int32\" } }, \"additionalProperties\": false, \"definitions\": { \"MyEnum\": { \"anyOf\": [ { \"type\": \"string\" }, { \"type\": \"object\", \"required\": [\"floats\"], \"properties\": { \"floats\": { \"type\": \"array\", \"items\": { \"type\": \"number\", \"format\": \"float\" } } } } ] } } } ``` If you want a schema for a type that can't/doesn't implement JsonSchema, but does implement serde::Serialize, then you can generate a JSON schema from a value of that type. However, this schema will generally be less precise than if the type implemented JsonSchema - particularly when it involves enums, since schemars will not make any assumptions about the structure of an enum based on a single variant. 
``` use schemars::schema_for_value; use serde::Serialize; pub struct MyStruct { pub my_int: i32, pub my_bool: bool, pub my_nullable_enum: Option<MyEnum>, } pub enum MyEnum { StringNewType(String), StructVariant { floats: Vec<f32> }, } let schema = schema_for_value!(MyStruct { my_int: 123, my_bool: true, my_nullable_enum: Some(MyEnum::StringNewType(\"foo\".to_string())) }); println!(\"{}\", serde_json::to_string_pretty(&schema).unwrap()); ``` ``` { \"$schema\": \"http://json-schema.org/draft-07/schema#\", \"title\": \"MyStruct\", \"examples\": [ { \"my_bool\": true, \"my_int\": 123, \"my_nullable_enum\": { \"StringNewType\": \"foo\" } } ], \"type\": \"object\", \"properties\": { \"my_bool\": { \"type\": \"boolean\" }, \"my_int\": { \"type\": \"integer\" }, \"my_nullable_enum\": true } } ``` Schemars can implement JsonSchema on types from several popular crates, enabled via feature flags (dependency versions are shown in brackets): For example, to implement JsonSchema on types from chrono, enable it as a feature in the schemars dependency in your Cargo.toml like so: ``` [dependencies] schemars = { version = \"0.8\", features = [\"chrono\"] } ```" } ]
{ "category": "App Definition and Development", "file_name": "latest.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "This library provides anyhow::Error, a trait object based error type for easy idiomatic error handling in Rust applications. ``` [dependencies] anyhow = \"1.0\" ``` Compiler support: requires rustc 1.39+ Use Result<T, anyhow::Error>, or equivalently anyhow::Result<T>, as the return type of any fallible function. Within the function, use ? to easily propagate any error that implements the std::error::Error trait. ``` use anyhow::Result; fn getclusterinfo() -> Result<ClusterMap> { let config = std::fs::readtostring(\"cluster.json\")?; let map: ClusterMap = serdejson::fromstr(&config)?; Ok(map) } ``` Attach context to help the person troubleshooting the error understand where things went wrong. A low-level error like \"No such file or directory\" can be annoying to debug without more context about what higher level step the application was in the middle of. ``` use anyhow::{Context, Result}; fn main() -> Result<()> { ... it.detach().context(\"Failed to detach the important thing\")?; let content = std::fs::read(path) .with_context(|| format!(\"Failed to read instrs from {}\", path))?; ... } ``` ``` Error: Failed to read instrs from ./path/to/instrs.json Caused by: No such file or directory (os error 2) ``` Downcasting is supported and can be by value, by shared reference, or by mutable reference as needed. ``` // If the error was caused by redaction, then return a // tombstone instead of the content. match rootcause.downcastref::<DataStoreError>() { Some(DataStoreError::Censored()) => Ok(Poll::Ready(REDACTEDCONTENT)), None => Err(error), } ``` If using Rust 1.65, a backtrace is captured and printed with the error if the underlying error type does not already provide its own. In order to see backtraces, they must be enabled through the environment variables described in std::backtrace: Anyhow works with any error type that has an impl of std::error::Error, including ones defined in your crate. We do not bundle a derive(Error) macro but you can write the impls yourself or use a standalone macro like thiserror. ``` use thiserror::Error; pub enum FormatError { InvalidHeader { expected: String, found: String, }, MissingAttribute(String), } ``` One-off error messages can be constructed using the anyhow! macro, which supports string interpolation and produces an anyhow::Error. ``` return Err(anyhow!(\"Missing attribute: {}\", missing)); ``` A bail! macro is provided as a shorthand for the same early return. ``` bail!(\"Missing attribute: {}\", missing); ``` In no_std mode, almost all of the same API is available and works the same way. To depend on Anyhow in no_std mode, disable our default enabled \"std\" feature in Cargo.toml. A global allocator is required. ``` [dependencies] anyhow = { version = \"1.0\", default-features = false } ``` Since the ?-based error conversions would normally rely on the std::error::Error trait which is only available through std, no_std mode will require an explicit .map_err(Error::msg) when working with a non-Anyhow error type inside a function that returns Anyhow's error type. The anyhow::Error type works something like failure::Error, but unlike failure ours is built around the standard library's std::error::Error trait rather than a separate trait failure::Fail. The standard library has adopted the necessary improvements for this to be possible as part of RFC 2504. Use Anyhow if you don't care what error type your functions return, you just want it to be easy. This is common in application code. 
Use thiserror if you are a library that wants to design your own dedicated error type(s) so that on failures the caller gets exactly the information that you choose." } ]
{ "category": "App Definition and Development", "file_name": "docs.kubermatic.io.md", "project_name": "KubeCarrier", "subcategory": "Application Definition & Image Build" }
[ { "data": "Automate operations of a single Kubernetes cluster on your chosen cloud, on-prem, or edge environment. Automate multicloud, on-prem, and edge operations with a single management UI enabling you to deliver the cloud native transformation immediately. Operating System Manager is responsible for creating and managing the required configurations for worker nodes in a Kubernetes cluster. KubeLB is a tool to centrally manage load balancers across multicloud and on-prem environments. Scale like Google, more containers with the same amount of developers Stay flexible, from local testing to global enterprise operations Open source offers transparency and freedom, never be dependent again Speed up your cloud native adoption Automate Day 2 operations and reduce costs Empower your development team to work with the stack they need" } ]
{ "category": "App Definition and Development", "file_name": "metadata.md", "project_name": "Krator", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can customize docs.rs builds by defining [package.metadata.docs.rs] table in your crates' Cargo.toml. The available configuration flags you can customize are: ``` [package] name = \"test\" [package.metadata.docs.rs] features = [\"feature1\", \"feature2\"] all-features = true no-default-features = true default-target = \"x86_64-unknown-linux-gnu\" targets = [\"x8664-apple-darwin\", \"x8664-pc-windows-msvc\"] rustc-args = [\"--example-rustc-arg\"] rustdoc-args = [\"--example-rustdoc-arg\"] cargo-args = [\"-Z\", \"build-std\"] ```" } ]
{ "category": "App Definition and Development", "file_name": "docs.github.com.md", "project_name": "KubeCarrier", "subcategory": "Application Definition & Image Build" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "github-terms-of-service.md", "project_name": "KubeCarrier", "subcategory": "Application Definition & Image Build" }
[ { "data": "Thank you for using GitHub! We're happy you're here. Please read this Terms of Service agreement carefully before accessing or using GitHub. Because it is such an important contract between us and our users, we have tried to make it as clear as possible. For your convenience, we have presented these terms in a short non-binding summary followed by the full legal terms. | Section | What can you find there? | |:-|:-| | A. Definitions | Some basic terms, defined in a way that will help you understand this agreement. Refer back up to this section for clarification. | | B. Account Terms | These are the basic requirements of having an Account on GitHub. | | C. Acceptable Use | These are the basic rules you must follow when using your GitHub Account. | | D. User-Generated Content | You own the content you post on GitHub. However, you have some responsibilities regarding it, and we ask you to grant us some rights so we can provide services to you. | | E. Private Repositories | This section talks about how GitHub will treat content you post in private repositories. | | F. Copyright & DMCA Policy | This section talks about how GitHub will respond if you believe someone is infringing your copyrights on GitHub. | | G. Intellectual Property Notice | This describes GitHub's rights in the website and service. | | H. API Terms | These are the rules for using GitHub's APIs, whether you are using the API for development or data collection. | | I. Additional Product Terms | We have a few specific rules for GitHub's features and products. | | J. Beta Previews | These are some of the additional terms that apply to GitHub's features that are still in development. | | K. Payment | You are responsible for payment. We are responsible for billing you accurately. | | L. Cancellation and Termination | You may cancel this agreement and close your Account at any time. | | M. Communications with GitHub | We only use email and other electronic means to stay in touch with our users. We do not provide phone support. | | N. Disclaimer of Warranties | We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. | | O. Limitation of Liability | We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. | | P. Release and Indemnification | You are fully responsible for your use of the service. | | Q. Changes to these Terms of Service | We may modify this agreement, but we will give you 30 days' notice of material changes. | | R. Miscellaneous | Please see this section for legal details including our choice of law. | Effective date: November 16, 2020 Short version: We use these basic terms throughout the agreement, and they have specific meanings. You should know what we mean when we use each of the terms. There's not going to be a test on it, but it's still useful" }, { "data": "Short version: Personal Accounts and Organizations have different administrative controls; a human must create your Account; you must be 13 or over; you must provide a valid email address; and you may not have more than one free Account. You alone are responsible for your Account and anything that happens while you are signed in to or using your Account. You are responsible for keeping your Account secure. Users. 
Subject to these Terms, you retain ultimate administrative control over your Personal Account and the Content within it. Organizations. The \"owner\" of an Organization that was created under these Terms has ultimate administrative control over that Organization and the Content within it. Within the Service, an owner can manage User access to the Organizations data and projects. An Organization may have multiple owners, but there must be at least one Personal Account designated as an owner of an Organization. If you are the owner of an Organization under these Terms, we consider you responsible for the actions that are performed on or through that Organization. You must provide a valid email address in order to complete the signup process. Any other information requested, such as your real name, is optional, unless you are accepting these terms on behalf of a legal entity (in which case we need more information about the legal entity) or if you opt for a paid Account, in which case additional information will be necessary for billing purposes. We have a few simple rules for Personal Accounts on GitHub's Service. You are responsible for keeping your Account secure while you use our Service. We offer tools such as two-factor authentication to help you maintain your Account's security, but the content of your Account and its security are up to you. In some situations, third parties' terms may apply to your use of GitHub. For example, you may be a member of an organization on GitHub with its own terms or license agreements; you may download an application that integrates with GitHub; or you may use GitHub to authenticate to another service. Please be aware that while these Terms are our full agreement with you, other parties' terms govern their relationships with you. If you are a government User or otherwise accessing or using any GitHub Service in a government capacity, this Government Amendment to GitHub Terms of Service applies to you, and you agree to its provisions. If you have signed up for GitHub Enterprise Cloud, the Enterprise Cloud Addendum applies to you, and you agree to its provisions. Short version: GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other. Your use of the Website and Service must not violate any applicable laws, including copyright or trademark laws, export control or sanctions laws, or other laws in your jurisdiction. You are responsible for making sure that your use of the Service is in compliance with laws and any applicable regulations. You agree that you will not under any circumstances violate our Acceptable Use Policies or Community Guidelines. Short version: You own content you create, but you allow us certain rights to it, so that we can display and share the content you" }, { "data": "You still have control over your content, and responsibility for it, and the rights you grant us are limited to those we need to provide the service. We have the right to remove content or close Accounts if we need to. You may create or upload User-Generated Content while using the Service. 
You are solely responsible for the content of, and for any harm resulting from, any User-Generated Content that you post, upload, link to or otherwise make available via the Service, regardless of the form of that Content. We are not responsible for any public display or misuse of your User-Generated Content. We have the right to refuse or remove any User-Generated Content that, in our sole discretion, violates any laws or GitHub terms or policies. User-Generated Content displayed on GitHub Mobile may be subject to mobile app stores' additional terms. You retain ownership of and responsibility for Your Content. If you're posting anything you did not create yourself or do not own the rights to, you agree that you are responsible for any Content you post; that you will only submit Content that you have the right to post; and that you will fully comply with any third party licenses relating to Content you post. Because you retain ownership of and responsibility for Your Content, we need you to grant us and other GitHub Users certain legal permissions, listed in Sections D.4 D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it. We need the legal right to do things like host Your Content, publish it, and share it. You grant us and our legal successors the right to store, archive, parse, and display Your Content, and make incidental copies, as necessary to provide the Service, including improving the Service over time. This license includes the right to do things like copy it to our database and make backups; show it to you and other users; parse it into a search index or otherwise analyze it on our servers; share it with other users; and perform it, in case Your Content is something like music or video. This license does not grant GitHub the right to sell Your Content. It also does not grant GitHub the right to otherwise distribute or use Your Content outside of our provision of the Service, except that as part of the right to archive Your Content, GitHub may permit our partners to store and archive Your Content in public repositories in connection with the GitHub Arctic Code Vault and GitHub Archive Program. Any User-Generated Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and \"fork\" your repositories (this means that others may make their own copies of Content from your repositories in repositories they" }, { "data": "If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to use, display, and perform Your Content through the GitHub Service and to reproduce Your Content solely on GitHub as permitted through GitHub's functionality (for example, through forking). You may grant further rights if you adopt a license. If you are uploading Content you did not create or own, you are responsible for ensuring that the Content you upload is licensed under terms that grant these permissions to other GitHub Users. 
Whenever you add Content to a repository containing notice of a license, you license that Content under the same terms, and you agree that you have the right to license that Content under those terms. If you have a separate agreement to license that Content under different terms, such as a contributor license agreement, that agreement will supersede. Isn't this just how it works already? Yep. This is widely accepted as the norm in the open-source community; it's commonly referred to by the shorthand \"inbound=outbound\". We're just making it explicit. You retain all moral rights to Your Content that you upload, publish, or submit to any part of the Service, including the rights of integrity and attribution. However, you waive these rights and agree not to assert them against us, to enable us to reasonably exercise the rights granted in Section D.4, but not otherwise. To the extent this agreement is not enforceable by applicable law, you grant GitHub the rights we need to use Your Content without attribution and to make reasonable adaptations of Your Content as necessary to render the Website and provide the Service. Short version: We treat the content of private repositories as confidential, and we only access it as described in our Privacy Statementfor security purposes, to assist the repository owner with a support matter, to maintain the integrity of the Service, to comply with our legal obligations, if we have reason to believe the contents are in violation of the law, or with your consent. Some Accounts may have private repositories, which allow the User to control access to Content. GitHub considers the contents of private repositories to be confidential to you. GitHub will protect the contents of private repositories from unauthorized use, access, or disclosure in the same manner that we would use to protect our own confidential information of a similar nature and in no event with less than a reasonable degree of care. GitHub personnel may only access the content of your private repositories in the situations described in our Privacy Statement. You may choose to enable additional access to your private repositories. For example: Additionally, we may be compelled by law to disclose the contents of your private repositories. GitHub will provide notice regarding our access to private repository content, unless for legal disclosure, to comply with our legal obligations, or where otherwise bound by requirements under law, for automated scanning, or if in response to a security threat or other risk to security. If you believe that content on our website violates your copyright, please contact us in accordance with our Digital Millennium Copyright Act Policy. If you are a copyright owner and you believe that content on GitHub violates your rights, please contact us via our convenient DMCA form or by emailing copyright@github.com. There may be legal consequences for sending a false or frivolous takedown notice. Before sending a takedown request, you must consider legal uses such as fair use and licensed uses. We will terminate the Accounts of repeat infringers of this policy. Short version: We own the service and all of our" }, { "data": "In order for you to use our content, we give you certain rights to it, but you may only use our content in the way we have allowed. GitHub and our licensors, vendors, agents, and/or our content providers retain ownership of all intellectual property rights of any kind related to the Website and Service. 
We reserve all rights that are not expressly granted to you under this Agreement or by law. The look and feel of the Website and Service is copyright GitHub, Inc. All rights reserved. You may not duplicate, copy, or reuse any portion of the HTML/CSS, JavaScript, or visual design elements or concepts without express written permission from GitHub. If youd like to use GitHubs trademarks, you must follow all of our trademark guidelines, including those on our logos page: https://github.com/logos. This Agreement is licensed under this Creative Commons Zero license. For details, see our site-policy repository. Short version: You agree to these Terms of Service, plus this Section H, when using any of GitHub's APIs (Application Provider Interface), including use of the API through a third party product that accesses GitHub. Abuse or excessively frequent requests to GitHub via the API may result in the temporary or permanent suspension of your Account's access to the API. GitHub, in our sole discretion, will determine abuse or excessive usage of the API. We will make a reasonable attempt to warn you via email prior to suspension. You may not share API tokens to exceed GitHub's rate limitations. You may not use the API to download data or Content from GitHub for spamming purposes, including for the purposes of selling GitHub users' personal information, such as to recruiters, headhunters, and job boards. All use of the GitHub API is subject to these Terms of Service and the GitHub Privacy Statement. GitHub may offer subscription-based access to our API for those Users who require high-throughput access or access that would result in resale of GitHub's Service. Short version: You need to follow certain specific terms and conditions for GitHub's various features and products, and you agree to the Supplemental Terms and Conditions when you agree to this Agreement. Some Service features may be subject to additional terms specific to that feature or product as set forth in the GitHub Additional Product Terms. By accessing or using the Services, you also agree to the GitHub Additional Product Terms. Short version: Beta Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Beta Previews better. Beta Previews may not be supported and may be changed at any time without notice. In addition, Beta Previews are not subject to the same security measures and auditing to which the Service has been and is subject. By using a Beta Preview, you use it at your own risk. As a user of Beta Previews, you may get access to special information that isnt available to the rest of the world. Due to the sensitive nature of this information, its important for us to make sure that you keep that information secret. Confidentiality Obligations. You agree that any non-public Beta Preview information we give you, such as information about a private Beta Preview, will be considered GitHubs confidential information (collectively, Confidential Information), regardless of whether it is marked or identified as" }, { "data": "You agree to only use such Confidential Information for the express purpose of testing and evaluating the Beta Preview (the Purpose), and not for any other purpose. 
You should use the same degree of care as you would with your own confidential information, but no less than reasonable precautions to prevent any unauthorized use, disclosure, publication, or dissemination of our Confidential Information. You promise not to disclose, publish, or disseminate any Confidential Information to any third party, unless we dont otherwise prohibit or restrict such disclosure (for example, you might be part of a GitHub-organized group discussion about a private Beta Preview feature). Exceptions. Confidential Information will not include information that is: (a) or becomes publicly available without breach of this Agreement through no act or inaction on your part (such as when a private Beta Preview becomes a public Beta Preview); (b) known to you before we disclose it to you; (c) independently developed by you without breach of any confidentiality obligation to us or any third party; or (d) disclosed with permission from GitHub. You will not violate the terms of this Agreement if you are required to disclose Confidential Information pursuant to operation of law, provided GitHub has been given reasonable advance written notice to object, unless prohibited by law. Were always trying to improve of products and services, and your feedback as a Beta Preview user will help us do that. If you choose to give us any ideas, know-how, algorithms, code contributions, suggestions, enhancement requests, recommendations or any other feedback for our products or services (collectively, Feedback), you acknowledge and agree that GitHub will have a royalty-free, fully paid-up, worldwide, transferable, sub-licensable, irrevocable and perpetual license to implement, use, modify, commercially exploit and/or incorporate the Feedback into our products, services, and documentation. Short version: You are responsible for any fees associated with your use of GitHub. We are responsible for communicating those fees to you clearly and accurately, and letting you know well in advance if those prices change. Our pricing and payment terms are available at github.com/pricing. If you agree to a subscription price, that will remain your price for the duration of the payment term; however, prices are subject to change at the end of a payment term. Payment Based on Plan For monthly or yearly payment plans, the Service is billed in advance on a monthly or yearly basis respectively and is non-refundable. There will be no refunds or credits for partial months of service, downgrade refunds, or refunds for months unused with an open Account; however, the service will remain active for the length of the paid billing period. In order to treat everyone equally, no exceptions will be made. Payment Based on Usage Some Service features are billed based on your usage. A limited quantity of these Service features may be included in your plan for a limited term without additional charge. If you choose to use paid Service features beyond the quantity included in your plan, you pay for those Service features based on your actual usage in the preceding month. Monthly payment for these purchases will be charged on a periodic basis in arrears. See GitHub Additional Product Terms for Details. Invoicing For invoiced Users, User agrees to pay the fees in full, up front without deduction or setoff of any kind, in U.S." }, { "data": "User must pay the fees within thirty (30) days of the GitHub invoice date. Amounts payable under this Agreement are non-refundable, except as otherwise provided in this Agreement. 
If User fails to pay any fees on time, GitHub reserves the right, in addition to taking any other action at law or equity, to (i) charge interest on past due amounts at 1.0% per month or the highest interest rate allowed by law, whichever is less, and to charge all expenses of recovery, and (ii) terminate the applicable order form. User is solely responsible for all taxes, fees, duties and governmental assessments (except for taxes based on GitHub's net income) that are imposed or become due in connection with this Agreement. By agreeing to these Terms, you are giving us permission to charge your on-file credit card, PayPal account, or other approved methods of payment for fees that you authorize for GitHub. You are responsible for all fees, including taxes, associated with your use of the Service. By using the Service, you agree to pay GitHub any charge incurred in connection with your use of the Service. If you dispute the matter, contact us through the GitHub Support portal. You are responsible for providing us with a valid means of payment for paid Accounts. Free Accounts are not required to provide payment information. Short version: You may close your Account at any time. If you do, we'll treat your information responsibly. It is your responsibility to properly cancel your Account with GitHub. You can cancel your Account at any time by going into your Settings in the global navigation bar at the top of the screen. The Account screen provides a simple, no questions asked cancellation link. We are not able to cancel Accounts in response to an email or phone request. We will retain and use your information as necessary to comply with our legal obligations, resolve disputes, and enforce our agreements, but barring legal requirements, we will delete your full profile and the Content of your repositories within 90 days of cancellation or termination (though some information may remain in encrypted backups). This information cannot be recovered once your Account is canceled. We will not delete Content that you have contributed to other Users' repositories or that other Users have forked. Upon request, we will make a reasonable effort to provide an Account owner with a copy of your lawful, non-infringing Account contents after Account cancellation, termination, or downgrade. You must make this request within 90 days of cancellation, termination, or downgrade. GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time. All provisions of this Agreement which, by their nature, should survive termination will survive termination including, without limitation: ownership provisions, warranty disclaimers, indemnity, and limitations of liability. Short version: We use email and other electronic means to stay in touch with our users. For contractual purposes, you (1) consent to receive communications from us in an electronic form via the email address you have submitted or via the Service; and (2) agree that all Terms of Service, agreements, notices, disclosures, and other communications that we provide to you electronically satisfy any legal requirement that those communications would satisfy if they were on paper. 
This section does not affect your non-waivable" }, { "data": "Communications made through email or GitHub Support's messaging system will not constitute legal notice to GitHub or any of its officers, employees, agents or representatives in any situation where notice to GitHub is required by contract or any law or regulation. Legal notice to GitHub must be in writing and served on GitHub's legal agent. GitHub only offers support via email, in-Service communications, and electronic messages. We do not offer telephone support. Short version: We provide our service as is, and we make no promises or guarantees about this service. Please read this section carefully; you should understand what to expect. GitHub provides the Website and the Service as is and as available, without warranty of any kind. Without limiting this, we expressly disclaim all warranties, whether express, implied or statutory, regarding the Website and the Service including without limitation any warranty of merchantability, fitness for a particular purpose, title, security, accuracy and non-infringement. GitHub does not warrant that the Service will meet your requirements; that the Service will be uninterrupted, timely, secure, or error-free; that the information provided through the Service is accurate, reliable or correct; that any defects or errors will be corrected; that the Service will be available at any particular time or location; or that the Service is free of viruses or other harmful components. You assume full responsibility and risk of loss resulting from your downloading and/or use of files, information, content or other material obtained from the Service. Short version: We will not be liable for damages or losses arising from your use or inability to use the service or otherwise arising under this agreement. Please read this section carefully; it limits our obligations to you. You understand and agree that we will not be liable to you or any third party for any loss of profits, use, goodwill, or data, or for any incidental, indirect, special, consequential or exemplary damages, however arising, that result from Our liability is limited whether or not we have been informed of the possibility of such damages, and even if a remedy set forth in this Agreement is found to have failed of its essential purpose. We will have no liability for any failure or delay due to matters beyond our reasonable control. Short version: You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved. If you have a dispute with one or more Users, you agree to release GitHub from any and all claims, demands and damages (actual and consequential) of every kind and nature, known and unknown, arising out of or in any way connected with such disputes. 
You agree to indemnify us, defend us, and hold us harmless from and against any and all claims, liabilities, and expenses, including attorneys fees, arising out of your use of the Website and the Service, including but not limited to your violation of this Agreement, provided that GitHub (1) promptly gives you written notice of the claim, demand, suit or proceeding; (2) gives you sole control of the defense and settlement of the claim, demand, suit or proceeding (provided that you may not settle any claim, demand, suit or proceeding unless the settlement unconditionally releases GitHub of all liability); and (3) provides to you all reasonable assistance, at your" }, { "data": "Short version: We want our users to be informed of important changes to our terms, but some changes aren't that important we don't want to bother you every time we fix a typo. So while we may modify this agreement at any time, we will notify users of any material changes and give you time to adjust to them. We reserve the right, at our sole discretion, to amend these Terms of Service at any time and will update these Terms of Service in the event of any such amendments. We will notify our Users of material changes to this Agreement, such as price increases, at least 30 days prior to the change taking effect by posting a notice on our Website or sending email to the primary email address specified in your GitHub account. Customer's continued use of the Service after those 30 days constitutes agreement to those revisions of this Agreement. For any other modifications, your continued use of the Website constitutes agreement to our revisions of these Terms of Service. You can view all changes to these Terms in our Site Policy repository. We reserve the right at any time and from time to time to modify or discontinue, temporarily or permanently, the Website (or any part of it) with or without notice. Except to the extent applicable law provides otherwise, this Agreement between you and GitHub and any access to or use of the Website or the Service are governed by the federal laws of the United States of America and the laws of the State of California, without regard to conflict of law provisions. You and GitHub agree to submit to the exclusive jurisdiction and venue of the courts located in the City and County of San Francisco, California. GitHub may assign or delegate these Terms of Service and/or the GitHub Privacy Statement, in whole or in part, to any person or entity at any time with or without your consent, including the license grant in Section D.4. You may not assign or delegate any rights or obligations under the Terms of Service or Privacy Statement without our prior written consent, and any unauthorized assignment and delegation by you is void. Throughout this Agreement, each section includes titles and brief summaries of the following terms and conditions. These section titles and brief summaries are not legally binding. If any part of this Agreement is held invalid or unenforceable, that portion of the Agreement will be construed to reflect the parties original intent. The remaining portions will remain in full force and effect. Any failure on the part of GitHub to enforce any provision of this Agreement will not be considered a waiver of our right to enforce such provision. Our rights under this Agreement will survive any termination of this Agreement. 
This Agreement may only be modified by a written amendment signed by an authorized representative of GitHub, or by the posting by GitHub of a revised version in accordance with Section Q. Changes to These Terms. These Terms of Service, together with the GitHub Privacy Statement, represent the complete and exclusive statement of the agreement between you and us. This Agreement supersedes any proposal or prior agreement oral or written, and any other communications between you and GitHub relating to the subject matter of these terms including any confidentiality or nondisclosure agreements. Questions about the Terms of Service? Contact us through the GitHub Support portal. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "understanding-github-code-search-syntax.md", "project_name": "KubeCarrier", "subcategory": "Application Definition & Image Build" }
[ { "data": "You can build search queries for the results you want with specialized code qualifiers, regular expressions, and boolean operations. The search syntax in this article only applies to searching code with GitHub code search. Note that the syntax and qualifiers for searching for non-code content, such as issues, users, and discussions, is not the same as the syntax for code search. For more information on non-code search, see \"About searching on GitHub\" and \"Searching on GitHub.\" Search queries consist of search terms, comprising text you want to search for, and qualifiers, which narrow down the search. A bare term with no qualifiers will match either the content of a file or the file's path. For example, the following query: ``` http-push ``` The above query will match the file docs/http-push.txt, even if it doesn't contain the term http-push. It will also match a file called example.txt if it contains the term http-push. You can enter multiple terms separated by whitespace to search for documents that satisfy both terms. For example, the following query: ``` sparse index ``` The search results would include all documents containing both the terms sparse and index, in any order. As examples, it would match a file containing SparseIndexVector, a file with the phrase index for sparse trees, and even a file named index.txt that contains the term sparse. Searching for multiple terms separated by whitespace is the equivalent to the search hello AND world. Other boolean operations, such as hello OR world, are also supported. For more information about boolean operations, see \"Using boolean operations.\" Code search also supports searching for an exact string, including whitespace. For more information, see \"Query for an exact match.\" You can narrow your code search with specialized qualifiers, such as repo:, language: and path:. For more information on the qualifiers you can use in code search, see \"Using qualifiers.\" You can also use regular expressions in your searches by surrounding the expression in slashes. For more information on using regular expressions, see \"Using regular expressions.\" To search for an exact string, including whitespace, you can surround the string in quotes. For example: ``` \"sparse index\" ``` You can also use quoted strings in qualifiers, for example: ``` path:git language:\"protocol buffers\" ``` To search for code containing a quotation mark, you can escape the quotation mark using a backslash. For example, to find the exact string name = \"tensorflow\", you can search: ``` \"name = \\\"tensorflow\\\"\" ``` To search for code containing a backslash, \\, use a double backslash, \\\\. The two escape sequences \\\\ and \\\" can be used outside of quotes as well. No other escape sequences are recognized, though. A backslash that isn't followed by either \" or \\ is included in the search, unchanged. Additional escape sequences, such as \\n to match a newline character, are supported in regular expressions. See \"Using regular expressions.\" Code search supports boolean expressions. You can use the operators AND, OR, and NOT to combine search terms. By default, adjacent terms separated by whitespace are equivalent to using the AND operator. For example, the search query sparse index is the same as sparse AND index, meaning that the search results will include all documents containing both the terms sparse and index, in any order. To search for documents containing either one term or the other, you can use the OR operator. 
For example, the following query will match documents containing either sparse or index: ``` sparse OR index ``` To exclude files from your search results, you can use the NOT" }, { "data": "For example, to exclude files in the testing directory, you can search: ``` \"fatal error\" NOT path:testing ``` You can use parentheses to express more complicated boolean expressions. For example: ``` (language:ruby OR language:python) AND NOT path:\"/tests/\" ``` You can use specialized keywords to qualify your search. To search within a repository, use the repo: qualifier. You must provide the full repository name, including the owner. For example: ``` repo:github-linguist/linguist ``` To search within a set of repositories, you can combine multiple repo: qualifiers with the boolean operator OR. For example: ``` repo:github-linguist/linguist OR repo:tree-sitter/tree-sitter ``` Note: Code search does not currently support regular expressions or partial matching for repository names, so you will have to type the entire repository name (including the user prefix) for the repo: qualifier to work. To search for files within an organization, use the org: qualifier. For example: ``` org:github ``` To search for files within a personal account, use the user: qualifier. For example: ``` user:octocat ``` Note: Code search does not currently support regular expressions or partial matching for organization or user names, so you will have to type the entire organization or user name for the qualifier to work. To narrow down to a specific languages, use the language: qualifier. For example: ``` language:ruby OR language:cpp OR language:csharp ``` For a complete list of supported language names, see languages.yaml in github-linguist/linguist. If your preferred language is not on the list, you can open a pull request to add it. To search within file paths, use the path: qualifier. This will match files containing the term anywhere in their file path. For example, to find files containing the term unit_tests in their path, use: ``` path:unit_tests ``` The above query will match both src/unittests/mytest.py and src/docs/unittests.md since they both contain unittest somewhere in their path. To match only a specific filename (and not part of the path), you could use a regular expression: ``` path:/(^|\\/)README\\.md$/ ``` Note that the . in the filename is escaped, since . has special meaning for regular expressions. For more information about using regular expressions, see \"Using regular expressions.\" You can also use some limited glob expressions in the path: qualifier. For example, to search for files with the extension txt, you can use: ``` path:*.txt ``` ``` path:src/*.js ``` By default, glob expressions are not anchored to the start of the path, so the above expression would still match a path like app/src/main.js. But if you prefix the expression with /, it will anchor to the start. For example: ``` path:/src/*.js ``` Note that doesn't match the / character, so for the above example, all results will be direct descendants of the src directory. To match within subdirectories, so that results include deeply nested files such as /src/app/testing/utils/example.js, you can use *. For example: ``` path:/src//*.js ``` You can also use the ? global character. For example, to match the path file.aac or file.abc, you can use: ``` path:*.a?c ``` ``` path:\"file?\" ``` Glob expressions are disabled for quoted strings, so the above query will only match paths containing the literal string file?. 
You can search for symbol definitions in code, such as function or class definitions, using the symbol: qualifier. Symbol search is based on parsing your code using the open source Tree-sitter parser ecosystem, so no extra setup or build tool integration is required. For example, to search for a symbol called WithContext: ``` language:go symbol:WithContext ``` In some languages, you can search for symbols using a prefix (e.g. a prefix of their class" }, { "data": "For example, for a method deleteRows on a struct Maint, you could search symbol:Maint.deleteRows if you are using Go, or symbol:Maint::deleteRows in Rust. You can also use regular expressions with the symbol qualifier. For example, the following query would find conversions people have implemented in Rust for the String type: ``` language:rust symbol:/^String::to_.*/ ``` Note that this qualifier only searches for definitions and not references, and not all symbol types or languages are fully supported yet. Symbol extraction is supported for the following languages: We are working on adding support for more languages. If you would like to help contribute to this effort, you can add support for your language in the open source Tree-sitter parser ecosystem, upon which symbol search is based. By default, bare terms search both paths and file content. To restrict a search to strictly match the content of a file and not file paths, use the content: qualifier. For example: ``` content:README.md ``` This query would only match files containing the term README.md, rather than matching files named README.md. To filter based on repository properties, you can use the is: qualifier. is: supports the following values: For example: ``` path:/^MIT.txt$/ is:archived ``` Note that the is: qualifier can be inverted with the NOT operator. To search for non-archived repositories, you can search: ``` log4j NOT is:archived ``` To exclude forks from your results, you can search: ``` log4j NOT is:fork ``` Code search supports regular expressions to search for patterns in your code. You can use regular expressions in bare search terms as well as within many qualifiers, by surrounding the regex in slashes. For example, to search for the regular expression sparse.*index, you would use: ``` /sparse.*index/ ``` Note that you'll have to escape any forward slashes within the regular expression. For example, to search for files within the App/src directory, you would use: ``` /^App\\/src\\// ``` Inside a regular expression, \\n stands for a newline character, \\t stands for a tab, and \\x{hhhh} can be used to escape any Unicode character. This means you can use regular expressions to search for exact strings that contain characters that you can't type into the search bar. Most common regular expressions features work in code search. However, \"look-around\" assertions are not supported. All parts of a search, such as search terms, exact strings, regular expressions, qualifiers, parentheses, and the boolean keywords AND, OR, and NOT, must be separated from one another with spaces. The one exception is that items inside parentheses, ( ), don't need to be separated from the parentheses. If your search contains multiple components that aren't separated by spaces, or other text that does not follow the rules listed above, code search will try to guess what you mean. It often falls back on treating that component of your query as the exact text to search for. 
For example, the following query: ``` printf(\"hello world\\n\"); ``` Code search will give up on interpreting the parentheses and quotes as special characters and will instead search for files containing that exact code. If code search guesses wrong, you can always get the search you wanted by using quotes and spaces to make the meaning clear. Code search is case-insensitive. Searching for True will include results for uppercase TRUE and lowercase true. You cannot do case-sensitive searches. Regular expression searches (e.g. for ) are also case-insensitive, and thus would return This, THIS and this in addition to any instances of tHiS. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
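As a quick illustration of how the pieces above can combine, the query below strings together a repository qualifier, a language qualifier, a symbol regular expression, and a boolean is: filter. It is only a sketch that reuses constructs already documented above; the repository name and the symbol prefix are the sample values from this page, not a recommendation.

```
repo:github-linguist/linguist language:ruby symbol:/^to_.*/ NOT is:fork
```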
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "KUDO", "subcategory": "Application Definition & Image Build" }
[ { "data": "We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation. | Name | Name.1 | Name.2 | Last commit message | Last commit date | |:|:|:|-:|-:| | parent directory.. | parent directory.. | parent directory.. | nan | nan | | resources | resources | resources | nan | nan | | README.md | README.md | README.md | nan | nan | | concepts.md | concepts.md | concepts.md | nan | nan | | configuration.md | configuration.md | configuration.md | nan | nan | | cruise-control.md | cruise-control.md | cruise-control.md | nan | nan | | custom.md | custom.md | custom.md | nan | nan | | debug-kafka.md | debug-kafka.md | debug-kafka.md | nan | nan | | external-access-runbook.md | external-access-runbook.md | external-access-runbook.md | nan | nan | | external-access.md | external-access.md | external-access.md | nan | nan | | install.md | install.md | install.md | nan | nan | | kafka-connect.md | kafka-connect.md | kafka-connect.md | nan | nan | | kudo-kafka-runbook.md | kudo-kafka-runbook.md | kudo-kafka-runbook.md | nan | nan | | limitations.md | limitations.md | limitations.md | nan | nan | | mirrormaker.md | mirrormaker.md | mirrormaker.md | nan | nan | | monitoring.md | monitoring.md | monitoring.md | nan | nan | | production.md | production.md | production.md | nan | nan | | release-notes.md | release-notes.md | release-notes.md | nan | nan | | security.md | security.md | security.md | nan | nan | | update.md | update.md | update.md | nan | nan | | upgrade-kafka.md | upgrade-kafka.md | upgrade-kafka.md | nan | nan | | upgrade.md | upgrade.md | upgrade.md | nan | nan | | versions.md | versions.md | versions.md | nan | nan | | View all files | View all files | View all files | nan | nan |" } ]
{ "category": "App Definition and Development", "file_name": "getting-started.md", "project_name": "KubeOrbit", "subcategory": "Application Definition & Image Build" }
[ { "data": "KubeOrbit CLI needs a KUBECONFIG env setup in your local, Otherwise it will use the default directory $HOME/.kube/config: ``` export KUBECONFIG=/path/to/your/config``` Download CLI from our github release page, Choose different binary to match your operating system. MacOS: ``` mv ./orbitctl /usr/local/bin/orbitctlorbitctl -h ``` ``` Orbitctl can forward traffic intended for a service in-cluster to your local workload.Usage: orbitctl [command]Examples:orbitctl forward --deployment depolyment-a --namespace ns-a --containerPort 8080 --localPort 8080Available Commands: forward help Help about any command uninstallFlags: -h, --help help for orbitctl -v, --version version for orbitctlUse \"orbitctl [command] --help\" for more information about a command.``` Launch your local service, below as a example application with Eureka registry: ``` java -Deureka.client.register-with-eureka=false -jar user-service.jar``` ``` 2022-02-18 15:43:38.546 INFO 22996 [ main] Tomcat started on port(s): 9000 (http) with context path ''2022-02-18 15:43:38.555 INFO 22996 [ main] Started PortalServiceApplication in 15.137 seconds (JVM running for 16.205)``` Then use orbitctl forward command to forward remote user-service to local: ``` orbitctl forward --deployment user-service-deployment -n namespace-a --containerPort 9000 --localPort 9000``` ``` workload user-service-deployment recreatedssh forwarded localPort 2662 to remotePort 2222channel connected, you can start testing your service``` Once you see the outputs above, your target workload in the remote kubernetes cluster has the KubeOrbit proxy agent installed, and all traffic that calls the remote target workload will be forwarded to your local workstation. When you are done debugging&testing, the easiest way to exit CLI is type command + c in MacOS or ctrl + c in Windows, your channel will be uninstalled and the remote workload will rollback to its original state: ``` uninstall forward workload user-service-deploymentworkload uninstallation successful```" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "KubeVela", "subcategory": "Application Definition & Image Build" }
[ { "data": "Before starting, please confirm that you've installed KubeVela and enabled the VelaUX addon according to the installation guide. Welcome to KubeVela! This section will guide you to deliver your first app. Below is a classic KubeVela application which contains one component with one operational trait, basically, it means to deploy a container image as webservice with one replica. Additionally, there are three policies and workflow steps, it means to deploy the application into two different environments with different configurations. ``` apiVersion: core.oam.dev/v1beta1kind: Applicationmetadata: name: first-vela-appspec: components: - name: express-server type: webservice properties: image: oamdev/hello-world ports: - port: 8000 expose: true traits: - type: scaler properties: replicas: 1 policies: - name: target-default type: topology properties: # The cluster with name local is installed the KubeVela. clusters: [\"local\"] namespace: \"default\" - name: target-prod type: topology properties: clusters: [\"local\"] # This namespace must be created before deploying. namespace: \"prod\" - name: deploy-ha type: override properties: components: - type: webservice traits: - type: scaler properties: replicas: 2 workflow: steps: - name: deploy2default type: deploy properties: policies: [\"target-default\"] - name: manual-approval type: suspend - name: deploy2prod type: deploy properties: policies: [\"target-prod\", \"deploy-ha\"]``` ``` ``` environment prod with namespace prod created``` ``` vela up -f https://kubevela.net/example/applications/first-app.yaml``` ``` Applying an application in vela K8s object format...I0516 15:45:18.123356 27156 apply.go:107] \"creating object\" name=\"first-vela-app\" resource=\"core.oam.dev/v1beta1, Kind=Application\" App has been deployed Port forward: vela port-forward first-vela-app SSH: vela exec first-vela-app Logging: vela logs first-vela-app App status: vela status first-vela-app Endpoint: vela status first-vela-app --endpointApplication prod/first-vela-app applied.``` ``` vela status first-vela-app``` ``` About: Name: first-vela-app Namespace: prod Created at: 2022-05-16 15:45:18 +0800 CST Status: workflowSuspendingWorkflow: ...Services: - Name: express-server Cluster: local Namespace: default Type: webservice Healthy Ready:1/1 Traits: scaler``` The application status will change to workflowSuspending, means the workflow has finished the first two steps and waiting for manual approval as per the step specified. We can check the application by: ``` vela port-forward first-vela-app 8000:8000``` It will invoke your browser and your can see the website: ``` <xmp>Hello KubeVela! Make shipping applications more enjoyable. ...snip...``` After we finished checking the application, we can approve the workflow to continue: ``` vela workflow resume first-vela-app``` ``` Successfully resume workflow: first-vela-app``` Then the rest will be delivered in the prod namespace: ``` vela status first-vela-app``` ``` About: Name: first-vela-app Namespace: prod Created at: 2022-05-16 15:45:18 +0800 CST Status: runningWorkflow:" }, { "data": "- Name: express-server Cluster: local Namespace: prod Type: webservice Healthy Ready:2/2 Traits: scaler - Name: express-server Cluster: local Namespace: default Type: webservice Healthy Ready:1/1 Traits: scaler``` Great! You have finished deploying your first KubeVela application, you can also view and manage it in UI. After finishing the installation of VelaUX, you can view and manage the application created. 
Port forward the UI if you don't have an endpoint for access: ``` vela port-forward addon-velaux -n vela-system 8080:80``` VelaUX needs authentication; the default username is admin and the password is VelaUX12345. You are required to set a new password on first login, so please make sure to remember it. Click the application card, then you can view the details of the application. The UI console shares a different metadata layer with the controller. It's more like the PaaS architecture of a company that chooses a database as the source of truth instead of relying on the etcd of Kubernetes. By default, if you're using the CLI to manage applications directly through the Kubernetes API, we will sync the metadata to the UI backend automatically. Once you have deployed the application from the UI console, the automatic sync process will be stopped, as the source of truth may have changed. If the namespace of the application operated by the CLI has already been associated with the corresponding environment in the UI, then the application will be automatically synchronized to the project associated with that environment in the UI. Otherwise, the application will be synchronized to the default project. If you want to specify which project in the UI console an application should be synchronized to, please refer to Creating environments for the project. If any changes are made from the CLI after that, the UI console will detect the difference and show it to you. However, it's not recommended to modify the application properties from both sides. In conclusion, if you're a CLI/YAML/GitOps user, you'd better just use the CLI to manage the application CRD and use the UI console (velaux) as a dashboard. Once you've managed the app from the UI console, you need to stick with that behavior and manage apps from the UI, API, or Webhook provided by velaux. ``` vela delete first-vela-app``` ``` Deleting Application \"first-vela-app\"app \"first-vela-app\" deleted from namespace \"prod\"``` That's it! You have succeeded at your first application delivery. Congratulations!" } ]
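As a recap, and assuming the same first-vela-app from this guide, the CLI flow above condenses into the short sequence below. Every command is one already used in this walkthrough; only the ordering is summarized here.

```bash
# Condensed version of the delivery flow shown above, using the same app and manifest URL.
vela up -f https://kubevela.net/example/applications/first-app.yaml   # deploy to the default target
vela status first-vela-app                                            # watch the workflow suspend for approval
vela port-forward first-vela-app 8000:8000                            # smoke-test the default environment locally
vela workflow resume first-vela-app                                   # approve and continue to the prod target
vela delete first-vela-app                                            # clean up when finished
```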
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "We gladly welcome any and all contributions to Lagoon! Lagoon benefits from any kind of contribution - whether it's a bugfix, new feature, documentation update, or simply some queue maintenance - we're happy that you want to help There's a whole section on how to get Lagoon running on your local machine using KinD over at Developing Lagoon. This documentation is still very WIP - but there are a lot of Makefile routines to help you out. We've got another section that outlines how to install Lagoon from Helm charts at Installing Lagoon Into Existing Kubernetes Cluster - we'd love to get this process as slick as possible! Right now one of our biggest needs is putting together examples of Lagoon working with various content management systems, etc, other than Drupal. If you can spin up an open source CMS or framework that we dont currently have as a Docker Compose stack, send us a PR. Look at the existing examples at https://github.com/uselagoon/lagoon-examples for tips, pointers and starter issues. One small catch wherever possible, wed like them to be built using our base Docker Hub images https://hub.docker.com/u/uselagoon if we dont have a suitable image, or our images need modifying throw us a PR (if you can) or create an issue (so someone else can) at https://github.com/uselagoon/lagoon-images. Help us improve our existing examples, if you can - are we following best practices, is there something were doing that doesnt make sense? Bonus points for anyone that helps contribute to tests for any of these examples weve got some example tests in a couple of the projects you can use for guidance https://github.com/amazeeio/drupal-example-simple/blob/8.x/TESTING_dockercompose.md. The testing framework were using is Leia, from the excellent team behind Lando. Help us to document our other examples better were not expecting a full manuscript, but tidy-ups, links to helpful resources and clarifying statements are all super-awesome. If you have any questions, reach out to us on Discord! We take security very seriously. If you discover a security issue or think you found one, please bring it to the maintainers' attention. Danger Please send your findings to security@amazee.io. Please DO NOT file a GitHub issue for them. Security reports are greatly appreciated and will receive public karma and swag! We're also working on a Bug Bounty system. We're always interested in fixing issues, therefore issue reports are very welcome. Please make sure to check that your issue does not already exist in the issue queue. Cool! Create an issue and we're happy to look over it. We can't guarantee that it will be implemented. But we are always interested in hearing ideas of what we could bring to Lagoon. Another good way is also to talk to us via Discord about your idea. Join today! Epic! Please send us a pull request for it, we will do our best to review it and merge it if possible." } ]
{ "category": "App Definition and Development", "file_name": "docs.github.com.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "Help for wherever you are on your GitHub journey. At the heart of GitHub is an open-source version control system (VCS) called Git. Git is responsible for everything GitHub-related that happens locally on your computer. You can connect to GitHub using the Secure Shell Protocol (SSH), which provides a secure channel over an unsecured network. You can create a repository on GitHub to store and collaborate on your project's files, then manage the repository's name and location. Create sophisticated formatting for your prose and code on GitHub with simple syntax. Pull requests let you tell others about changes you've pushed to a branch in a repository on GitHub. Once a pull request is opened, you can discuss and review the potential changes with collaborators and add follow-up commits before your changes are merged into the base branch. Keep your account and data secure with features like two-factor authentication, SSH, and commit signature verification. Use GitHub Copilot to get code suggestions in your editor. Learn to work with your local repositories on your computer and remote repositories hosted on GitHub. All GitHub docs are open source. See something that's wrong or unclear? Submit a pull request. Learn how to contribute" } ]
{ "category": "App Definition and Development", "file_name": "mysql.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "Cloud SQL for MySQL is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud Platform. For information specific to MySQL, see the MySQL documentation or learn more about Cloud SQL for MySQL. Create instances Connection Overview Enable and disable high availability on an instance Create and manage MySQL databases Create and manage MySQL users Export and import using SQL dump file Export and import using CSV files Create backups Create read replicas gcloud sql command-line Use the Cloud SQL Admin API REST API Best practices Performance tips Authorize requests Configure VPC Service Controls Cloud SQL Admin API error messages Pricing Quotas and limits Troubleshoot Cloud SQL feature support by database engine Release notes Billing questions Get support Security Bulletins Google Cloud Fundamentals: Core Infrastructure These lectures, demos, and hands-on labs give you an overview of Google Cloud products and services so that you can learn the value of Google Cloud and how to incorporate cloud-based solutions into your business strategies. Architecting with Google Cloud: Design and Process This course features a combination of lectures, design activities, and hands-on labs to show you how to use proven design patterns on Google Cloud to build highly reliable and efficient solutions and operate deployments that are highly available and cost-effective. Converting and optimizing queries from Oracle Database to Cloud SQL for MySQL Discusses the basic query differences between Oracle and Cloud SQL for MySQL, and how features in Oracle map to features in Cloud SQL for MySQL. Migrating Oracle users to Cloud SQL for MySQL: Terminology and functionality Part of a series that provides key information and guidance related to planning and performing Oracle 11g/12c database migrations to Cloud SQL for MySQL version 5.7, second-generation instances. Data residency overview Learn how to use Cloud SQL to enforce data residency requirements for data. Use Secret Manager to handle secrets in Cloud SQL Learn how to use Secret Manager to store sensitive information about Cloud SQL instances and users as secrets. Python SQLAlchemy Use SQLAlchemy with your Cloud SQL for MySQL database Node.js sample Connecting to your Cloud SQL for MySQL database in Node.js PHP PDO Connecting your Cloud SQL for MySQL database using PHP PDO Go web app sample Simple examples of connecting to Cloud SQL for MySQL using Go .NET sample This sample application demonstrates how to store data in Google Cloud SQL with a MySQL database when running in Google App Engine Flexible Environment. Java servlet Connecting to Cloud SQL for MySQL from a Java application Terraform for Cloud SQL networking Use Terraform to create Cloud SQL for MySQL instances with private networking options. Create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-10 UTC." } ]
{ "category": "App Definition and Development", "file_name": "mkdocs.yml.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "We gladly welcome any and all contributions to Lagoon! Lagoon benefits from any kind of contribution - whether it's a bugfix, new feature, documentation update, or simply some queue maintenance - we're happy that you want to help There's a whole section on how to get Lagoon running on your local machine using KinD over at Developing Lagoon. This documentation is still very WIP - but there are a lot of Makefile routines to help you out. We've got another section that outlines how to install Lagoon from Helm charts at Installing Lagoon Into Existing Kubernetes Cluster - we'd love to get this process as slick as possible! Right now one of our biggest needs is putting together examples of Lagoon working with various content management systems, etc, other than Drupal. If you can spin up an open source CMS or framework that we dont currently have as a Docker Compose stack, send us a PR. Look at the existing examples at https://github.com/uselagoon/lagoon-examples for tips, pointers and starter issues. One small catch wherever possible, wed like them to be built using our base Docker Hub images https://hub.docker.com/u/uselagoon if we dont have a suitable image, or our images need modifying throw us a PR (if you can) or create an issue (so someone else can) at https://github.com/uselagoon/lagoon-images. Help us improve our existing examples, if you can - are we following best practices, is there something were doing that doesnt make sense? Bonus points for anyone that helps contribute to tests for any of these examples weve got some example tests in a couple of the projects you can use for guidance https://github.com/amazeeio/drupal-example-simple/blob/8.x/TESTING_dockercompose.md. The testing framework were using is Leia, from the excellent team behind Lando. Help us to document our other examples better were not expecting a full manuscript, but tidy-ups, links to helpful resources and clarifying statements are all super-awesome. If you have any questions, reach out to us on Discord! We take security very seriously. If you discover a security issue or think you found one, please bring it to the maintainers' attention. Danger Please send your findings to security@amazee.io. Please DO NOT file a GitHub issue for them. Security reports are greatly appreciated and will receive public karma and swag! We're also working on a Bug Bounty system. We're always interested in fixing issues, therefore issue reports are very welcome. Please make sure to check that your issue does not already exist in the issue queue. Cool! Create an issue and we're happy to look over it. We can't guarantee that it will be implemented. But we are always interested in hearing ideas of what we could bring to Lagoon. Another good way is also to talk to us via Discord about your idea. Join today! Epic! Please send us a pull request for it, we will do our best to review it and merge it if possible." } ]
{ "category": "App Definition and Development", "file_name": "mia-platform-overview.md", "project_name": "Mia-Platform", "subcategory": "Application Definition & Image Build" }
[ { "data": "Mia-Platform is a cloud-native Platform Builder that helps you to build and manage your digital platform. By streamlining the Developer Experience, Mia-Platform allows organizations to reduce cognitive load on cloud-native complexity, increase software engineering productivity, and reach DevOps at scale, providing golden paths for a wide range of CNCF-landscape technologies. With Mia-Platform, you can standardize and reuse your code, enabling you to adopt composability in your organization. Accelerate your products' development and deployment by composing existing modules, relying on a flexible and consistent architecture. To further foster composability, Mia-Platform features a software catalog full of ready-to-use components that you can plug into your software. Among these components, one of the most important is Mia-Platform Fast Data, a data management layer that can be used to build a Digital Integration Hub. Thanks to this solution, you can connect your cloud-native platform with existing systems, decouple and offload legacy systems, and serve real-time data 24/7. Thus, you can fully benefit from the true power of your data. Mia-Platform products are built by developers for developers, and you can actively contribute. Our main purpose is to streamline the software development lifecycle, and we do so by collecting all the tools you need in a single place. The image does not provide an exhaustive list of the technologies used in Mia-Platform. New technologies are constantly added. In a single place Mia-Platform enables you to: Mia-Platform supports you in creating, maintaining, and evolving your own digital platform tailored to your business. By helping you build your Internal Developer Platform, Mia-Platform opens the doors of Platform Engineering to your organization. According to Gartner, Platform Engineering is the discipline of building and operating self-service internal developer platforms (IDPs) for software delivery and life cycle management. As software becomes more important for businesses to expand their services, Platform Engineering enables the industrialization of software development and deployment. By abstracting away most of the complexity related to microservices architecture, Mia-Platform enables the adoption of a composable" }, { "data": "The deployment of new features and products can be further accelerated thanks to a software catalog of ready-to-use microservices and applications. The catalog also fosters developer self-service, reusing existing assets, and helps standardization through different products. Thanks to the composable architecture, you can also easily create and connect existing Packaged Business Capabilities (PBCs) - i.e. projects running at runtime to perform a specific business task. This helps you to reduce the time-to-market of your new products and features, to avoid redundant and duplicated efforts, and ensures clear governance throughout the entire organization. By using Mia-Platform, you can define standards such as templates, plugins, etc., and make them available to all development teams within your organization. With Mia-Platform, you will be able to build your Digital Integration Hub relying on a solution that has repeatedly been mentioned by Gartner as a sample implementation. This solution is a great example of cohabitation between the paradigms of Data Mesh and Data Fabric, featuring the best attributes of each approach. 
This layer ingests data from different sources, aggregates it in single views according to business needs, and makes it available in near real-time. In this way, organizations can improve data availability, while also offloading legacy systems and decoupling them from external consumers. Mia-Platform provides you with a suite of several products that supports you in governing your platform, tackling composable business, and making legacy systems coexist. The products can be divided into two main categories: core products and additional components. These products are the backbone of Mia-Platform, and constitute the main solutions that our customers use on a daily basis. The core products are: These components contribute to the realization of some specific tasks within your products. They are available through Mia-Platform Marketplace, and they are: Along with the components above, you can also find Mia-Platforms open-source projects: Mia-Platform is available for purchase in three different ways: SaaS, PaaS, and On-Premises. For further details on the distribution model, please refer to this page." } ]
{ "category": "App Definition and Development", "file_name": "postgres.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "Cloud SQL for PostgreSQL is a fully-managed database service that helps you set up, maintain, manage, and administer your PostgreSQL relational databases on Google Cloud Platform. Learn more Create instances Connection Overview Enable and disable high availability on an instance Create and manage PostgreSQL databases Create and manage PostgreSQL users Export and import using pgdump, pgdumpall, and pg_restore Export and import using CSV files Create backups Create read replicas Build generative AI applications using Cloud SQL Client libraries and sample code for Cloud SQL gcloud sql command-line Use the Cloud SQL Admin API REST API Best practices Performance tips Authorize requests Configure VPC Service Controls Cloud SQL Admin API error messages Pricing Quotas and limits Troubleshoot Cloud SQL feature support by database engine Release notes Billing questions Getting support Security Bulletins Using Cloud SQL for PostgreSQL with Ruby on Rails 5 Learn how to connect a Ruby on Rails 5 app to Cloud SQL for PostgreSQL. Connecting to Cloud SQL with Cloud Functions Learn how to connect to Cloud SQL from Cloud Functions. Deploying Pega using Compute Engine and Cloud SQL Learn how to deploy Pega Platform, which is a business process management and customer relationship management (CRM) platform with Cloud SQL for PostgreSQL. Production launch checklist This checklist provides recommended activities to complete for launching a commercial application that uses Cloud SQL. This checklist focuses on Cloud SQL-specific activities. Data residency overview Learn how to use Cloud SQL to enforce data residency requirements for data. Use Secret Manager to handle secrets in Cloud SQL Learn how to use Secret Manager to store sensitive information about Cloud SQL instances and users as secrets. Python SQLAlchemy Use SQLAlchemy with your Cloud SQL for PostgreSQL database Node.js sample Connecting to your Cloud SQL for PostgreSQL database in Node.js PHP PDO Connecting your Cloud SQL for PostgreSQL database using PHP PDO Go web app sample Simple examples of connecting to Cloud SQL for PostgreSQL using Go .NET sample This sample application demonstrates how to store data in Google Cloud SQL with a PostgreSQL database when running in Google App Engine Flexible Environment. Terraform for Cloud SQL networking Use Terraform to create Cloud SQL for PostgreSQL instances with private networking options. Java servlet Connecting to Cloud SQL for PostgreSQL from a Java application Create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2024-06-10 UTC." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Mia-Platform", "subcategory": "Application Definition & Image Build" }
[ { "data": "Learn how Mia-Platform can help you to develop your business Start to learn the main concepts of Mia-Platform and how to use to develop your services Start to use only one platform to design and manage the full-cycle of your DevOps Read our tutorials, follow walkthroughs and learn how to decouple your IT systems from your channels and develop modern cloud-native applications. Discover new cool features, updates and bug fixes Check out the following topics to learn how to build, deploy, debug and monitor your services with Mia-Platform Here you can find some useful resources to discover Mia-Platform. Do you wish to stay updated on the latest changes and additions to our documentation? Please refer to the links below." } ]
{ "category": "App Definition and Development", "file_name": "postgresql.md", "project_name": "Lagoon", "subcategory": "Application Definition & Image Build" }
[ { "data": "This browser is no longer supported. Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support. Azure Database for PostgreSQL - Flexible Server is a relational database service based on the open-source Postgres database engine. It's a fully managed database-as-a-service that can handle mission-critical workloads with predictable performance, security, high availability, and dynamic scalability." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Monokle", "subcategory": "Application Definition & Image Build" }
[ { "data": "Are you looking for the Monokle Desktop documentation? Check out docs.monokle.io. Creating compliant and secure Kubernetes deployments that don't put your infrastructure and end-users at risk is a difficult and time-consuming task. Monokle is a Policy Platform for Kubernetes that helps you create secure Kubernetes deployments throughout the entire application lifecycle from code to cluster. The Monokle platform includes: Learn basic concepts and how to get started with Monokle Cloud Discover how to view and fix misconfigurations Integrate policy enforcement in your Pull Request workflows" } ]
{ "category": "App Definition and Development", "file_name": "introduction.md", "project_name": "Nocalhost", "subcategory": "Application Definition & Image Build" }
[ { "data": "Goal: Install Nocalhost, evaluate the core features and experience efficient cloud-native application development. Estimate time: 5 minutes Requirements: Nocalhost does fully supports JetBrains, please refer to Install JetBrains Plugin. Click on the Nocalhost icon on the side panel, open the Nocalhost plugin. There are two methods that you can use to connect to Kubernetes cluster: Select the KubeConfig file from any local directory. Nocalhost will try to load KubeConfig from your local ~/.kube/config by default. Paste the KubeConfig as a text You can use the following command to view your KubeConfig and copy it. After KubeConfig is successfully loaded, select the context that you want to access, then connect to the cluster. Nocalhost will automatically show the cluster list. We are using the bookinfo application as an example here. You can use your own application that already deployed in your Kubernetes clusters, or you can follow Deploy Demo Application to deploy the demo application in your Kubernetes clusters. Make sure you have successfully deployed workloads within your Kubernetes Cluster, then: If you are experiencing DevMode in on premise K8s cluster, you need to configure the sidecar image address additionally and push the image to your own repository. Run the following command in the remote terminal to start main process When entering DevMode, the application main process will not automatically start by default in the DevContainer, thus the application will not response any request. You need to manually start the main process before you can access it. View the running result on http://127.0.0.1:39080 in your web browser In our bookinfo demo, we've already set the port-forward to 39080:9080, which means Nocalhost will automatically forwards data from the local port 39080 to port 9080 on the defined DevContainer. Modify code in productpage.py and see change in web browser. Do not forget to save your change. Refresh the web browser and see the code change Congratulations! You are all set to go" } ]
{ "category": "App Definition and Development", "file_name": "quick-start.md", "project_name": "Nocalhost", "subcategory": "Application Definition & Image Build" }
[ { "data": "Goal: Install Nocalhost, evaluate the core features and experience efficient cloud-native application development. Estimate time: 5 minutes Requirements: Nocalhost does fully supports JetBrains, please refer to Install JetBrains Plugin. Click on the Nocalhost icon on the side panel, open the Nocalhost plugin. There are two methods that you can use to connect to Kubernetes cluster: Select the KubeConfig file from any local directory. Nocalhost will try to load KubeConfig from your local ~/.kube/config by default. Paste the KubeConfig as a text You can use the following command to view your KubeConfig and copy it. After KubeConfig is successfully loaded, select the context that you want to access, then connect to the cluster. Nocalhost will automatically show the cluster list. We are using the bookinfo application as an example here. You can use your own application that already deployed in your Kubernetes clusters, or you can follow Deploy Demo Application to deploy the demo application in your Kubernetes clusters. Make sure you have successfully deployed workloads within your Kubernetes Cluster, then: If you are experiencing DevMode in on premise K8s cluster, you need to configure the sidecar image address additionally and push the image to your own repository. Run the following command in the remote terminal to start main process When entering DevMode, the application main process will not automatically start by default in the DevContainer, thus the application will not response any request. You need to manually start the main process before you can access it. View the running result on http://127.0.0.1:39080 in your web browser In our bookinfo demo, we've already set the port-forward to 39080:9080, which means Nocalhost will automatically forwards data from the local port 39080 to port 9080 on the defined DevContainer. Modify code in productpage.py and see change in web browser. Do not forget to save your change. Refresh the web browser and see the code change Congratulations! You are all set to go" } ]
{ "category": "App Definition and Development", "file_name": "en.md", "project_name": "Operator Framework", "subcategory": "Application Definition & Image Build" }
[ { "data": "Featured links Learn how to use RedHat products, find answers, and troubleshoot problems. Support application deploymentsfrom on premise to the cloud to the edgein a flexible operating environment. Quickly build and deploy applications at scale, while you modernize the ones you already have. Create, manage, and dynamically scale automation across your entire enterprise. Deploying and managing customized RHEL system images in hybrid clouds Install, configure and customize RedHat Developer Hub Setting up clusters and accounts Creating Ansible playbooks RedHat Ansible Lightspeed with IBM watsonx Code Assistant basics Navigating features and services Get answers quickly by opening a support case, directly access our support engineers during weekday business hours via live chat, or speak directly with a RedHat support expert by phone. Whether youre a beginner or an expert with RedHat Cloud Services products and solutions, these learning resources can help you build whatever your organization needs. Explore resources and tools that help you build, deliver, and manage innovative cloud-native apps and services. We help RedHat users innovate and achieve their goals with our products and services with content they can trust. RedHat is committed to replacing problematic language in our code, documentation, and web properties. For more details, see the RedHat Blog. We deliver hardened solutions that make it easier for enterprises to work across platforms and environments, from the core datacenter to the network edge." } ]
{ "category": "App Definition and Development", "file_name": "openshift_topic=openshift-getting-started&interface=ui.md", "project_name": "Operator Framework", "subcategory": "Application Definition & Image Build" }
[ { "data": "Error Red Hat OpenShift on IBM Cloud is a managed offering to create your own cluster of compute hosts where you can deploy and manage containerized apps on IBM Cloud. Combined with an intuitive user experience, built-in security and isolation, and advanced tools to secure, manage, and monitor your cluster workloads, you can rapidly deliver highly available and secure containerized apps in the public cloud. Complete the following steps to get familiar with the basics, understand the service components, create your first cluster, and deploy a starter app. Get an overview of the service by reviewing the concepts, terms, and benefits. For more information, see Understanding Red Hat OpenShift on IBM Cloud. Already familiar with containers and Red Hat OpenShift on IBM Cloud? Continue to the next step to prepare your account for creating clusters. To set up your IBM Cloud account so that you can create clusters, see Preparing your account to create clusters. If you've already prepared your account and you're ready to create a cluster, continue to the next step. Review the decision points in the Creating a cluster environment strategy doc to begin designing your setup. Not sure where to start? Try following a tutorial in the next step. Follow a tutorial, or set up your own custom cluster environment. Review the following table for your deployment options. | Type | Level | Time | Description | |:-|:-|:--|:| | Tutorial | Beginner | 30 minutes | Create a small, 2 node cluster to begin testing Red Hat OpenShift on IBM Cloud. For more information, see Creating a 2 node VPC cluster by using Schematics. | | Tutorial | Beginner | 45 minutes | Follow the steps in this tutorial to create your first cluster by using the IBM Cloud CLI. This tutorial uses Classic infrastructure. For more information, see Creating a classic cluster from the" }, { "data": "| | Tutorial | Beginner | 1 hour | Follow the steps in this tutorial to create your own Virtual Private Cloud (VPC), then create an Red Hat OpenShift on IBM Cloud cluster by using the CLI. For more information, see Create a cluster in your own Virtual Private Cloud. | | Deployable architecture: QuickStart variation | Beginner | 1 hour | This deployable architecture creates one VPC cluster with two worker nodes and a public endpoint. Note that the QuickStart variation is not highly available or validated for the IBM Cloud Framework for Financial Services. For more information, see Red Hat OpenShift on IBM Cloud on VPC landing zone | | Deployable architecture: Standard variation | Intermediate | 1-3 hours | This deployable architecture is based on the IBM Cloud for Financial Services reference architecture. The architecture creates secure and compliant clusters on a Virtual Private Cloud (VPC) network. For more information, see Red Hat OpenShift on IBM Cloud on VPC landing zone. | | Deployable architectures: Community Registry | Intermediate | 1-4 hours | There are more deployable architectures available in the Community Registry. Review the options to see if they fit your use case. For more information, see Catalog and select Community Registry from the dropdown. | | Custom deployment | Intermediate | 1-3 hours | Create a custom cluster on Classic infrastructure. | | Custom deployment | Intermediate | 1-3 hours | Create a custom cluster on VPC infrastructure. | Already have a cluster? Continue to the next step to deploy a sample app! 
After you create your cluster, you can deploy a sample app from the Red Hat OpenShift console: deploy one of the built-in service catalog apps and expose the app with a route. From the navigation menu, change from the Administrator perspective to the Developer perspective. Click +Add > View all samples > Go. Wait a few minutes for the resources to deploy. Check the status from the Topology pane by clicking your Go app and reviewing the sidebar. When the deployment is complete, click the Open URL button to view your app in a web browser. ``` Hello World! ``` To clean up the resources that you created, delete your deployment from the Topology pane. Check out the curated learning paths" } ]
{ "category": "App Definition and Development", "file_name": "index.html.md", "project_name": "Operator Framework", "subcategory": "Application Definition & Image Build" }
[ { "data": "Build, deploy and manage your applications across cloud- and on-premise infrastructure Single-tenant, high-availability Kubernetes clusters in the public cloud The fastest way for developers to build, host and scale applications in the public cloud Azure Red Hat OpenShift is supported by Red Hat and Microsoft. As of February 2021, the documentation will be hosted by Microsoft and Red Hat as outlined below. Welcome to the official Azure Red Hat OpenShift 4 documentation, where you can find information to help you learn about Azure Red Hat OpenShift and start exploring its features. Azure Red Hat OpenShift is supported by Red Hat and Microsoft. Use the following table to navigate all the available documentation related to Azure Red Hat OpenShift. Documentation unique to the Azure Red Hat OpenShift service (e.g. how to create a cluster, how to get support) is generally found on the Microsoft Azure documentation site; documentation that is common to all OpenShift distributions is generally found on this page. | Microsoft Azure documentation | Red Hat OpenShift documentation | |:--|:--| | Overview of Azure Red Hat OpenShift Creating a cluster Getting support How-to guides and tutorials Azure CLI reference | Overview of OpenShift architecture Release Notes Developing on OpenShift Managing your cluster | Overview of Azure Red Hat OpenShift Creating a cluster Getting support How-to guides and tutorials Azure CLI reference Overview of OpenShift architecture Release Notes Developing on OpenShift Managing your cluster Copyright 2024 Red Hat, Inc." } ]
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Packer", "subcategory": "Application Definition & Image Build" }
[ { "data": "Packer lets you create identical machine images for multiple platforms from a single source configuration. A common use case is creating golden images for organizations to use in cloud infrastructure. Standardize and automate your workflow. Use built-in and external plugins to perform tasks during each build. Manage images for your organization and extend Packer functionality. On this page:" } ]
{ "category": "App Definition and Development", "file_name": "docs.plural.sh.md", "project_name": "Plural", "subcategory": "Application Definition & Image Build" }
[ { "data": "Get started, master your operations, and troubleshoot your problems. Find whats most relevant to you A guide to getting up and running. What does Plural have access to? Setting up your first cluster in browser. Applications you can install with Plural. Common issues or errors. Share and manage your Git repositories. Join the group of Plural users and contributors that are helping shape the future of DevOps. Join the discussion and get help. Start your contribution journey." } ]
{ "category": "App Definition and Development", "file_name": "intro.md", "project_name": "Packer", "subcategory": "Application Definition & Image Build" }
[ { "data": "Welcome to the world of Packer! This introduction guide will show you what Packer is, explain why it exists, the benefits it has to offer, and how you can get started with it. If you're already familiar with Packer, the documentation provides more of a reference for all available features. Packer is a community tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer is able to use tools like Chef or Puppet to install software onto the image. A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines. Machine image formats change for each platform. Some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc. On this page:" } ]
{ "category": "App Definition and Development", "file_name": "Commands.html.md", "project_name": "Podman", "subcategory": "Application Definition & Image Build" }
[ { "data": "Podman (Pod Manager) Global Options, Environment Variables, Exit Codes, Configuration Files, and more attach Attach to a running container auto-update Auto update containers according to their auto-update policy build Build an image using instructions from Containerfiles commit Create new image based on the changed container container Manage containers cp Copy files/folders between a container and the local filesystem create Create but do not start a container diff Display the changes to the objects file system events Show podman system events exec Run a process in a running container export Export containers filesystem contents as a tar archive farm Farm out builds to remote machines generate Generate structured data based on containers, pods or volumes healthcheck Manage health checks on containers history Show history of a specified image image Manage images images List images in local storage import Import a tarball to create a filesystem image info Display podman system information init Initialize one or more containers inspect Display the configuration of object denoted by ID kill Kill one or more running containers with a specific signal kube Play containers, pods or volumes from a structured file load Load image(s) from a tar archive login Log in to a container registry logout Log out of a container registry logs Fetch the logs of one or more containers machine Manage a virtual machine manifest Manipulate manifest lists and image indexes mount Mount a working containers root filesystem network Manage networks pause Pause all the processes in one or more containers pod Manage pods port List port mappings or a specific mapping for the container ps List containers pull Pull an image from a registry push Push an image to a specified destination rename Rename an existing container restart Restart one or more containers rm Remove one or more containers rmi Remove one or more images from local storage run Run a command in a new container save Save image(s) to an archive search Search registry for image secret Manage secrets start Start one or more containers stats Display a live stream of container resource usage statistics stop Stop one or more containers system Manage podman tag Add an additional name to a local image top Display the running processes of a container unmount Unmount working containers root filesystem unpause Unpause the processes in one or more containers unshare Run a command in a modified user namespace untag Remove a name from a local image update Update an existing container version Display the Podman version information volume Manage volumes wait Block on one or more containers Contents:" } ]
{ "category": "App Definition and Development", "file_name": "quickstart.md", "project_name": "Plural", "subcategory": "Application Definition & Image Build" }
[ { "data": "A guide to getting up and running with Plural using our CLI in under 30 minutes. This is a guide on how to get Plural running using our CLI. If you prefer an in-browser Cloud Shell experience with all the dependencies loaded, check out our Quickstart Guide for Cloud Shell here. You can see the process in the video here or follow the instructions step-by-step, especially for unique cloud providers: You will need the following things to successfully get up and running with Plural: The Plural CLI and its dependencies are available using a package manager for your system. For Mac, we recommend using Homebrew. For other operating systems, curl and our Docker image should work universally. The brew tap will install Plural, alongside Terraform, Helm and kubectl for you. If you've already installed any of those dependencies, you can add --without-helm, --without-terraform, or --without-kubectl ``` brew install pluralsh/plural/plural``` Before you proceed, make sure that your cloud provider CLI is properly configured and updated to the latest version. If you aren't sure about how to do that, refer to this guide. If it is not configured correctly, Plural will fail and won't be able to create resources on your behalf. You can download the binaries attached to our GitHub releases here. There will be binaries for linux, windows, and mac and all compatible platforms. For example, you can download v0.6.2 for Darwin arm64 via: ``` VSN=$(curl --silent -qI https://github.com/pluralsh/plural-cli/releases/latest | awk -F '/' '/^location/ {print substr($NF, 1, length($NF)-1)}') curl -L -o plural.tgz 'https://github.com/pluralsh/plural-cli/releases/download/${VSN}/plural-cli${VSN#v}Darwin_arm64.tar.gz' tar -xvf plural.tgz chmod +x plural mv plural /usr/local/bin/plural``` Be sure to download the CLI version for your target OS/architecture, the above example is only valid for ARM Mac's You will still need to ensure helm, terraform and kubectl are properly installed, you can find installers for each here | Tool | Installer | |:-|:| | helm | https://helm.sh/docs/intro/install/ | | terraform | https://learn.hashicorp.com/tutorials/terraform/install-cli | | kubectl | https://kubernetes.io/docs/tasks/tools/#kubectl | Before you proceed, make sure that your cloud provider CLI is properly configured and updated to the latest version. If you aren't sure about how to do that, refer to this guide. If it is not configured correctly, Plural will fail and won't be able to create resources on your behalf. Plural stores all configuration artifacts within a Git repository that we will create on your behalf. Run this command within the directory that you want to store your configuration in: ``` plural init``` The Plural CLI will then guide you through a workflow using GitHub/GitLab OAuth to create a repository on your behalf. If you'd prefer to set up Git manually vs. using OAuth, refer to our guide on setting up Gitops. Along the plural init workflow, we will set the Git attributes to configure encryption and configure your cloud provider for this installation. You will also be asked whether you want to use Plural's domain service and if so, what you want the subdomain to be. We recommend that you use our DNS service if you don't have any security reasons that prevent you from doing so. The hostname that you configure with us will determine where your applications are hosted. For example, if you enter singular.onplural.sh, your applications will be available at $APP_NAME.singular.onplural.sh. 
This process will generate a" }, { "data": "file at the root of your repo that stores your cloud provider configuration information. Currently we're limited to a one cluster to one repo mapping, but eventually that will be relaxed. We also strongly urge users to store installations in a fresh, separate repository to avoid our automation trampling existing files. To view the applications you can install on Plural, head to this link. Once you've selected your applications, you can install Plural bundles using our interactive GUI. To start the GUI, run: ``` plural install``` You should see a window pop up like the below: You can then follow a guided flow to select and configure your applications. Alternatively, you can run plural repos list on the CLI or Cloud Shell and find the bundle name specific to your cloud provider. Run plural bundle list <app-name> to find installation commands and information about each application available for install. For example, to list the bundle information for the Plural console, a powerful Kubernetes control plane: Here's what we get from running plural bundle list console: ``` +-+--+-+--+ | NAME | DESCRIPTION | PROVIDER | INSTALL COMMAND | +-+--+-+--+ | console-aws | Deploys console on an EKS | AWS | plural bundle install console | | | cluster | | console-aws | +-+--+-+--+``` To install applications on Plural, run: ``` plural bundle install <app-name> <bundle-name>``` We can try this out by installing the Plural Console: ``` plural bundle install console console-aws``` ``` plural bundle install console console-gcp``` ``` plural bundle install console console-azure``` As of CLI version 0.6.19, the bundle name can be inferred from primary bundles, optionally shortening the command to: ``` plural bundle install console``` After running the install command, you will be asked a few questions about how your app will be configured, including whether you want to enable Plural OIDC (single sign-on). Unless you don't wish to use Plural as an identity provider due to internal company security requirements, you should enter (Y). This will enable you to use your existing app.plural.sh login information to access Plural-deployed applications. This will add an extra layer of security for applications without built-in authentication. Ultimately all the values you input at this step will be stored in a file called context.yaml at the root of your repo. With all bundles installed, run: ``` plural build plural deploy --commit \"initial deploy\"``` This will generate all deployment artifacts in the repo, then deploy them in dependency order. It is common for plural deploy to take a fair amount of time, as is the case with most Terraform and cloud infrastructure deployments. Network disconnects can cause potential issues as a result. If you're running on a spotty network, or would like to step out while it's running we recommend running it in tmux. Once plural deploy has completed, you should be ready to log in to your application at {app-name}.{domain-name}. You may experience a delayed creation of your SSL certs for your applications. ZeroSSL currently may take up to 24 hours to provide you your certs. And you are done! You now have a fully-configured Kubernetes cluster and are free to install applications on it to your heart's content. If you want to take down any of your individual applications, run plural destroy <APP-NAME>. If you're just testing us out and want to take down the entire thing, run plural destroy." } ]
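The quickstart above assumes your cloud provider CLI is already configured before plural init runs, but does not show how to verify it. A hedged way to check, using standard provider CLI commands (run only the one matching your provider):
```
# AWS: confirm which identity and account the CLI is using.
aws sts get-caller-identity

# GCP: confirm the active account and project.
gcloud config list

# Azure: confirm the signed-in subscription.
az account show
```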
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Porter", "subcategory": "Application Definition & Image Build" }
[ { "data": "On this page The Porter Operator provides custom resource definitions (CRDs) that you can use to interact with Porter and control how it is executed. Though both Porter and Kubernetes has the concept of names, namespaces and labels, the resources do not reuse those fields from the CRD, and instead uses the values as defined on the resource spec. This allows you to run the operator in a Kubernetes namespace, and target a different Porter namespace because although they both use the term namespace, there is no relation between Kubernetes namespaces and Porter namespaces. The same goes for the name and labels fields. See the glossary for more information about the Installation resource. The Installation spec is the same schema as the Installation resource in Porter. You can copy/paste the output of the porter installation show NAME -o yaml command into the Installation resource spec (removing the status section). In addition to the normal fields available on a Porter Installation document, the following fields are supported: | Field | Required | Default | Description | |:|:--|:--|:| | agentConfig | False | See Agent Config | Reference to an AgentConfig resource in the same namespace. | See the glossary for more information about the CredentialSet resource. The CredentialSet spec is the same schema as the CredentialSet resource in Porter. You can copy/paste the output of the porter credentials show NAME -o yaml command into the CredentialSet resource spec (removing the status section). In addition to the normal fields available on a Porter Credential Set document, the following fields are supported: ``` apiVersion: getporter.org/v1 kind: CredentialSet metadata: name: credentialset-sample spec: schemaVersion: 1.0.1 namespace: operator name: porter-test-me credentials: name: test-credential source: secret: test-secret``` | Field | Required | Default | Description | |:--|:--|:--|:-| | agentConfig | False | See Agent Config | Reference to an AgentConfig resource in the same namespace. | | credentials | True | nan | List of credential sources for the set | | credentials.name | True | nan | The name of the credential for the bundle | | credentials.source | True | nan | The credential type. Currently secret is the only supported source | | credentials.source.secret | True | nan | The name of the secret | See the glossary for more information about the ParameterSet resource. The ParameterSet spec is the same schema as the ParameterSet resource in Porter. You can copy/paste the output of the porter parameters show NAME -o yaml command into the ParameterSet resource spec (removing the status section). In addition to the normal fields available on a Porter Parameter Set document, the following fields are supported: ``` apiVersion: getporter.org/v1 kind: ParameterSet metadata: name: parameterset-sample spec: schemaVersion:" }, { "data": "namespace: operator name: porter-test-me parameters: name: test-secret source: value: test-value name: test-secret source: secret: test-secret``` | Field | Required | Default | Description | |:-|:--|:--|:-| | agentConfig | False | See Agent Config | Reference to an AgentConfig resource in the same namespace. | | parameters | True | nan | List of parameter sources for the set | | parameters.name | True | nan | The name of the parameter for the bundle | | parameters.source | True | nan | The parameters type. 
Currently vaule and secret are the only supported sources | | oneof parameters.source.secret parameters.source.value | True | nan | The plaintext value to use or the name of the secret that holds the parameter | See the glossary for more information about the AgentAction resource. ``` apiVersion: getporter.org/v1 kind: AgentAction metadata: name: agentaction-sample spec: args: [\"installation\", \"apply\", \"installation.yaml\"] files: installation.yaml: c2NoZW1hVmVyc2lvbjogMS4wLjAKbmFtZXNwYWNlOiBvcGVyYXRvcgpuYW1lOiBoZWxsbwpidW5kbGU6CiAgcmVwb3NpdG9yeTogZ2hjci5pby9nZXRwb3J0ZXIvdGVzdC9wb3J0ZXItaGVsbG8KICB2ZXJzaW9uOiAwLjIuMApwYXJhbWV0ZXJzOgogIG5hbWU6IGxsYW1hcyAK``` | Field | Required | Default | Description | |:-|:--|:|:--| | agentConfig | False | See Agent Config | Reference to an AgentConfig resource in the same namespace. | | command | False | /app/.porter/agent | Overrides the entrypoint of the Porter Agent image. | | args | True | None. | Arguments to pass to the porter command. Do not include porter in the arguments. For example, use [help], not [porter, help]. | | files | False | None. | Files that should be present in the working directory where the command is run. | | env | False | Settings for the kubernetes driver. | Additional environment variables that should be set. | | envFrom | False | None. | Load environment variables from a ConfigMap or Secret. | | volumeMounts | False | Porters config and working directory. | Additional volumes that should be mounted into the Porter Agent. | | volumes | False | Porters config and working directory. | Additional volumes that should be mounted into the Porter Agent. | See the glossary for more information about the [AgentConfig] resource. ``` apiVersion: getporter.org/v1 kind: AgentConfig metadata: name: customAgent spec: porterRepository: ghcr.io/getporter/porter-agent porterVersion: v1.0.0 serviceAccount: porter-agent volumeSize: 64Mi pullPolicy: Always installationServiceAccount: installation-agent pluginConfigFile: schemaVersion: 1.0.0 plugins: kubernetes: version: v1.0.0``` | Field | Required | Default | Description | |:-|--:|:-|:| | porterRepository | 0 | ghcr.io/getporter/porter-agent | The repository for the Porter Agent image. | | porterVersion | 0 | varies | The tag for the Porter Agent image. For example, vX.Y.Z, latest, or canary. Defaults to the most recent version of porter that has been tested with the operator. | | serviceAccount | 1 | (none) | The service account to run the Porter Agent under. Must exist in the same namespace as the installation. | | installationServiceAccount | 0 | (none) | The service account to run the Kubernetes pod/job for the installation image. | | volumeSize | 0 | 64Mi | The size of the persistent volume that Porter will request when running the Porter Agent. It is used to share data between the Porter Agent and the bundle invocation" }, { "data": "It must be large enough to store any files used by the bundle including credentials, parameters and outputs. | | pullPolicy | 0 | PullAlways when the tag is canary or latest, otherwise PullIfNotPresent. 
| Specifies when to pull the Porter Agent image | | retryLimit | 0 | (none) | Specifies the number of tries an agent job will run until its marked as failure | | pluginConfigFile | 0 | (none) ] | The plugins that porter operator needs to install before bundle runs | | pluginConfigFile.schemaVersion | 0 | (none) | The schema version of the plugin config file | | pluginConfigFile.plugins..version | 0 | latest | The version of the plugin | | plugiConfigFiles.plugins..feedURL | 0 | https://cdn.porter.sh/plugins/atom.xml | The url of an atom feed where the plugin can be downloaded | | plugiConfigFiles.plugins..url | 0 | https://cdn.porter.sh/plugins/ | The url from where the plugin can be downloaded | | plugiConfigFiles.plugins..mirror | 0 | https://cdn.porter.sh/ | The mirror of the official Porter assets | | [AgentConfig]: /docs/operator/glossary/#agentconfig | nan | nan | nan | The only required configuration is the name of the service account under which Porter should run. The configureNamespace action of the porter operator bundle creates a service account named porter-agent for you with the porter-operator-agent-role role binding. See the glossary for more information about the PorterConfig resource. The PorterConfig resource uses the same naming convention as the Porter Configuration File, hyphenated instead of camelCase, so that you can copy/paste between the two without changing the field names. ``` apiVersion: getporter.org/v1 kind: PorterConfig metadata: name: customPorterConfig spec: verbosity: debug default-secrets-plugin: kubernetes.secrets default-storage: in-cluster-mongodb storage: name: in-cluster-mongodb plugin: mongodb config: url: \"mongodb://mongodb.porter-operator-system.svc.cluster.local\"``` | Field | Required | Default | Description | |:--|:--|:|:--| | verbosity | False | info | Threshold for printing messages to the console. Available values are: debug, info, warning, error. (default info) | | namespace | False | (empty) | The default Porter namespace. Used when a resource is defined without the namespace set in the spec. | | experimental | False | (empty) | Specifies which experimental features are enabled. See Porter Feature Flags for more information. | | default-storage | False | in-cluster-mongodb | The name of the storage configuration to use. | | default-secrets | False | (empty) | The name of the secrets configuration to use. | | default-storage-plugin | False | (empty) | The name of the storage plugin to use when default-storage is unspecified. | | default-secrets-plugin | False | kubernetes.secrets | The name of the storage plugin to use when defaultSecrets is unspecified. | | storage | False | The mongodb server installed with the operator. | A list of named storage configurations. | | secrets | False | (empty) | A list of named secrets configurations. | The Porter Authors 2024 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see the Trademark Usage page." } ]
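The operator reference above explains that an Installation spec mirrors the output of porter installation show NAME -o yaml, but does not show the resource being applied. A minimal sketch follows; the file name installation.yaml, the namespace porter-operator-system, and the plural CRD name installations.getporter.org are assumptions for illustration, not values from the original page.
```
# Capture an existing installation as YAML to use as the basis for the CRD spec
# (strip the status section before pasting it into the resource).
porter installation show NAME -o yaml

# Apply the Installation resource and check that the operator has picked it up.
kubectl apply -f installation.yaml -n porter-operator-system
kubectl get installations.getporter.org -n porter-operator-system
```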
{ "category": "App Definition and Development", "file_name": "docs.md", "project_name": "Podman", "subcategory": "Application Definition & Image Build" }
[ { "data": "Podman is a utility provided as part of the libpod library. It can be used to create and maintain containers. The following tutorial will teach you how to set up Podman and perform some basic commands. The documentation for Podman is located here. For installing or building Podman, please see the installation instructions. The code samples are intended to be run as a non-root user, and use sudo where root escalation is required. To get some help and find out how Podman is working, you can use the help: ``` $ podman --help$ podman <subcommand> --help``` For more details, you can review the manpages: ``` $ man podman$ man podman-<subcommand>``` Please also reference the Podman Troubleshooting Guide to find known issues and tips on how to solve common configuration mistakes. Podman can search for images on remote registries with some simple keywords. ``` $ podman search <search_term>``` You can also enhance your search with filters: ``` $ podman search httpd --filter=is-official``` Downloading (Pulling) an image is easy, too. ``` $ podman pull docker.io/library/httpd``` After pulling some images, you can list all images, present on your machine. ``` $ podman images``` Note: Podman searches in different registries. Therefore it is recommend to use the full image name (docker.io/library/httpd instead of httpd) to ensure, that you are using the correct image. This sample container will run a very basic httpd server that serves only its index page. ``` $ podman run -dt -p 8080:80/tcp docker.io/library/httpd``` Note: Because the container is being run in detached mode, represented by the -d in the podman run command, Podman will print the container ID after it has executed the command. The -t also adds a pseudo-tty to run arbitrary commands in an interactive shell. Note: We use port forwarding to be able to access the HTTP server. For successful running at least slirp4netns v0.3.0 is needed. The podman ps command is used to list created and running containers. ``` $ podman ps``` Note: If you add -a to the podman ps command, Podman will show all containers (created, exited, running, etc.). As you are able to see, the container does not have an IP Address assigned. The container is reachable via it's published port on your local machine. ``` $ curl http://localhost:8080``` From another machine, you need to use the IP Address of the host, running the" }, { "data": "``` $ curl http://<IP_Address>:8080``` Note: Instead of using curl, you can also point a browser to http://localhost:8080. You can \"inspect\" a running container for metadata and details about itself. podman inspect will provide lots of useful information like environment variables, network settings or allocated resources. Since, the container is running in rootless mode, no IP Address is assigned to the container. ``` $ podman inspect -l | grep IPAddress \"IPAddress\": \"\",``` Note: The -l is a convenience argument for latest container. You can also use the container's ID or name instead of -l or the long argument --latest. Note: If you are running remote Podman client, including Mac and Windows (excluding WSL2) machines, -l option is not available. 
You can view the container's logs with Podman as well: ``` $ podman logs -l127.0.0.1 - - [04/May/2020:08:33:48 +0000] \"GET / HTTP/1.1\" 200 45127.0.0.1 - - [04/May/2020:08:33:50 +0000] \"GET / HTTP/1.1\" 200 45127.0.0.1 - - [04/May/2020:08:33:51 +0000] \"GET / HTTP/1.1\" 200 45127.0.0.1 - - [04/May/2020:08:33:51 +0000] \"GET / HTTP/1.1\" 200 45127.0.0.1 - - [04/May/2020:08:33:52 +0000] \"GET / HTTP/1.1\" 200 45127.0.0.1 - - [04/May/2020:08:33:52 +0000] \"GET / HTTP/1.1\" 200 45``` You can observe the httpd pid in the container with podman top. ``` $ podman top -lUSER PID PPID %CPU ELAPSED TTY TIME COMMANDroot 1 0 0.000 22m13.33281018s pts/0 0s httpd -DFOREGROUNDdaemon 3 1 0.000 22m13.333132179s pts/0 0s httpd -DFOREGROUNDdaemon 4 1 0.000 22m13.333276305s pts/0 0s httpd -DFOREGROUNDdaemon 5 1 0.000 22m13.333818476s pts/0 0s httpd -DFOREGROUND``` You may stop the container: ``` $ podman stop -l``` You can check the status of one or more containers using the podman ps command. In this case, you should use the -a argument to list all containers. ``` $ podman ps -a``` Finally, you can remove the container: ``` $ podman rm -l``` You can verify the deletion of the container by running podman ps -a. For a more detailed guide about Networking and DNS in containers, please see the network guide. Checkpointing a container stops the container while writing the state of all processes in the container to disk. With this, a container can later be migrated and restored, running at exactly the same point in time as the checkpoint. For more details, see the checkpoint instructions. For more information on how to setup and run the integration tests in your environment, checkout the Integration Tests README.md. The documentation for the Podman Python SDK is located here. For more information on Podman and its subcommands, checkout the asciiart demos on the README.md page." } ]
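The tutorial above describes checkpointing only in prose. A brief sketch of that workflow is shown below; checkpoint/restore relies on CRIU being installed and usually needs root, so the exact invocation is an assumption for your environment.
```
# Checkpoint a running container: process state is written to disk and the container stops.
sudo podman container checkpoint <container-id>

# Restore it later at exactly the point where it was checkpointed.
sudo podman container restore <container-id>
```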
{ "category": "App Definition and Development", "file_name": "quickstart.md", "project_name": "Porter", "subcategory": "Application Definition & Image Build" }
[ { "data": "On this page Porter is an open-source project that packages your application, client tools, configuration, and deployment logic into an installer that you can distribute and run with a single command. The Porter Authors 2024 2024 The Linux Foundation. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see the Trademark Usage page." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Radius", "subcategory": "Application Definition & Image Build" }
[ { "data": "Learn about why we created Radius and how it can help you Learn about the architecture, API design, and other technical concepts of Radius Commonly asked questions about best practices Was this page helpful? Glad to hear it! Please feel free to star our repo and join our Discord server to stay up to date with the project. Sorry to hear that. If you would like to also contribute a suggestion visit and tell us how we can improve." } ]
{ "category": "App Definition and Development", "file_name": "docs.radapp.io.md", "project_name": "Radius", "subcategory": "Application Definition & Image Build" }
[ { "data": "This guide will show you how to quickly get started with Radius. Youll walk through both installing Radius and running your first Radius app. Estimated time to complete: 10 min The Radius getting-started guide can be run for free in a GitHub Codespace. Visit the following link to get started in seconds: Radius runs inside Kubernetes. However you run Kubernetes, get a cluster ready. If you dont have a preferred way to create Kubernetes clusters, you could try using k3d, which runs a minimal Kubernetes distribution in Docker. Ensure your cluster is set as your current context: ``` kubectl config current-context ``` The rad CLI manages your applications, resources, and environments. You can install it on your local machine with the following installation scripts: ``` wget -q \"https://raw.githubusercontent.com/radius-project/radius/main/deploy/install.sh\" -O - | /bin/bash ``` To try out an unstable release visit the edge docs. ``` curl -fsSL \"https://raw.githubusercontent.com/radius-project/radius/main/deploy/install.sh\" | /bin/bash ``` To try out an unstable release visit the edge docs. Run the following in a PowerShell window: ``` iwr -useb \"https://raw.githubusercontent.com/radius-project/radius/main/deploy/install.ps1\" | iex ``` You may need to refresh your $PATH environment variable to access rad: ``` $Env:Path = [System.Environment]::GetEnvironmentVariable(\"Path\",\"User\") ``` To try out an unstable release visit the edge docs. Radius offers a free Codespace option for getting up and running with a Radius environment in seconds: Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. Azure Cloud Shell for bash doesnt have a sudo command, so users are unable to install Radius to the default /usr/local/bin installation path. To install the rad CLI to the home directory, run the following commands: ``` export RADIUSINSTALLDIR=./ wget -q \"https://raw.githubusercontent.com/radius-project/radius/main/deploy/install.sh\" -O - | /bin/bash ``` You can now run the rad CLI with ./rad. PowerShell for Cloud Shell is currently not supported. Visit Radius GitHub releases to select and download a specific version of the rad CLI. You may be prompted for your sudo password during installation, as the installer places the rad binary under /usr/local/bin. If you are unable to sudo you can install the rad CLI to another directory by setting the RADIUSINSTALLDIR environment variable with your intended install path. Make sure you add this to your path (Unix, Windows) if you wish to reference it via rad, like in the docs. Verify the rad CLI is installed correctly by running rad version. Example output: ``` RELEASE VERSION BICEP COMMIT 0.34.0 v0.34 0.11.13 2e60bfb46de73ec5cc70485d53e67f8eaa914ba7 ``` Create a new directory for your app and navigate into it: ``` mkdir first-app cd first-app ``` Initialize Radius. For this example, accept all the default options (press ENTER to confirm): ``` rad init ``` Example output: ``` Initializing Radius... Install Radius v0.34 Kubernetes cluster: k3d-k3s-default Kubernetes namespace: radius-system Create new environment default Kubernetes namespace: default Recipe pack: local-dev Scaffold application docs Update local configuration Initialization complete! Have a RAD time ``` In addition to starting Radius services in your Kubernetes cluster, this initialization command creates a default application (app.bicep) as your starting point. It contains a single container definition (demo). 
| 0 | 1 | |:--|:| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 | // Import the set of Radius resources (Applications.*) into Bicep import radius as radius @description('The app ID of your Radius Application. Set automatically by the rad" }, { "data": "param application string resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } } } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ``` ``` // Import the set of Radius resources (Applications.*) into Bicep import radius as radius @description('The app ID of your Radius Application. Set automatically by the rad CLI.') param application string resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } } } ``` This file will run the ghcr.io/radius-project/samples/demo:latest image. This image is published by the Radius team to a public registry, you do not need to create it. Use the below command to run the app in your environment, then access the application by opening http://localhost:3000 in a browser. ``` rad run app.bicep ``` This command: Access your Radius Dashboard by opening http://localhost:7007 in a browser. In your browser, you should see the Radius Dashboard, which includes visualizations of the application graph, environments, and recipes: Congrats! Youre running your first Radius app. When youre ready to move on to the next step, use CTRL+ C to exit the command. This step will add a database (Redis Cache) to the application. You can create a Redis Cache using Recipes provided by Radius. The Radius community provides Recipes for running commonly used application dependencies, including Redis. In this step you will: Open app.bicep in your editor and get ready to edit the file. First add some new code to app.bicep by pasting in the content below at the end of the file. This code creates a Redis Cache using a Radius Recipe: | 0 | 1 | |:|:| | 21 22 23 24 25 26 27 28 29 30 | @description('The environment ID of your Radius Application. Set automatically by the rad CLI.') param environment string resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { name: 'db' properties: { application: application environment: environment } } | ``` 21 22 23 24 25 26 27 28 29 30 ``` ``` @description('The environment ID of your Radius Application. Set automatically by the rad CLI.') param environment string resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { name: 'db' properties: { application: application environment: environment } } ``` Next, update your container definition to include connections inside properties. This code creates a connection between the container and the database. Based on this connection, Radius will inject environment variables into the container that inform the container how to connect. You will view these in the next step. 
| 0 | 1 | |:--|:--| | 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } connections: { redis: { source: db.id } } } } | ``` 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ``` ``` resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } connections: { redis: { source: db.id } } } } ``` Your updated" }, { "data": "will look like this: | 0 | 1 | |:--|:-| | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 | // Import the set of Radius resources (Applications.*) into Bicep import radius as radius @description('The app ID of your Radius Application. Set automatically by the rad CLI.') param application string resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } connections: { redis: { source: db.id } } } } param environment string resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { name: 'db' properties: { application: application environment: environment } } | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 ``` ``` // Import the set of Radius resources (Applications.*) into Bicep import radius as radius @description('The app ID of your Radius Application. Set automatically by the rad CLI.') param application string resource demo 'Applications.Core/containers@2023-10-01-preview' = { name: 'demo' properties: { application: application container: { image: 'ghcr.io/radius-project/samples/demo:latest' ports: { web: { containerPort: 3000 } } } connections: { redis: { source: db.id } } } } param environment string resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = { name: 'db' properties: { application: application environment: environment } } ``` Use the command below to run the updated application again, then open the browser to http://localhost:3000. ``` rad run app.bicep ``` You should see the Radius Connections section with new environment variables added. The demo container now has connection information for Redis (CONNECTIONREDISHOST, CONNECTIONREDISPORT, etc.): Navigate to the Todo List tab and test out the application. Using the Todo page will update the saved state in Redis: Access your Radius Dashboard again by opening http://localhost:7007 in a browser. You should see a visualization of the application graph for your demo app, including the connection to the db Redis Cache: Press CTRL+ C when you are finished with the websites. Radius Connections are more than just environment variables and configuration. 
You can also access the application graph and understand the connections within your application with the following command: ``` rad app graph ``` You should see the following output, detailing the connections between the demo container and the db Redis Cache, along with information about the underlying Kubernetes resources running the app: ``` Displaying application: demo Name: demo (Applications.Core/containers) Connections: demo -> db (Applications.Datastores/redisCaches) Resources: demo (kubernetes: apps/Deployment) demo (kubernetes: core/Secret) demo (kubernetes: core/Service) demo (kubernetes: core/ServiceAccount) demo (kubernetes: rbac.authorization.k8s.io/Role) demo (kubernetes: rbac.authorization.k8s.io/RoleBinding) Name: db (Applications.Datastores/redisCaches) Connections: demo (Applications.Core/containers) -> db Resources: redis-r5tcrra3d7uh6 (kubernetes: apps/Deployment) redis-r5tcrra3d7uh6 (kubernetes: core/Service) ``` To delete your app, run the rad app delete command to cleanup the app and its resources, including the Recipe resources: ``` rad app delete first-app -y ``` Now that youve run your first Radius app, you can learn more about Radius by reading the following guides: Was this page helpful? Glad to hear it! Please feel free to star our repo and join our Discord server to stay up to date with the project. Sorry to hear that. If you would like to also contribute a suggestion visit and tell us how we can improve." } ]
{ "category": "App Definition and Development", "file_name": "overview.md", "project_name": "Radius", "subcategory": "Application Definition & Image Build" }
[ { "data": "Recipes enable a separation of concerns between IT operators and developers by automating infrastructure deployment. Developers select the resource they want in their app (Mongo Database, Redis Cache, Dapr State Store, etc.), and IT operators codify in their environment how these resources should be deployed and configured (lightweight containers, Azure resources, AWS resources, etc.). When a developer deploys their application and its resources, Recipes automatically deploy the backing infrastructure and bind it to the developers resources. | Language | Supported sources | Notes | |:--|:-|:-| | Bicep | OCI registries | Supports Azure, AWS, and Kubernetes | | Terraform | Public module sourcesPrivate modules not yet configurable | Supports Azure, AWS, and Kubernetes providersOther providers not yet configurable | Recipes can be used in any environment, from dev to prod. You can run a default recipe registered in your environment or select the specific Recipe you want to run. To run a default recipe, simply add the resource you want to your app and omit the Recipe name: ``` resource redisDefault 'Applications.Datastores/redisCaches@2023-10-01-preview'= { name: 'myresource' properties: { environment: environment application: application } } ``` If you want to use a specific Recipe, you can specify the Recipe name in the recipe parameter: ``` resource redis 'Applications.Datastores/redisCaches@2023-10-01-preview'= { name: 'myresource' properties: { environment: environment application: application recipe: { name: 'azure-prod' } } } ``` Use rad recipe list to view the Recipes available to you in your environment. Radius Environments make it easy to get up and running with Recipes instantly. When you run rad init you get a set of containerized local-dev Recipes pre-registered in your environment. These Recipes are designed to help you get started quickly with Recipes using lightweight containers. You can use these Recipes to test your app locally, or deploy them to a dev environment. Recipes can be customized with parameters, allowing developers to fine-tune infrastructure to meet their specific needs: ``` resource redisParam 'Applications.Datastores/redisCaches@2023-10-01-preview'= { name: 'myresource' properties: { environment: environment application: application recipe: { name: 'azure-prod' parameters: { sku: 'Premium' } } } } ``` You can use rad recipe show to view the parameters available to you in a Recipe. Its easy to author and register your own Recipes which define how to deploy and configure infrastructure that meets your organizations needs. See the custom Recipes guide for more information. Recipes support all of the available portable resources listed here. When you use a Recipe to deploy infrastructure (e.g. Azure, AWS resources), that infrastructure can be linked and tracked as part of the Recipe-enabled resource. This means you can inspect what infrastructure supports the resource. Use rad resource show -o json to view this information. The lifecycle of Recipe infrastructure is tied to the resource calling the Recipe. When a Recipe-supported resource is deployed it triggers the Recipe, which in turn deploys the underlying infrastructure. When the Recipe-supported resource is deleted the underlying infrastructure is deleted as well. Was this page helpful? Glad to hear it! Please feel free to star our repo and join our Discord server to stay up to date with the project. Sorry to hear that. 
" } ]
{ "category": "App Definition and Development", "file_name": "index.html.md", "project_name": "Serverless Workflow", "subcategory": "Application Definition & Image Build" }
[ { "data": "SonataFlow is a tool for building cloud-native workflow applications. You can use it to do the services and events orchestration and choreography. Currently, with SonataFlow you can integrate with services and events in your architecture using: CloudEvents. Ideal for an Event-Driven architecture where the services are ready to consume and produce events working in a more reactive way. Sync or Async REST services invocations via OpenAPI/Async API. There are options even to directly call a REST service in the architecture or ecosystem. Either async or sync methods are supported depending on your requirements. Internal Service execution or invocation. SonataFlow is also a workflow framework to build applications. You can use it to create custom services in the same thread to run a lightweight workflow-based application within the same instance. You can learn how to create, manage, and deploy your workflow applications with the following guides. Creating a Quarkus Workflow Project Learn how to create your first Quarkus Workflow Project Serverless Workflow Specification Learn about the CNCF Serverless Workflow Specification implementation Events in SonataFlow Learn how to use the Event state in your workflow application Callbacks in SonataFlow Learn how to use the Callback state in your workflow application jq Expressions Learn how to create jq expressions to manipulate data within a workflow execution Error handling in SonataFlow Learn how to handle errors in your workflow application Configuration properties in SonataFlow Quick reference of configuration properties in workflow Input and Output schema definition for SonataFlow Learn about the input schema definition used to validate the workflow data input against a defined JSON Schema Custom functions for your SonataFlow service Learn about the custom functions supported by Serverless Workflow Timeouts in SonataFlow Learn how to configure timeouts in the workflow Parallelism in SonataFlow Working with parallelism in your workflow project Serverless Workflow editor Learn how to install and use the Serverless Workflow editor VS Code extension for Serverless Workflow editor Learn how to install and use the VS Code extension for Serverless Workflow editor for creating workflows. Serverless Logic Web Tools Learn how to use Serverless Logic Web Tools for creating and managing workflows, decisions, and dashboards. 
Chrome extension for Serverless Workflow editor on GitHub Learn how to install and use the Chrome extension for Serverless Workflow editor to view and edit workflows directly in" }, { "data": "Orchestrating OpenAPI Services Learn how to orchestrate REST services using OpenAPI specification descriptors OpenAPI Callback in SonataFlow Learn how to use the OpenAPI Callback in your workflow application Orchestrating gRPC based Services Learn about orchestrating gRPC services Orchestrating AsyncAPI Services Learn how to trigger and consume events using AsyncAPI specification descriptors Event correlation in SonataFlow Learn how to configure event correlation in your workflow application Consuming and producing events using Apache Kafka in Quarkus Learn how to configure your Quarkus Workflow Project to produce and consume events using Apache Kafka Consuming and producing events on Knative Eventing in Quarkus Learn how to configure your Quarkus Workflow Project to produce and consume events on Knative Eventing Authentication for OpenAPI services in SonataFlow Learn how to use authentication methods when calling REST services using OpenAPI specification Orchestration of third-party services using OAuth 2.0 authentication Learn about the OAuth2 method support when orchestrating REST services using your workflow application Kogito Serverless Workflow Tools extension in Quarkus Dev UI Learn how to use the Serverless Workflow extension in Quarkus Dev UI SonataFlow plug-in for Knative CLI Learn how to install the SonataFlow plug-in for Knative CLI Mocking HTTP CloudEvents sink using WireMock Testing Quarkus Workflow Project that uses HTTP CloudEvents and Knative Sink Binding Mocking OpenAPI services using WireMock Learn how to mock external REST requests when testing your Quarkus Workflow Project Testing your Quarkus Workflow Application using REST Assured Learn how to add unit tests in your Quarkus Workflow Project using RestAssured Running a Quarkus Workflow Application using PostgreSQL Running Quarkus Workflow Applications using PostgresSQL PostgreSQL Database Migration Migrating your existing PostgreSQL Database with changes from the SonataFlow upgrade using Flyway SonataFlow integration test using PostgreSQL Learn how to integrate tests on Quarkus Workflow Applications that use PostgreSQL as a persistence storage SonataFlow in the Cloud Learn about the options to deploy workflow applications in Kubernetes Integrating with Camel routes Learn how to use Camel Routes within your workflow application Invoking Knative services from SonataFlow Learn how to invoke Knative Services from SonataFlow custom functions Exposing Workflow base metrics to Prometheus Exposing the workflow base metrics to Prometheus Displaying Workflow Data in Dashboards Learn how to use dashboards to display the runtime data of your workflow application Introduction Details about Job Service to control timers in SonataFlow Job Service Quarkus Extensions Details about how to configure you Quarkus Workflow Project to interact with the Job Service in SonataFlow Data Index Core Concepts Learn Data Index core concepts, allowing to understand the purpose and the different deployment options that are provided. Data Index standalone service Go deeper in details about Data Index as standalone service deployment. 
Data Index Quarkus extension Explore Data Index as Quarkus extension in SonataFlow Saga orchestration example in SonataFlow Learn how and when to use the SAGA pattern in your workflow projects Timeouts Showcase in SonataFlow Learn how and when to use timeout in your workflow projects" } ]
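To make the workflow concept above concrete, here is a minimal workflow definition in the YAML serialization of the CNCF Serverless Workflow specification, which SonataFlow implements. This is an illustrative sketch only; the workflow id, name, state name, and injected data are invented for the example:
```
# A minimal Serverless Workflow definition (YAML serialization).
# A single inject state adds static data to the workflow state and then ends.
id: hello
version: "1.0"
specVersion: "0.8"
name: Hello workflow
start: SayHello
states:
  - name: SayHello
    type: inject
    data:
      message: Hello from SonataFlow
    end: true
```
In a Quarkus Workflow project, a definition like this is typically placed under src/main/resources so it is picked up at build time.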
{ "category": "App Definition and Development", "file_name": "introduction.html.md", "project_name": "sealer", "subcategory": "Application Definition & Image Build" }
[ { "data": "Introduction Getting Started Commands Concepts Advanced Find Sealer Images Contributing Help Sealer[silr] provides a new way of distributed application delivery which is reducing the difficulty and complexity by packaging Kubernetes cluster and all application's dependencies into one Sealer Image. We can write a Kubefile, and build a Sealer Image, then using a Clusterfile to run a Sealer Image. Kubefile: a file that describes how to build a Sealer Image. Sealer Image: like docker image, and it contains all the dependencies you need to deploy a cluster or applications(like container images, yaml files or helm chart). Clusterfile: a file that describes how to run a Sealer Image. Architecture" } ]
{ "category": "App Definition and Development", "file_name": "Apache-ServiceComb-Meetup-2019-Shanghai-KubeCon+CloudNative+OSS-Report.md", "project_name": "ServiceComb", "subcategory": "Application Definition & Image Build" }
[ { "data": "ServiceComb is a microservice framework that provides service registration, discovery, configuration and management utilities. 2 minute read On June 24, 2019, Beijing time, at the Shanghai World Expo Center in China, the Apache ServiceComb community held microservice Co-Located event at the KubeCon+CloudNativeCon+Open Source Summit conference which the most prestigious in the open source industry. The event invited Apache Member, Apache Committer, Huawei Cloud ServiceStage Chief Engineer, Jingdong DBA Expert, Global Top10 IT Service Provider Development Manager and other senior practitioners to share experience,the topic of the speech includes Apache community development experience, enterprise PaaS responds to complex network topology cases, car brand digital marketing system microservice practices, high-performance service communication optimization techniques and so on. ServiceComb community has also released a series of innovative new projects to assist user solve pain points of microservice In the process of enterprise transformation to digitalization and cloudization, microservice is the best choice. However, it is not a silver bullet. Enterprises will encounter many challenges in the process of microservice. ServiceComb will continue to closely focus on users and developers,solve the problem of microservice pain points. Adhering to the concept of provide a one-stop open source micro-service solution to assist enterprises, users and developers migrate applications on the cloud, archieving efficient O&M and management of microservice applications,ServiceComb community initiates a convening order to mobilize aspiring people to join the community and do something interesting together. Video review: Link ServiceComb opensource way PDF Download Speaker Willem Jiang, Apache Member, Apache ServiceComb VP Summary This speech summarizes the experience and gains of ServiceComb open source in the past two years. Hope that to assist guys understand open source, participate in open source, and create a better" }, { "data": "Digital marketing system microservice practice of car brands PDF Download Speaker Xiaowei Zhu, NTTDATA Shanghai Branch, Digital Marketing Development Manager Summary This speech shares the transformation of NTTDATA into microservice, and creates a digital marketing platform for car brands, assist enterprises to flexibly respond to market demands and supporting the practice of fast changing business scenarios of digital marketing. Microservice practice in enterprise PaaS PDF Download Speaker Shawn Tian, Huawei Cloud ServiceStage Chief Engineer Summary This speech shares Huawei Cloud PaaS platform using ServiceComb to solve complex network topology problems of distributed systems, assist users complete microservice transformation. ServiceComb innovation new projects release PDF Download Speaker Mabin, Apache ServiceComb member, Huawei Open Source Software Architect Summary This speech shares the pain points of users in the process of implementing microservices transformation, as well as ServiceCombs innovative projects, we look forward to working with more users and developers to think about how to solve the problems in microservices together. 
ShardingSphere combines ServiceComb distributed transaction solution PDF Download Speaker Juan Pan, Apache ShardingSphere member, Jingdong DBA Expert Summary This talk revolves around distributed transactions, explaining how ShardingSphere (the first ASF distributed database middleware project) joined forces with ServiceComb (the first ASF microservice top-level project) to implement a distributed transaction solution for microservice and distributed database scenarios. High-performance service communication practice PDF Download Speaker Bruce Liu, Apache Committer Summary This talk draws on the communication optimization practices of servicecomb-java-chassis to illustrate common performance optimization methods and, from a practical perspective, shares performance optimization in terms of system reliability, resource planning, and other aspects that developers do not easily perceive directly. Huawei Cloud distributed transaction solution PDF Download Speaker Jon Wang, Huawei Cloud Architect Summary This talk explains, through related scenarios, how to implement distributed transactions when building microservices on Huawei Cloud: how to fail back, how to deal with timeouts, and the principles behind them. For more questions, scan the QR code or add the ServiceComb Assistant on WeChat. Tags: Meetup, Microservice Updated: July 02, 2019" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "ServiceComb", "subcategory": "Application Definition & Image Build" }
[ { "data": "ServiceComb is a microservice framework that provides service registration, discovery, configuration and management utilities. 1 minute read On December 23, 2018, the Apache ServiceComb community jointly Chuanzhiboke sub-brand Itheima, Boxuegu and Wisdom Gathering, and jointly launched the Apache ServiceComb technical course joint construction and technical resource sharing release ceremony successfully held at Beijing Global Financial Center. Apache ServiceComb is the core of Huawei Cloud microservices engine CSE. Apache, the worlds largest software foundation. announced on October 24, 2018 that Apache ServieComb graduated and became the worlds first Apache microservice top level project. At the 8th Cloud Computing Standards and Applications Conference held in Beijing, Apache ServiceComb won the first prize of China Excellent Open Source Project organized by China Open Source Cloud Alliance (COSCL) due to its technological development potential, activity level and degree of attention. . ServiceComb is committed to helping enterprises, users and developers to easily micro-service enterprise applications to the cloud and achieve efficient operation and management of micro-service applications. The launching ceremony was jointly released by Huawei Open Source Software Competence Center technical expert Jiang Ning, Huawei Open Source Software Competence Center enterprise application micro-service engineer Ma Bin, Wisdom Gatherings operation director Wang Ping, and Boxuegus operation director Tang Yangguang. Jiang Ning, a technical expert at Huawei Open Source Software Competence Center, said that he is very grateful to Chuanzhiboke for providing such a rare resource exchange opportunity for IT people to exchange technology and resources. I hope that IT people will get better growth and development on Apache ServiceComb community and platform. Tags: course, microservice Updated: January 07, 2019 Your email address will not be published. Required fields are marked * Apache ServiceComb- (PPT Download) less than 1 minute read ServiceCombServiceCenter 2 minute read Apache ServiceComb- (PPT Download) less than 1 minute read Apache ServiceComb Accept Code Donation From NewCapec Institute less than 1 minute read Events Resources ASF Contribute Community" } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Shipwright", "subcategory": "Application Definition & Image Build" }
[ { "data": "Shipwright is an extensible framework for building container images on Kubernetes. Shipwright supports popular tools such as Kaniko, Cloud Native Buildpacks, Buildah, and more! Shipwright is based around four elements for each build: Developers who use Docker are familiar with this process: ``` docker build -t registry.mycompany.com/myorg/myapp:latest . ``` ``` docker push registry.mycompany.com/myorg/myapp:latest ``` Shipwrights Build API consists of four core CustomResourceDefinitions (CRDs): The Build object provides a playbook on how to assemble your specific application. The simplest build consists of a git source, a build strategy, and an output image: ``` apiVersion: build.dev/v1alpha1 kind: Build metadata: name: kaniko-golang-build annotations: build.build.dev/build-run-deletion: \"true\" spec: source: url: https://github.com/sbose78/taxi strategy: name: kaniko kind: ClusterBuildStrategy output: image: registry.mycompany.com/my-org/taxi-app:latest ``` Builds can be extended to push to private registries, use a different Dockerfile, and more. BuildStrategy and ClusterBuildStrategy are related APIs to define how a given tool should be used to assemble an application. They are distinguished by their scope - BuildStrategy objects are namespace scoped, whereas ClusterBuildStrategy objects are cluster scoped. The spec of a BuildStrategy or ClusterBuildStrategy consists of a buildSteps object, which look and feel like Kubernetes container specifications. Below is an example spec for Kaniko, which can build an image from a Dockerfile within a container: ``` spec: buildSteps: name: build-and-push image: gcr.io/kaniko-project/executor:v1.3.0 workingDir: /workspace/source securityContext: runAsUser: 0 capabilities: add: CHOWN DAC_OVERRIDE FOWNER SETGID SETUID SETFCAP env: name: DOCKER_CONFIG value: /tekton/home/.docker command: /kaniko/executor args: --skip-tls-verify=true --dockerfile=$(build.dockerfile) --context=/workspace/source/$(build.source.contextDir) --destination=$(build.output.image) --oci-layout-path=/workspace/output/image --snapshotMode=redo resources: limits: cpu: 500m memory: 1Gi requests: cpu: 250m memory: 65Mi ``` Each BuildRun object invokes a build on your cluster. You can think of these as a Kubernetes Jobs or Tekton TaskRuns - they represent a workload on your cluster, ultimately resulting in a running Pod. See BuildRun for more details." } ]
{ "category": "App Definition and Development", "file_name": "servicecomb-accept-newcapec-institute-code-donation.md", "project_name": "ServiceComb", "subcategory": "Application Definition & Image Build" }
[ { "data": "Opensource change the world less than 1 minute read ServiceComb Toolkit recently recieved a code donation(oas-validator) from NewCapec Institute, oas-validator provides OpenAPI V3 style and compatiblity check functionalitiesRelated links ServiceComb community will complete the integration work as soon as possible, provide more helpful functionalities to developers. Tags: microservice Updated: November 07, 2019 Your email address will not be published. Required fields are marked * Apache ServiceComb- (PPT Download) less than 1 minute read ServiceCombServiceCenter 2 minute read Apache ServiceComb- (PPT Download) less than 1 minute read Apache Servicecomb less than 1 minute read Events Resources ASF Contribute Community" } ]
{ "category": "App Definition and Development", "file_name": "#standalone-binary.md", "project_name": "Skaffold", "subcategory": "Application Definition & Image Build" }
[ { "data": "To keep Skaffold up to date, update checks are made to Google servers to see if a new version of Skaffold is available. You can turn this update check off by following these instructions. To help prioritize features and work on improving Skaffold, we collect anonymized Skaffold usage data. You can opt out of data collection by following these instructions. Your use of this software is subject to the Google Privacy Policy Cloud Code provides a managed experience of using Skaffold in supported IDEs. You can install the Cloud Code extension for Visual Studio Code or the plugin for JetBrains IDEs. It manages and keeps Skaffold up-to-date, along with other common dependencies, and works with any kubernetes cluster. Google Cloud Platforms Cloud Shell provides a free browser-based terminal/CLI and editor with Skaffold, Minikube, and Docker pre-installed. (Requires a Google Account.) Cloud Shell is a great way to try Skaffold out. The latest stable binaries can be found here: Simply download the appropriate binary and add it to your PATH. Or, copy+paste one of the following commands in your terminal: ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && \\ sudo install skaffold /usr/local/bin/ ``` ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-arm64 && \\ sudo install skaffold /usr/local/bin/ ``` We also release a bleeding edge build, built from the latest commit: ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-linux-amd64 && \\ sudo install skaffold /usr/local/bin/ ``` ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-linux-arm64 && \\ sudo install skaffold /usr/local/bin/ ``` The latest stable binaries can be found here: Simply download the appropriate binary and add it to your PATH. Or, copy+paste one of the following commands in your terminal: ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 && \\ sudo install skaffold /usr/local/bin/ ``` ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-arm64 && \\ sudo install skaffold /usr/local/bin/ ``` We also release a bleeding edge build, built from the latest commit: ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-darwin-amd64 && \\ sudo install skaffold /usr/local/bin/ ``` ``` curl -Lo skaffold https://storage.googleapis.com/skaffold/builds/latest/skaffold-darwin-arm64 && \\ sudo install skaffold /usr/local/bin/ ``` Skaffold is also kept up to date on a few central package managers: ``` brew install skaffold ``` ``` sudo port install skaffold ``` The latest stable release binary can be found here: https://storage.googleapis.com/skaffold/releases/latest/skaffold-windows-amd64.exe Simply download it and place it in your PATH as skaffold.exe. We also release a bleeding edge build, built from the latest commit: https://storage.googleapis.com/skaffold/builds/latest/skaffold-windows-amd64.exe Skaffold can be installed using the Scoop package manager from the extras bucket. This package is not maintained by the Skaffold team. ``` scoop bucket add extras scoop install skaffold ``` Skaffold can be installed using the Chocolatey package manager. This package is not maintained by the Skaffold team. ``` choco install -y skaffold ``` If you have the Google Cloud SDK installed on your machine, you can quickly install Skaffold as a bundled component. 
Make sure your gcloud installation and the components are up to date: ``` gcloud components update ``` Then, install Skaffold: ``` gcloud components install skaffold ``` For the latest stable release, you can use: ``` docker run gcr.io/k8s-skaffold/skaffold:latest skaffold <command> ``` For the latest bleeding edge build: ``` docker run gcr.io/k8s-skaffold/skaffold:edge skaffold <command> ```" } ]
{ "category": "App Definition and Development", "file_name": "#community.md", "project_name": "Skaffold", "subcategory": "Application Definition & Image Build" }
[ { "data": "Join the Skaffold community and discuss the project at: The Skaffold Project also holds a monthly meeting on the last Wednesday of the month at 9:30am PST on Google Meet! Everyone is welcome to attend! You will receive a calendar invite when you join the Skaffold Mailing List. See Contributing Guide, Developing Guide, and our Code of Conduct on GitHub. See Release Notes on Github. See our roadmap in GitHub." } ]
{ "category": "App Definition and Development", "file_name": ".md", "project_name": "Skaffold", "subcategory": "Application Definition & Image Build" }
[ { "data": "Skaffold v2 has been released! You are viewing the Skaffold v2 documentation. View the archived v1 documentation here. Skaffold is a command line tool that facilitates continuous development for container based & Kubernetes applications. Skaffold handles the workflow for building, pushing, and deploying your application, and provides building blocks for creating CI/CD pipelines. This enables you to focus on iterating on your application locally while Skaffold continuously deploys to your local or remote Kubernetes cluster, local Docker environment or Cloud Run project. Skaffold simplifies your development workflow by organizing common development stages into one simple command. Every time you run skaffold dev, the system The pluggable architecture is central to Skaffolds design, allowing you to use your preferred tool or technology in each stage. Also, Skaffolds profiles feature grants you the freedom to switch tools on the fly with a simple flag. For example, if you are coding on a local machine, you can configure Skaffold to build artifacts with your local Docker daemon and deploy them to minikube using kubectl. When you finalize your design, you can switch to your production profile and start building with Google Cloud Build and deploy with Helm. Skaffold supports the following tools: Besides the above steps, Skaffold also automatically manages the following utilities for you:" } ]