## Overview

{% data reusables.actions.about-artifact-attestations %}

## SLSA levels for artifact attestations

The SLSA framework is an industry standard for evaluating supply chain security. It is organized into levels, each representing an increasing degree of security and trustworthiness for a software supply chain. Artifact attestations by themselves provide SLSA v1.0 Build Level 2. This provides a link between your artifact and its build instructions, but you can take this a step further by requiring that builds use known, vetted build instructions. A good way to do this is to have your build take place in a reusable workflow that many repositories across your organization share. Reusable workflows can provide isolation between the build process and the calling workflow, to meet SLSA v1.0 Build Level 3. For more information, see [AUTOTITLE](/actions/security-guides/using-artifact-attestations-and-reusable-workflows-to-achieve-slsa-v1-build-level-3). For more information on SLSA levels, see [SLSA Security Levels](https://slsa.dev/spec/v1.0/levels).

## How {% data variables.product.github %} generates artifact attestations

To generate artifact attestations, {% data variables.product.prodname_dotcom %} uses Sigstore, an open source project that offers a comprehensive solution for signing and verifying software artifacts via attestations.

**Public repositories** that generate artifact attestations use the [Sigstore Public Good Instance](https://openssf.org/blog/2023/10/03/running-sigstore-as-a-managed-service-a-tour-of-sigstores-public-good-instance/). A copy of the generated Sigstore bundle is stored with GitHub and is also written to an immutable transparency log that is publicly readable on the internet.

**Private repositories** that generate artifact attestations use GitHub's Sigstore instance. GitHub's Sigstore instance uses the same codebase as the Sigstore Public Good Instance, but it does not have a transparency log and only federates with {% data variables.product.prodname_actions %}.

## When to generate attestations

Generating attestations alone provides no security benefit; the attestations must be verified for the benefit to be realized. Here are some guidelines for what to sign and how often.

You should sign:

* Software you are releasing that you expect people to run `gh attestation verify ...` on.
* Binaries people will run, packages people will download, or manifests that include hashes of detailed contents.

You should **not** sign:

* Frequent builds that are just for automated testing.
* Individual files like source code, documentation files, or embedded images.

## Verifying artifact attestations

If you consume software that publishes artifact attestations, you can use the {% data variables.product.prodname_cli %} to verify those attestations. Because the attestations give you information about where and how software was built, you can use that information to create and enforce security policies that elevate your supply chain security.

> [!WARNING]
> Artifact attestations are _not_ a guarantee that an artifact is secure. Instead, artifact attestations link you to the source code and the build instructions that produced them. It is up to you to define your policy criteria, evaluate that policy by evaluating the content, and make an informed risk decision when you are consuming software.

## Next steps

To start generating and verifying artifact attestations for your builds, see [AUTOTITLE](/actions/how-tos/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/security/artifact-attestations.md
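The generate-then-verify flow described above can be sketched as a workflow job plus a CLI check. This is a minimal sketch, not the definitive setup: the build command, artifact path, and owner name are placeholders, and it assumes the `actions/attest-build-provenance` action.

```yaml
# Hypothetical build job that attests its output (names are placeholders)
permissions:
  id-token: write        # needed to request a Sigstore signing certificate
  attestations: write    # needed to store the generated attestation
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build                 # hypothetical build producing dist/app
      - uses: actions/attest-build-provenance@v2
        with:
          subject-path: dist/app
```

A consumer could then verify the downloaded artifact with `gh attestation verify dist/app --owner my-org`, where `my-org` is a placeholder for the organization that built it.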
## Reusable workflows

Rather than copying and pasting from one workflow to another, you can make workflows reusable. You and anyone with access to the reusable workflow can then call the reusable workflow from another workflow.

Reusing workflows avoids duplication. This makes workflows easier to maintain and allows you to create new workflows more quickly by building on the work of others, just as you do with actions. Workflow reuse also promotes best practice by helping you to use workflows that are well designed, have already been tested, and have been proven to be effective. Your organization can build up a library of reusable workflows that can be centrally maintained.

The diagram below shows an in-progress workflow run that uses a reusable workflow.

* After each of three build jobs on the left of the diagram completes successfully, a dependent job called "Deploy" is run.
* The "Deploy" job calls a reusable workflow that contains three jobs: "Staging", "Review", and "Production."
* The "Production" deployment job only runs after the "Staging" job has completed successfully.
* When a job targets an environment, the workflow run displays a progress bar that shows the number of steps in the job. In the diagram below, the "Production" job contains 8 steps, with step 6 currently being processed.
* Using a reusable workflow to run deployment jobs allows you to run those jobs for each build without duplicating code in workflows.

A workflow that uses another workflow is referred to as a "caller" workflow. The reusable workflow is a "called" workflow. One caller workflow can use multiple called workflows. Each called workflow is referenced in a single line, so the caller workflow file may contain just a few lines of YAML yet perform a large number of tasks when it's run. When you reuse a workflow, the entire called workflow is used, just as if it were part of the caller workflow.

If you reuse a workflow from a different repository, any actions in the called workflow run as if they were part of the caller workflow. For example, if the called workflow uses `actions/checkout`, the action checks out the contents of the repository that hosts the caller workflow, not the called workflow.

You can view the reused workflows referenced in your {% data variables.product.prodname_actions %} workflows as dependencies in the dependency graph of the repository containing your workflows. For more information, see [About the dependency graph](/code-security/supply-chain-security/understanding-your-software-supply-chain/about-the-dependency-graph).

### Reusable workflows versus composite actions

Reusable workflows and composite actions both help you avoid duplicating workflow content. Whereas reusable workflows allow you to reuse an entire workflow, with multiple jobs and steps, composite actions combine multiple steps that you can then run within a job step, just like any other action. Let's compare some aspects of each solution:

* **Workflow jobs** - Composite actions contain a series of steps that are run as a single step within the caller workflow. Unlike reusable workflows, they cannot contain jobs.
* **Logging** - When a composite action runs, the log shows just the step in the caller workflow that ran the composite action, not the individual steps within the composite action. With reusable workflows, every job and step is logged separately.

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/reusing-workflow-configurations.md
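The caller/called relationship described above comes down to a single `uses` line at the job level. A minimal sketch of a caller workflow; the repository, path, ref, and input name are placeholders:

```yaml
# .github/workflows/caller.yml — hypothetical caller workflow
name: Call a reusable workflow
on: push

jobs:
  deploy:
    uses: octo-org/example-repo/.github/workflows/deploy.yml@main
    with:
      environment: staging   # input the called workflow would have to define
    secrets: inherit
```

Note the `uses` key sits directly under the job, not inside a step, which is exactly the distinction from composite actions drawn below.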
* **Specifying runners** - Reusable workflows contain one or more jobs. As with all workflow jobs, the jobs in a reusable workflow specify the type of machine on which the job will run. Therefore, if the steps must be run on a type of machine that might be different from the machine chosen for the calling workflow job, then you should use a reusable workflow, not a composite action.
* **Passing output to steps** - A composite action is run as a step within a workflow job, and you can have multiple steps before or after the step that runs the composite action. Reusable workflows are called directly within a job, and not from within a job step. You can't add steps to a job after calling a reusable workflow, so you can't use `GITHUB_ENV` to pass values to subsequent job steps in the caller workflow.

### Key differences between reusable workflows and composite actions

| Reusable workflows | Composite actions |
| ------------------ | ----------------- |
| A YAML file, very similar to any standard workflow file | An action containing a bundle of workflow steps |
| Each reusable workflow is a single file in the `.github/workflows` directory of a repository | Each composite action is a separate repository, or a directory, containing an `action.yml` file and, optionally, other files |
| Called by referencing a specific YAML file | Called by referencing a repository or directory in which the action is defined |
| Called directly within a job, not from a step | Run as a step within a job |
| Can contain multiple jobs | Does not contain jobs |
| Each step is logged in real time | Logged as one step even if it contains multiple steps |
| Can connect a maximum of {% ifversion fpt or ghec %}ten{% else %}four{% endif %} levels of workflows | Can be nested to have up to 10 composite actions in one workflow |
| Can use secrets | Cannot use secrets |
| Cannot be published to the [marketplace](https://github.com/marketplace?type=actions) | Can be published to the [marketplace](https://github.com/marketplace?type=actions) |

## Workflow templates

Workflow templates allow everyone in your organization who has permission to create workflows to do so more quickly and easily. When people create a new workflow, they can choose a workflow template and some or all of the work of writing the workflow will be done for them. Within a workflow template, you can also reference reusable workflows to make it easy for people to benefit from reusing centrally managed workflow code. If you use a commit SHA when referencing the reusable workflow, you can ensure that everyone who reuses that workflow will always be using the same YAML code. However, if you reference a reusable workflow by a tag or branch, be sure that you can trust that version of the workflow. For more information, see [AUTOTITLE](/actions/security-guides/security-hardening-for-github-actions#reusing-third-party-workflows).

{% data variables.product.github %} offers workflow templates for a variety of languages and tooling. When you set up workflows in your repository, {% data variables.product.github %} analyzes the code in your repository and recommends workflows based on the language and framework in your repository. For example, if you use Node.js, {% data variables.product.github %} will suggest a workflow template file that installs your Node.js packages and runs your tests.

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/reusing-workflow-configurations.md
You can search and filter to find relevant workflow templates. {% data reusables.actions.workflow-templates-categories %} {% data reusables.actions.workflow-templates-repo-link %} For more information, see [AUTOTITLE](/actions/using-workflows/creating-starter-workflows-for-your-organization).

{% ifversion fpt or ghec %}

## YAML anchors and aliases

You can use YAML anchors and aliases to reduce repetition in your workflows. An anchor (marked with `&`) identifies a piece of content that you want to reuse, while an alias (marked with `*`) repeats that content in another location. Think of an anchor as creating a named template and an alias as using that template. This is particularly useful when you have jobs or steps that share common configurations. For reference information and examples, see [AUTOTITLE](/actions/reference/workflows-and-actions/reusing-workflow-configurations#yaml-anchors-and-aliases).

{% endif %}

## Next steps

To start reusing your workflows, see [AUTOTITLE](/actions/how-tos/sharing-automations/reuse-workflows). To find information on the intricacies of reusing workflows, see [AUTOTITLE](/actions/reference/reusable-workflows-reference).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/reusing-workflow-configurations.md
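The anchor (`&`) and alias (`*`) markers described above can be sketched in a workflow whose jobs share an environment block. This assumes the YAML-anchor support described in the text; the anchor name and variable values are illustrative:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    env: &shared-env            # anchor: name this mapping for reuse
      NODE_ENV: test
      LOG_LEVEL: debug
    steps:
      - run: echo "$NODE_ENV"
  lint:
    runs-on: ubuntu-latest
    env: *shared-env            # alias: repeat the anchored mapping here
    steps:
      - run: echo "$LOG_LEVEL"
```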
## About workflows

{% data reusables.actions.about-workflows-long %}

## Workflow basics

A workflow must contain the following basic components:

1. One or more _events_ that will trigger the workflow.
1. One or more _jobs_, each of which will execute on a _runner_ machine and run a series of one or more _steps_.
1. Each step can either run a script that you define or run an action, which is a reusable extension that can simplify your workflow.

For more information on these basic components, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions#the-components-of-github-actions).

## Workflow triggers

{% data reusables.actions.about-triggers %} For more information, see [AUTOTITLE](/actions/using-workflows/triggering-a-workflow).

## Next steps

To build your first workflow, see [AUTOTITLE](/actions/tutorials/creating-an-example-workflow).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/workflows.md
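The three basic components listed above (events, jobs, steps) map onto a workflow file like this minimal sketch; the filename, job name, and commands are illustrative:

```yaml
# .github/workflows/ci.yml — hypothetical minimal workflow
name: CI
on: push                      # 1. the event that triggers the workflow

jobs:
  build:                      # 2. a job...
    runs-on: ubuntu-latest    #    ...that executes on a runner machine
    steps:                    # 3. steps run scripts or actions
      - uses: actions/checkout@v4
      - run: echo "Hello from the runner"
```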
{% data reusables.actions.enterprise-github-hosted-runners %}

## About custom actions

You can create actions by writing custom code that interacts with your repository in any way you'd like, including integrating with {% data variables.product.prodname_dotcom %}'s APIs and any publicly available third-party API. For example, an action can publish npm modules, send SMS alerts when urgent issues are created, or deploy production-ready code.

{% ifversion fpt or ghec %} You can write your own actions to use in your workflow or share the actions you build with the {% data variables.product.prodname_dotcom %} community. To share actions you've built with everyone, your repository must be public. {% ifversion ghec %}To share actions only within your enterprise, your repository must be internal.{% endif %} {% endif %}

Actions can run directly on a machine or in a Docker container. You can define an action's inputs, outputs, and environment variables.

## Types of actions

{% data reusables.actions.types-of-actions %}

{% rowheaders %}

| Type | Linux | macOS | Windows |
| ---- | ----- | ----- | ------- |
| Docker container | {% octicon "check" aria-label="Supported" %} | {% octicon "x" aria-label="Not supported" %} | {% octicon "x" aria-label="Not supported" %} |
| JavaScript | {% octicon "check" aria-label="Supported" %} | {% octicon "check" aria-label="Supported" %} | {% octicon "check" aria-label="Supported" %} |
| Composite Actions | {% octicon "check" aria-label="Supported" %} | {% octicon "check" aria-label="Supported" %} | {% octicon "check" aria-label="Supported" %} |

{% endrowheaders %}

### Docker container actions

Docker containers package the environment with the {% data variables.product.prodname_actions %} code. This creates a more consistent and reliable unit of work because the consumer of the action does not need to worry about the tools or dependencies.

A Docker container allows you to use specific versions of an operating system, dependencies, tools, and code. For actions that must run in a specific environment configuration, Docker is an ideal option because you can customize the operating system and tools. Because of the latency to build and retrieve the container, Docker container actions are slower than JavaScript actions.

Docker container actions can only execute on runners with a Linux operating system. {% data reusables.actions.self-hosted-runner-reqs-docker %}

### JavaScript actions

JavaScript actions can run directly on a runner machine, and separate the action code from the environment used to run the code. Using a JavaScript action simplifies the action code and executes faster than a Docker container action. {% data reusables.actions.pure-javascript %}

If you're developing a Node.js project, the {% data variables.product.prodname_actions %} Toolkit provides packages that you can use in your project to speed up development. For more information, see the [actions/toolkit](https://github.com/actions/toolkit) repository.

### Composite Actions

A _composite_ action allows you to combine multiple workflow steps within one action. For example, you can use this feature to bundle together multiple run commands into an action, and then have a workflow that executes the bundled commands as a single step using that action. To see an example, check out [AUTOTITLE](/actions/creating-actions/creating-a-composite-action).

## Next steps

To learn about how to manage your custom actions, see [AUTOTITLE](/actions/how-tos/administering-github-actions/managing-custom-actions).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/custom-actions.md
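The composite-action idea described above, bundling several run commands behind one action, can be sketched as a minimal `action.yml`. The action name, description, and commands are illustrative:

```yaml
# action.yml in the action's repository or directory (hypothetical)
name: Build and report
description: Bundles two run commands into a single workflow step
runs:
  using: composite
  steps:
    - run: echo "Building..."
      shell: bash               # composite run steps must declare a shell
    - run: echo "Reporting..."
      shell: bash
```

A workflow that references this action would see it, and log it, as one step, as the "Logging" comparison earlier explains.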
## About contexts

{% data reusables.actions.actions-contexts-about-description %} Each context is an object that contains properties, which can be strings or other objects.

{% data reusables.actions.context-contents %} For example, the `matrix` context is only populated for jobs in a [matrix](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix).

You can access contexts using the expression syntax. For more information, see [AUTOTITLE](/actions/learn-github-actions/expressions).

{% raw %}

`${{ <context> }}`

{% endraw %}

{% data reusables.actions.context-injection-warning %}

## Determining when to use contexts

{% data variables.product.prodname_actions %} includes a collection of variables called _contexts_ and a similar collection of variables called _default variables_. These variables are intended for use at different points in the workflow:

* **Default environment variables:** These environment variables exist only on the runner that is executing your job. For more information, see [AUTOTITLE](/actions/learn-github-actions/variables#default-environment-variables).
* **Contexts:** You can use most contexts at any point in your workflow, including when _default variables_ would be unavailable. For example, you can use contexts with expressions to perform initial processing before the job is routed to a runner for execution; this allows you to use a context with the conditional `if` keyword to determine whether a step should run. Once the job is running, you can also retrieve context variables from the runner that is executing the job, such as `runner.os`. For details of where you can use various contexts within a workflow, see [AUTOTITLE](/actions/reference/accessing-contextual-information-about-workflow-runs#context-availability).

The following example demonstrates how these different types of variables can be used together in a job:

{% raw %}

```yaml copy
name: CI
on: push
jobs:
  prod-check:
    if: ${{ github.ref == 'refs/heads/main' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to production server on branch $GITHUB_REF"
```

{% endraw %}

In this example, the `if` statement checks the [`github.ref`](/actions/learn-github-actions/contexts#github-context) context to determine the current branch name; if the name is `refs/heads/main`, then the subsequent steps are executed. The `if` check is processed by {% data variables.product.prodname_actions %}, and the job is only sent to the runner if the result is `true`. Once the job is sent to the runner, the step is executed and refers to the [`$GITHUB_REF`](/actions/learn-github-actions/variables#default-environment-variables) variable from the runner.

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/contexts.md
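The `matrix` context mentioned above is only populated inside a matrix job; a brief sketch (the variable name and version list are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [18, 20]          # matrix context exists only for jobs like this
    steps:
      - run: echo "Testing on Node ${{ matrix.node }}"
```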
{% data reusables.actions.about-environments %}

Each job in a workflow can reference a single environment. Any protection rules configured for the environment must pass before a job referencing the environment is sent to a runner. The job can access the environment's secrets only after the job is sent to a runner.

When a workflow references an environment, the environment will appear in the repository's deployments. For more information about viewing current and previous deployments, see [AUTOTITLE](/actions/deployment/managing-your-deployments/viewing-deployment-history).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/deployment-environments.md
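A job opts into an environment with the `environment` key; as described above, protection rules run before the job reaches a runner, and only then are the environment's secrets available. A minimal sketch in which the environment name, script, and secret name are hypothetical:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production     # protection rules for "production" must pass first
    steps:
      - run: ./deploy.sh        # hypothetical script; runs after secrets are available
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}   # hypothetical secret name
```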
## About workflow artifacts

An artifact is a file or collection of files produced during a workflow run. Artifacts allow you to persist data after a job has completed, and share that data with another job in the same workflow. For example, you can use artifacts to save your build and test output after a workflow run has ended.

{% data variables.product.github %} provides two actions that you can use to upload and download build artifacts, {% ifversion fpt or ghec %}[upload-artifact](https://github.com/actions/upload-artifact) and [download-artifact](https://github.com/actions/download-artifact){% else %} `upload-artifact` and `download-artifact` on {% data variables.product.prodname_ghe_server %}{% endif %}.

Common artifacts include:

* Log files and core dumps
* Test results, failures, and screenshots
* Binary or compressed files
* Stress test performance output and code coverage results

{% data reusables.actions.comparing-artifacts-caching %} For more information on dependency caching, see [AUTOTITLE](/actions/using-workflows/caching-dependencies-to-speed-up-workflows#comparing-artifacts-and-dependency-caching).

{% ifversion artifact-attestations %}

## Generating artifact attestations for builds

{% data reusables.actions.about-artifact-attestations %} You can access attestations after a build run, underneath the list of the artifacts the build produced. For more information, see [AUTOTITLE](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds).

{% endif %}

{% data reusables.actions.artifacts.artifacts-from-deleted-workflow-runs %}

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/workflow-artifacts.md
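The upload/download flow described above can be sketched with the two actions named in the text; the artifact name and file are illustrative:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "test results" > results.log     # produce some output
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: results.log
  report:
    needs: build                                   # artifacts persist across jobs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: build-output
      - run: cat results.log
```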
If you enable email or web notifications for {% data variables.product.prodname_actions %}, you'll receive a notification when any workflow runs that you've triggered have completed. The notification includes the workflow run's status (including successful, failed, neutral, and canceled runs). You can also choose to receive a notification only when a workflow run has failed. For more information about enabling or disabling notifications, see [AUTOTITLE](/account-and-profile/managing-subscriptions-and-notifications-on-github/setting-up-notifications/about-notifications).

Notifications for scheduled workflows are sent to the user who initially created the workflow.

* If a different user updates the cron syntax in the `schedule` event in the workflow file, subsequent notifications will be sent to that user instead.
* If a scheduled workflow is disabled and then re-enabled, notifications will be sent to the user who re-enabled the workflow rather than the user who last modified the cron syntax.

You can also see the status of workflow runs on a repository's Actions tab. For more information, see [AUTOTITLE](/actions/managing-workflow-runs).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/notifications-for-workflow-runs.md
## About workflow dependency caching

Workflow runs often reuse the same outputs or downloaded dependencies from one run to another. For example, package and dependency management tools such as Maven, Gradle, npm, and Yarn keep a local cache of downloaded dependencies.

{% ifversion fpt or ghec %} Jobs on {% data variables.product.prodname_dotcom %}-hosted runners start in a clean runner image and must download dependencies each time, causing increased network utilization, longer runtime, and increased cost. {% endif %} To help speed up the time it takes to recreate files like dependencies, {% data variables.product.prodname_dotcom %} can cache files you frequently use in workflows.

{%- ifversion fpt or ghec %}

> [!NOTE]
> When using self-hosted runners, caches from workflow runs are stored on {% data variables.product.company_short %}-owned cloud storage. A customer-owned storage solution is only available with {% data variables.product.prodname_ghe_server %}.

{%- endif %}

{% data reusables.actions.comparing-artifacts-caching %} For more information on workflow run artifacts, see [AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts).

## Next steps

To implement dependency caching in your workflows, see [AUTOTITLE](/actions/reference/dependency-caching-reference).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/dependency-caching.md
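A typical dependency cache, sketched here with the `actions/cache` action and an npm cache directory; the path and key layout are one common choice, not the only one:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/cache@v4
    with:
      path: ~/.npm
      key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
      restore-keys: |
        npm-${{ runner.os }}-
  - run: npm ci     # reuses the restored cache instead of re-downloading
```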
## About variables

Variables provide a way to store and reuse non-sensitive configuration information. You can store any configuration data such as compiler flags, usernames, or server names as variables. Variables are interpolated on the runner machine that runs your workflow. Commands that run in actions or workflow steps can create, read, and modify variables.

You can set your own custom variables or use the default environment variables that {% data variables.product.prodname_dotcom %} sets automatically. You can set a custom variable in two ways.

* To define an environment variable for use in a single workflow, you can use the `env` key in the workflow file. For more information, see [Defining environment variables for a single workflow](/actions/how-tos/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#defining-environment-variables-for-a-single-workflow).
* To define a configuration variable across multiple workflows, you can define it at the organization, repository, or environment level. When creating a variable in an organization, you can use a policy to limit access by repository. For example, you can grant access to all repositories, or limit access to only private repositories or a specified list of repositories. For more information, see [Defining configuration variables for multiple workflows](/actions/how-tos/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables#defining-configuration-variables-for-multiple-workflows).

> [!WARNING]
> By default, variables render unmasked in your build outputs. If you need greater security for sensitive information, such as passwords, use secrets instead. For more information, see [AUTOTITLE](/actions/security-for-github-actions/security-guides/about-secrets).

For reference documentation, see [AUTOTITLE](/actions/reference/variables-reference).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/variables.md
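The two ways of defining variables described above can be sketched in one workflow. The names and values are hypothetical, and `vars.DEPLOY_REGION` assumes a configuration variable defined at the repository, organization, or environment level:

```yaml
env:
  SERVER_NAME: staging-01            # workflow-level environment variable (env key)
jobs:
  greet:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to $SERVER_NAME"
      - run: echo "Region is ${{ vars.DEPLOY_REGION }}"   # hypothetical configuration variable
```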
By default, {% data variables.product.prodname_actions %} allows multiple jobs within the same workflow, multiple workflow runs within the same repository, and multiple workflow runs across a repository owner's account to run concurrently. This means that multiple instances of the same workflow or job can run at the same time, performing the same steps.

{% data variables.product.prodname_actions %} also allows you to disable concurrent execution. This can be useful for controlling your account's or organization's resources in situations where running multiple workflows or jobs at the same time could cause conflicts or consume more Actions minutes and storage than expected. For example, you might want to prevent multiple deployments from running at the same time, or cancel linters checking outdated commits.

To start controlling concurrency in your own workflows with the `concurrency` keyword, see [AUTOTITLE](/actions/how-tos/writing-workflows/choosing-when-your-workflow-runs/control-the-concurrency-of-workflows-and-jobs).

Source: https://github.com/github/docs/blob/main/content/actions/concepts/workflows-and-actions/concurrency.md
...] | 0.065952 |
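A minimal sketch of the `concurrency` keyword described above: grouping runs by workflow and branch so that a new push cancels an in-progress run for the same branch (the group expression is one common choice, not the only one):

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```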
## About expressions You can use expressions to programmatically set environment variables in workflow files and access contexts. An expression can be any combination of literal values, references to a context, or functions. You can combine literals, context references, and functions using operators. For more information about contexts, see [AUTOTITLE](/actions/learn-github-actions/contexts). Expressions are commonly used with the conditional `if` keyword in a workflow file to determine whether a step should run. When an `if` conditional is `true`, the step will run. {% data reusables.actions.expressions-syntax-evaluation %} {% raw %} `${{ <expression> }}` {% endraw %} > [!NOTE] > The exception to this rule is when you are using expressions in an `if` clause, where you can usually omit {% raw %}`${{`{% endraw %} and {% raw %}`}}`{% endraw %}. For more information about `if` conditionals, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob\_idif). {% data reusables.actions.context-injection-warning %} ### Example setting an environment variable {% raw %}

```yaml
env:
  MY_ENV_VAR: ${{ <expression> }}
```

{% endraw %} ## Further reading For technical reference information about expressions you can use in workflows and actions, see [AUTOTITLE](/actions/reference/evaluate-expressions-in-workflows-and-actions). 
...] | 0.07592 |
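As a sketch of the `if` usage described above, the `${% raw %}{{ }}{% endraw %}` delimiters can usually be omitted inside an `if` clause; the step name and script path are illustrative:

```yaml
steps:
  - name: Deploy only from the default branch
    if: github.ref == 'refs/heads/main'
    run: ./deploy.sh
```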
## About {% data variables.product.prodname\_actions\_runner\_controller %} {% data reusables.actions.actions-runner-controller-about-arc %} The following diagram illustrates the architecture of ARC's autoscaling runner scale set mode. > [!NOTE] > To view the following diagram in a larger size, see the [Autoscaling Runner Scale Sets mode](https://github.com/actions/actions-runner-controller/blob/master/docs/gha-runner-scale-set-controller/README.md#how-it-works) documentation in the Actions Runner Controller repository.  1. {% data variables.product.prodname\_actions\_runner\_controller %} is installed using the supplied Helm charts, and the controller manager pod is deployed in the specified namespace. A new AutoScalingRunnerSet resource is deployed via the supplied Helm charts or a customized manifest file. The AutoScalingRunnerSet Controller calls GitHub's APIs to fetch the runner group ID that the runner scale set will belong to. 1. The AutoScalingRunnerSet Controller calls the APIs one more time to either fetch or create a runner scale set in the {% data variables.product.prodname\_actions %} service before creating the Runner ScaleSet Listener resource. 1. A Runner ScaleSet Listener pod is deployed by the AutoScalingListener Controller. In this pod, the listener application connects to the {% data variables.product.prodname\_actions %} Service to authenticate and establish an HTTPS long poll connection. The listener stays idle until it receives a `Job Available` message from the {% data variables.product.prodname\_actions %} Service. 1. When a workflow run is triggered from a repository, the {% data variables.product.prodname\_actions %} Service dispatches individual job runs to the runners or runner scale sets where the `runs-on` key matches the name of the runner scale set or labels of self-hosted runners. 1. When the Runner ScaleSet Listener receives the `Job Available` message, it checks whether it can scale up to the desired count. 
If it can, the Runner ScaleSet Listener acknowledges the message. 1. The Runner ScaleSet Listener uses a Service Account and a Role bound to that account to make an HTTPS call through the Kubernetes APIs to patch the Ephemeral RunnerSet resource with the desired replica count. 1. The Ephemeral RunnerSet attempts to create new runners and the EphemeralRunner Controller requests a Just-in-Time (JIT) configuration token to register these runners. The controller attempts to create runner pods. If the pod's status is `failed`, the controller retries up to 5 times. After 24 hours the {% data variables.product.prodname\_actions %} Service unassigns the job if no runner accepts it. 1. Once the runner pod is created, the runner application in the pod uses the JIT configuration token to register itself with the {% data variables.product.prodname\_actions %} Service. It then establishes another HTTPS long poll connection to receive the job details it needs to execute. 1. The {% data variables.product.prodname\_actions %} Service acknowledges the runner registration and dispatches the job run details. 1. Throughout the job run execution, the runner continuously communicates the logs and job run status back to the {% data variables.product.prodname\_actions %} Service. 1. When the runner completes its job successfully, the EphemeralRunner Controller checks with the {% data variables.product.prodname\_actions %} Service to see if the runner can be deleted. If it can, the Ephemeral RunnerSet deletes the runner. ## {% data variables.product.prodname\_actions\_runner\_controller %} components ARC consists of a set of resources, some of which are created specifically for ARC. An ARC deployment applies these resources onto a Kubernetes cluster. Once applied, it creates a set of Pods that contain your self-hosted runners' containers. 
With ARC, {% data variables.product.company\_short %} can treat these runner containers as self-hosted runners and allocate jobs to them as needed. Each resource that is deployed by ARC is given a name composed of: \* An installation name, which is the installation name you specify when you install the Helm chart. \* A resource identification suffix, which is a | https://github.com/github/docs/blob/main//content/actions/concepts/runners/actions-runner-controller.md | main | github-actions | [
...] | 0.139917 |
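The dispatch step above matches jobs to a runner scale set by name through the `runs-on` key. A minimal sketch, assuming a scale set installed under the hypothetical name `arc-runner-set`:

```yaml
jobs:
  build:
    # Must match the installation name of the ARC runner scale set
    runs-on: arc-runner-set
    steps:
      - run: echo "Running on an ARC ephemeral runner"
```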
runner containers as self-hosted runners and allocate jobs to them as needed. Each resource that is deployed by ARC is given a name composed of: \* An installation name, which is the installation name you specify when you install the Helm chart. \* A resource identification suffix, which is a string that identifies the resource type. This value is not configurable. > [!NOTE] > Different versions of Kubernetes have different length limits for names of resources. The length limit for the resource name is calculated by adding the length of the installation name and the length of the resource identification suffix. If the resource name is longer than the reserved length, you will receive an error. ### Resources deployed by `gha-runner-scale-set-controller`

| Template | Resource Kind | Name | Reserved Length | Description | Notes |
|-------|---------------|------|-----------------|-------------|-------|
| `deployment.yaml` | Deployment | INSTALLATION\_NAME-gha-rs-controller | 18 | The resource running controller-manager | The pods created by this resource have the ReplicaSet suffix and the Pod suffix. |
| `serviceaccount.yaml` | ServiceAccount | INSTALLATION\_NAME-gha-rs-controller | 18 | This is created if `serviceAccount.create` in `values.yaml` is set to true. | The name can be customized in `values.yaml` |
| `manager\_cluster\_role.yaml` | ClusterRole | INSTALLATION\_NAME-gha-rs-controller | 18 | ClusterRole for the controller manager | This is created if the value of `flags.watchSingleNamespace` is empty. |
| `manager\_cluster\_role\_binding.yaml` | ClusterRoleBinding | INSTALLATION\_NAME-gha-rs-controller | 18 | ClusterRoleBinding for the controller manager | This is created if the value of `flags.watchSingleNamespace` is empty. |
| `manager\_single\_namespace\_controller\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-controller-single-namespace | 35 | Role for the controller manager | This is created if the value of `flags.watchSingleNamespace` is set. |
| `manager\_single\_namespace\_controller\_role\_binding.yaml` | RoleBinding | INSTALLATION\_NAME-gha-rs-controller-single-namespace | 35 | RoleBinding for the controller manager | This is created if the value of `flags.watchSingleNamespace` is set. |
| `manager\_single\_namespace\_watch\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-controller-single-namespace-watch | 41 | Role for the controller manager for the namespace configured | This is created if the value of `flags.watchSingleNamespace` is set. |
| `manager\_single\_namespace\_watch\_role\_binding.yaml` | RoleBinding | INSTALLATION\_NAME-gha-rs-controller-single-namespace-watch | 41 | RoleBinding for the controller manager for the namespace configured | This is created if the value of `flags.watchSingleNamespace` is set. |
| `manager\_listener\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-controller-listener | 26 | Role for the listener | This is always created. |
| `manager\_listener\_role\_binding.yaml` | RoleBinding | INSTALLATION\_NAME-gha-rs-controller-listener | 26 | RoleBinding for the listener | This is always created and binds the listener role with the service account, which is either created by `serviceaccount.yaml` or configured with `values.yaml`. |

### Resources deployed by `gha-runner-scale-set`

| Template | Resource Kind | Name | Reserved Length | Description | Notes |
|-------|---------------|------|-----------------|-------------|-------|
| `autoscalingrunnerset.yaml` | AutoscalingRunnerSet | INSTALLATION\_NAME | 0 | Top level resource working with scale sets | The name is limited to 45 characters in length. |
| `no\_permission\_service\_account.yaml` | ServiceAccount | INSTALLATION\_NAME-gha-rs-no-permission | 21 | Service account mounted to the runner container | This is created if the container mode is not "kubernetes" and `template.spec.serviceAccountName` is not specified. |
| `githubsecret.yaml` | Secret | INSTALLATION\_NAME-gha-rs-github-secret | 20 | Secret containing values needed to authenticate to the GitHub API | This is created if `githubConfigSecret` is an object. If a string is provided, this secret will not be created. |
| `manager\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-manager | 15 | Role provided to the manager to be able to reconcile on resources in the autoscaling runner set's namespace | This is always created. |
| `manager\_role\_binding.yaml` | RoleBinding | INSTALLATION\_NAME-gha-rs-manager | 15 | Binding manager\_role to the manager service account. | This is always created. |
| `kube\_mode\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-kube-mode | 17 | https://github.com/github/docs/blob/main//content/actions/concepts/runners/actions-runner-controller.md | main | github-actions | [
...] | 0.039618 |
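The naming scheme in the tables above can be checked mechanically: a resource name is the installation name concatenated with a fixed suffix, and the "Reserved Length" column is just that suffix's length. A small sketch (the installation name `arc-systems` is hypothetical):

```python
# Sketch of how ARC composes resource names: installation name + a fixed,
# non-configurable resource identification suffix. The suffix length is the
# "Reserved Length" column in the tables above.
CONTROLLER_SUFFIX = "-gha-rs-controller"   # reserved length 18

def resource_name(installation_name: str, suffix: str) -> str:
    """Compose a resource name the way ARC's Helm templates do."""
    return installation_name + suffix

name = resource_name("arc-systems", CONTROLLER_SUFFIX)
print(name)       # arc-systems-gha-rs-controller
print(len(name))  # 29 = 11 (installation name) + 18 (reserved length)
```

If the composed name exceeds the Kubernetes name-length limit for that resource, the deployment fails, which is why the reserved lengths are documented.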
manager to be able to reconcile on resources in the autoscaling runner set's namespace | This is always created. |
| `manager\_role\_binding.yaml` | RoleBinding | INSTALLATION\_NAME-gha-rs-manager | 15 | Binding manager\_role to the manager service account. | This is always created. |
| `kube\_mode\_role.yaml` | Role | INSTALLATION\_NAME-gha-rs-kube-mode | 17 | Role providing necessary permissions for the hook | This is created when the container mode is set to "kubernetes" and `template.spec.serviceAccount` is not provided. |
| `kube\_mode\_serviceaccount.yaml` | ServiceAccount | INSTALLATION\_NAME-gha-rs-kube-mode | 17 | Service account bound to the runner pod. | This is created when the container mode is set to "kubernetes" and `template.spec.serviceAccount` is not provided. |

### About custom resources ARC consists of several custom resource definitions (CRDs). For more information on custom resources, see [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) in the Kubernetes documentation. You can find the list of custom resource definitions used for ARC in the following API schema definitions. \* [actions.github.com/v1alpha1](https://pkg.go.dev/github.com/actions/actions-runner-controller/apis/actions.github.com/v1alpha1) \* [actions.summerwind.net/v1alpha1](https://pkg.go.dev/github.com/actions/actions-runner-controller/apis/actions.summerwind.net/v1alpha1) Because custom resources are extensions of the Kubernetes API, they won't be available in a default Kubernetes installation. You will need to install these custom resources to use ARC. For more information on installing custom resources, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller). Once the custom resources are installed, you can deploy ARC into your Kubernetes cluster. 
For information about deploying ARC, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller). ### About the runner container image {% data variables.product.company\_short %} maintains a [minimal runner container image](https://github.com/actions/runner/pkgs/container/actions-runner). A new image will be published with every runner binaries release. The most recent image will have the runner binaries version and `latest` as tags. This image contains the least amount of packages necessary for the container runtime and the runner binaries. To install additional software, you can create your own runner image. You can use ARC's runner image as a base, or use the corresponding setup actions. For instance, `actions/setup-java` for Java or `actions/setup-node` for Node. You can find the definition of ARC's runner image in [this Dockerfile](https://github.com/actions/runner/blob/main/images/Dockerfile). To view the current base image, check the `FROM` line in the runner image Dockerfile, then search for that tag in the [`dotnet/dotnet-docker`](https://github.com/dotnet/dotnet-docker/tree/main/src/runtime-deps) repository. For example, if the `FROM` line in the runner image Dockerfile is `mcr.microsoft.com/dotnet/runtime-deps:8.0-jammy AS build`, then you can find the base image in [`https://github.com/dotnet/dotnet-docker/blob/main/src/runtime-deps/8.0/jammy/amd64/Dockerfile`](https://github.com/dotnet/dotnet-docker/blob/main/src/runtime-deps/8.0/jammy/amd64/Dockerfile). #### Creating your own runner image You can create your own runner image that meets your requirements. Your runner image must fulfill the following conditions. \* Use a base image that can run the self-hosted runner application. See [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners). 
\* The [runner binary](https://github.com/actions/runner/releases) must be placed under `/home/runner/` and launched using `/home/runner/run.sh`. \* If you use Kubernetes mode, the [runner container hooks](https://github.com/actions/runner-container-hooks/releases) must be placed under `/home/runner/k8s`. You can use the following example Dockerfile to start creating your own runner image.

```dockerfile copy
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0 as build

# Replace value with the latest runner release version
# source: https://github.com/actions/runner/releases
# ex: 2.303.0
ARG RUNNER_VERSION=""
ARG RUNNER_ARCH="x64"
# Replace value with the latest runner-container-hooks release version
# source: https://github.com/actions/runner-container-hooks/releases
# ex: 0.3.1
ARG RUNNER_CONTAINER_HOOKS_VERSION=""

ENV DEBIAN_FRONTEND=noninteractive
ENV RUNNER_MANUALLY_TRAP_SIG=1
ENV ACTIONS_RUNNER_PRINT_LOG_TO_STDOUT=1

RUN apt update -y && apt install curl unzip -y

RUN adduser --disabled-password --gecos "" --uid 1001 runner \
    && groupadd docker --gid 123 \
    && usermod -aG sudo runner \
    && usermod -aG docker runner \
    && echo "%sudo ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers \
    && echo "Defaults env_keep += \"DEBIAN_FRONTEND\"" >> /etc/sudoers

WORKDIR /home/runner

RUN curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./runner.tar.gz \
    && rm runner.tar.gz

RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER_CONTAINER_HOOKS_VERSION}/actions-runner-hooks-k8s-${RUNNER_CONTAINER_HOOKS_VERSION}.zip \
    && unzip ./runner-container-hooks.zip -d ./k8s \
    && rm runner-container-hooks.zip

USER runner
```

## Software installed in the ARC runner image The ARC [runner | https://github.com/github/docs/blob/main//content/actions/concepts/runners/actions-runner-controller.md 
| main | github-actions | [
...] | 0.119016 |
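To build a custom image from a Dockerfile like the one above, the version build arguments have to be supplied at build time. A sketch, using the example versions from the Dockerfile comments (the image tag `my-org/custom-runner` is illustrative):

```shell
docker build \
  --build-arg RUNNER_VERSION=2.303.0 \
  --build-arg RUNNER_CONTAINER_HOOKS_VERSION=0.3.1 \
  -t my-org/custom-runner:latest .
```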
/etc/sudoers WORKDIR /home/runner RUN curl -f -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v${RUNNER\_VERSION}/actions-runner-linux-${RUNNER\_ARCH}-${RUNNER\_VERSION}.tar.gz \ && tar xzf ./runner.tar.gz \ && rm runner.tar.gz RUN curl -f -L -o runner-container-hooks.zip https://github.com/actions/runner-container-hooks/releases/download/v${RUNNER\_CONTAINER\_HOOKS\_VERSION}/actions-runner-hooks-k8s-${RUNNER\_CONTAINER\_HOOKS\_VERSION}.zip \ && unzip ./runner-container-hooks.zip -d ./k8s \ && rm runner-container-hooks.zip USER runner ``` ## Software installed in the ARC runner image The ARC [runner image](https://github.com/actions/runner/pkgs/container/actions-runner) is bundled with the following software: \* [Runner binaries](https://github.com/actions/runner) \* [Runner container hooks](https://github.com/actions/runner-container-hooks) \* Docker (required for Docker-in-Docker mode) For more information, see [ARC's runner image Dockerfile](https://github.com/actions/runner/blob/main/images/Dockerfile) in the Actions repository. ## Assets and releases ARC is released as two Helm charts and one container image. The Helm charts are only published as Open Container Initiative (OCI) packages. ARC does not provide tarballs or Helm repositories via {% data variables.product.prodname\_pages %}. 
You can find the latest releases of ARC's Helm charts and container image on {% data variables.product.prodname\_registry %}: \* [`gha-runner-scale-set-controller` Helm chart](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set-controller) \* [`gha-runner-scale-set` Helm chart](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set) \* [`gha-runner-scale-set-controller` container image](https://github.com/actions/actions-runner-controller/pkgs/container/gha-runner-scale-set-controller) The supported runner image is released as a separate container image, which you can find at [`actions-runner`](https://github.com/actions/runner/pkgs/container/actions-runner) on {% data variables.product.prodname\_registry %}. ## Legal notice {% data reusables.actions.actions-runner-controller-legal-notice %} ## Next steps When you're ready to use ARC to execute workflows, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/using-actions-runner-controller-runners-in-a-workflow). {% data reusables.actions.actions-runner-controller-labels %} See [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow). You can scale runners statically or dynamically depending on your needs. See [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#scaling-runners). | https://github.com/github/docs/blob/main//content/actions/concepts/runners/actions-runner-controller.md | main | github-actions | [
...] | 0.090366 |
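Because the charts above are published only as OCI packages, they are installed by passing an `oci://` reference to Helm rather than by adding a chart repository. A sketch for the controller chart (the release name `arc` and namespace `arc-systems` are illustrative):

```shell
helm install arc \
  --namespace arc-systems \
  --create-namespace \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
```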
{% ifversion ghes %} {% data reusables.actions.enterprise-github-hosted-runners %} To learn about larger runners, see [the {% data variables.product.prodname\_ghe\_cloud %} documentation](/enterprise-cloud@latest/actions/concepts/runners/about-larger-runners). {% else %} ## About {% data variables.actions.hosted\_runners %} {% data reusables.actions.about-larger-runners %} {% data variables.product.prodname\_dotcom %} offers {% data variables.actions.hosted\_runners %} with macOS, Ubuntu, or Windows operating systems, and different features and sizes are available depending on which operating system you use. ## About Ubuntu and Windows {% data variables.actions.hosted\_runners %} {% data variables.actions.hosted\_runner\_caps %}s with Ubuntu or Windows operating systems are configured in your organization or enterprise. When you add a {% data variables.actions.hosted\_runner %}, you are defining a type of machine from a selection of available hardware specifications and operating system images. With Ubuntu and Windows {% data variables.actions.hosted\_runners %}, you can: \* Assign runners static IP addresses from a specific range, allowing you to use this range to configure a firewall allowlist \* Control access to your resources by assigning runners to runner groups \* Use autoscaling to simplify runner management and control your costs \* Use your runners with Azure private networking ## About macOS {% data variables.actions.hosted\_runners %} {% data variables.actions.hosted\_runner\_caps %}s with a macOS operating system are not manually added to your organization or enterprise, but are instead used by updating the `runs-on` key of a workflow file to one of the {% data variables.product.company\_short %}-defined macOS {% data variables.actions.hosted\_runner %} labels. Since macOS {% data variables.actions.hosted\_runners %} are not preconfigured, they have limitations that Ubuntu and Windows {% data variables.actions.hosted\_runners %} do not. 
For more information, see [AUTOTITLE](/actions/reference/larger-runners-reference#limitations-for-macos-larger-runners). ## Billing > [!NOTE] > {% data variables.actions.hosted\_runner\_caps %}s are not eligible for the use of included minutes on private repositories. For both private and public repositories, when {% data variables.actions.hosted\_runners %} are in use, they will always be billed at the per-minute rate. Compared to standard {% data variables.product.github %}-hosted runners, {% data variables.actions.hosted\_runners %} are billed differently. {% data reusables.actions.about-larger-runners-billing %} For more information, see [AUTOTITLE](/billing/reference/actions-minute-multipliers). ## Next steps To start using Windows or Ubuntu {% data variables.actions.hosted\_runners %}, see [AUTOTITLE](/actions/how-tos/using-github-hosted-runners/using-larger-runners/managing-larger-runners). To start using macOS {% data variables.actions.hosted\_runners %}, see [AUTOTITLE](/actions/how-tos/using-github-hosted-runners/using-larger-runners/running-jobs-on-larger-runners?platform=mac). To find reference information about using {% data variables.actions.hosted\_runners %}, see [AUTOTITLE](/actions/reference/larger-runners-reference). {% endif %} | https://github.com/github/docs/blob/main//content/actions/concepts/runners/larger-runners.md | main | github-actions | [
...] | 0.110697 |
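As described above, macOS larger runners are selected purely through `runs-on` labels rather than being added in organization or enterprise settings. A sketch using one GitHub-defined label (verify the current label list in the larger-runners reference):

```yaml
jobs:
  test:
    # GitHub-defined macOS larger runner label; check the reference docs
    # for the labels available on your plan
    runs-on: macos-latest-xlarge
    steps:
      - uses: actions/checkout@v4
```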
A self-hosted runner is a system that you deploy and manage to execute jobs from {% data variables.product.prodname\_actions %} on {% data variables.product.github %}. Self-hosted runners: {% ifversion fpt or ghec %} \* Give you more control of hardware, operating system, and software tools than {% data variables.product.github %}-hosted runners provide. Be aware that you are responsible for updating the operating system and all other software. \* Allow you to use machines and services that your company already maintains and pays to use.{% endif %} \* Are free to use with {% data variables.product.prodname\_actions %}, but you are responsible for the cost of maintaining your runner machines. \* Let you create custom hardware configurations with the processing power or memory you need to run larger jobs, and install software available on your local network. \* Receive automatic updates for the self-hosted runner application only, though you may disable automatic updates of the runner. \* Don't need to have a clean instance for every job execution.{% ifversion ghec or ghes %} \* Can be organized into groups to restrict access to specific workflows, organizations, and repositories. See [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups).{% endif %} \* Can be physical, virtual, in a container, on-premises, or in a cloud. You can use self-hosted runners anywhere in the management hierarchy. Repository-level runners are dedicated to a single repository, while organization-level runners can process jobs for multiple repositories in an organization. Organization owners can choose which repositories are allowed to create repository-level self-hosted runners. See [AUTOTITLE](/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#limiting-the-use-of-self-hosted-runners). 
Finally, enterprise-level runners can be assigned to multiple organizations in an enterprise account. ## Next steps {% ifversion ghec or ghes %} To get hands-on experience with the policies and usage of self-hosted runners, see [AUTOTITLE](/admin/github-actions/getting-started-with-github-actions-for-your-enterprise/getting-started-with-self-hosted-runners-for-your-enterprise) {% else %} To set up a self-hosted runner in your workspace, see [AUTOTITLE](/actions/how-tos/managing-self-hosted-runners/adding-self-hosted-runners). {% endif %} To find information about the requirements and supported software and hardware for self-hosted runners, see [AUTOTITLE](/actions/reference/self-hosted-runners-reference). | https://github.com/github/docs/blob/main//content/actions/concepts/runners/self-hosted-runners.md | main | github-actions | [
...] | 0.196913 |
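To route a job to a self-hosted runner at any of the levels above, the workflow's `runs-on` key uses the `self-hosted` label, optionally narrowed with additional labels (the extra labels shown are the default OS and architecture labels):

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - run: uname -a
```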
## About runner scale sets A runner scale set is a group of homogeneous runners that can be assigned jobs from {% data variables.product.prodname\_actions %}. The number of active runners owned by a runner scale set can be controlled by auto-scaling runner solutions such as {% data variables.product.prodname\_actions\_runner\_controller %} (ARC). You can use runner groups to manage runner scale sets. Similar to self-hosted runners, you can add runner scale sets to existing runner groups. However, runner scale sets can belong to only one runner group at a time and can only have one label assigned to them. To assign jobs to a runner scale set, you must configure your workflow to reference the runner scale set’s name. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/using-actions-runner-controller-runners-in-a-workflow). ## Legal notice {% data reusables.actions.actions-runner-controller-legal-notice %} ## Next steps \* For more information about the {% data variables.product.prodname\_actions\_runner\_controller %} as a concept, see [AUTOTITLE](/actions/concepts/runners/about-actions-runner-controller). \* To learn about runner groups, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups). | https://github.com/github/docs/blob/main//content/actions/concepts/runners/runner-scale-sets.md | main | github-actions | [
...] | 0.082402 |
{% data reusables.actions.enterprise-github-hosted-runners %} ## About {% data variables.product.prodname\_dotcom %}-hosted runners networking {% data reusables.actions.about-private-networking-github-hosted-runners %} There are a few different approaches you could take to configure this access, each with different advantages and disadvantages. ## Using an API Gateway with OIDC {% data reusables.actions.private-networking-oidc-intro %} For more information, see [AUTOTITLE](/actions/using-github-hosted-runners/connecting-to-a-private-network/using-an-api-gateway-with-oidc). ## Using WireGuard to create a network overlay {% data reusables.actions.private-networking-wireguard-intro %} For more information, see [AUTOTITLE](/actions/using-github-hosted-runners/connecting-to-a-private-network/using-wireguard-to-create-a-network-overlay). {% ifversion actions-private-networking-azure-vnet %} ## Using an Azure Virtual Network (VNET) {% data reusables.actions.azure-vnet-network-configuration-intro %} {% ifversion fpt %} Organization owners using the {% data variables.product.prodname\_team %} plan can configure Azure private networking for {% data variables.product.company\_short %}-hosted runners at the organization level. For more information, see [AUTOTITLE](/organizations/managing-organization-settings/about-azure-private-networking-for-github-hosted-runners-in-your-organization). {% endif %} {% ifversion ghec %} Enterprises and organizations on {% data variables.product.prodname\_ghe\_cloud %} or {% data variables.product.prodname\_team %} plans can configure Azure private networking for {% data variables.product.company\_short %}-hosted runners. 
For more information, see [AUTOTITLE](/enterprise-cloud@latest/admin/configuration/configuring-private-networking-for-hosted-compute-products/about-azure-private-networking-for-github-hosted-runners-in-your-enterprise) and [AUTOTITLE](/admin/configuration/configuring-private-networking-for-hosted-compute-products/configuring-private-networking-for-github-hosted-runners-in-your-enterprise#enabling-creation-of-network-configurations-for-organizations). {% endif %} {% endif %} | https://github.com/github/docs/blob/main//content/actions/concepts/runners/private-networking.md | main | github-actions | [
{% data reusables.actions.enterprise-github-hosted-runners %}

## Overview of {% data variables.product.prodname_dotcom %}-hosted runners

Runners are the machines that execute jobs in a {% data variables.product.prodname_actions %} workflow. For example, a runner can clone your repository locally, install testing software, and then run commands that evaluate your code.

{% data variables.product.prodname_dotcom %} provides runners that you can use to run your jobs, or you can [host your own runners](/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners). {% data reusables.actions.single-cpu-runners %}

Each runner comes with the runner application and other tools preinstalled. {% data variables.product.prodname_dotcom %}-hosted runners are available with Ubuntu Linux, Windows, or macOS operating systems. When you use a {% data variables.product.prodname_dotcom %}-hosted runner, machine maintenance and upgrades are taken care of for you.

{% ifversion not ghes %}

You can choose one of the standard {% data variables.product.prodname_dotcom %}-hosted runner options or, if you are on the {% data variables.product.prodname_team %} or {% data variables.product.prodname_ghe_cloud %} plan, you can provision a runner with more cores, or a runner that's powered by a GPU processor. These machines are referred to as "{% data variables.actions.hosted_runner %}." For more information, see [AUTOTITLE](/enterprise-cloud@latest/actions/using-github-hosted-runners/about-larger-runners/about-larger-runners).

{% data variables.actions.hosted_runners_caps %} also support custom images, which let you create and manage your own preconfigured VM images. For more information, see [Custom images](#custom-images).

Using {% data variables.product.prodname_dotcom %}-hosted runners requires network access with at least 70 kilobits per second upload and download speeds.

{% endif %}

{% ifversion github-hosted-runners-emus-entitlements %}

> [!NOTE]
> {% data reusables.actions.entitlement-minutes-emus %} For more information, see [AUTOTITLE](/admin/identity-and-access-management/using-enterprise-managed-users-for-iam/about-enterprise-managed-users).

{% endif %}

{% ifversion not ghes %}

## Runner images

{% data variables.product.github %} maintains our own set of VM images for our standard hosted runners, including images for macOS, x64 Linux, and Windows. The list of images and their included tools is managed in the [`actions/runner-images`](https://github.com/actions/runner-images) repository. Our arm64 images are partner images, and those are managed in the [`actions/partner-runner-images`](https://github.com/actions/partner-runner-images) repository.

### Preinstalled software for GitHub-owned images

The software tools included in our GitHub-owned images are updated weekly. The update process takes several days, and the list of preinstalled software on the `main` branch is updated after the whole deployment ends.

Workflow logs include a link to the preinstalled tools on the exact runner. To find this information in the workflow log, expand the `Set up job` section. Under that section, expand the `Runner Image` section. The link following `Included Software` describes the preinstalled tools on the runner that ran the workflow. For more information, see [AUTOTITLE](/actions/monitoring-and-troubleshooting-workflows/viewing-workflow-run-history).

{% data variables.product.prodname_dotcom %}-hosted runners include the operating system's default built-in tools, in addition to the packages listed in the above references. For example, Ubuntu and macOS runners include `grep`, `find`, and `which`, among other default tools.

{% ifversion actions-sbom %}

You can also view a software bill of materials (SBOM) for each build of the Windows and Ubuntu runner images. For more information, see [AUTOTITLE](/actions/security-guides/security-hardening-for-github-actions#reviewing-the-supply-chain-for-github-hosted-runners).

{% endif %}

We recommend using actions to interact with the software installed on runners. This approach has several benefits:

* Actions usually provide more flexible functionality, such as version selection and the ability to pass arguments and parameters.
* It ensures the tool versions used in your workflow remain the same regardless of software updates.

If there is a tool that you'd like to request, please open an issue at [actions/runner-images](https://github.com/actions/runner-images). This repository also contains announcements about all major software updates on runners.
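For instance, rather than relying on whichever Node.js version happens to be preinstalled on the image that week, a workflow can pin a version with the `actions/setup-node` action. This is a minimal sketch; the version number is an illustrative choice.

```yaml
steps:
  - uses: {% data reusables.actions.action-checkout %}
  # Pin the tool version instead of relying on the preinstalled default,
  # so weekly image updates don't silently change your build environment.
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
  - run: node --version
```

The same pattern applies to other toolchains (`actions/setup-python`, `actions/setup-java`, and so on), each of which accepts a version input.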
> [!NOTE]
> * You can also install additional software on {% data variables.product.prodname_dotcom %}-hosted runners. See [AUTOTITLE](/actions/using-github-hosted-runners/customizing-github-hosted-runners).
> * While nested virtualization is technically possible on runners, it is not officially supported. Any use of nested VMs is experimental and done at your own risk; we offer no guarantees regarding stability, performance, or compatibility.

### Custom images

Custom images let you start with a {% data variables.product.github %}-provided base image and build your own VM image that's customized to your workflow needs. With custom images, you can:

* Build custom VM images using existing workflow YAML syntax.
* Pre-configure environments with approved tooling, security patches, and dependencies before workflows start.
* Create consistent, validated base environments across all builds.

Custom images can include repository code, container images, binaries, certificates, and other dependencies to create a consistent build environment across workflows. This helps you gain control over your supply chain. Custom images also reduce setup time, improve build performance, and strengthen security by reducing the attack surface of your images. Administrators can apply policies to manage image versions, retention, and age to meet organizational security and compliance requirements.

Custom images can only be used with larger runners and are billed at the same per-minute rates as those runners. Storage for custom images is billed and metered through {% data variables.product.prodname_actions %} storage. For more information about billing, see [AUTOTITLE](/billing/concepts/product-billing/github-actions).

To get started with custom images, see [AUTOTITLE](/actions/how-tos/manage-runners/larger-runners/use-custom-images).

## Cloud hosts used by {% data variables.product.prodname_dotcom %}-hosted runners

{% data variables.product.prodname_dotcom %} hosts Linux and Windows runners on virtual machines in Microsoft Azure with the {% data variables.product.prodname_actions %} runner application installed. The {% data variables.product.prodname_dotcom %}-hosted runner application is a fork of the Azure Pipelines Agent. Inbound ICMP packets are blocked for all Azure virtual machines, so ping or traceroute commands might not work. {% data variables.product.prodname_dotcom %} hosts macOS runners in Azure data centers.

## Workflow continuity

{% data reusables.actions.runner-workflow-continuity %}

In addition, if the workflow run has been successfully queued, but has not been processed by a {% data variables.product.prodname_dotcom %}-hosted runner within 45 minutes, then the queued workflow run is discarded.

## The `etc/hosts` file

{% data reusables.actions.runners-etc-hosts-file %}

{% endif %}
## Overview

The Actions Runner Controller (ARC) project [was adopted by GitHub](https://github.com/actions/actions-runner-controller/discussions/2072) to release as a new GitHub product. As a result, there are currently two ARC releases: the legacy community-maintained ARC and GitHub's Autoscaling Runner Sets. GitHub only supports the latest Autoscaling Runner Sets version of ARC. Support for the legacy ARC is provided by the community in the [Actions Runner Controller](https://github.com/actions/actions-runner-controller) repository only.

## Scope of support for Actions Runner Controller

To ensure a smooth adoption of Actions Runner Controller, we recommend that organizations have a Kubernetes expert on staff. Many aspects of ARC installation, including container orchestration, networking, policy application, and integration with managed Kubernetes providers, fall outside GitHub Support's scope and require in-depth Kubernetes knowledge. If your support request is outside of the scope of what our team can help you with, we may recommend next steps to resolve your issue outside of {% data variables.contact.github_support %}.

Your support request is out of {% data variables.contact.github_support %}'s scope if the request is primarily about:

* The legacy community-maintained version of ARC
* Installing, configuring, or maintaining dependencies
* Template spec customization
* Container orchestration, such as Kubernetes setup, networking, building images in ARC (DinD), etc.
* Applying Kubernetes policies
* Managed Kubernetes providers or provider-specific configurations
* [Runner Container Hooks](https://github.com/actions/runner-container-hooks) in conjunction with ARC's `kubernetes` mode
* Installation tooling other than Helm
* Storage provisioners and PersistentVolumeClaims (PVCs)
* Best practices, such as configuring metrics servers, image caching, etc.

While ARC may be deployed successfully with different tooling and configurations, your support request may be out of {% data variables.contact.github_support %}'s scope if ARC has been deployed with:

* Installation tooling other than Helm
* Service account and/or template spec customization

For more information about contacting {% data variables.contact.github_support %}, see [AUTOTITLE](/support/contacting-github-support).

> [!NOTE]
> * OpenShift clusters are in public preview. See guidance from [Red Hat](https://developers.redhat.com/articles/2025/02/17/how-securely-deploy-github-arc-openshift#arc_architecture) for configuration recommendations.
> * ARC is only supported on GitHub Enterprise Server versions 3.9 and greater.

## Working with {% data variables.contact.github_support %} for Actions Runner Controller

{% data variables.contact.github_support %} may ask questions about your Actions Runner Controller deployment and request that you collect and attach [controller logs, listener logs](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors#checking-the-logs-of-the-controller-and-runner-set-listener), runner logs, and Helm charts (`values.yaml`) to the support ticket.
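Since Helm with a `values.yaml` file is the supported installation path, a minimal `values.yaml` for an Autoscaling Runner Sets deployment might look like the sketch below. The repository URL, secret name, and runner counts are placeholders; only the field names (`githubConfigUrl`, `githubConfigSecret`, `minRunners`, `maxRunners`) follow the `gha-runner-scale-set` chart.

```yaml
# values.yaml for a gha-runner-scale-set Helm release (a minimal sketch;
# the URL and secret name are placeholders for your own configuration).
githubConfigUrl: https://github.com/myorg/myrepo
githubConfigSecret: pre-defined-secret   # Kubernetes secret with a PAT or GitHub App credentials
minRunners: 0
maxRunners: 5
```

Keeping this file in version control also makes it easy to attach to a support ticket when {% data variables.contact.github_support %} requests it.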
## Prerequisites

Before you can complete this tutorial, you need to understand workflow artifacts. See [AUTOTITLE](/actions/concepts/workflows-and-actions/workflow-artifacts).

## Uploading build and test artifacts

The output of building and testing your code often produces files you can use to debug test failures and production code that you can deploy. You can configure a workflow to build and test the code pushed to your repository and report a success or failure status. You can upload the build and test output to use for deployments, debugging failed tests or crashes, and viewing test suite coverage.

You can use the `upload-artifact` action to upload artifacts. When uploading an artifact, you can specify a single file or directory, or multiple files or directories. You can also exclude certain files or directories, and use wildcard patterns. We recommend that you provide a name for an artifact, but if no name is provided then `artifact` will be used as the default name. For more information on syntax, see the {% ifversion fpt or ghec %}[actions/upload-artifact](https://github.com/actions/upload-artifact) action{% else %} `actions/upload-artifact` action on {% data variables.product.prodname_ghe_server %}{% endif %}.

### Example

For example, your repository or a web application might contain SASS and TypeScript files that you must convert to CSS and JavaScript. Assuming your build configuration outputs the compiled files in the `dist` directory, you would deploy the files in the `dist` directory to your web application server if all tests completed successfully.

```text
|-- hello-world (repository)
|   └── dist
|   └── tests
|   └── src
|       └── sass/app.scss
|       └── app.ts
|   └── output
|       └── test
```

This example shows you how to create a workflow for a Node.js project that builds the code in the `src` directory and runs the tests in the `tests` directory. You can assume that running `npm test` produces a code coverage report named `code-coverage.html` stored in the `output/test/` directory.

The workflow uploads the production artifacts in the `dist` directory, but excludes any markdown files. It also uploads the `code-coverage.html` report as another artifact.

```yaml copy
name: Node CI

on: [push]

jobs:
  build_and_test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: {% data reusables.actions.action-checkout %}
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test
      - name: Archive production artifacts
        uses: {% data reusables.actions.action-upload-artifact %}
        with:
          name: dist-without-markdown
          path: |
            dist
            !dist/**/*.md
      - name: Archive code coverage results
        uses: {% data reusables.actions.action-upload-artifact %}
        with:
          name: code-coverage-report
          path: output/test/code-coverage.html
```

## Configuring a custom artifact retention period

You can define a custom retention period for individual artifacts created by a workflow. When using a workflow to create a new artifact, you can use `retention-days` with the `upload-artifact` action. This example demonstrates how to set a custom retention period of 5 days for the artifact named `my-artifact`:

```yaml copy
- name: 'Upload Artifact'
  uses: {% data reusables.actions.action-upload-artifact %}
  with:
    name: my-artifact
    path: my_file.txt
    retention-days: 5
```

The `retention-days` value cannot exceed the retention limit set by the repository, organization, or enterprise.

## Downloading artifacts during a workflow run

You can use the [`actions/download-artifact`](https://github.com/actions/download-artifact) action to download previously uploaded artifacts during a workflow run.

> [!NOTE]
> {% ifversion fpt or ghec %}If you want to download artifacts from a different workflow or workflow run, you need to supply a token and run identifier. See [Download Artifacts from other Workflow Runs or Repositories](https://github.com/actions/download-artifact?tab=readme-ov-file#download-artifacts-from-other-workflow-runs-or-repositories) in the documentation for the `download-artifact` action.{% elsif ghes %}You can only download artifacts in a workflow that were uploaded during the same workflow run.{% endif %}
Specify an artifact's name to download an individual artifact. If you uploaded an artifact without specifying a name, the default name is `artifact`.

```yaml
- name: Download a single artifact
  uses: {% data reusables.actions.action-download-artifact %}
  with:
    name: my-artifact
```

You can also download all artifacts in a workflow run by not specifying a name. This can be useful if you are working with lots of artifacts.

```yaml
- name: Download all workflow run artifacts
  uses: {% data reusables.actions.action-download-artifact %}
```

If you download all of a workflow run's artifacts, a directory for each artifact is created using its name. For more information on syntax, see the {% ifversion fpt or ghec %}[actions/download-artifact](https://github.com/actions/download-artifact) action{% else %} `actions/download-artifact` action on {% data variables.product.prodname_ghe_server %}{% endif %}.

## Passing data between jobs in a workflow

You can use the `upload-artifact` and `download-artifact` actions to share data between jobs in a workflow. This example workflow illustrates how to pass data between jobs in the same workflow. For more information, see the {% ifversion fpt or ghec %}[actions/upload-artifact](https://github.com/actions/upload-artifact) and [download-artifact](https://github.com/actions/download-artifact) actions{% else %} `actions/upload-artifact` and `download-artifact` actions on {% data variables.product.prodname_ghe_server %}{% endif %}.

Jobs that are dependent on a previous job's artifacts must wait for the dependent job to complete successfully. This workflow uses the `needs` keyword to ensure that `job_1`, `job_2`, and `job_3` run sequentially. For example, `job_2` requires `job_1` using the `needs: job_1` syntax.

Job 1 performs these steps:

* Performs a math calculation and saves the result to a text file called `math-homework.txt`.
* Uses the `upload-artifact` action to upload the `math-homework.txt` file with the artifact name {% ifversion artifacts-v3-deprecation %}`homework_pre`{% else %}`homework`{% endif %}.

Job 2 uses the result in the previous job:

* Downloads the {% ifversion artifacts-v3-deprecation %}`homework_pre`{% else %}`homework`{% endif %} artifact uploaded in the previous job. By default, the `download-artifact` action downloads artifacts to the workspace directory that the step is executing in. You can use the `path` input parameter to specify a different download directory.
* Reads the value in the `math-homework.txt` file, performs a math calculation, and saves the result to `math-homework.txt` again, overwriting its contents.
* Uploads the `math-homework.txt` file. {% ifversion artifacts-v3-deprecation %}Because artifacts are immutable in `v4`, the second upload uses a different artifact name, `homework_final`.{% else %}This upload overwrites the previously uploaded artifact because they share the same name.{% endif %}

Job 3 displays the result uploaded in the previous job:

* Downloads the {% ifversion artifacts-v3-deprecation %}`homework_final` artifact from Job 2.{% else %}`homework` artifact.{% endif %}
* Prints the result of the math equation to the log.

The full math operation performed in this workflow example is `(3 + 7) x 9 = 90`.
```yaml copy
name: Share data between jobs

on: [push]

jobs:
  job_1:
    name: Add 3 and 7
    runs-on: ubuntu-latest
    steps:
      - shell: bash
        run: |
          expr 3 + 7 > math-homework.txt
      - name: Upload math result for job 1
        uses: {% data reusables.actions.action-upload-artifact %}
        with:
          name: {% ifversion artifacts-v3-deprecation %}homework_pre{% else %}homework{% endif %}
          path: math-homework.txt

  job_2:
    name: Multiply by 9
    needs: job_1
    runs-on: windows-latest
    steps:
      - name: Download math result for job 1
        uses: {% data reusables.actions.action-download-artifact %}
        with:
          name: {% ifversion artifacts-v3-deprecation %}homework_pre{% else %}homework{% endif %}
      - shell: bash
        run: |
          value=`cat math-homework.txt`
          expr $value \* 9 > math-homework.txt
      - name: Upload math result for job 2
        uses: {% data reusables.actions.action-upload-artifact %}
        with:
          name: {% ifversion artifacts-v3-deprecation %}homework_final{% else %}homework{% endif %}
          path: math-homework.txt

  job_3:
    name: Display results
    needs: job_2
    runs-on: macOS-latest
    steps:
      - name: Download math result for job 2
        uses: {% data reusables.actions.action-download-artifact %}
        with:
          name: {% ifversion artifacts-v3-deprecation %}homework_final{% else %}homework{% endif %}
      - name: Print the final result
        shell: bash
        run: |
          value=`cat math-homework.txt`
          echo The result is $value
```

The workflow run will archive any artifacts that it generated. For more information on downloading archived artifacts, see [AUTOTITLE](/actions/managing-workflow-runs/downloading-workflow-artifacts).

{% ifversion fpt or ghec %}

## Validating artifacts

Every time the upload-artifact action is used, it returns an output called `digest`. This is a SHA256 digest of the artifact you uploaded during a workflow run. When the download-artifact action is then used to download that artifact, it automatically calculates the digest of the downloaded artifact and validates that it matches the output from the upload-artifact step. If the digest does not match, the run displays a warning in the UI and in the job logs.

To view the SHA256 digest, open the logs for the upload-artifact job, or check the artifact output that appears in the workflow run UI.

{% endif %}
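If you want to record the digest yourself, you can read it from the upload step's outputs. This is a sketch; the step `id` and names are illustrative, and the `digest` output is the one described in the section above.

```yaml
- name: Upload build
  id: upload
  uses: {% data reusables.actions.action-upload-artifact %}
  with:
    name: build
    path: dist/
# Record the digest so it can be compared against the downloaded artifact later.
- name: Record artifact digest
  run: echo "SHA256 digest is ${{ steps.upload.outputs.digest }}"
```

Storing the digest alongside release metadata gives you an independent way to verify the artifact outside of the automatic check.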
You can run workflows on {% data variables.product.github %}-hosted or self-hosted runners, or use a mixture of runner types. This tutorial shows you how to assess your current use of runners, then migrate workflows from self-hosted runners to {% data variables.product.github %}-hosted runners efficiently.

## 1. Assess your current CI infrastructure

Migrating from self-hosted runners to {% data variables.product.github %}-hosted larger runners begins with a thorough assessment of your current CI infrastructure. If you take the time to match specifications and environments carefully, you will minimize the time spent fixing problems when you start running workflows on different runners.

1. Create an inventory of each machine specification used to run workflows, including CPU cores, RAM, storage, chip architecture, and operating system.
1. Note if any of the runners are part of a runner group or have a label. You can use this information to simplify migration of workflows to new runners.
1. Document any custom images and pre-installed dependencies that workflows rely on, as these will influence your migration strategy.
1. Identify which workflows currently target self-hosted runners, and why. For example, in {% data variables.product.prodname_actions %} usage metrics, use the **Jobs** tab and filter by runner label (such as `self-hosted` or a custom label) to see which repositories and jobs are using that label. If you need to validate specific workflow files, you can also use code search to find workflow files that reference `runs-on: self-hosted` or other self-hosted labels.
1. Identify workflows that access private network resources (for example, internal package registries, private APIs, databases, or on-premises services), since these may require additional networking configuration.

## 2. Map your processing requirements to {% data variables.product.github %}-hosted runner types

{% data variables.product.github %} offers managed runners in multiple operating systems—Linux, Windows, and macOS—with options for GPU-enabled machines. See [AUTOTITLE](/actions/reference/runners/larger-runners).

1. Map each distinct machine specification in your inventory to a suitable {% data variables.product.github %}-hosted runner specification.
1. Make a note of any self-hosted runners where there is no suitable {% data variables.product.github %}-hosted runner.
1. Exclude any workflows that must continue to run on self-hosted runners from your migration plans.

## 3. Estimate capacity requirements

Before you provision {% data variables.product.github %}-hosted runners, estimate how much compute capacity your workflows will need. Reviewing your current self-hosted runner usage helps you choose appropriate runner sizes, set concurrency limits, and forecast potential cost changes.

{% data reusables.profile.access_org %} {% data reusables.user-settings.access_org %} {% data reusables.organizations.insights %}
1. In the "Insights" navigation menu, click **Actions Usage Metrics**.
1. Click on the tab that contains the metrics you would like to view. See [AUTOTITLE](/actions/concepts/metrics).
1. Review the following data points to estimate hosted runner capacity:
   * **Total minutes consumed**: Helps you estimate baseline compute demand.
   * **Number of workflow runs**: Identifies peak activity times that may require more concurrency.
   * **Job distribution across OS types**: Ensures you provision the right mix of Linux, Windows, and macOS runners.
   * **Runner labels (Jobs tab)**: Filter by a runner label to understand where a label is used.
1. Convert your findings into a capacity plan:
   * Match high-usage workflows to larger runner sizes where appropriate.
   * Identify workflows that may benefit from pre-built or custom images to reduce runtime.
   * Estimate concurrency by determining how many jobs typically run simultaneously.
any gaps: \* Workflows with hard dependencies your current hosted runner images do not support. \* Jobs with unusually long runtimes or bespoke environment needs. (You may need custom images for these.) Your capacity plan will guide how many runners to provision, which machine types to use, and how to configure runner groups and policies in the next steps. ## 4. Configure runner groups and policies After estimating your capacity needs, configure runner groups and access policies so your {% data variables.product.github %}-hosted runners are available to the right organizations and workflows. Configuring runner groups before provisioning runners helps ensure that migration doesn’t accidentally open access too broadly or create unexpected cost increases. 1. Create runner groups at the enterprise level to define who can use your hosted runners. See [AUTOTITLE](/enterprise-cloud@latest/actions/how-tos/manage-runners/larger-runners/control-access#creating-a-runner-group-for-an-enterprise). Use runner groups to scope access by organization, repository, or workflow. If you are migrating from self-hosted runners, consider reusing existing runner group names or labels where possible. This allows workflows to continue working without changes when you switch to {% data variables.product.github %}-hosted runners. 1. Add new {% data variables.product.github %}-hosted runners to the appropriate group and set concurrency limits based on the usage patterns you identified in step 3. For details on automatic scaling, see [AUTOTITLE](/actions/how-tos/manage-runners/larger-runners/manage-larger-runners#configuring-autoscaling-for-larger-runners). 1. Review policy settings to ensure runners are only used by the intended workflows. For example, restricting use to specific repositories or preventing untrusted workflows from accessing more powerful machine types. >[!NOTE] macOS larger runners cannot be added to runner groups and must be referenced directly in your workflow files. ## 5. 
Set up {% data variables.product.github %}-hosted runners Next, provision your {% data variables.product.github %}-hosted runners based on the machine types and capacity you identified earlier. 1. Choose the machine size and operating system that match your workflow requirements. For available images and specifications, see [AUTOTITLE](/actions/reference/runners/larger-runners#runner-images). 1. Assign each runner to a runner group and configure concurrency limits to control how many jobs can run at the same time. 1. Select an image type: \* Use {% data variables.product.github %}-managed images for a maintained, frequently updated environment. \* Use custom images when you need pre-installed dependencies to reduce setup time. See [AUTOTITLE](/actions/how-tos/manage-runners/larger-runners/use-custom-images). 1. Apply any required customizations, such as environment variables, software installation, or startup scripts. For more examples, see [AUTOTITLE](/actions/how-tos/manage-runners/github-hosted-runners/customize-runners). 1. Optionally, configure private networking if runners must access internal resources. See [AUTOTITLE](/enterprise-cloud@latest/actions/concepts/runners/private-networking). ### Configure private connectivity options If your workflows need access to private resources (for example, internal package registries, private APIs, databases, or on-premises services), choose an approach that fits your network and security requirements. #### Configure Azure Private Networking Run {% data variables.product.github %}-hosted runners inside an Azure Virtual Network (VNET) for secure access to internal resources. 1. Create an Azure Virtual Network (VNET) and configure subnets and network security groups for your runners. 1. Enable Azure private networking for your runner group. 
   See [AUTOTITLE](/admin/configuring-settings/configuring-private-networking-for-hosted-compute-products/configuring-private-networking-for-github-hosted-runners-in-your-enterprise#1-add-a-new-network-configuration-for-your-enterprise).
1. Apply network configuration, such as NSGs and firewall rules, to control ingress and egress traffic.
1. Update workflow targeting to use the runner group that is configured for private networking.

For detailed instructions, see:

* [AUTOTITLE](/organizations/managing-organization-settings/configuring-private-networking-for-github-hosted-runners-in-your-organization)
* [AUTOTITLE](/admin/configuring-settings/configuring-private-networking-for-hosted-compute-products/configuring-private-networking-for-github-hosted-runners-in-your-enterprise)

#### Connect using a WireGuard overlay network

If Azure private networking is not applicable (for example, because your target network is on-premises or in another cloud), you can use a VPN overlay such as WireGuard to provide network-level access to private resources. For detailed instructions and examples, see [AUTOTITLE](/actions/how-tos/manage-runners/github-hosted-runners/connect-to-a-private-network/connect-with-wireguard).

#### Use OIDC with an API gateway for trusted access to private resources

If you don’t need the runner to join your private network, you can use OIDC to establish trusted, short-lived access to a service you expose via an API gateway. This approach can reduce the need for long-lived secrets and limits network access to the specific endpoints your workflow needs. For detailed instructions and examples, see [AUTOTITLE](/actions/how-tos/manage-runners/github-hosted-runners/connect-to-a-private-network/connect-with-oidc).

## 6. Update workflows to use the new runners

After your {% data variables.product.github %}-hosted runners are configured, update your workflow files to target them.

1. Reuse existing labels if you assigned your new runners to the same runner group names your self-hosted runners used. In this case, workflows will automatically use the new runners without changes.
1. If you created new runner groups or labels, update the `runs-on` field in your workflow YAML files. For example:

   ```yaml
   jobs:
     build:
       runs-on: [github-larger-runner, linux-x64]
       steps:
         - name: Checkout code
           uses: {% data reusables.actions.action-checkout %}
         - name: Build project
           run: make build
   ```

1. Check for hard-coded references to self-hosted labels (such as `self-hosted`, `linux-x64`, or custom labels) and replace them with the appropriate {% data variables.product.github %}-hosted runner labels.
1. Test each updated workflow to ensure it runs correctly on the new runners. Monitor for any issues related to environment differences or missing dependencies.

## 7. Remove unused self-hosted runners

After your workflows have been updated and tested on {% data variables.product.github %}-hosted runners, remove any self-hosted runners that are no longer needed.
This prevents jobs from accidentally targeting outdated infrastructure. See [AUTOTITLE](/actions/how-tos/manage-runners/self-hosted-runners/remove-runners).

Before you remove self-hosted runners, verify that you have fully migrated:

* In {% data variables.product.prodname_actions %} usage metrics, use the **Jobs** tab and filter by runner label (for example, `self-hosted` or your custom labels) to confirm no repositories or jobs are still using self-hosted runners.
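In addition to the label-based `runs-on` targeting shown in step 6, workflows can target a runner group by name. This is a minimal sketch, assuming a hypothetical enterprise runner group named `migrated-larger-runners` and a hypothetical machine label:

```yaml
jobs:
  build:
    # Target the runner group directly; the group and label names below are
    # placeholders for whatever you configured in steps 4 and 5.
    runs-on:
      group: migrated-larger-runners
      labels: [ubuntu-latest-16-cores]
    steps:
      - name: Build project
        run: make build
```

Group-based targeting lets you move workflows between machine types later by changing group membership instead of editing every workflow file.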
This tutorial leads you through how to use the `GITHUB_TOKEN` for authentication in {% data variables.product.prodname_actions %} workflows, including examples for passing the token to actions, making API requests, and configuring permissions for secure automation. For reference information, see [AUTOTITLE](/actions/reference/workflow-syntax-for-github-actions#permissions).

## Using the `GITHUB_TOKEN` in a workflow

You can use the `GITHUB_TOKEN` by using the standard syntax for referencing secrets: {% raw %}`${{ secrets.GITHUB_TOKEN }}`{% endraw %}. Examples of using the `GITHUB_TOKEN` include passing the token as an input to an action, or using it to make an authenticated {% data variables.product.github %} API request.

> [!IMPORTANT]
> An action can access the `GITHUB_TOKEN` through the `github.token` context even if the workflow does not explicitly pass the `GITHUB_TOKEN` to the action. As a good security practice, you should always make sure that actions only have the minimum access they require by limiting the permissions granted to the `GITHUB_TOKEN`. For more information, see [AUTOTITLE](/actions/reference/workflow-syntax-for-github-actions#permissions).

### Example 1: passing the `GITHUB_TOKEN` as an input

{% data reusables.actions.github_token-input-example %}

### Example 2: calling the REST API

You can use the `GITHUB_TOKEN` to make authenticated API calls. This example workflow creates an issue using the {% data variables.product.prodname_dotcom %} REST API:

```yaml
name: Create issue on commit

on: [ push ]

jobs:
  create_issue:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Create issue using REST API
        run: |
          curl --request POST \
          --url {% data variables.product.rest_url %}/repos/${% raw %}{{ github.repository }}{% endraw %}/issues \
          --header 'authorization: Bearer ${% raw %}{{ secrets.GITHUB_TOKEN }}{% endraw %}' \
          --header 'content-type: application/json' \
          --data '{
            "title": "Automated issue for commit: ${% raw %}{{ github.sha }}{% endraw %}",
            "body": "This issue was automatically created by the GitHub Action workflow **${% raw %}{{ github.workflow }}{% endraw %}**. \n\n The commit hash was: _${% raw %}{{ github.sha }}{% endraw %}_."
            }' \
          --fail
```

## Modifying the permissions for the `GITHUB_TOKEN`

Use the `permissions` key in your workflow file to modify permissions for the `GITHUB_TOKEN` for an entire workflow or for individual jobs. This allows you to configure the minimum required permissions for a workflow or job. As a good security practice, you should grant the `GITHUB_TOKEN` the least required access.

To see the list of permissions available for use and their parameterized names, see [AUTOTITLE](/actions/reference/workflow-syntax-for-github-actions#permissions). The two workflow examples earlier in this article show the `permissions` key being used at the job level.

## Granting additional permissions

If you need a token that requires permissions that aren't available in the `GITHUB_TOKEN`, create a {% data variables.product.prodname_github_app %} and generate an installation access token within your workflow. For more information, see [AUTOTITLE](/apps/creating-github-apps/guides/making-authenticated-api-requests-with-a-github-app-in-a-github-actions-workflow). Alternatively, you can create a {% data variables.product.pat_generic %}, store it as a secret in your repository, and use the token in your workflow with the {% raw %}`${{ secrets.SECRET_NAME }}`{% endraw %} syntax. For more information, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) and [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

## Next steps

* [AUTOTITLE](/actions/concepts/security/github_token)
* [AUTOTITLE](/actions/reference/workflow-syntax-for-github-actions#permissions)
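As a worked example of the workflow-level form of the `permissions` key discussed above, the sketch below grants the `GITHUB_TOKEN` read-only access to repository contents for every job in the workflow (the workflow name and build command are hypothetical):

```yaml
name: Read-only build

on: [push]

# Workflow-level permissions apply to every job unless a job overrides them
# with its own permissions block.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - run: make build
```

A job-level `permissions` block, as in the examples earlier in this article, always takes precedence over the workflow-level block for that job.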
## Introduction

This guide shows you how to create a workflow that performs a Docker build, and then publishes Docker images to Docker Hub or {% data variables.product.prodname_registry %}. With a single workflow, you can publish images to a single registry or to multiple registries.

> [!NOTE]
> If you want to push to another third-party Docker registry, the example in the [Publishing images to {% data variables.product.prodname_registry %}](#publishing-images-to-github-packages) section can serve as a good template.

## Prerequisites

We recommend that you have a basic understanding of workflow configuration options and how to create a workflow file. For more information, see [AUTOTITLE](/actions/learn-github-actions).

You might also find it helpful to have a basic understanding of the following:

* [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions)
* [AUTOTITLE](/actions/security-guides/automatic-token-authentication){% ifversion fpt or ghec %}
* [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-container-registry){% else %}
* [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-docker-registry){% endif %}

## About image configuration

This guide assumes that you have a complete definition for a Docker image stored in a {% data variables.product.prodname_dotcom %} repository. For example, your repository must contain a _Dockerfile_, and any other files needed to perform a Docker build to create an image.

{% data reusables.package_registry.about-annotation-keys %} For more information, see [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-container-registry#labelling-container-images).

In this guide, we will use the Docker `build-push-action` action to build the Docker image and push it to one or more Docker registries. For more information, see [`build-push-action`](https://github.com/marketplace/actions/build-and-push-docker-images).

{% data reusables.actions.enterprise-marketplace-actions %}

## Publishing images to Docker Hub

{% data reusables.actions.jobs.dockerhub-ratelimit-ghr %}

Each time you create a new release on {% data variables.product.github %}, you can trigger a workflow to publish your image. The workflow in the example below runs when the `release` event triggers with the `published` activity type.

In the example workflow below, we use the Docker `login-action` and `build-push-action` actions to build the Docker image and, if the build succeeds, push the built image to Docker Hub.

To push to Docker Hub, you will need to have a Docker Hub account, and have a Docker Hub repository created. For more information, see [Pushing a Docker container image to Docker Hub](https://docs.docker.com/docker-hub/quickstart/#step-3-build-and-push-an-image-to-docker-hub) in the Docker documentation.

The `login-action` options required for Docker Hub are:

* `username` and `password`: This is your Docker Hub username and password. We recommend storing your Docker Hub username and password as secrets so they aren't exposed in your workflow file. For more information, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

The `metadata-action` option required for Docker Hub is:

* `images`: The namespace and name for the Docker image you are building/pushing to Docker Hub.

The `build-push-action` options required for Docker Hub are:

* `tags`: The tag of your new image in the format `DOCKER-HUB-NAMESPACE/DOCKER-HUB-REPOSITORY:VERSION`. You can set a single tag as shown below, or specify multiple tags in a list.
* `push`: If set to `true`, the image will be pushed to the registry if it is built successfully.
```yaml copy
{% data reusables.actions.actions-not-certified-by-github-comment %}
{% data reusables.actions.actions-use-sha-pinning-comment %}
name: Publish Docker image

on:
  release:
    types: [published]

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: {% ifversion ghes %}[self-hosted]{% else %}ubuntu-latest{% endif %}
    permissions:
      packages: write
      contents: read
      {% ifversion artifact-attestations %}attestations: write{% endif %}
      {% ifversion artifact-attestations %}id-token: write{% endif %}
    steps:
      - name: Check out the repo
        uses: {% data reusables.actions.action-checkout %}

      - name: Log in to Docker Hub
        uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
        with:
          username: {% raw %}${{ secrets.DOCKER_USERNAME }}{% endraw %}
          password: {% raw %}${{ secrets.DOCKER_PASSWORD }}{% endraw %}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: my-docker-hub-namespace/my-docker-hub-repository

      - name: Build and push Docker image
        id: push
        uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: {% raw %}${{ steps.meta.outputs.tags }}{% endraw %}
          labels: {% raw %}${{ steps.meta.outputs.labels }}{% endraw %}
{% ifversion artifact-attestations %}
      - name: Generate artifact attestation
        uses: actions/attest-build-provenance@v3
        with:
          subject-name: index.docker.io/my-docker-hub-namespace/my-docker-hub-repository
          subject-digest: {% raw %}${{ steps.push.outputs.digest }}{% endraw %}
          push-to-registry: true
{% endif -%}
```

The above workflow checks out the {% data variables.product.prodname_dotcom %} repository, uses the `login-action` to log in to the registry, and then uses the `build-push-action` action to: build a Docker image based on your repository's `Dockerfile`; push the image to Docker Hub; and apply a tag to the image. {% ifversion artifact-attestations %}{% data reusables.actions.artifact-attestations-step-explanation %}{% endif %}

## Publishing images to {% data variables.product.prodname_registry %}

{% ifversion ghes %}
{% data reusables.package_registry.container-registry-ghes-beta %}
{% endif %}

Each time you create a new release on {% data variables.product.github %}, you can trigger a workflow to publish your image. The workflow in the example below runs when a change is pushed to the `release` branch.

In the example workflow below, we use the Docker `login-action`{% ifversion fpt or ghec %}, `metadata-action`,{% endif %} and `build-push-action` actions to build the Docker image, and if the build succeeds, push the built image to {% data variables.product.prodname_registry %}.
The `login-action` options required for {% data variables.product.prodname_registry %} are:

* `registry`: Must be set to {% ifversion fpt or ghec %}`ghcr.io`{% elsif ghes %}`{% data reusables.package_registry.container-registry-hostname %}`{% endif %}.
* `username`: You can use the {% raw %}`${{ github.actor }}`{% endraw %} context to automatically use the username of the user that triggered the workflow run. For more information, see [AUTOTITLE](/actions/learn-github-actions/contexts#github-context).
* `password`: You can use the automatically generated `GITHUB_TOKEN` secret for the password. For more information, see [AUTOTITLE](/actions/security-guides/automatic-token-authentication).

{% ifversion fpt or ghec %}
The `metadata-action` option required for {% data variables.product.prodname_registry %} is:

* `images`: The namespace and name for the Docker image you are building.
{% endif %}

The `build-push-action` options required for {% data variables.product.prodname_registry %} are:{% ifversion fpt or ghec %}

* `context`: Defines the build's context as the set of files located in the specified path.{% endif %}
* `push`: If set to `true`, the image will be pushed to the registry if it is built successfully.{% ifversion fpt or ghec %}
* `tags` and `labels`: These are populated by output from `metadata-action`.{% else %}
* `tags`: Must be set in the format `{% data reusables.package_registry.container-registry-hostname %}/OWNER/REPOSITORY/IMAGE_NAME:VERSION`. For example, for an image named `octo-image` stored on {% data variables.product.prodname_ghe_server %} at `https://HOSTNAME/octo-org/octo-repo`, the `tags` option should be set to `{% data reusables.package_registry.container-registry-hostname %}/octo-org/octo-repo/octo-image:latest`.
  You can set a single tag as shown below, or specify multiple tags in a list.{% endif %}

{% data reusables.package_registry.publish-docker-image %}

The above workflow is triggered by a push to the "release" branch. It checks out the GitHub repository, and uses the `login-action` to log in to the {% data variables.product.prodname_container_registry %}. It then extracts labels and tags for the Docker image. Finally, it uses the `build-push-action` action to build the image and publish it on the {% data variables.product.prodname_container_registry %}.

## Publishing images to Docker Hub and {% data variables.product.prodname_registry %}

{% ifversion ghes %}
{% data reusables.package_registry.container-registry-ghes-beta %}
{% endif %}

In a single workflow, you can publish your Docker image to multiple registries by using the `login-action` and `build-push-action` actions for each registry.

The following example workflow uses the steps from the previous sections ([Publishing images to Docker Hub](#publishing-images-to-docker-hub) and [Publishing images to {% data variables.product.prodname_registry %}](#publishing-images-to-github-packages)) to create a single workflow that pushes to both registries.

```yaml copy
{% data reusables.actions.actions-not-certified-by-github-comment %}
{% data reusables.actions.actions-use-sha-pinning-comment %}
name: Publish Docker image

on:
  release:
    types: [published]

jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: {% ifversion ghes %}[self-hosted]{% else %}ubuntu-latest{% endif %}
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: {% data reusables.actions.action-checkout %}

      - name: Log in to Docker Hub
        uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
        with:
          username: {% raw %}${{ secrets.DOCKER_USERNAME }}{% endraw %}
          password: {% raw %}${{ secrets.DOCKER_PASSWORD }}{% endraw %}

      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: {% ifversion fpt or ghec %}ghcr.io{% elsif ghes %}{% data reusables.package_registry.container-registry-hostname %}{% endif %}
          username: {% raw %}${{ github.actor }}{% endraw %}
          password: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: |
            my-docker-hub-namespace/my-docker-hub-repository
            {% data reusables.package_registry.container-registry-hostname %}/{% raw %}${{ github.repository }}{% endraw %}

      - name: Build and push Docker images
        id: push
        uses: docker/build-push-action@3b5e8027fcad23fda98b2e3ac259d8d67585f671
        with:
          context: .
          push: true
          tags: {% raw %}${{ steps.meta.outputs.tags }}{% endraw %}
          labels: {% raw %}${{ steps.meta.outputs.labels }}{% endraw %}
```

The above workflow checks out the {% data variables.product.github %} repository, uses the `login-action` twice to log in to both registries, and generates tags and labels with the `metadata-action` action. Then the `build-push-action` action builds and pushes the Docker image to Docker Hub and the {% data variables.product.prodname_container_registry %}.

{% ifversion artifact-attestations %}
> [!NOTE]
> When pushing to multiple registries:
>
> * Image digests may differ between registries, making attestation verification difficult.
> * To maintain a consistent digest and allow a single attestation to verify all copies, push to one registry first and use a tool like [`crane copy`](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane_copy.md) to replicate the image elsewhere.
> * If you choose to build and push to each registry separately instead, you must generate a distinct attestation for each one to ensure your artifacts remain verifiable.
{% endif %}
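The `crane copy` approach described in the note above can be sketched as an extra workflow step. The image names and tag below are placeholders, and this assumes `crane` is available on the runner:

```yaml
      # Replicate the already-pushed image to a second registry. crane copies
      # by manifest, so the digest is preserved and one attestation can verify
      # both copies. Image names and tag are hypothetical.
      - name: Replicate image to Docker Hub
        run: |
          crane copy \
            ghcr.io/my-org/my-image:1.2.3 \
            docker.io/my-docker-hub-namespace/my-docker-hub-repository:1.2.3
```

With this pattern, the workflow builds and attests once, and replication becomes a cheap registry-to-registry copy instead of a second build.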
## Introduction

{% data reusables.actions.publishing-java-packages-intro %}

## Prerequisites

We recommend that you have a basic understanding of workflow files and configuration options. For more information, see [AUTOTITLE](/actions/learn-github-actions).

For more information about creating a CI workflow for your Java project with Gradle, see [AUTOTITLE](/actions/automating-builds-and-tests/building-and-testing-java-with-gradle).

You may also find it helpful to have a basic understanding of the following:

* [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-apache-maven-registry)
* [AUTOTITLE](/actions/learn-github-actions/variables)
* [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions)
* [AUTOTITLE](/actions/security-guides/automatic-token-authentication)

## About package configuration

The `groupId` and `artifactId` fields in the `MavenPublication` section of the _build.gradle_ file create a unique identifier for your package that registries use to link your package to a registry. This is similar to the `groupId` and `artifactId` fields of the Maven _pom.xml_ file. For more information, see the [Maven Publish Plugin](https://docs.gradle.org/current/userguide/publishing_maven.html) in the Gradle documentation.

The _build.gradle_ file also contains configuration for the distribution management repositories that Gradle will publish packages to. Each repository must have a name, a deployment URL, and credentials for authentication.

## Publishing packages to the Maven Central Repository

Each time you create a new release, you can trigger a workflow to publish your package. The workflow in the example below runs when the `release` event triggers with type `created`. The workflow publishes the package to the Maven Central Repository if CI tests pass. For more information on the `release` event, see [AUTOTITLE](/actions/using-workflows/events-that-trigger-workflows#release).

You can define a new Maven repository in the publishing block of your _build.gradle_ file that points to your package repository. For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _build.gradle_ could specify a repository with the name `"OSSRH"`.

{% raw %}

```groovy copy
plugins {
  ...
  id 'maven-publish'
}

publishing {
  ...
  repositories {
    maven {
      name = "OSSRH"
      url = "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
      credentials {
        username = System.getenv("MAVEN_USERNAME")
        password = System.getenv("MAVEN_PASSWORD")
      }
    }
  }
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by running the `gradle publish` command. In the deploy step, you’ll need to set environment variables for the username and password or token that you use to authenticate to the Maven repository. For more information, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

```yaml copy
{% data reusables.actions.actions-not-certified-by-github-comment %}
{% data reusables.actions.actions-use-sha-pinning-comment %}
name: Publish package to the Maven Central Repository

on:
  release:
    types: [created]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Set up Java
        uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Setup Gradle
        uses: gradle/actions/setup-gradle@017a9effdb900e5b5b2fddfb590a105619dca3c3 # v4.4.2
      - name: Publish package
        run: ./gradlew publish
        env:
          MAVEN_USERNAME: {% raw %}${{ secrets.OSSRH_USERNAME }}{% endraw %}
          MAVEN_PASSWORD: {% raw %}${{ secrets.OSSRH_TOKEN }}{% endraw %}
```

{% data reusables.actions.gradle-workflow-steps %}
1. Executes the Gradle `publish` task to publish to the `OSSRH` Maven repository.
   The `MAVEN_USERNAME` environment variable will be set with the contents of your `OSSRH_USERNAME` secret, and the `MAVEN_PASSWORD` environment variable will be set with the contents of your `OSSRH_TOKEN` secret. For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

## Publishing packages to {% data variables.product.prodname_registry %}

Each time you create a new release, you can trigger a workflow to publish your package. The workflow in the example below runs when the `release` event triggers with type `created`. The workflow publishes the package to {% data variables.product.prodname_registry %} if CI tests pass. For more information on the `release` event, see [AUTOTITLE](/actions/using-workflows/events-that-trigger-workflows#release).

You can define a new Maven repository in the publishing block of your _build.gradle_ that points to {% data variables.product.prodname_registry %}. In that repository configuration, you can also take advantage of environment variables set in your CI workflow run. You can use the `GITHUB_ACTOR` environment variable as a username, and you can set the `GITHUB_TOKEN` environment variable with your `GITHUB_TOKEN` secret.

{% data reusables.actions.github-token-permissions %}

For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _build.gradle_ would look similar to the below example.

{% raw %}

```groovy copy
plugins {
  ...
  id 'maven-publish'
}

publishing {
  ...
  repositories {
    maven {
      name = "GitHubPackages"
      url = "https://maven.pkg.github.com/octocat/hello-world"
      credentials {
        username = System.getenv("GITHUB_ACTOR")
        password = System.getenv("GITHUB_TOKEN")
      }
    }
  }
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by running the `gradle publish` command.
```yaml copy {% data reusables.actions.actions-not-certified-by-github-comment %} {% data reusables.actions.actions-use-sha-pinning-comment %} name: Publish package to GitHub Packages on: release: types: [created] jobs: publish: runs-on: ubuntu-latest permissions: contents: read packages: write steps: - uses: {% data reusables.actions.action-checkout %} - uses: {% data reusables.actions.action-setup-java %} with: java-version: '11' distribution: 'temurin' - name: Setup Gradle uses: gradle/actions/setup-gradle@017a9effdb900e5b5b2fddfb590a105619dca3c3 # v4.4.2 - name: Publish package run: ./gradlew publish env: GITHUB\_TOKEN: {% raw %}${{ secrets.GITHUB\_TOKEN }}{% endraw %} ``` {% data reusables.actions.gradle-workflow-steps %} 1. Executes the Gradle `publish` task to publish to {% data variables.product.prodname\_registry %}. The `GITHUB\_TOKEN` environment variable will be set with the content of the `GITHUB\_TOKEN` secret. The `permissions` key specifies the access that the `GITHUB\_TOKEN` secret will allow. For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions). ## Publishing packages to the Maven Central Repository and {% data variables.product.prodname\_registry %} You can publish your packages to both the Maven Central Repository and {% data variables.product.prodname\_registry %} by configuring each in your \_build.gradle\_ file. Ensure your \_build.gradle\_ file includes a repository for both your {% data variables.product.prodname\_dotcom %} repository and your Maven Central Repository provider. For example, if you deploy to the Central Repository through the OSSRH hosting project, you might want to specify it in a distribution management repository with the `name` set to `OSSRH`. If you deploy to {% data variables.product.prodname\_registry %}, you might want to specify it in a distribution management repository with the `name` set to `GitHubPackages`. 
If your organization is named "octocat" and your repository is named "hello-world", then the configuration in _build.gradle_ would look similar to the below example.

{% raw %}

```groovy copy
plugins {
  ...
  id 'maven-publish'
}

publishing {
  ...

  repositories {
    maven {
      name = "OSSRH"
      url = "https://oss.sonatype.org/service/local/staging/deploy/maven2/"
      credentials {
        username = System.getenv("MAVEN_USERNAME")
        password = System.getenv("MAVEN_PASSWORD")
      }
    }
    maven {
      name = "GitHubPackages"
      url = "https://maven.pkg.github.com/octocat/hello-world"
      credentials {
        username = System.getenv("GITHUB_ACTOR")
        password = System.getenv("GITHUB_TOKEN")
      }
    }
  }
}
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to both the Maven Central Repository and {% data variables.product.prodname_registry %} by running the `gradle publish` command.

```yaml copy
{% data reusables.actions.actions-not-certified-by-github-comment %}

{% data reusables.actions.actions-use-sha-pinning-comment %}

name: Publish package to the Maven Central Repository and GitHub Packages
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Set up Java
        uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Setup Gradle
        uses: gradle/actions/setup-gradle@017a9effdb900e5b5b2fddfb590a105619dca3c3 # v4.4.2
      - name: Publish package
        run: ./gradlew publish
        env: {% raw %}
          MAVEN_USERNAME: ${{ secrets.OSSRH_USERNAME }}
          MAVEN_PASSWORD: ${{ secrets.OSSRH_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}{% endraw %}
```

{% data reusables.actions.gradle-workflow-steps %}
1. Executes the Gradle `publish` task to publish to the `OSSRH` Maven repository and {% data variables.product.prodname_registry %}. The `MAVEN_USERNAME` environment variable will be set with the contents of your `OSSRH_USERNAME` secret, and the `MAVEN_PASSWORD` environment variable will be set with the contents of your `OSSRH_TOKEN` secret. The `GITHUB_TOKEN` environment variable will be set with the contents of the `GITHUB_TOKEN` secret. The `permissions` key specifies the access that the `GITHUB_TOKEN` secret will allow.

For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).
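The _build.gradle_ examples in this article elide the `publications` block with `...`. As a hedged sketch only (not part of the original examples), a standard Java library that applies the `java` plugin might declare the publication that `gradle publish` uploads like this:

```groovy
publishing {
  publications {
    // "gpr" is an arbitrary publication name; "from components.java"
    // publishes the jar produced by the java plugin.
    gpr(MavenPublication) {
      from components.java
    }
  }
}
```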
## Introduction

{% data reusables.actions.publishing-java-packages-intro %}

## Prerequisites

We recommend that you have a basic understanding of workflow files and configuration options. For more information, see [AUTOTITLE](/actions/learn-github-actions).

For more information about creating a CI workflow for your Java project with Maven, see [AUTOTITLE](/actions/automating-builds-and-tests/building-and-testing-java-with-maven).

You may also find it helpful to have a basic understanding of the following:

* [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-apache-maven-registry)
* [AUTOTITLE](/actions/learn-github-actions/variables)
* [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions)
* [AUTOTITLE](/actions/security-guides/automatic-token-authentication)

## About package configuration

The `groupId` and `artifactId` fields in the _pom.xml_ file create a unique identifier for your package that registries use to link your package to a registry. For more information, see [Guide to uploading artifacts to the Central Repository](https://maven.apache.org/repository/guide-central-repository-upload.html) in the Apache Maven documentation.

{% data reusables.package_registry.maven-package-naming-convention %}

The _pom.xml_ file also contains configuration for the distribution management repositories that Maven will deploy packages to. Each repository must have a name and a deployment URL. Authentication for these repositories can be configured in the _.m2/settings.xml_ file in the home directory of the user running Maven.

You can use the `setup-java` action to configure the deployment repository as well as authentication for that repository. For more information, see [`setup-java`](https://github.com/actions/setup-java).

## Publishing packages to the Maven Central Repository

Each time you create a new release, you can trigger a workflow to publish your package.
The workflow in the example below runs when the `release` event triggers with type `created`. The workflow publishes the package to the Maven Central Repository if CI tests pass. For more information on the `release` event, see [AUTOTITLE](/actions/using-workflows/events-that-trigger-workflows#release).

In this workflow, you can use the `setup-java` action. This action installs the given version of the JDK into the `PATH`, but it also configures a Maven _settings.xml_ for publishing packages. By default, the settings file will be configured for {% data variables.product.prodname_registry %}, but it can be configured to deploy to another package registry, such as the Maven Central Repository. If you already have a distribution management repository configured in _pom.xml_, then you can specify that `id` during the `setup-java` action invocation.

For example, if you were deploying to the Maven Central Repository through the OSSRH hosting project, your _pom.xml_ could specify a distribution management repository with the `id` of `ossrh`.

{% raw %}

```xml copy
<project>
  ...
  <distributionManagement>
    <repository>
      <id>ossrh</id>
      <name>Central Repository OSSRH</name>
      <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
    </repository>
  </distributionManagement>
</project>
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to the Maven Central Repository by specifying the repository management `id` to the `setup-java` action. You'll also need to provide environment variables that contain the username and password to authenticate to the repository.

In the deploy step, you'll need to set the environment variables to the username that you authenticate with to the repository, and to a secret that you've configured with the password or token to authenticate with. For more information, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).
```yaml copy
name: Publish package to the Maven Central Repository
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Set up Maven Central Repository
        uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
          server-id: ossrh
          server-username: MAVEN_USERNAME
          server-password: MAVEN_PASSWORD
      - name: Publish package
        run: mvn --batch-mode deploy
        env:
          MAVEN_USERNAME: {% raw %}${{ secrets.OSSRH_USERNAME }}{% endraw %}
          MAVEN_PASSWORD: {% raw %}${{ secrets.OSSRH_TOKEN }}{% endraw %}
```

This workflow performs the following steps:

1. Checks out a copy of the project's repository.
1. Sets up the Java JDK, and also configures the Maven _settings.xml_ file to add authentication for the `ossrh` repository using the `MAVEN_USERNAME` and `MAVEN_PASSWORD` environment variables.
1. {% data reusables.actions.publish-to-maven-workflow-step %}

For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

## Publishing packages to {% data variables.product.prodname_registry %}
Each time you create a new release, you can trigger a workflow to publish your package. The workflow in the example below runs when the `release` event triggers with type `created`. The workflow publishes the package to {% data variables.product.prodname_registry %} if CI tests pass. For more information on the `release` event, see [AUTOTITLE](/actions/using-workflows/events-that-trigger-workflows#release).

In this workflow, you can use the `setup-java` action. This action installs the given version of the JDK into the `PATH`, and also sets up a Maven _settings.xml_ for publishing the package to {% data variables.product.prodname_registry %}. The generated _settings.xml_ defines authentication for a server with an `id` of `github`, using the `GITHUB_ACTOR` environment variable as the username and the `GITHUB_TOKEN` environment variable as the password. The `GITHUB_TOKEN` environment variable is assigned the value of the special `GITHUB_TOKEN` secret. {% data reusables.actions.github-token-permissions %}

For a Maven-based project, you can make use of these settings by creating a distribution repository in your _pom.xml_ file with an `id` of `github` that points to your {% data variables.product.prodname_registry %} endpoint.

For example, if your organization is named "octocat" and your repository is named "hello-world", then the {% data variables.product.prodname_registry %} configuration in _pom.xml_ would look similar to the below example.

{% raw %}

```xml copy
<project>
  ...
  <distributionManagement>
    <repository>
      <id>github</id>
      <name>GitHub Packages</name>
      <url>https://maven.pkg.github.com/octocat/hello-world</url>
    </repository>
  </distributionManagement>
</project>
```

{% endraw %}

With this configuration, you can create a workflow that publishes your package to {% data variables.product.prodname_registry %} by making use of the automatically generated _settings.xml_.

```yaml copy
name: Publish package to GitHub Packages
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Publish package
        run: mvn --batch-mode deploy
        env:
          GITHUB_TOKEN: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}
```

This workflow performs the following steps:

1. Checks out a copy of the project's repository.
1. Sets up the Java JDK, and also automatically configures the Maven _settings.xml_ file to add authentication for the `github` Maven repository to use the `GITHUB_TOKEN` environment variable.
1. {% data reusables.actions.publish-to-packages-workflow-step %}

For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

## Publishing packages to the Maven Central Repository and {% data variables.product.prodname_registry %}

You can publish your packages to both the Maven Central Repository and {% data variables.product.prodname_registry %} by using the `setup-java` action for each registry.

Ensure your _pom.xml_ file includes a distribution management repository for both your {% data variables.product.prodname_dotcom %} repository and your Maven Central Repository provider.
For example, if you deploy to the Central Repository through the OSSRH hosting project, you might want to specify it in a distribution management repository with the `id` set to `ossrh`, and you might want to specify {% data variables.product.prodname_registry %} in a distribution management repository with the `id` set to `github`.

```yaml copy
name: Publish package to the Maven Central Repository and GitHub Packages
on:
  release:
    types: [created]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Set up Java for publishing to Maven Central Repository
        uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
          server-id: ossrh
          server-username: MAVEN_USERNAME
          server-password: MAVEN_PASSWORD
      - name: Publish to the Maven Central Repository
        run: mvn --batch-mode deploy
        env:
          MAVEN_USERNAME: {% raw %}${{ secrets.OSSRH_USERNAME }}{% endraw %}
          MAVEN_PASSWORD: {% raw %}${{ secrets.OSSRH_TOKEN }}{% endraw %}
      - name: Set up Java for publishing to GitHub Packages
        uses: {% data reusables.actions.action-setup-java %}
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Publish to GitHub Packages
        run: mvn --batch-mode deploy
        env:
          GITHUB_TOKEN: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}
```

This workflow calls the `setup-java` action twice. Each time the `setup-java` action runs, it overwrites the Maven _settings.xml_ file for publishing packages. For authentication to the repository, the _settings.xml_ file references the distribution management repository `id`, and the username and password.

This workflow performs the following steps:

1. Checks out a copy of the project's repository.
1. Calls `setup-java` the first time. This configures the Maven _settings.xml_ file for the `ossrh` repository, and sets the authentication options to environment variables that are defined in the next step.
1. {% data reusables.actions.publish-to-maven-workflow-step %}
1. Calls `setup-java` the second time. This automatically configures the Maven _settings.xml_ file for {% data variables.product.prodname_registry %}.
1. {% data reusables.actions.publish-to-packages-workflow-step %}

For more information about using secrets in your workflow, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).
## Introduction

This guide shows you how to create a workflow that publishes Node.js packages to the {% data variables.product.prodname_registry %} and npm registries after continuous integration (CI) tests pass.

## Prerequisites

We recommend that you have a basic understanding of workflow configuration options and how to create a workflow file. For more information, see [AUTOTITLE](/actions/learn-github-actions).

For more information about creating a CI workflow for your Node.js project, see [AUTOTITLE](/actions/automating-builds-and-tests/building-and-testing-nodejs).

You may also find it helpful to have a basic understanding of the following:

* [AUTOTITLE](/packages/working-with-a-github-packages-registry/working-with-the-npm-registry)
* [AUTOTITLE](/actions/learn-github-actions/variables)
* [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions)
* [AUTOTITLE](/actions/security-guides/automatic-token-authentication)

## About package configuration

The `name` and `version` fields in the `package.json` file create a unique identifier that registries use to link your package to a registry. You can add a summary for the package listing page by including a `description` field in the `package.json` file. For more information, see [Creating a package.json file](https://docs.npmjs.com/creating-a-package-json-file) and [Creating Node.js modules](https://docs.npmjs.com/creating-node-js-modules) in the npm documentation.

When a local `.npmrc` file exists and has a `registry` value specified, the `npm publish` command uses the registry configured in the `.npmrc` file.

{% data reusables.actions.setup-node-intro %}

You can specify the Node.js version installed on the runner using the `setup-node` action.

If you add steps in your workflow to configure the `publishConfig` fields in your `package.json` file, you don't need to specify the registry-url using the `setup-node` action, but you will be limited to publishing the package to one registry.
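As a sketch only (the names are illustrative, reusing the "octocat" and "hello-world" examples from this guide), a minimal `package.json` carrying the fields described above might look like:

```json
{
  "name": "@octocat/hello-world",
  "version": "1.0.0",
  "description": "A short summary shown on the package listing page"
}
```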
For more information, see [publishConfig](https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig) in the npm documentation.

## Publishing packages to the npm registry

You can trigger a workflow to publish your package every time you publish a new release. The process in the following example is executed when the `release` event of type `published` is triggered. If the CI tests pass, the process uploads the package to the npm registry. For more information, see [AUTOTITLE](/repositories/releasing-projects-on-github/managing-releases-in-a-repository#creating-a-release).

To perform authenticated operations against the npm registry in your workflow, you'll need to store your npm authentication token as a secret. For example, create a repository secret called `NPM_TOKEN`. For more information, see [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

By default, npm uses the `name` field of the `package.json` file to determine the name of your published package. When publishing to a global namespace, you only need to include the package name. For example, you would publish a package named `my-package` to `https://www.npmjs.com/package/my-package`.

If you're publishing a package that includes a scope prefix, include the scope in the name of your `package.json` file. For example, if your npm scope prefix is "octocat" and the package name is "hello-world", the `name` in your `package.json` file should be `@octocat/hello-world`.

If your npm package uses a scope prefix and the package is public, you need to use the option `npm publish --access public`. This is an option that npm requires to prevent someone from publishing a private package unintentionally.

{% ifversion artifact-attestations %}If you would like to publish your package with provenance, include the `--provenance` flag with your `npm publish` command.
This allows you to publicly and verifiably establish where and how your package was built, which increases supply chain security for people who consume your package. For more information, see [Generating provenance statements](https://docs.npmjs.com/generating-provenance-statements) in the npm documentation.{% endif %}

This example stores the `NPM_TOKEN` secret in the `NODE_AUTH_TOKEN` environment variable. When the `setup-node` action creates an `.npmrc` file, it references the token from the `NODE_AUTH_TOKEN` environment variable.
```yaml copy
name: Publish Package to npmjs
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    {% ifversion artifact-attestations %}permissions:
      contents: read
      id-token: write{% endif %}
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      # Setup .npmrc file to publish to npm
      - uses: {% data reusables.actions.action-setup-node %}
        with:
          node-version: '20.x'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish {% ifversion artifact-attestations %}--provenance --access public{% endif %}
        env:
          NODE_AUTH_TOKEN: {% raw %}${{ secrets.NPM_TOKEN }}{% endraw %}
```

In the example above, the `setup-node` action creates an `.npmrc` file on the runner with the following contents:

```shell
//registry.npmjs.org/:_authToken=${NODE_AUTH_TOKEN}
registry=https://registry.npmjs.org/
always-auth=true
```

Please note that you need to set the `registry-url` to `https://registry.npmjs.org/` in `setup-node` to properly configure your credentials.

## Publishing packages to {% data variables.product.prodname_registry %}

You can trigger a workflow to publish your package every time you publish a new release. The process in the following example is executed when the `release` event of type `published` is triggered. If the CI tests pass, the process uploads the package to {% data variables.product.prodname_registry %}. For more information, see [AUTOTITLE](/repositories/releasing-projects-on-github/managing-releases-in-a-repository#creating-a-release).

### Configuring the destination repository

Linking your package to {% data variables.product.prodname_registry %} using the `repository` key is optional.
If you choose not to provide the `repository` key in your `package.json` file, then {% ifversion packages-npm-v2 %}your package will not be linked to a repository when it is published, but you can choose to connect the package to a repository later.{% else %}{% data variables.product.prodname_registry %} publishes a package in the {% data variables.product.prodname_dotcom %} repository you specify in the `name` field of the `package.json` file. For example, a package named `@my-org/test` is published to the `my-org/test` {% data variables.product.prodname_dotcom %} repository. If the `url` specified in the `repository` key is invalid, your package may still be published; however, it won't be linked to the repository source as intended.{% endif %}

If you do provide the `repository` key in your `package.json` file, then the repository in that key is used as the destination npm registry for {% data variables.product.prodname_registry %}. For example, publishing the below `package.json` results in a package named `my-package` published to the `octocat/my-other-repo` {% data variables.product.prodname_dotcom %} repository.{% ifversion packages-npm-v2 %}{% else %} Once published, only the repository source is updated, and the package doesn't inherit any permissions from the destination repository.{% endif %}

```json
{
  "name": "@octocat/my-package",
  "repository": {
    "type": "git",
    "url": "https://github.com/octocat/my-other-repo.git"
  }
}
```

### Authenticating to the destination repository

To perform authenticated operations against the {% data variables.product.prodname_registry %} registry in your workflow, you can use the `GITHUB_TOKEN`. {% data reusables.actions.github-token-permissions %}

If you want to publish your package to a different repository, you must use a {% data variables.product.pat_v1 %} that has permission to write to packages in the destination repository.
For more information, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) and [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

### Example workflow

This example stores the `GITHUB_TOKEN` secret in the `NODE_AUTH_TOKEN` environment variable. When the `setup-node` action creates an `.npmrc` file, it references the token from the `NODE_AUTH_TOKEN` environment variable.

```yaml copy
name: Publish package to GitHub Packages
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      # Setup .npmrc file to publish to GitHub Packages
      - uses: {% data reusables.actions.action-setup-node %}
        with:
          node-version: '20.x'
          registry-url: 'https://npm.pkg.github.com'
          # Defaults to the user or organization that owns the workflow file
          scope: '@octocat'
      - run: npm ci
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: {% raw %}${{ secrets.GITHUB_TOKEN }}{% endraw %}
```

The `setup-node` action creates an `.npmrc` file on the runner. When you use the `scope` input to the `setup-node` action,
the `.npmrc` file includes the scope prefix. By default, the `setup-node` action sets the scope in the `.npmrc` file to the account that contains that workflow file.

```shell
//npm.pkg.github.com/:_authToken=${NODE_AUTH_TOKEN}
@octocat:registry=https://npm.pkg.github.com
always-auth=true
```

## Publishing packages using Yarn

If you use the Yarn package manager, you can install and publish packages using Yarn.

```yaml copy
name: Publish Package to npmjs
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      # Setup .npmrc file to publish to npm
      - uses: {% data reusables.actions.action-setup-node %}
        with:
          node-version: '20.x'
          registry-url: 'https://registry.npmjs.org'
          # Defaults to the user or organization that owns the workflow file
          scope: '@octocat'
      - run: yarn
      - run: yarn npm publish # for Yarn version 1, use `yarn publish` instead
        env:
          NODE_AUTH_TOKEN: {% raw %}${{ secrets.NPM_TOKEN }}{% endraw %}
```

To authenticate with the registry during publishing, ensure your authentication token is also defined in your `yarnrc.yml` file. For more information, see the [Settings](https://yarnpkg.com/configuration/yarnrc#npmAuthToken) article in the Yarn documentation.
## Logging

The {% data variables.product.prodname_actions_runner_controller %} (ARC) resources, which include the controller, listener, and runners, write logs to standard output (`stdout`). We recommend you implement a logging solution to collect and store these logs. Having logs available can help you or GitHub Support with troubleshooting and debugging. For more information, see [Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/) in the Kubernetes documentation.

## Resource labels

Labels are added to the resources created by {% data variables.product.prodname_actions_runner_controller %}, which include the controller, listener, and runner pods. You can use these labels to filter resources and to help with troubleshooting.

### Controller pod

The following labels are applied to the controller pod.

```yaml
app.kubernetes.io/component=controller-manager
app.kubernetes.io/instance=
app.kubernetes.io/name=gha-runner-scale-set-controller
app.kubernetes.io/part-of=gha-runner-scale-set-controller
app.kubernetes.io/version=
```

### Listener pod

The following labels are applied to listener pods.

```yaml
actions.github.com/enterprise=  # Will be populated if githubConfigUrl is an enterprise URL
actions.github.com/organization=  # Will be populated if githubConfigUrl is an organization URL
actions.github.com/repository=  # Will be populated if githubConfigUrl is a repository URL
actions.github.com/scale-set-name=  # Runners scale set name
actions.github.com/scale-set-namespace=  # Runners namespace
app.kubernetes.io/component=runner-scale-set-listener
app.kubernetes.io/part-of=gha-runner-scale-set
app.kubernetes.io/version=  # Chart version
```

### Runner pod

The following labels are applied to runner pods.
```yaml
actions-ephemeral-runner=  # True | False
actions.github.com/organization=  # Will be populated if githubConfigUrl is an organization URL
actions.github.com/scale-set-name=  # Runners scale set name
actions.github.com/scale-set-namespace=  # Runners namespace
app.kubernetes.io/component=runner
app.kubernetes.io/part-of=gha-runner-scale-set
app.kubernetes.io/version=  # Chart version
```

## Checking the logs of the controller and runner set listener

To check the logs of the controller pod, you can use the following command.

```bash copy
kubectl logs -n <namespace> -l app.kubernetes.io/name=gha-runner-scale-set-controller
```

To check the logs of the runner set listener, you can use the following command.

```bash copy
kubectl logs -n <namespace> -l auto-scaling-runner-set-namespace=arc-systems -l auto-scaling-runner-set-name=arc-runner-set
```

## Using the charts from the `master` branch

We recommend you use the charts from the latest release instead of the `master` branch. The `master` branch is highly unstable, and we cannot guarantee that the charts in the `master` branch will work at any given time.

## Troubleshooting the listener pod

If the controller pod is running, but the listener pod is not, inspect the logs of the controller first and see if there are any errors. If there are no errors and the runner set listener pod is still not running, ensure the controller pod has access to the Kubernetes API server in your cluster. If you have a proxy configured or you're using a sidecar proxy that's automatically injected, such as [Istio](https://istio.io/), ensure it's configured to allow traffic from the controller container (manager) to the Kubernetes API server.

If you have installed the autoscaling runner set, but the listener pod is not created, verify that the `githubConfigSecret` you provided is correct and that the `githubConfigUrl` you provided is accurate.
See [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api) and [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller) for more information.

## Runner pods are recreated after a canceled workflow run

Once a workflow run is canceled, the following events happen.

* The cancellation signal is sent to the runners directly.
* The runner application terminates, which also terminates the runner pods.
* On the next poll, the cancellation signal is received by the listener.

There might be a slight delay between when the runners receive the signal and when the listener receives the signal. When runner pods start terminating, the listener tries to bring up new runners to match the desired number of runners according to the state it's in. However, when the listener receives the cancellation signal, it will act to reduce the number of runners. Eventually the listener will scale back down to the desired number of runners. In the meantime, you may see extra runners.
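The labels listed in the "Resources labels" section above can be combined into `kubectl` label selectors when you troubleshoot. The following is a minimal sketch that only assembles and prints the selector and the resulting command; the `SCALE_SET` and `NS` values are hypothetical examples, and the `kubectl` command is echoed rather than executed because cluster access cannot be assumed here.

```shell
# Build a label selector for the listener and runner pods of one scale set.
# SCALE_SET and NS are example values; replace them with your own.
SCALE_SET="arc-runner-set"
NS="arc-runners"
SELECTOR="actions.github.com/scale-set-name=${SCALE_SET},actions.github.com/scale-set-namespace=${NS}"

# Print the command you would run against your cluster.
echo "kubectl get pods -A -l ${SELECTOR}"
```

Running the printed command against a live cluster lists every pod ARC created for that scale set, regardless of namespace.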
## Error: `Name must have up to n characters`

ARC uses the generated names of certain resources as labels for other resources. Because of this requirement, ARC limits resource names to 63 characters. Because part of the resource name is defined by you, ARC imposes a limit on the number of characters you can use for the installation name and namespace.

```bash
Error: INSTALLATION FAILED: execution error at (gha-runner-scale-set/templates/autoscalingrunnerset.yaml:5:5): Name must have up to 45 characters
Error: INSTALLATION FAILED: execution error at (gha-runner-scale-set/templates/autoscalingrunnerset.yaml:8:5): Namespace must have up to 63 characters
```

## Error: `Access to the path /home/runner/_work/_tool is denied`

You may see this error if you're using Kubernetes mode with persistent volumes. This error occurs if the runner container is running with a non-root user, causing a permissions mismatch with the mounted volume.

To fix this, you can do one of the following things.

* Use a volume type that supports `securityContext.fsGroup`. `hostPath` volumes do not support this property, whereas `local` volumes and other types of volumes do support it. Update the `fsGroup` of your runner pod to match the GID of the runner. You can do this by updating the `gha-runner-scale-set` helm chart values to include the following. Replace `VERSION` with the version of the `actions-runner` container image you want to use.
  ```yaml copy
  template:
    spec:
      securityContext:
        fsGroup: 123
      containers:
        - name: runner
          image: ghcr.io/actions/actions-runner:latest
          command: ["/home/runner/run.sh"]
  ```

* If updating the `securityContext` of your runner pod is not a viable solution, you can work around the issue by using `initContainers` to change the mounted volume's ownership, as follows.

  ```yaml copy
  template:
    spec:
      initContainers:
        - name: kube-init
          image: ghcr.io/actions/actions-runner:latest
          command: ["sudo", "chown", "-R", "1001:123", "/home/runner/_work"]
          volumeMounts:
            - name: work
              mountPath: /home/runner/_work
      containers:
        - name: runner
          image: ghcr.io/actions/actions-runner:latest
          command: ["/home/runner/run.sh"]
  ```

## Error: `failed to get access token for {% data variables.product.prodname_github_app %} auth: 401 Unauthorized`

A `401 Unauthorized` error when attempting to obtain an access token for a {% data variables.product.prodname_github_app %} could be a result of a Network Time Protocol (NTP) drift. Ensure that your Kubernetes system is accurately syncing with an NTP server and that there isn't a significant time drift. There is more leeway if your system time is behind {% data variables.product.github %}'s time, but if the environment is more than a few seconds ahead, `401` errors will occur when using {% data variables.product.prodname_github_app %} authentication.

## Runner group limits

{% data reusables.actions.self-hosted-runner-group-limit %}

## Runner updates

{% data reusables.actions.self-hosted-runner-update-warning %} Validate that your runner software version and/or custom runner image(s) in use are running the latest version. For more information, see [AUTOTITLE](/actions/reference/runners/self-hosted-runners).

## Legal notice

{% data reusables.actions.actions-runner-controller-legal-notice %}
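The `Name must have up to n characters` error described earlier can be caught before running `helm install`. The following is a hedged sketch that checks the installation name and namespace against the 45- and 63-character limits quoted in the error messages above; the example values are hypothetical, and the check itself is an addition, not part of ARC.

```shell
# Pre-flight check: ARC rejects installation names over 45 characters
# and namespaces over 63 characters (per the error messages above).
INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"

if [ "${#INSTALLATION_NAME}" -gt 45 ]; then
  echo "installation name too long: ${#INSTALLATION_NAME} > 45" >&2
fi
if [ "${#NAMESPACE}" -gt 63 ]; then
  echo "namespace too long: ${#NAMESPACE} > 63" >&2
fi
echo "name checks done"
```

Running this before `helm install` avoids a failed release caused by over-long names.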
## Prerequisites

In order to use ARC, ensure you have the following.

* A Kubernetes cluster
  * For a managed cloud environment, you can use AKS. For more information, see [Azure Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service) in the Azure documentation.
  * For a local setup, you can use minikube or kind. For more information, see [minikube start](https://minikube.sigs.k8s.io/docs/start/) in the minikube documentation and [kind](https://kind.sigs.k8s.io/) in the kind documentation.
* Helm 3
  * For more information, see [Installing Helm](https://helm.sh/docs/intro/install/) in the Helm documentation.
* While it is not required for ARC to be deployed, we recommend ensuring you have implemented a way to collect and retain logs from the controller, listeners, and ephemeral runners before deploying ARC in production workflows.

## Installing Actions Runner Controller

1. To install the operator and the custom resource definitions (CRDs) in your cluster, do the following.
   1. In your Helm chart, update the `NAMESPACE` value to the location you want your operator pods to be created. This namespace must allow access to the Kubernetes API server.
   1. Install the Helm chart. The following example installs the latest version of the chart. To install a specific version, you can pass the `--version` argument along with the version of the chart you wish to install. You can find the list of releases in the [GitHub Container Registry](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set-controller).
      ```bash copy
      NAMESPACE="arc-systems"
      helm install arc \
          --namespace "{% raw %}${NAMESPACE}{% endraw %}" \
          --create-namespace \
          oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
      ```

      For additional Helm configuration options, see [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) in the ARC documentation.

1. To enable ARC to authenticate to {% data variables.product.company_short %}, generate a {% data variables.product.pat_v1 %}. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api#deploying-using-personal-access-token-classic-authentication).

## Configuring a runner scale set

1. To configure your runner scale set, run the following command in your terminal, using values from your ARC configuration. When you run the command, keep the following in mind.

   * Update the `INSTALLATION_NAME` value carefully. You will use the installation name as the value of `runs-on` in your workflows. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on).
   * Update the `NAMESPACE` value to the location you want the runner pods to be created.
   * Set `GITHUB_CONFIG_URL` to the URL of your repository, organization, or enterprise. This is the entity that the runners will belong to.
   {% ifversion fpt %}
   * Set `GITHUB_PAT` to a {% data variables.product.company_short %} {% data variables.product.pat_generic %} with the `repo` and `admin:org` scopes for repository and organization runners.
   {% else %}
   * Set `GITHUB_PAT` to a {% data variables.product.company_short %} {% data variables.product.pat_generic %} with the `repo` and `manage_runners:org` scopes for repository and organization runners, and the `manage_runners:enterprise` scope for enterprise runners.
   {% endif %}
   * This example command installs the latest version of the Helm chart. To install a specific version, you can pass the `--version` argument with the version of the chart you wish to install. You can find the list of releases in the [GitHub Container Registry](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set).

   > [!NOTE]
   > * {% data reusables.actions.actions-runner-controller-security-practices-namespace %}
   > * {% data reusables.actions.actions-runner-controller-security-practices-secret %}

   For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller).

   ```bash copy
   INSTALLATION_NAME="arc-runner-set"
   NAMESPACE="arc-runners"
   GITHUB_CONFIG_URL="https://github.com/"
   GITHUB_PAT=""
   helm install "{% raw %}${INSTALLATION_NAME}{% endraw %}" \
       --namespace "{% raw %}${NAMESPACE}{% endraw %}" \
       --create-namespace \
       --set githubConfigUrl="{% raw %}${GITHUB_CONFIG_URL}{% endraw %}" \
       --set githubConfigSecret.github_token="{% raw %}${GITHUB_PAT}{% endraw %}" \
       oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
   ```

   For additional Helm configuration options, see [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) in the ARC documentation.

1. From your terminal, run the following command to check your installation.

   ```bash copy
   helm list -A
   ```

   You should see an output similar to the following.
   ```bash
   NAME            NAMESPACE    REVISION  UPDATED                                  STATUS    CHART                                  APP VERSION
   arc             arc-systems  1         2023-04-12 11:45:59.152090536 +0000 UTC  deployed  gha-runner-scale-set-controller-0.4.0  0.4.0
   arc-runner-set  arc-runners  1         2023-04-12 11:46:13.451041354 +0000 UTC  deployed  gha-runner-scale-set-0.4.0             0.4.0
   ```
1. To check the manager pod, run the following command in your terminal.

   ```bash copy
   kubectl get pods -n arc-systems
   ```

   If everything was installed successfully, the status of the pods shows as **Running**.

   ```bash
   NAME                                                   READY   STATUS    RESTARTS   AGE
   arc-gha-runner-scale-set-controller-594cdc976f-m7cjs   1/1     Running   0          64s
   arc-runner-set-754b578d-listener                       1/1     Running   0          12s
   ```

   If your installation was not successful, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors) for troubleshooting information.

## Using runner scale sets

Now you will create and run a simple test workflow that uses the runner scale set runners.

1. In a repository, create a workflow similar to the following example. The `runs-on` value should match the Helm installation name you used when you installed the autoscaling runner set. For more information on adding workflows to a repository, see [AUTOTITLE](/actions/quickstart#creating-your-first-workflow).

   ```yaml copy
   name: Actions Runner Controller Demo
   on:
     workflow_dispatch:

   jobs:
     Explore-GitHub-Actions:
       # You need to use the INSTALLATION_NAME from the previous step
       runs-on: arc-runner-set
       steps:
         - run: echo "🎉 This job uses runner scale set runners!"
   ```

1. Once you've added the workflow to your repository, manually trigger the workflow. For more information, see [AUTOTITLE](/actions/managing-workflow-runs/manually-running-a-workflow).
1. To view the runner pods being created while the workflow is running, run the following command from your terminal.

   ```bash copy
   kubectl get pods -n arc-runners -w
   ```

   A successful output will look similar to the following.

   ```bash
   NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
   arc-runners   arc-runner-set-rmrgw-runner-p9p5n   1/1     Running   0          21s
   ```

## Next steps

{% data variables.product.prodname_actions_runner_controller %} can help you efficiently manage your {% data variables.product.prodname_actions %} runners. Ready to get started? Here are some helpful resources for taking your next steps with ARC:

* For detailed authentication information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api).
* For help using ARC runners in your workflows, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/using-actions-runner-controller-runners-in-a-workflow).
* For deployment information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller).

## Legal notice

{% data reusables.actions.actions-runner-controller-legal-notice %}
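A common mistake with the demo workflow earlier is a `runs-on` value that does not match the Helm installation name. The following is a small local sanity-check sketch; the file name `demo.yml` is hypothetical, and the check is an addition to the quickstart, not part of ARC itself.

```shell
# Write the demo workflow locally, then verify runs-on matches the
# installation name used for `helm install`.
INSTALLATION_NAME="arc-runner-set"

cat > demo.yml <<'EOF'
name: Actions Runner Controller Demo
on:
  workflow_dispatch:
jobs:
  Explore-GitHub-Actions:
    runs-on: arc-runner-set
    steps:
      - run: echo "This job uses runner scale set runners!"
EOF

# grep exits non-zero (and nothing is printed) on a mismatch.
grep -q "runs-on: ${INSTALLATION_NAME}" demo.yml && echo "runs-on matches installation name"
```

If the check fails, jobs will queue indefinitely because no runner scale set registers under the requested label.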
## Deploying a runner scale set

To deploy a runner scale set, you must have ARC up and running. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/quickstart-for-actions-runner-controller).

You can deploy runner scale sets with ARC's Helm charts or by deploying the necessary manifests. Using ARC's Helm charts is the preferred method, especially if you do not have prior experience using ARC.

> [!NOTE]
> * {% data reusables.actions.actions-runner-controller-security-practices-namespace %}
> * {% data reusables.actions.actions-runner-controller-security-practices-secret %}
> * We recommend running production workloads in isolation. {% data variables.product.prodname_actions %} workflows are designed to run arbitrary code, and using a shared Kubernetes cluster for production workloads could pose a security risk.
> * Ensure you have implemented a way to collect and retain logs from the controller, listeners, and ephemeral runners.

1. To configure your runner scale set, run the following command in your terminal, using values from your ARC configuration. When you run the command, keep the following in mind.

   * Update the `INSTALLATION_NAME` value carefully. You will use the installation name as the value of [`runs-on`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on) in your workflows.
   * Update the `NAMESPACE` value to the location you want the runner pods to be created.
   * Set the `GITHUB_CONFIG_URL` value to the URL of your repository, organization, or enterprise. This is the entity that the runners will belong to.
   * This example command installs the latest version of the Helm chart. To install a specific version, you can pass the `--version` argument with the version of the chart you want to install.
     You can find the list of releases in the [`actions-runner-controller`](https://github.com/actions/actions-runner-controller/pkgs/container/actions-runner-controller-charts%2Fgha-runner-scale-set) repository.

   {% ifversion not ghes %}

   ```bash copy
   INSTALLATION_NAME="arc-runner-set"
   NAMESPACE="arc-runners"
   GITHUB_CONFIG_URL="https://github.com/"
   GITHUB_PAT=""
   helm install "{% raw %}${INSTALLATION_NAME}{% endraw %}" \
       --namespace "{% raw %}${NAMESPACE}{% endraw %}" \
       --create-namespace \
       --set githubConfigUrl="{% raw %}${GITHUB_CONFIG_URL}{% endraw %}" \
       --set githubConfigSecret.github_token="{% raw %}${GITHUB_PAT}{% endraw %}" \
       oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
   ```

   {% endif %}
   {% ifversion ghes %}

   ```bash copy
   INSTALLATION_NAME="arc-runner-set"
   NAMESPACE="arc-runners"
   GITHUB_CONFIG_URL="http(s):///<'enterprises/your_enterprise'/'org'/'org/repo'>"
   GITHUB_PAT=""
   helm install "{% raw %}${INSTALLATION_NAME}{% endraw %}" \
       --namespace "{% raw %}${NAMESPACE}{% endraw %}" \
       --create-namespace \
       --set githubConfigUrl="{% raw %}${GITHUB_CONFIG_URL}{% endraw %}" \
       --set githubConfigSecret.github_token="{% raw %}${GITHUB_PAT}{% endraw %}" \
       oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
   ```

   {% endif %}

   {% data reusables.actions.actions-runner-controller-helm-chart-options %}

1. To check your installation, run the following command in your terminal.

   ```bash copy
   helm list -A
   ```

   You should see an output similar to the following.

   ```bash
   NAME            NAMESPACE    REVISION  UPDATED                                  STATUS    CHART                                  APP VERSION
   arc             arc-systems  1         2023-04-12 11:45:59.152090536 +0000 UTC  deployed  gha-runner-scale-set-controller-0.4.0  0.4.0
   arc-runner-set  arc-systems  1         2023-04-12 11:46:13.451041354 +0000 UTC  deployed  gha-runner-scale-set-0.4.0             0.4.0
   ```

1. To check the manager pod, run the following command in your terminal.
   ```bash copy
   kubectl get pods -n arc-systems
   ```

   If the installation was successful, the pods will show the `Running` status.

   ```bash
   NAME                                                   READY   STATUS    RESTARTS   AGE
   arc-gha-runner-scale-set-controller-594cdc976f-m7cjs   1/1     Running   0          64s
   arc-runner-set-754b578d-listener                       1/1     Running   0          12s
   ```

   If your installation was not successful, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors) for troubleshooting information.

## Using advanced configuration options

ARC offers several advanced configuration options.

### Configuring the runner scale set name

> [!NOTE]
> Runner scale set names are unique within the runner group they belong to. If you want to deploy multiple runner scale sets with the same name, they must belong to different runner groups.

To configure the runner scale set name, you can define an `INSTALLATION_NAME` or set the value of `runnerScaleSetName` in your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

```yaml
## The name of the runner scale set to create, which defaults to the Helm release name
runnerScaleSetName: "my-runners"
```

Make sure to pass the `values.yaml` file in your `helm install` command. See the [Helm Install](https://helm.sh/docs/helm/helm_install/) documentation for more details.
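To make the `values.yaml` workflow concrete, here is a hedged sketch: it writes a minimal values file containing the `runnerScaleSetName` override shown above, verifies it, and prints the `helm install -f` command that would consume it. The release name and namespace are example values from this guide, and the command is printed rather than run because a cluster cannot be assumed.

```shell
# Write a minimal values override for the scale set name.
cat > values.yaml <<'EOF'
runnerScaleSetName: "my-runners"
EOF

# Confirm the override is present before handing the file to Helm.
grep -q 'runnerScaleSetName: "my-runners"' values.yaml && echo "values file ready"

# The command you would run against your cluster:
echo 'helm install arc-runner-set --namespace arc-runners --create-namespace -f values.yaml oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set'
```

Passing `-f values.yaml` here is what makes the `runnerScaleSetName` override take effect; without it, the scale set name defaults to the Helm release name.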
### Choosing runner destinations

Runner scale sets can be deployed at the repository, organization, or enterprise levels.

{% ifversion ghec or ghes %}

> [!NOTE]
> You can only deploy runner scale sets at the enterprise level when using {% data variables.product.pat_v1 %} authentication.

{% endif %}

To deploy runner scale sets to a specific level, set the value of `githubConfigUrl` in your copy of the `values.yaml` to the URL of your repository, organization, or enterprise.

The following example shows how to configure ARC to add runners to `octo-org/octo-repo`.

{% ifversion not ghes %}

```yaml
githubConfigUrl: "https://github.com/octo-org/octo-repo"
```

{% endif %}
{% ifversion ghes %}

```yaml
githubConfigUrl: "http(s):///<'enterprises/your_enterprise'/'org'/'org/repo'>"
```

{% endif %}

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Using a {% data variables.product.prodname_github_app %} for authentication

If you are not using enterprise-level runners, you can use {% data variables.product.prodname_github_apps %} to authenticate with the {% data variables.product.company_short %} API. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/authenticating-to-the-github-api).

> [!NOTE]
> Given the security risk associated with exposing your private key in plain text in a file on disk, we recommend creating a Kubernetes secret and passing the reference instead.
You can either create a Kubernetes secret, or specify values in your [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

#### Option 1: Create a Kubernetes secret (recommended)

Once you have created your {% data variables.product.prodname_github_app %}, create a Kubernetes secret and pass the reference to that secret in your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

{% data reusables.actions.arc-runners-namespace %}

```bash
kubectl create secret generic pre-defined-secret \
  --namespace=arc-runners \
  --from-literal=github_app_id=123456 \
  --from-literal=github_app_installation_id=654321 \
  --from-file=github_app_private_key=private-key.pem
```

In your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file, pass the secret name as a reference.

```yaml
githubConfigSecret: pre-defined-secret
```

#### Option 2: Specify values in your `values.yaml` file

Alternatively, you can specify the values of `app_id`, `installation_id`, and `private_key` in your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

```yaml
## githubConfigSecret is the Kubernetes secret to use when authenticating with the GitHub API.
## You can choose to use a GitHub App or a {% data variables.product.pat_v1 %}
githubConfigSecret:
  ## GitHub Apps Configuration
  ## IDs must be strings, use quotes
  github_app_id: "123456"
  github_app_installation_id: "654321"
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    HkVN9...
    ...
    -----END RSA PRIVATE KEY-----
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Managing access with runner groups

You can use runner groups to control which organizations or repositories have access to your runner scale sets. For more information on runner groups, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups).

To add a runner scale set to a runner group, you must already have a runner group created. Then set the `runnerGroup` property in your copy of the `values.yaml` file. The following example adds a runner scale set to the Octo-Group runner group.

```yaml
runnerGroup: "Octo-Group"
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Configuring an outbound proxy

To force HTTP traffic for the controller and runners to go through your outbound proxy, set the following properties in your Helm chart.

```yaml
proxy:
  http:
    url: http://proxy.com:1234
    credentialSecretRef: proxy-auth  # a Kubernetes secret with `username` and `password` keys
  https:
    url: http://proxy.com:1234
    credentialSecretRef: proxy-auth  # a Kubernetes secret with `username` and `password` keys
  noProxy:
    - example.com
    - example.org
```

ARC supports using anonymous or authenticated proxies. If you use authenticated proxies, you will need to set the `credentialSecretRef` value to reference a Kubernetes secret.
You can create a secret with your proxy credentials with the following command.

{% data reusables.actions.arc-runners-namespace %}

```bash copy
kubectl create secret generic proxy-auth \
  --namespace=arc-runners \
  --from-literal=username=proxyUsername \
  --from-literal=password=proxyPassword
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Setting the maximum and minimum number of runners

The `maxRunners` and `minRunners` properties provide you with a range of options to customize your ARC setup.

> [!NOTE]
> ARC does not support scheduled maximum and minimum configurations. You can use a cron job or any other scheduling solution to update the configuration on a schedule.

#### Example: Unbounded number of runners

If you comment out both the `maxRunners` and `minRunners` properties, ARC will scale up to the number of jobs assigned to the runner scale set and will scale down to 0 if there aren't any active jobs.

```yaml
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
# maxRunners: 0

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
# minRunners: 0
```

#### Example: Minimum number of runners

You can set the `minRunners` property to any number, and ARC will make sure there is always the specified number of runners active and available to take jobs assigned to the runner scale set at all times.

```yaml
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
# maxRunners: 0

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 20
```

#### Example: Set maximum and minimum number of runners

In this configuration, {% data variables.product.prodname_actions_runner_controller %} will scale up to a maximum of `30` runners and will scale down to `20` runners when the jobs are complete.

> [!NOTE]
> The value of `minRunners` can never exceed that of `maxRunners`, unless `maxRunners` is commented out.

```yaml
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
maxRunners: 30

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 20
```

#### Example: Jobs queue draining

In certain scenarios you might want to drain the jobs queue to troubleshoot a problem or to perform maintenance on your cluster. If you set both properties to `0`, {% data variables.product.prodname_actions_runner_controller %} will not create new runner pods when new jobs are available and assigned.

```yaml
## maxRunners is the max number of runners the auto scaling runner set will scale up to.
maxRunners: 0

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 0
```

### Custom TLS certificates

> [!NOTE]
> If you are using a custom runner image that is not based on the `Debian` distribution, the following instructions will not work.

Some environments require TLS certificates that are signed by a custom certificate authority (CA).
Since the custom certificate authority certificates are not bundled with the controller or runner containers, you must inject them into their respective trust stores.

```yaml
githubServerTLS:
  certificateFrom:
    configMapKeyRef:
      name: config-map-name
      key: ca.crt
  runnerMountPath: /usr/local/share/ca-certificates/
```

When you do this, ensure you are using the Privacy Enhanced Mail (PEM) format and that the extension of your certificate is `.crt`. Anything else will be ignored.

The controller executes the following actions.

* Creates a `github-server-tls-cert` volume containing the certificate specified in `certificateFrom`.
* Mounts that volume on path `runnerMountPath/`.
* Sets the `NODE_EXTRA_CA_CERTS` environment variable to that same path.
* Sets the `RUNNER_UPDATE_CA_CERTS` environment variable to `1` (as of version `2.303.0`, this will instruct the runner to reload certificates on the host).

ARC observes values set in the runner pod template and does not overwrite them.

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Using a private container registry

{% data reusables.actions.actions-runner-controller-unsupported-customization %}

To use a private container registry, you can copy the controller image and runner image to your private container registry. Then configure the links to those images and set the `imagePullPolicy` and `imagePullSecrets` values.

#### Configuring the controller image

You can update your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) file and set the `image` properties as follows.
```yaml
image:
  repository: "custom-registry.io/gha-runner-scale-set-controller"
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "0.4.0"

imagePullSecrets:
  - name:
```

The listener container inherits the `imagePullPolicy` defined for the controller.

#### Configuring the runner image

You can update your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file and set the `template.spec` properties to configure the runner pod for your specific use case.

> [!NOTE]
> The runner container must be named `runner`. Otherwise, it will not be configured properly to connect to {% data variables.product.prodname_dotcom %}.

The following is a sample configuration:

```yaml
template:
  spec:
    containers:
      - name: runner
        image: "custom-registry.io/actions-runner:latest"
        imagePullPolicy: Always
        command: ["/home/runner/run.sh"]
    imagePullSecrets:
      - name:
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Updating the pod specification for the runner pod

{% data reusables.actions.actions-runner-controller-unsupported-customization %}

You can fully customize the PodSpec of the runner pod, and the controller will apply the configuration you specify. The following is an example pod specification.

```yaml
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities:
            add:
              - NET_ADMIN
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Updating the pod specification for the listener pod

{% data reusables.actions.actions-runner-controller-unsupported-customization %}

You can customize the PodSpec of the listener pod, and the controller will apply the configuration you specify.
The following is an example pod specification.

> [!NOTE]
> It's important not to change the `listenerTemplate.spec.containers.name` value of the listener container. Otherwise, the configuration you specify will be applied to a new sidecar container.

```yaml
listenerTemplate:
  spec:
    containers:
      # If you change the name of the container, the configuration will not be applied to the listener,
      # and it will be treated as a sidecar container.
      - name: listener
        securityContext:
          runAsUser: 1000
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "1"
            memory: 1Gi
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

## Using Docker-in-Docker or Kubernetes mode for containers

{% data reusables.actions.actions-runner-controller-unsupported-customization %}

If you are using container jobs and services or container actions, you must set the `containerMode` value to `dind` or `kubernetes`.

To use a custom container mode, comment out or remove `containerMode`, and add your desired configuration to the `template` section. See [Customizing container modes](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#customizing-container-modes).
* For more information on container jobs and services, see [AUTOTITLE](/actions/using-jobs/running-jobs-in-a-container).
* For more information on container actions, see [AUTOTITLE](/actions/creating-actions/creating-a-docker-container-action).

### Using Docker-in-Docker mode

> [!NOTE]
> The Docker-in-Docker container requires privileged mode. For more information, see [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) in the Kubernetes documentation.
>
> By default, the `dind` container uses the `docker:dind` image, which runs the Docker daemon as root. You can replace this image with `docker:dind-rootless` as long as you are aware of the [known limitations](https://docs.docker.com/engine/security/rootless/#known-limitations) and run the pods with `--privileged` mode.

To learn how to customize the Docker-in-Docker configuration, see [Customizing container modes](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#customizing-container-modes).

Docker-in-Docker mode is a configuration that allows you to run Docker inside a Docker container. In this configuration, for each runner pod created, ARC creates the following containers.
* An `init` container
* A `runner` container
* A `dind` container

To enable Docker-in-Docker mode, set the `containerMode.type` to `dind` as follows.

```yaml
containerMode:
  type: "dind"
```

The `template.spec` will be updated to the following default configuration. For versions of Kubernetes `>= v1.29`, a sidecar container is used to run the Docker daemon.

```yaml
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      - name: dind
        image: docker:dind
        args:
          - dockerd
          - --host=unix:///var/run/docker.sock
          - --group=$(DOCKER_GROUP_GID)
        env:
          - name: DOCKER_GROUP_GID
            value: "123"
        securityContext:
          privileged: true
        restartPolicy: Always
        startupProbe:
          exec:
            command:
              - docker
              - info
          initialDelaySeconds: 0
          failureThreshold: 24
          periodSeconds: 5
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /var/run
          - name: dind-externals
            mountPath: /home/runner/externals
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///var/run/docker.sock
          - name: RUNNER_WAIT_FOR_DOCKER_IN_SECONDS
            value: "120"
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /var/run
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
```

For versions of Kubernetes `< v1.29`, the following configuration will be applied:

```yaml
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value:
              unix:///var/run/docker.sock
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /var/run
      - name: dind
        image: docker:dind
        args:
          - dockerd
          - --host=unix:///var/run/docker.sock
          - --group=$(DOCKER_GROUP_GID)
        env:
          - name: DOCKER_GROUP_GID
            value: "123"
        securityContext:
          privileged: true
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /var/run
          - name: dind-externals
            mountPath: /home/runner/externals
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
```

The values in `template.spec` are automatically injected and cannot be overridden. If you want to customize this setup, you must unset `containerMode.type`, then copy this configuration and apply it directly in your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

### Using Kubernetes mode

In Kubernetes mode, ARC uses runner container hooks to create a new pod in the same namespace to run the service, container job, or action.

#### Prerequisites

Kubernetes mode supports two approaches for sharing job data between the runner pod and the container job pod. You can use persistent volumes, which remain the recommended option for scenarios requiring concurrent write access, or you can use container lifecycle hooks to restore and export job filesystems between pods without relying on RWX volumes. The lifecycle hook approach improves portability and performance by leveraging local storage and is ideal for clusters without shared storage.
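If you choose the persistent-volume approach, the runner pods claim volumes through a StorageClass backed by a dynamic provisioner. The following is a rough sketch rather than an official configuration: the `provisioner` value assumes the OpenEBS local PV hostpath provisioner is installed in your cluster, and `dynamic-blob-storage` is an arbitrary name that must match the `storageClassName` you configure in `values.yaml`.

```yaml
# Hypothetical StorageClass for dynamically provisioning runner work volumes.
# The provisioner assumes OpenEBS local PV hostpath is installed; substitute
# the provisioner appropriate for your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic-blob-storage
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

`WaitForFirstConsumer` delays volume binding until a runner pod is scheduled, which is the usual choice for node-local storage.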
#### Configuring Kubernetes mode with persistent volumes

To use Kubernetes mode, you must create persistent volumes that the runner pods can claim, and use a solution that automatically provisions these volumes on demand. For testing, you can use a solution like [OpenEBS](https://github.com/openebs/openebs).

To enable Kubernetes mode, set the `containerMode.type` to `kubernetes` in your [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file.

```yaml
containerMode:
  type: "kubernetes"
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "dynamic-blob-storage"
    resources:
      requests:
        storage: 1Gi
```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

#### Configuring Kubernetes mode with container lifecycle hooks

To enable Kubernetes mode using container lifecycle hooks, set the `containerMode.type` to `kubernetes-novolume` in your `values.yaml` file:

```yaml
containerMode:
  type: "kubernetes-novolume"
```

> [!NOTE]
> When using `kubernetes-novolume` mode, the container must run as `root` to support lifecycle hook operations.

#### Troubleshooting Kubernetes mode

When Kubernetes mode is enabled, workflows that are not configured with a container job will fail with an error similar to:

```bash
Jobs without a job container are forbidden on this runner, please add a 'container:' to your job or contact your self-hosted runner administrator.
```

To allow jobs without a job container to run, set `ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER` to `false` on your runner container. This instructs the runner to disable this check.

> [!WARNING]
> Allowing jobs to run without a container in `kubernetes` or `kubernetes-novolume` mode can give the runner pod elevated privileges with the Kubernetes API server, including the ability to create pods and access secrets. Before changing this default, we recommend carefully reviewing the potential security implications.

```yaml
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
            value: "false"
```

### Customizing container modes

When you set the `containerMode` in the `values.yaml` file for the [`gha-runner-scale-set` helm chart](https://github.com/actions/actions-runner-controller/blob/5347e2c2c80fbc45be7390eab117e861d30776d1/charts/gha-runner-scale-set/values.yaml#L77), you can use either of the following values:

* `dind`, or
* `kubernetes`

Depending on which value you set for `containerMode`, a configuration will automatically be injected into the `template` section of the `values.yaml` file for the `gha-runner-scale-set` helm chart.

* See the [`dind` configuration](https://github.com/actions/actions-runner-controller/blob/5347e2c2c80fbc45be7390eab117e861d30776d1/charts/gha-runner-scale-set/values.yaml#L110).
* See the [`kubernetes` configuration](https://github.com/actions/actions-runner-controller/blob/5347e2c2c80fbc45be7390eab117e861d30776d1/charts/gha-runner-scale-set/values.yaml#L160).

To customize the spec, comment out or remove `containerMode`, and append the configuration you want in the `template` section.

#### Example: running `dind-rootless`

Before deciding to run `dind-rootless`, make sure you are aware of the [known limitations](https://docs.docker.com/engine/security/rootless/#known-limitations).
{% ifversion not ghes %}

For versions of Kubernetes `>= v1.29`, a sidecar container is used to run the Docker daemon.

```yaml
## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: "https://github.com/actions/actions-runner-controller"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
githubConfigSecret: my-super-safe-secret

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 0

runnerGroup: "my-custom-runner-group"

## name of the runner scale set to create. Defaults to the helm release name
runnerScaleSetName: "my-awesome-scale-set"

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      - name: init-dind-rootless
        image:
          docker:dind-rootless
        command:
          - sh
          - -c
          - |
            set -x
            cp -a /etc/. /dind-etc/
            echo 'runner:x:1001:1001:runner:/home/runner:/bin/ash' >> /dind-etc/passwd
            echo 'runner:x:1001:' >> /dind-etc/group
            echo 'runner:100000:65536' >> /dind-etc/subgid
            echo 'runner:100000:65536' >> /dind-etc/subuid
            chmod 755 /dind-etc;
            chmod u=rwx,g=rx+s,o=rx /dind-home
            chown 1001:1001 /dind-home
        securityContext:
          runAsUser: 0
        volumeMounts:
          - mountPath: /dind-etc
            name: dind-etc
          - mountPath: /dind-home
            name: dind-home
      - name: dind
        image: docker:dind-rootless
        args:
          - dockerd
          - --host=unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        restartPolicy: Always
        startupProbe:
          exec:
            command:
              - docker
              - info
          initialDelaySeconds: 0
          failureThreshold: 24
          periodSeconds: 5
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
          - name: dind-externals
            mountPath: /home/runner/externals
          - name: dind-etc
            mountPath: /etc
          - name: dind-home
            mountPath: /home/runner
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-etc
        emptyDir: {}
      - name: dind-home
        emptyDir: {}
```

For versions of Kubernetes `< v1.29`, the following configuration will be applied:

```yaml
## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: "https://github.com/actions/actions-runner-controller"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
githubConfigSecret: my-super-safe-secret

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 0

runnerGroup: "my-custom-runner-group"

## name of the runner scale set to create. Defaults to the helm release name
runnerScaleSetName: "my-awesome-scale-set"

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      - name: init-dind-rootless
        image: docker:dind-rootless
        command:
          - sh
          - -c
          - |
            set -x
            cp -a /etc/. \
              /dind-etc/
            echo 'runner:x:1001:1001:runner:/home/runner:/bin/ash' >> /dind-etc/passwd
            echo 'runner:x:1001:' >> /dind-etc/group
            echo 'runner:100000:65536' >> /dind-etc/subgid
            echo 'runner:100000:65536' >> /dind-etc/subuid
            chmod 755 /dind-etc;
            chmod u=rwx,g=rx+s,o=rx /dind-home
            chown 1001:1001 /dind-home
        securityContext:
          runAsUser: 0
        volumeMounts:
          - mountPath: /dind-etc
            name: dind-etc
          - mountPath: /dind-home
            name: dind-home
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
      - name: dind
        image: docker:dind-rootless
        args:
          - dockerd
          - --host=unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
          - name: dind-externals
            mountPath: /home/runner/externals
          - name: dind-etc
            mountPath: /etc
          - name: dind-home
            mountPath: /home/runner
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-etc
        emptyDir: {}
      - name: dind-home
        emptyDir: {}
```

{% endif %}

{% ifversion ghes %}

For versions of Kubernetes `>= v1.29`, a sidecar container is used to run the Docker daemon.

```yaml
## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https:///enterprises/my_enterprise or https:///myorg
githubConfigUrl: "https:///actions/actions-runner-controller"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
githubConfigSecret: my-super-safe-secret

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 0

runnerGroup: "my-custom-runner-group"

## name of the runner scale set to create. Defaults to the helm release name
runnerScaleSetName: "my-awesome-scale-set"

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      - name: init-dind-rootless
        image: docker:dind-rootless
        command:
          - sh
          - -c
          - |
            set -x
            cp -a /etc/. \
              /dind-etc/
            echo 'runner:x:1001:1001:runner:/home/runner:/bin/ash' >> /dind-etc/passwd
            echo 'runner:x:1001:' >> /dind-etc/group
            echo 'runner:100000:65536' >> /dind-etc/subgid
            echo 'runner:100000:65536' >> /dind-etc/subuid
            chmod 755 /dind-etc;
            chmod u=rwx,g=rx+s,o=rx /dind-home
            chown 1001:1001 /dind-home
        securityContext:
          runAsUser: 0
        volumeMounts:
          - mountPath: /dind-etc
            name: dind-etc
          - mountPath: /dind-home
            name: dind-home
      - name: dind
        image: docker:dind-rootless
        args:
          - dockerd
          - --host=unix:///run/user/1001/docker.sock
        env:
          - name: DOCKER_HOST
            value: unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        restartPolicy: Always
        startupProbe:
          exec:
            command:
              - docker
              - info
          initialDelaySeconds: 0
          failureThreshold: 24
          periodSeconds: 5
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
          - name: dind-externals
            mountPath: /home/runner/externals
          - name: dind-etc
            mountPath: /etc
          - name: dind-home
            mountPath: /home/runner
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-etc
        emptyDir: {}
      - name: dind-home
        emptyDir: {}
```

For versions of Kubernetes `< v1.29`, the following configuration can be applied:

```yaml
## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https:///enterprises/my_enterprise or https:///myorg
githubConfigUrl: "https:///actions/actions-runner-controller"

## githubConfigSecret is the k8s secrets to use when auth with GitHub API.
## You can choose to use GitHub App or a PAT token
githubConfigSecret: my-super-safe-secret

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 0

runnerGroup: "my-custom-runner-group"

## name of the runner scale set to create. Defaults to the helm release name
runnerScaleSetName: "my-awesome-scale-set"

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  spec:
    initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      - name: init-dind-rootless
        image: docker:dind-rootless
        command:
          - sh
          - -c
          - |
            set -x
            cp -a /etc/. /dind-etc/
            echo 'runner:x:1001:1001:runner:/home/runner:/bin/ash' >> /dind-etc/passwd
            echo 'runner:x:1001:' >> /dind-etc/group
            echo 'runner:100000:65536' >> /dind-etc/subgid
            echo 'runner:100000:65536' >> /dind-etc/subuid
            chmod 755 /dind-etc;
            chmod u=rwx,g=rx+s,o=rx /dind-home
            chown 1001:1001 /dind-home
        securityContext:
          runAsUser: 0
        volumeMounts:
          - mountPath: /dind-etc
            name: dind-etc
          - mountPath: /dind-home
            name: dind-home
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name:
              dind-sock
            mountPath: /run/user/1001
      - name: dind
        image: docker:dind-rootless
        args:
          - dockerd
          - --host=unix:///run/user/1001/docker.sock
        securityContext:
          privileged: true
          runAsUser: 1001
          runAsGroup: 1001
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/user/1001
          - name: dind-externals
            mountPath: /home/runner/externals
          - name: dind-etc
            mountPath: /etc
          - name: dind-home
            mountPath: /home/runner
    volumes:
      - name: work
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-etc
        emptyDir: {}
      - name: dind-home
        emptyDir: {}
```

{% endif %}

#### Understanding runner-container-hooks

When the runner detects a workflow run that uses a container job, service container, or Docker action, it will call runner-container-hooks to create a new pod. The runner relies on runner-container-hooks to call the Kubernetes APIs and create a new pod in the same namespace as the runner pod. This newly created pod will be used to run the container job, service container, or Docker action. For more information, see the [`runner-container-hooks`](https://github.com/actions/runner-container-hooks) repository.

#### Configuring hook extensions

As of ARC version 0.4.0, runner-container-hooks support hook extensions. You can use these to configure the pod created by runner-container-hooks. For example, you could use a hook extension to set a security context on the pod.
Hook extensions allow you to specify a YAML file that is used to update the [PodSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podspec-v1-core) of the pod created by runner-container-hooks. There are two options to configure hook extensions.

* Store it in your **custom runner image**. You can store the PodSpec in a YAML file anywhere in your custom runner image. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller#creating-your-own-runner-image).
* Store it in a **ConfigMap**. You can create a config map with the PodSpec and mount that config map in the runner container. For more information, see [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) in the Kubernetes documentation.

> [!NOTE]
> With both options, you must set the `ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE` environment variable in the runner container spec to point to the path of the YAML file mounted in the runner container.

##### Example: Using config map to set securityContext

Create a config map in the same namespace as the runner pods. For example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hook-extension
  namespace: arc-runners
data:
  content: |
    metadata:
      annotations:
        example: "extension"
    spec:
      containers:
        - name: "$job" # Target the job container
          securityContext:
            runAsUser: 1000
```

* The `.metadata.labels` and `.metadata.annotations` fields will be appended as is, unless their keys are reserved. You cannot override the `.metadata.name` and `.metadata.namespace` fields.
* The majority of the PodSpec fields are applied from the specified template, and will override the values passed from your Helm chart `values.yaml` file.
* If you specify additional volumes, they will be appended to the default volumes specified by the runner.
* The `spec.containers` are merged based on the names assigned to them.
  * If the name of the container is `$job`:
    * The `spec.containers.name` and `spec.containers.image` fields are ignored.
    * The `spec.containers.env`, `spec.containers.volumeMounts`, and `spec.containers.ports` fields are appended to the default container spec created by the hook.
    * The rest of the fields are applied as provided.
  * If the name of the container is not `$job`, the fields will be added to the pod definition as they are.

## Enabling metrics

> [!NOTE]
> Metrics for ARC are available as of version gha-runner-scale-set-0.5.0.

ARC can emit metrics about your runners, your jobs, and the time spent executing your workflows. Metrics can be used to identify congestion,
monitor the health of your ARC deployment, visualize usage trends, and optimize resource consumption, among many other use cases.

Metrics are emitted by the controller-manager and listener pods in Prometheus format. For more information, see [Exposition formats](https://prometheus.io/docs/instrumenting/exposition_formats/) in the Prometheus documentation.

To enable metrics for ARC, configure the `metrics` property in the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) file of the `gha-runner-scale-set-controller` chart. The following is an example configuration.

```yaml
metrics:
  controllerManagerAddr: ":8080"
  listenerAddr: ":8080"
  listenerEndpoint: "/metrics"
```

> [!NOTE]
> If the `metrics:` object is not provided or is commented out, the following flags will be applied to the controller-manager and listener pods with empty values: `--metrics-addr`, `--listener-metrics-addr`, `--listener-metrics-endpoint`. This will disable metrics for ARC.

Once these properties are configured, your controller-manager and listener pods emit metrics via the `listenerEndpoint` bound to the ports that you specify in your [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) file. In the above example, the endpoint is `/metrics` and the port is `:8080`. You can use this endpoint to scrape metrics from your controller-manager and listener pods.
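As an illustration, a Prometheus server could scrape an endpoint like this with a job along the following lines. This is a sketch rather than an official configuration: the static target address is a placeholder, and in a real cluster you would more likely discover the controller-manager and listener pods dynamically with `kubernetes_sd_configs` instead of listing static targets.

```yaml
# Fragment of a hypothetical prometheus.yml. The target below is a placeholder
# for a controller-manager or listener pod address exposing port 8080.
scrape_configs:
  - job_name: "gha-runner-scale-set"
    metrics_path: "/metrics"
    static_configs:
      - targets: ["10.0.0.15:8080"]
```

The `metrics_path` and port must match the `listenerEndpoint` and address values set in the chart's `metrics` property.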
To turn off metrics, update your [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) file by removing or commenting out the `metrics:` object and its properties.

### Available metrics for ARC

The following table shows the metrics emitted by the controller-manager and listener pods.

> [!NOTE]
> The metrics that the controller-manager emits pertain to the controller runtime and are not owned by {% data variables.product.company_short %}.

| Owner | Metric | Type | Description |
| ------------------ | ------------------------------------------ | --------- | ----------------------------------------------------------------------------------------------------------- |
| controller-manager | gha_controller_pending_ephemeral_runners   | gauge     | Number of ephemeral runners in a pending state                                                               |
| controller-manager | gha_controller_running_ephemeral_runners   | gauge     | Number of ephemeral runners in a running state                                                               |
| controller-manager | gha_controller_failed_ephemeral_runners    | gauge     | Number of ephemeral runners in a failed state                                                                |
| controller-manager | gha_controller_running_listeners           | gauge     | Number of listeners in a running state                                                                       |
| listener           | gha_assigned_jobs                          | gauge     | Number of jobs assigned to the runner scale set                                                              |
| listener           | gha_running_jobs                           | gauge     | Number of jobs running or queued to run                                                                      |
| listener           | gha_registered_runners                     | gauge     | Number of runners registered by the runner scale set                                                         |
| listener           | gha_busy_runners                           | gauge     | Number of registered runners currently running a job                                                         |
| listener           | gha_min_runners                            | gauge     | Minimum number of runners configured for the runner scale set                                                |
| listener           | gha_max_runners                            | gauge     | Maximum number of runners configured for the runner scale set                                                |
| listener           | gha_desired_runners                        | gauge     | Number of runners desired (scale up / down target) by the runner scale set                                   |
| listener           | gha_idle_runners                           | gauge     | Number of registered runners not running a job                                                               |
| listener           | gha_started_jobs_total                     | counter   | Total number of jobs started since the listener became ready [1]                                             |
| listener           | gha_completed_jobs_total                   | counter   | Total number of jobs completed since the listener became ready [1]                                           |
| listener           | gha_job_startup_duration_seconds           | histogram | Number of seconds spent waiting for workflow job to get started on the runner owned by the runner scale set  |
| listener           | gha_job_execution_duration_seconds         | histogram | Number of seconds spent executing workflow jobs by the runner scale set                                      |

[1]: Listener metrics that have the counter type are reset when the listener pod restarts.
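As one illustration of how these metrics might be consumed, the following is a sketch of a Prometheus alerting rule that fires when a scale set has been saturated (all registered runners busy while jobs are still assigned) for ten minutes. The alert name, duration, and grouping are illustrative choices, not part of ARC.

```yaml
groups:
  - name: arc-runner-alerts
    rules:
      - alert: RunnerScaleSetSaturated  # illustrative name
        # Fires when busy runners have reached the configured maximum
        # and jobs are still assigned to the scale set.
        expr: gha_busy_runners >= gha_max_runners and gha_assigned_jobs > 0
        for: 10m
        annotations:
          summary: "Runner scale set is at max capacity with jobs waiting"
```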
{% ifversion ghes %}

## Using ARC with {% data variables.product.prodname_dependabot %} and {% data variables.product.prodname_code_scanning %}

You can use {% data variables.product.prodname_actions_runner_controller %} to create dedicated runners for your {% data variables.product.prodname_ghe_server %} instance that {% data variables.product.prodname_dependabot %} can use to help secure and maintain the dependencies used in repositories on your enterprise. For more information, see [AUTOTITLE](/admin/github-actions/enabling-github-actions-for-github-enterprise-server/managing-self-hosted-runners-for-dependabot-updates#system-requirements-for-dependabot-runners).

You can also use ARC with {% data variables.product.prodname_codeql %} to identify vulnerabilities and errors in your code. For more information, see [AUTOTITLE](/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql). If you're already using {% data variables.product.prodname_code_scanning %} and want to configure a runner scale set to use default setup, set `INSTALLATION_NAME=code-scanning`. For more information about {% data variables.product.prodname_code_scanning %} default setup, see [AUTOTITLE](/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning).

{% data variables.product.prodname_actions_runner_controller %} does not use multiple labels to route jobs to specific runner scale sets. Instead, to designate a runner scale set for {% data variables.product.prodname_dependabot %} updates or {% data variables.product.prodname_code_scanning %} with {% data variables.product.prodname_codeql %}, use a descriptive installation name in your Helm chart, such as `dependabot` or `code-scanning`. You can then set the `runs-on` value in your workflows to the installation name as the single label, and use the designated runner scale set for {% data variables.product.prodname_dependabot %} updates or {% data variables.product.prodname_code_scanning %} jobs.

If you're using default setup for {% data variables.product.prodname_code_scanning %}, the analysis will automatically look for a runner scale set with the installation name `code-scanning`{% ifversion code-scanning-default-setup-customize-labels %}, but you can specify a custom name in the configuration, so that individual repositories can use different runner scale sets. See [AUTOTITLE](/code-security/code-scanning/enabling-code-scanning/configuring-default-setup-for-code-scanning#assigning-labels-to-runners){% endif %}.

> [!NOTE]
> The [Dependabot Action](https://github.com/github/dependabot-action) is used to run {% data variables.product.prodname_dependabot %} updates via {% data variables.product.prodname_actions %}. This action requires Docker as a dependency. For this reason, you can only use {% data variables.product.prodname_actions_runner_controller %} with {% data variables.product.prodname_dependabot %} when Docker-in-Docker (DinD) mode is enabled. For more information, see [AUTOTITLE](/admin/github-actions/enabling-github-actions-for-github-enterprise-server/managing-self-hosted-runners-for-dependabot-updates#system-requirements-for-dependabot-runners) and [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#using-docker-in-docker-or-kubernetes-mode-for-containers).
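To make the routing concrete, a workflow job intended for a scale set installed with the name `code-scanning` would reference that installation name as its single `runs-on` label. This is a sketch; `code-scanning` here is the installation name chosen at Helm install time, as described above.

```yaml
jobs:
  analyze:
    # The single label must match the runner scale set's installation name.
    runs-on: code-scanning
    steps:
      - uses: {% data reusables.actions.action-checkout %}
```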
{% endif %}

## Upgrading ARC

Because there is no support for upgrading or deleting CRDs with Helm, it is not possible to use Helm to upgrade ARC. For more information, see [Custom Resource Definitions](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations) in the Helm documentation. To upgrade ARC to a newer version, you must complete the following steps.

1. Uninstall all installations of `gha-runner-scale-set`.
1. Wait for resources cleanup.
1. Uninstall ARC.
1. If there is a change in CRDs from the version you currently have installed to the upgraded version, remove all CRDs associated with the `actions.github.com` API group.
1. Reinstall ARC. For more information, see [Deploying a runner scale set](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#deploying-a-runner-scale-set).

If you would like to upgrade ARC but are concerned about downtime, you can deploy ARC in a high availability configuration to ensure runners are always available. For more information, see [High availability and automatic failover](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#high-availability-and-automatic-failover).

> [!NOTE]
> Transitioning from the [community supported version of ARC](https://github.com/actions/actions-runner-controller/discussions/2775) to the GitHub supported version is a substantial architectural change. The GitHub supported version involves a redesign of many components of ARC. It is not a minor software upgrade. For these reasons, we recommend testing the new versions in a staging environment that matches your production environment first. This will ensure stability and reliability of the setup before deploying in production.
### Deploying a canary image

You can test features before they are released by using canary releases of the controller-manager container image. Canary images are published with the tag format `canary-SHORT_SHA`. For more information, see [`gha-runner-scale-set-controller`](https://github.com/actions/actions-runner-controller/pkgs/container/gha-runner-scale-set-controller) on the {% data variables.product.prodname_container_registry %}.

> [!NOTE]
> * You must use Helm charts on your local file system.
> * You cannot use the released Helm charts.

1. Update the `tag` in the [gha-runner-scale-set-controller `values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set-controller/values.yaml) file to: `canary-SHORT_SHA`
1. Update the field `appVersion` in the [`Chart.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/Chart.yaml) file for `gha-runner-scale-set` to: `canary-SHORT_SHA`
1. Re-install ARC using the updated Helm chart and `values.yaml` files.

## High availability and automatic failover

ARC can be deployed in a high availability (active-active) configuration. If you have two distinct Kubernetes clusters deployed in separate regions, you can deploy ARC in both clusters and configure runner scale sets to use the same `runnerScaleSetName`. In order to do this, each runner scale set must be assigned to a distinct runner group.

For example, you can have two runner scale sets each named `arc-runner-set`, as long as one runner scale set belongs to `runner-group-A` and the other runner scale set belongs to `runner-group-B`.
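Under that example, the two deployments would differ only in their runner group. The following is a sketch of the scale set `values.yaml` for each cluster; the organization URL and secret name are placeholder assumptions, and `runnerGroup` is the chart field that assigns a scale set to a group.

```yaml
# Cluster 1
githubConfigUrl: "https://github.com/my-org"  # placeholder organization URL
githubConfigSecret: pre-defined-secret
runnerScaleSetName: "arc-runner-set"
runnerGroup: "runner-group-A"
---
# Cluster 2: same scale set name, different runner group
githubConfigUrl: "https://github.com/my-org"
githubConfigSecret: pre-defined-secret
runnerScaleSetName: "arc-runner-set"
runnerGroup: "runner-group-B"
```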
For information on assigning runner scale sets to runner groups, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups).

If both runner scale sets are online, jobs assigned to them will be distributed arbitrarily (assignment race). You cannot configure the job assignment algorithm. If one of the clusters goes down, the runner scale set in the other cluster will continue to acquire jobs normally without any intervention or configuration change.

## Using ARC across organizations

A single installation of {% data variables.product.prodname_actions_runner_controller %} allows you to configure one or more runner scale sets. These runner scale sets can be registered to a repository, organization, or enterprise. You can also use runner groups to control the permissions boundaries of these runner scale sets.

As a best practice, create a unique namespace for each organization. You could also create a namespace for each runner group or each runner scale set. You can install as many runner scale sets as needed in each namespace. This will provide you the highest levels of isolation and improve your security. You can use {% data variables.product.prodname_github_apps %} for authentication and define granular permissions for each runner scale set.

## Legal notice

{% data reusables.actions.actions-runner-controller-legal-notice %}
You can authenticate {% data variables.product.prodname_actions_runner_controller %} (ARC) to the {% data variables.product.prodname_dotcom %} API by using a {% data variables.product.prodname_github_app %} or by using a {% data variables.product.pat_v1 %}.

> [!NOTE]
> You cannot authenticate using a {% data variables.product.prodname_github_app %} for runners at the enterprise level. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/managing-access-to-self-hosted-runners-using-groups#about-runner-groups).

## Authenticating ARC with a {% data variables.product.prodname_github_app %}

1. Create a {% data variables.product.prodname_github_app %} that is owned by an organization. For more information, see [AUTOTITLE](/apps/creating-github-apps/creating-github-apps/creating-a-github-app). Configure the {% data variables.product.prodname_github_app %} as follows.
   1. For "Homepage URL," enter `https://github.com/actions/actions-runner-controller`.
   1. Under "Permissions," click **Repository permissions**. Then use the dropdown menus to select the following access permissions.
      * **Administration:** Read and write

        > [!NOTE]
        > `Administration: Read and write` is only required when configuring {% data variables.product.prodname_actions_runner_controller %} to register at the repository scope. It is not required to register at the organization scope.

      * **Metadata:** Read-only
   1. Under "Permissions," click **Organization permissions**. Then use the dropdown menus to select the following access permissions.
      * **Self-hosted runners:** Read and write

{% data reusables.actions.arc-app-post-install-steps %}

1. In the menu at the top-left corner of the page, click **Install app**, and next to your organization, click **Install** to install the app on your organization.
1. After confirming the installation permissions on your organization, note the app installation ID.
   You will use it later. You can find the app installation ID on the app installation page, which has the following URL format: `https://{% data variables.product.product_url %}/organizations/ORGANIZATION/settings/installations/INSTALLATION_ID`

{% data reusables.actions.arc-app-post-install-set-secrets %}

## Authenticating ARC with a {% data variables.product.pat_v1 %}

ARC can use {% data variables.product.pat_v1_plural %} to register self-hosted runners.

{% ifversion ghec or ghes %}

> [!NOTE]
> Authenticating ARC with a {% data variables.product.pat_v1 %} is the only supported authentication method to register runners at the enterprise level.

{% endif %}

1. Create a {% data variables.product.pat_v1 %} with the required scopes. The required scopes are different depending on whether you are registering runners at the repository{% ifversion ghec or ghes %}, organization, or enterprise{% else %} or organization{% endif %} level. For more information on how to create a {% data variables.product.pat_v1 %}, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#creating-a-personal-access-token-classic).

   The following is the list of required {% data variables.product.pat_generic %} scopes for ARC runners.

   * Repository runners: `repo`
   * Organization runners: `admin:org`
   {% ifversion ghec or ghes %}
   * Enterprise runners: `manage_runners:enterprise`
   {% endif %}

1. To create a Kubernetes secret with the value of your {% data variables.product.pat_v1 %}, use the following command. {% data reusables.actions.arc-runners-namespace %}

   ```bash copy
   kubectl create secret generic pre-defined-secret \
     --namespace=arc-runners \
     --from-literal=github_token='YOUR-PAT'
   ```

1. In your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file, pass the secret name as a reference.
   ```yaml
   githubConfigSecret: pre-defined-secret
   ```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

## Authenticating ARC with a {% data variables.product.pat_v2 %}

ARC can use {% data variables.product.pat_v2_plural %} to register self-hosted runners.

{% ifversion ghec or ghes %}

> [!NOTE]
> Authenticating ARC with a {% data variables.product.pat_v1 %} is the only supported authentication method to register runners at the enterprise level.

{% endif %}

1. Create a {% data variables.product.pat_v2 %} with the required scopes. The required scopes are different depending on whether you are registering runners at the repository or organization level. For more information on how to create a {% data variables.product.pat_v2 %}, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token#creating-a-fine-grained-personal-access-token).
   The following is the list of required {% data variables.product.pat_generic %} scopes for ARC runners.

   * Repository runners:
     * **Administration:** Read and write
   * Organization runners:
     * **Administration:** Read
     * **Self-hosted runners:** Read and write

1. To create a Kubernetes secret with the value of your {% data variables.product.pat_v2 %}, use the following command. {% data reusables.actions.arc-runners-namespace %}

   ```bash copy
   kubectl create secret generic pre-defined-secret \
     --namespace=arc-runners \
     --from-literal=github_token='YOUR-PAT'
   ```

1. In your copy of the [`values.yaml`](https://github.com/actions/actions-runner-controller/blob/master/charts/gha-runner-scale-set/values.yaml) file, pass the secret name as a reference.

   ```yaml
   githubConfigSecret: pre-defined-secret
   ```

{% data reusables.actions.actions-runner-controller-helm-chart-options %}

## Authenticating ARC with vault secrets

> [!NOTE]
> Vault integration is currently available in public preview with support for Azure Key Vault.

Starting with gha-runner-scale-set version 0.12.0, ARC supports retrieving GitHub credentials from an external vault. Vault integration is configured per runner scale set. This means you can run some scale sets using Kubernetes secrets while others use vault-based secrets, depending on your security and operational requirements.

### Enabling Vault Integration

To enable vault integration for a runner scale set:

1. **Set the `githubConfigSecret` field** in your `values.yaml` file to the name of the secret key stored in your vault. This value must be a string.
1. **Uncomment and configure the `keyVault` section** in your `values.yaml` file with the appropriate provider and access details.
1. **Provide the required certificate** (`.pfx`) to both the controller and the listener.
   You can do this by:

   * Rebuilding the controller image with the certificate included, or
   * Mounting the certificate as a volume in both the controller and the listener using the `listenerTemplate` and `controllerManager` fields.

### Secret Format

The secret stored in Azure Key Vault must be in JSON format. The structure depends on the type of authentication you are using:

#### Example: GitHub Token

```json
{
  "github_token": "TOKEN"
}
```

#### Example: GitHub App

```json
{
  "github_app_id": "APP_ID_OR_CLIENT_ID",
  "github_app_installation_id": "INSTALLATION_ID",
  "github_app_private_key": "PRIVATE_KEY"
}
```

### Configuring `values.yaml` for Vault Integration

The certificate is stored as a `.pfx` file and mounted to the container at `/akv/cert.pfx`. Below is an example of how to configure the `keyVault` section to use this certificate for authentication:

```yaml
keyVault:
  type: "azure_key_vault"
  proxy:
    https:
      url: "PROXY_URL"
      credentialSecretRef: "PROXY_CREDENTIALS_SECRET_NAME"
    http: {}
    noProxy: []
  azureKeyVault:
    clientId:
    tenantId:
    url:
    certificatePath: "/akv/cert.pfx"
```

### Providing the Certificate to the Controller and Listener

ARC requires a `.pfx` certificate to authenticate with the vault. This certificate must be made available to both the controller and the listener components during controller installation. You can do this by mounting the certificate as a volume using the `controllerManager` and `listenerTemplate` fields in your `values.yaml` file:

```yaml
volumes:
  - name: cert-volume
    secret:
      secretName: my-cert-secret
volumeMounts:
  - mountPath: /akv
    name: cert-volume
    readOnly: true
listenerTemplate:
  volumeMounts:
    - name: cert-volume
      mountPath: /akv/certs
      readOnly: true
  volumes:
    - name: cert-volume
      secret:
        secretName: my-cert-secret
```

The code below is an example of a scale set `values.yml` file.
```yaml
listenerTemplate:
  spec:
    containers:
      - name: listener
        volumeMounts:
          - name: cert-volume
            mountPath: /akv
            readOnly: true
    volumes:
      - name: cert-volume
        secret:
          secretName: my-cert-secret
```

## Legal notice

{% data reusables.actions.actions-runner-controller-legal-notice %}
## Using ARC runners in a workflow file

To assign jobs to run on a runner scale set, you can specify the name of the scale set as the value for the `runs-on` key in your {% data variables.product.prodname_actions %} workflow file.

For example, the following configuration for a runner scale set has the `INSTALLATION_NAME` value set to `arc-runner-set`.

```bash
# Using a {% data variables.product.pat_generic_title_case %} (PAT)
INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"
GITHUB_CONFIG_URL="https://github.com/"
GITHUB_PAT=""
helm install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    --set githubConfigUrl="${GITHUB_CONFIG_URL}" \
    --set githubConfigSecret.github_token="${GITHUB_PAT}" \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
```

To use this configuration in a workflow, set the value of the `runs-on` key in your workflow to `arc-runner-set`, similar to the following example.

```yaml
jobs:
  job_name:
    runs-on: arc-runner-set
```

## Using runner scale set names

Runner scale set names are unique within the runner group they belong to. To deploy multiple runner scale sets with the same name, they must belong to different runner groups. For more information about specifying runner scale set names, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller).

{% data reusables.actions.actions-runner-controller-labels %}

For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/deploying-runner-scale-sets-with-actions-runner-controller#scaling-runners).

## Legal notice

{% data reusables.actions.actions-runner-controller-legal-notice %}
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

In this guide, you'll learn about the basic components needed to create and use a packaged composite action. To focus this guide on the components needed to package the action, the functionality of the action's code is minimal. The action prints "Hello World" and then "Goodbye", or if you provide a custom name, it prints "Hello [who-to-greet]" and then "Goodbye". The action also maps a random number to the `random-number` output variable, and runs a script named `goodbye.sh`.

Once you complete this project, you should understand how to build your own composite action and test it in a workflow.

{% data reusables.actions.context-injection-warning %}

### Composite actions and reusable workflows

Composite actions allow you to collect a series of workflow job steps into a single action which you can then run as a single job step in multiple workflows. Reusable workflows provide another way of avoiding duplication, by allowing you to run a complete workflow from within other workflows. For more information, see [AUTOTITLE](/actions/using-workflows/avoiding-duplication).

## Prerequisites

> [!NOTE]
> This example explains how to create a composite action within a separate repository. However, it is possible to create a composite action within the same repository. For more information, see [AUTOTITLE](/actions/creating-actions/creating-a-composite-action#creating-a-composite-action-within-the-same-repository).

Before you begin, you'll create a repository on {% data variables.product.github %}.

1. Create a new public repository on {% data variables.product.github %}. You can choose any repository name, or use the following `hello-world-composite-action` example. You can add these files after your project has been pushed to {% data variables.product.github %}. For more information, see [AUTOTITLE](/repositories/creating-and-managing-repositories/creating-a-new-repository).
1. Clone your repository to your computer. For more information, see [AUTOTITLE](/repositories/creating-and-managing-repositories/cloning-a-repository).
1. From your terminal, change directories into your new repository.

   ```shell copy
   cd hello-world-composite-action
   ```

1. In the `hello-world-composite-action` repository, create a new file called `goodbye.sh` with example code:

   ```shell copy
   echo "echo Goodbye" > goodbye.sh
   ```

1. From your terminal, make `goodbye.sh` executable.

   {% linux %}
   {% data reusables.actions.composite-actions-executable-linux-mac %}
   {% endlinux %}
   {% mac %}
   {% data reusables.actions.composite-actions-executable-linux-mac %}
   {% endmac %}
   {% windows %}

   ```shell copy
   git add --chmod=+x -- goodbye.sh
   ```

   {% endwindows %}

1. From your terminal, check in your `goodbye.sh` file.

   {% linux %}
   {% data reusables.actions.composite-actions-commit-file-linux-mac %}
   {% endlinux %}
   {% mac %}
   {% data reusables.actions.composite-actions-commit-file-linux-mac %}
   {% endmac %}
   {% windows %}

   ```shell copy
   git commit -m "Add goodbye script"
   git push
   ```

   {% endwindows %}

## Creating an action metadata file

1. In the `hello-world-composite-action` repository, create a new file called `action.yml` and add the following example code. For more information about this syntax, see [AUTOTITLE](/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-composite-actions).

   ```yaml copy
   name: 'Hello World'
   description: 'Greet someone'
   inputs:
     who-to-greet:  # id of input
       description: 'Who to greet'
       required: true
       default: 'World'
   outputs:
     random-number:
       description: "Random number"
       value: {% raw %}${{ steps.random-number-generator.outputs.random-number }}{% endraw %}
   runs:
     using: "composite"
     steps:
       - name: Set Greeting
         run: echo "Hello $INPUT_WHO_TO_GREET."
         shell: bash
         env:
           INPUT_WHO_TO_GREET: {% raw %}${{ inputs.who-to-greet }}{% endraw %}

       - name: Random Number Generator
         id: random-number-generator
         run: echo "random-number=$(echo $RANDOM)" >> $GITHUB_OUTPUT
         shell: bash

       - name: Set GitHub Path
         run: echo "$GITHUB_ACTION_PATH" >> $GITHUB_PATH
         shell: bash
         env:
           GITHUB_ACTION_PATH: {% raw %}${{ github.action_path }}{% endraw %}

       - name: Run goodbye.sh
         run: goodbye.sh
         shell: bash
   ```

   This file defines the `who-to-greet` input, maps the random generated number to the `random-number` output variable, adds the action's path to the runner system path (to locate the `goodbye.sh` script during execution), and runs the `goodbye.sh` script. For more information about managing outputs, see [AUTOTITLE](/actions/creating-actions/metadata-syntax-for-github-actions#outputs-for-composite-actions). For more information about how to use `github.action_path`, see [AUTOTITLE](/actions/learn-github-actions/contexts#github-context).
1. From your terminal, check in your `action.yml` file.

   ```shell copy
   git add action.yml
   git commit -m "Add action"
   git push
   ```

1. From your terminal, add a tag. This example uses a tag called `v1`. For more information, see [AUTOTITLE](/actions/creating-actions/about-custom-actions#using-release-management-for-actions).

   ```shell copy
   git tag -a -m "Description of this release" v1
   git push --follow-tags
   ```

## Testing out your action in a workflow

The following workflow code uses the completed hello world action that you made in [AUTOTITLE](/actions/creating-actions/creating-a-composite-action#creating-an-action-metadata-file).

Copy the workflow code into a `.github/workflows/main.yml` file in another repository, replacing `OWNER` and `SHA` with the repository owner and the SHA of the commit you want to use, respectively. You can also replace the `who-to-greet` input with your name.

```yaml copy
on: [push]

jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: A job to say hello
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - id: foo
        uses: OWNER/hello-world-composite-action@SHA
        with:
          who-to-greet: 'Mona the Octocat'
      - run: echo random-number "$RANDOM_NUMBER"
        shell: bash
        env:
          RANDOM_NUMBER: {% raw %}${{ steps.foo.outputs.random-number }}{% endraw %}
```

From your repository, click the **Actions** tab, and select the latest workflow run. The output should include: "Hello Mona the Octocat", the result of the "Goodbye" script, and a random number.

## Creating a composite action within the same repository
Create a new subfolder called `hello-world-composite-action`, this can be placed in any subfolder within the repository. However, it is recommended that this be placed in the `.github/actions` subfolder to make organization easier. 1. In the `hello-world-composite-action` folder, do the same steps to create the `goodbye.sh` script ```shell copy echo "echo Goodbye" > goodbye.sh ``` {% linux %} {% data reusables.actions.composite-actions-executable-linux-mac %} {% endlinux %} {% mac %} {% data reusables.actions.composite-actions-executable-linux-mac %} {% endmac %} {% windows %} ```shell copy git add --chmod=+x -- goodbye.sh ``` {% endwindows %} {% linux %} {% data reusables.actions.composite-actions-commit-file-linux-mac %} {% endlinux %} {% mac %} {% data reusables.actions.composite-actions-commit-file-linux-mac %} {% endmac %} {% windows %} ```shell copy git commit -m "Add goodbye script" git push ``` {% endwindows %} 1. In the `hello-world-composite-action` folder, create the `action.yml` file based on the steps in [AUTOTITLE](/actions/creating-actions/creating-a-composite-action#creating-an-action-metadata-file). 1. When using the action, use the relative path to the folder where the composite action's `action.yml` file is located in the `uses` key. The below example assumes it is in the `.github/actions/hello-world-composite-action` folder. ```yaml copy on: [push] jobs: hello\_world\_job: runs-on: ubuntu-latest name: A job to say hello steps: - uses: {% data reusables.actions.action-checkout %} - id: foo uses: ./.github/actions/hello-world-composite-action with: who-to-greet: 'Mona the Octocat' - run: echo random-number "$RANDOM\_NUMBER" shell: bash env: RANDOM\_NUMBER: {% raw %}${{ steps.foo.outputs.random-number }}{% endraw %} ``` ## Example composite actions on {% data variables.product.github %} You can find many examples of composite actions on {% data variables.product.github %}. 
\* [microsoft/action-python](https://github.com/microsoft/action-python) \* [microsoft/gpt-review](https://github.com/microsoft/gpt-review) \* [tailscale/github-action](https://github.com/tailscale/github-action) | https://github.com/github/docs/blob/main//content/actions/tutorials/create-actions/create-a-composite-action.md | main | github-actions | [
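The test workflows above read the `who-to-greet` input and the `random-number` output, and run the tutorial's `goodbye.sh` from the action's path. As a hedged sketch of the composite `action.yml` they exercise (the step id `random-number-generator` is illustrative, not taken from this excerpt):

```yaml
name: Hello World
description: Greet someone
inputs:
  who-to-greet:
    description: Who to greet
    required: true
    default: World
outputs:
  random-number:
    description: A random number
    value: ${{ steps.random-number-generator.outputs.random-number }}
runs:
  using: composite
  steps:
    - run: echo "Hello ${{ inputs.who-to-greet }}."
      shell: bash
    - id: random-number-generator
      run: echo "random-number=$RANDOM" >> "$GITHUB_OUTPUT"
      shell: bash
    - run: echo "${{ github.action_path }}" >> "$GITHUB_PATH"
      shell: bash
    - run: goodbye.sh
      shell: bash
```

Note that a composite action exposes an output only if the `value` field maps it to a step output, which is what lets the calling workflow read `steps.foo.outputs.random-number`.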
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

In this guide, you'll learn about the basic components needed to create and use a packaged JavaScript action. To focus this guide on the components needed to package the action, the functionality of the action's code is minimal. The action prints "Hello World" in the logs, or "Hello [who-to-greet]" if you provide a custom name.

This guide uses the {% data variables.product.prodname_actions %} Toolkit Node.js module to speed up development. For more information, see the [actions/toolkit](https://github.com/actions/toolkit) repository.

Once you complete this project, you should understand how to build your own JavaScript action and test it in a workflow.

{% data reusables.actions.pure-javascript %}

{% data reusables.actions.context-injection-warning %}

## Prerequisites

Before you begin, you'll need to download Node.js and create a public {% data variables.product.prodname_dotcom %} repository.

1. Download and install Node.js 20.x, which includes npm, from https://nodejs.org/en/download/.
1. Create a new public repository on {% data variables.product.github %} and call it "hello-world-javascript-action". For more information, see [AUTOTITLE](/repositories/creating-and-managing-repositories/creating-a-new-repository).
1. Clone your repository to your computer. For more information, see [AUTOTITLE](/repositories/creating-and-managing-repositories/cloning-a-repository).
1. From your terminal, change directories into your new repository.

   ```shell copy
   cd hello-world-javascript-action
   ```

1. From your terminal, initialize the directory with npm to generate a `package.json` file.

   ```shell copy
   npm init -y
   ```

## Creating an action metadata file

Create a new file named `action.yml` in the `hello-world-javascript-action` directory with the following example code. For more information, see [AUTOTITLE](/actions/creating-actions/metadata-syntax-for-github-actions).

```yaml copy
name: Hello World
description: Greet someone and record the time

inputs:
  who-to-greet: # id of input
    description: Who to greet
    required: true
    default: World

outputs:
  time: # id of output
    description: The time we greeted you

runs:
  using: node20
  main: dist/index.js
```

This file defines the `who-to-greet` input and `time` output. It also tells the action runner how to start running this JavaScript action.

## Adding actions toolkit packages

The actions toolkit is a collection of Node.js packages that allow you to quickly build JavaScript actions with more consistency.

The toolkit [`@actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) package provides an interface to the workflow commands, input and output variables, exit statuses, and debug messages.

The toolkit also offers a [`@actions/github`](https://github.com/actions/toolkit/tree/main/packages/github) package that returns an authenticated Octokit REST client and access to GitHub Actions contexts.

The toolkit offers more than the `core` and `github` packages. For more information, see the [actions/toolkit](https://github.com/actions/toolkit) repository.

At your terminal, install the actions toolkit `core` and `github` packages.

```shell copy
npm install @actions/core @actions/github
```

You should now see a `node_modules` directory and a `package-lock.json` file, which track any installed dependencies and their versions. You should not commit the `node_modules` directory to your repository.

## Writing the action code

This action uses the toolkit to get the `who-to-greet` input variable required in the action's metadata file and prints "Hello [who-to-greet]" in a debug message in the log. Next, the script gets the current time and sets it as an output variable that actions running later in a job can use.

GitHub Actions provide context information about the webhook event, Git refs, workflow, action, and the person who triggered the workflow. To access the context information, you can use the `github` package. The action you'll write will print the webhook event payload to the log.

Add a new file called `src/index.js`, with the following code.

{% raw %}

```javascript copy
import * as core from "@actions/core";
import * as github from "@actions/github";

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput("who-to-greet");
  core.info(`Hello ${nameToGreet}!`);

  // Get the current time and set it as an output variable
  const time = new Date().toTimeString();
  core.setOutput("time", time);

  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2);
  core.info(`The event payload: ${payload}`);
} catch (error) {
  core.setFailed(error.message);
}
```

{% endraw %}

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/create-actions/create-a-javascript-action.md
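When the runner invokes the action, each input from the workflow's `with:` block is handed to the Node process as an environment variable, which is what `core.getInput` reads. A small self-contained sketch of that naming convention — each input becomes `INPUT_<NAME>`, with spaces replaced by underscores and the result uppercased (the `inputEnvName` helper is illustrative, not part of the toolkit API):

```javascript
// Sketch of the documented convention @actions/core uses to locate inputs.
// (inputEnvName is an illustrative helper, not a toolkit export.)
function inputEnvName(name) {
  return `INPUT_${name.replace(/ /g, "_").toUpperCase()}`;
}

// Simulate what the runner does before invoking dist/index.js:
process.env[inputEnvName("who-to-greet")] = "Mona the Octocat";

console.log(inputEnvName("who-to-greet")); // INPUT_WHO-TO-GREET
console.log(process.env["INPUT_WHO-TO-GREET"]); // Mona the Octocat
```

This is also a handy way to smoke-test the script locally: set the matching `INPUT_*` environment variable before running `node src/index.js`.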
{% raw %}

```javascript copy
import * as core from "@actions/core";
import * as github from "@actions/github";

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput("who-to-greet");
  core.info(`Hello ${nameToGreet}!`);

  // Get the current time and set it as an output variable
  const time = new Date().toTimeString();
  core.setOutput("time", time);

  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2);
  core.info(`The event payload: ${payload}`);
} catch (error) {
  core.setFailed(error.message);
}
```

{% endraw %}

If an error is thrown in the above `index.js` example, `core.setFailed(error.message);` uses the actions toolkit [`@actions/core`](https://github.com/actions/toolkit/tree/main/packages/core) package to log a message and set a failing exit code. For more information, see [AUTOTITLE](/actions/creating-actions/setting-exit-codes-for-actions).

## Creating a README

To let people know how to use your action, you can create a README file. A README is most helpful when you plan to share your action publicly, but it is also a great way to remind you or your team how to use the action.

In your `hello-world-javascript-action` directory, create a `README.md` file that specifies the following information:

* A detailed description of what the action does.
* Required input and output arguments.
* Optional input and output arguments.
* Secrets the action uses.
* Environment variables the action uses.
* An example of how to use your action in a workflow.

````markdown copy
# Hello world JavaScript action

This action prints "Hello World" or "Hello" + the name of a person to greet to the log.

## Inputs

### `who-to-greet`

**Required** The name of the person to greet. Default `"World"`.

## Outputs

### `time`

The time we greeted you.

## Example usage

```yaml
uses: actions/hello-world-javascript-action@e76147da8e5c81eaf017dede5645551d4b94427b
with:
  who-to-greet: Mona the Octocat
```
````

## Commit, tag, and push your action

{% data variables.product.github %} downloads each action run in a workflow during runtime and executes it as a complete package of code before you can use workflow commands like `run` to interact with the runner machine. This means you must include any package dependencies required to run the JavaScript code. For example, this action uses the `@actions/core` and `@actions/github` packages.

Checking in your `node_modules` directory can cause problems. As an alternative, you can use tools such as [`rollup.js`](https://github.com/rollup/rollup) or [`@vercel/ncc`](https://github.com/vercel/ncc) to combine your code and dependencies into one file for distribution.

1. Install `rollup` and its plugins by running this command in your terminal.

   `npm install --save-dev rollup @rollup/plugin-commonjs @rollup/plugin-node-resolve`

1. Create a new file called `rollup.config.js` in the root of your repository with the following code.

   ```javascript copy
   import commonjs from "@rollup/plugin-commonjs";
   import { nodeResolve } from "@rollup/plugin-node-resolve";

   const config = {
     input: "src/index.js",
     output: {
       esModule: true,
       file: "dist/index.js",
       format: "es",
       sourcemap: true,
     },
     plugins: [commonjs(), nodeResolve({ preferBuiltins: true })],
   };

   export default config;
   ```

1. Compile your `dist/index.js` file.

   `rollup --config rollup.config.js`

   You'll see a new `dist/index.js` file with your code and any dependencies.

1. From your terminal, commit the updates.

   ```shell copy
   git add src/index.js dist/index.js rollup.config.js package.json package-lock.json README.md action.yml
   git commit -m "Initial commit of my first action"
   git tag -a -m "My first action release" v1.1
   git push --follow-tags
   ```

When you commit and push your code, your updated repository should look like this:

```text
hello-world-javascript-action/
├── action.yml
├── dist/
│   └── index.js
├── package.json
├── package-lock.json
├── README.md
├── rollup.config.js
└── src/
    └── index.js
```

## Testing out your action in a workflow

Now you're ready to test your action out in a workflow. Public actions can be used by workflows in any repository. When an action is in a private{% ifversion ghec or ghes %} or internal{% endif %} repository, the repository settings dictate whether the action is available only within the same repository or also to other repositories owned by the same {% ifversion ghec or ghes %}organization or enterprise{% else %}user or organization{% endif %}.

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/create-actions/create-a-javascript-action.md
Now you're ready to test your action out in a workflow. Public actions can be used by workflows in any repository. When an action is in a private{% ifversion ghec or ghes %} or internal{% endif %} repository, the repository settings dictate whether the action is available only within the same repository or also to other repositories owned by the same {% ifversion ghec or ghes %}organization or enterprise{% else %}user or organization{% endif %}. For more information, see [AUTOTITLE](/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository).

{% data reusables.actions.enterprise-marketplace-actions %}

### Example using a public action

This example demonstrates how your new public action can be run from within an external repository.

Copy the following YAML into a new file at `.github/workflows/main.yml`, and update the `uses: octocat/hello-world-javascript-action@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b` line with your username and the name of the public repository you created above. You can also replace the `who-to-greet` input with your name.

{% raw %}

```yaml copy
on:
  push:
    branches:
      - main

jobs:
  hello_world_job:
    name: A job to say hello
    runs-on: ubuntu-latest
    steps:
      - name: Hello world action step
        id: hello
        uses: octocat/hello-world-javascript-action@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b
        with:
          who-to-greet: Mona the Octocat
      # Use the output from the `hello` step
      - name: Get the output time
        run: echo "The time was ${{ steps.hello.outputs.time }}"
```

{% endraw %}

When this workflow is triggered, the runner will download the `hello-world-javascript-action` action from your public repository and then execute it.

### Example using a private action

Copy the workflow code into a `.github/workflows/main.yml` file in your action's repository. You can also replace the `who-to-greet` input with your name.

```yaml copy
on:
  push:
    branches:
      - main

jobs:
  hello_world_job:
    name: A job to say hello
    runs-on: ubuntu-latest
    steps:
      # To use this repository's private action,
      # you must check out the repository
      - name: Checkout
        uses: {% data reusables.actions.action-checkout %}
      - name: Hello world action step
        uses: ./ # Uses an action in the root directory
        id: hello
        with:
          who-to-greet: Mona the Octocat
      # Use the output from the `hello` step
      - name: Get the output time
        run: echo "The time was {% raw %}${{ steps.hello.outputs.time }}{% endraw %}"
```

{% data reusables.actions.test-private-action-example %}

## Template repositories for creating JavaScript actions

{% data variables.product.prodname_dotcom %} provides template repositories for creating JavaScript and TypeScript actions. You can use these templates to quickly get started with creating a new action that includes tests, linting, and other recommended practices.

* [`javascript-action` template repository](https://github.com/actions/javascript-action)
* [`typescript-action` template repository](https://github.com/actions/typescript-action)

## Example JavaScript actions on {% data variables.product.prodname_dotcom_the_website %}

You can find many examples of JavaScript actions on {% data variables.product.prodname_dotcom_the_website %}.

* [DevExpress/testcafe-action](https://github.com/DevExpress/testcafe-action)
* [duckduckgo/privacy-configuration](https://github.com/duckduckgo/privacy-configuration)

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/create-actions/create-a-javascript-action.md
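Earlier, the tutorial tagged the action `v1.1`. As a hedged sketch, a workflow step could also reference that tag instead of a full commit SHA (the `octocat` owner is a placeholder; SHA pinning remains the safer choice for third-party actions):

```yaml
steps:
  - name: Hello world action step
    id: hello
    # Tag reference; mutable, unlike a pinned commit SHA
    uses: octocat/hello-world-javascript-action@v1.1
    with:
      who-to-greet: Mona the Octocat
```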
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

CircleCI and {% data variables.product.prodname_actions %} both allow you to create workflows that automatically build, test, publish, release, and deploy code. CircleCI and {% data variables.product.prodname_actions %} share some similarities in workflow configuration:

* Workflow configuration files are written in YAML and stored in the repository.
* Workflows include one or more jobs.
* Jobs include one or more steps or individual commands.
* Steps or tasks can be reused and shared with the community.

For more information, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions).

## Key differences

When migrating from CircleCI, consider the following differences:

* CircleCI's automatic test parallelism automatically groups tests according to user-specified rules or historical timing information. This functionality is not built into {% data variables.product.prodname_actions %}.
* Actions that execute in Docker containers are sensitive to permissions problems, since containers have a different mapping of users. You can avoid many of these problems by not using the `USER` instruction in your _Dockerfile_. For more information about the Docker filesystem on {% data variables.product.github %}-hosted runners, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#docker-container-filesystem).

## Migrating workflows and jobs

CircleCI defines `workflows` in the _config.yml_ file, which allows you to configure more than one workflow. {% data variables.product.github %} requires one workflow file per workflow, and as a consequence, does not require you to declare `workflows`. You'll need to create a new workflow file for each workflow configured in _config.yml_.

Both CircleCI and {% data variables.product.prodname_actions %} configure `jobs` in the configuration file using similar syntax. If you configure any dependencies between jobs using `requires` in your CircleCI workflow, you can use the equivalent {% data variables.product.prodname_actions %} `needs` syntax. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds).

## Migrating orbs to actions

Both CircleCI and {% data variables.product.prodname_actions %} provide a mechanism to reuse and share tasks in a workflow. CircleCI uses a concept called orbs, written in YAML, to provide tasks that people can reuse in a workflow. {% data variables.product.prodname_actions %} has powerful and flexible reusable components called actions, which you build with either JavaScript files or Docker images. You can create actions by writing custom code that interacts with your repository in any way you'd like, including integrating with {% data variables.product.github %}'s APIs and any publicly available third-party API. For example, an action can publish npm modules, send SMS alerts when urgent issues are created, or deploy production-ready code. For more information, see [AUTOTITLE](/actions/creating-actions).

{% ifversion fpt or ghec %}
CircleCI can reuse pieces of workflows with YAML anchors and aliases. {% data variables.product.prodname_actions %} supports YAML anchors and aliases for reusability, and also provides matrices for running jobs with different configurations. For more information about matrices, see [AUTOTITLE](/actions/using-jobs/using-a-matrix-for-your-jobs).
{% else %}
CircleCI can reuse pieces of workflows with YAML anchors and aliases. {% data variables.product.prodname_actions %} supports the most common need for reusability using matrices. For more information about matrices, see [AUTOTITLE](/actions/using-jobs/using-a-matrix-for-your-jobs).
{% endif %}

## Using Docker images

Both CircleCI and {% data variables.product.prodname_actions %} support running steps inside of a Docker image.
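As a minimal sketch of what running a job's steps inside a Docker image looks like in a {% data variables.product.prodname_actions %} workflow (the `node:20` image is an illustrative choice, not taken from this article):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # Every step in this job runs inside the specified container
    container:
      image: node:20
    steps:
      - run: node --version
```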
CircleCI provides a set of pre-built images with common dependencies. These images have the `USER` set to `circleci`, which causes permissions to conflict with {% data variables.product.prodname_actions %}.

We recommend that you move away from CircleCI's pre-built images when you migrate to {% data variables.product.prodname_actions %}. In many cases, you can use actions to install the additional dependencies you need.

For more information about the Docker filesystem, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#docker-container-filesystem). For more information about the tools and packages available on {% data variables.product.prodname_dotcom %}-hosted runner images, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#supported-software).

## Using variables and secrets

CircleCI and {% data variables.product.prodname_actions %} support setting variables in the configuration file and creating secrets using the CircleCI or {% data variables.product.github %} UI.

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-circleci.md
to install the additional dependencies you need. For more information about the Docker filesystem, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#docker-container-filesystem). For more information about the tools and packages available on {% data variables.product.prodname_dotcom %}-hosted runner images, see [AUTOTITLE](/actions/using-github-hosted-runners/about-github-hosted-runners#supported-software).

## Using variables and secrets

CircleCI and {% data variables.product.prodname_actions %} support setting variables in the configuration file and creating secrets using the CircleCI or {% data variables.product.github %} UI. For more information, see [AUTOTITLE](/actions/reference/variables-reference#default-environment-variables) and [AUTOTITLE](/actions/security-guides/using-secrets-in-github-actions).

## Caching

CircleCI and {% data variables.product.prodname_actions %} provide a method to manually cache files in the configuration file. Below is an example of the syntax for each system.

### CircleCI syntax for caching

{% raw %}

```yaml
- restore_cache:
    keys:
      - v1-npm-deps-{{ checksum "package-lock.json" }}
      - v1-npm-deps-
```

{% endraw %}

### GitHub Actions syntax for caching

```yaml
- name: Cache node modules
  uses: {% data reusables.actions.action-cache %}
  with:
    path: ~/.npm
    key: {% raw %}v1-npm-deps-${{ hashFiles('**/package-lock.json') }}{% endraw %}
    restore-keys: v1-npm-deps-
```

{% data variables.product.prodname_actions %} does not have an equivalent of CircleCI's Docker Layer Caching (or DLC).

## Persisting data between jobs

Both CircleCI and {% data variables.product.prodname_actions %} provide mechanisms to persist data between jobs. Below is an example in CircleCI and {% data variables.product.prodname_actions %} configuration syntax.

### CircleCI syntax for persisting data between jobs

{% raw %}

```yaml
- persist_to_workspace:
    root: workspace
    paths:
      - math-homework.txt

...

- attach_workspace:
    at: /tmp/workspace
```

{% endraw %}

### GitHub Actions syntax for persisting data between jobs

```yaml
- name: Upload math result for job 1
  uses: {% data reusables.actions.action-upload-artifact %}
  with:
    name: homework
    path: math-homework.txt

...

- name: Download math result for job 1
  uses: {% data reusables.actions.action-download-artifact %}
  with:
    name: homework
```

For more information, see [AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts).

## Using databases and service containers

Both systems enable you to include additional containers for databases, caching, or other dependencies. In CircleCI, the first image listed in the _config.yaml_ is the primary image used to run commands. {% data variables.product.prodname_actions %} uses explicit sections: use `container` for the primary container, and list additional containers in `services`. Below is an example in CircleCI and {% data variables.product.prodname_actions %} configuration syntax.

### CircleCI syntax for using databases and service containers

{% raw %}

```yaml
---
version: 2.1

jobs:
  ruby-26:
    docker:
      - image: circleci/ruby:2.6.3-node-browsers-legacy
        environment:
          PGHOST: localhost
          PGUSER: administrate
          RAILS_ENV: test
      - image: postgres:10.1-alpine
        environment:
          POSTGRES_USER: administrate
          POSTGRES_DB: ruby26
          POSTGRES_PASSWORD: ""
    working_directory: ~/administrate
    steps:
      - checkout
      # Bundle install dependencies
      - run: bundle install --path vendor/bundle
      # Wait for DB
      - run: dockerize -wait tcp://localhost:5432 -timeout 1m
      # Setup the environment
      - run: cp .sample.env .env
      # Setup the database
      - run: bundle exec rake db:setup
      # Run the tests
      - run: bundle exec rake

workflows:
  version: 2
  build:
    jobs:
      - ruby-26
```

{% endraw %}

### GitHub Actions syntax for using databases and service containers

```yaml
name: Containers

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    container: circleci/ruby:2.6.3-node-browsers-legacy
    env:
      PGHOST: postgres
      PGUSER: administrate
      RAILS_ENV: test
    services:
      postgres:
        image: postgres:10.1-alpine
        env:
          POSTGRES_USER: administrate
          POSTGRES_DB: ruby25
          POSTGRES_PASSWORD: ""
        ports:
          - 5432:5432
        # Add a health check
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      # This Dockerfile sets USER to circleci instead of using the default user, so we need to update file permissions for this image to work on GH Actions.
      # See https://docs.github.com/actions/using-github-hosted-runners/about-github-hosted-runners#docker-container-filesystem
      - name: Setup file system permissions
        run: sudo chmod -R 777 $GITHUB_WORKSPACE /github /__w/_temp
      - uses: {% data reusables.actions.action-checkout %}
      - name: Install dependencies
        run: bundle install --path vendor/bundle
      - name: Setup environment configuration
        run: cp .sample.env .env
      - name: Setup database
        run: bundle exec rake db:setup
      - name: Run tests
        run: bundle exec rake
```

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-circleci.md
```yaml
      - name: Setup file system permissions
        run: sudo chmod -R 777 $GITHUB_WORKSPACE /github /__w/_temp
      - uses: {% data reusables.actions.action-checkout %}
      - name: Install dependencies
        run: bundle install --path vendor/bundle
      - name: Setup environment configuration
        run: cp .sample.env .env
      - name: Setup database
        run: bundle exec rake db:setup
      - name: Run tests
        run: bundle exec rake
```

For more information, see [AUTOTITLE](/actions/using-containerized-services/about-service-containers).

## Complete example

Below is a real-world example. The first block shows the actual CircleCI _config.yml_ for the [thoughtbot/administrate](https://github.com/thoughtbot/administrate) repository, and the second block shows the {% data variables.product.prodname_actions %} equivalent.

### Complete example for CircleCI

{% raw %}

```yaml
---
version: 2.1

commands:
  shared_steps:
    steps:
      - checkout
      # Restore Cached Dependencies
      - restore_cache:
          name: Restore bundle cache
          key: administrate-{{ checksum "Gemfile.lock" }}
      # Bundle install dependencies
      - run: bundle install --path vendor/bundle
      # Cache Dependencies
      - save_cache:
          name: Store bundle cache
          key: administrate-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      # Wait for DB
      - run: dockerize -wait tcp://localhost:5432 -timeout 1m
      # Setup the environment
      - run: cp .sample.env .env
      # Setup the database
      - run: bundle exec rake db:setup
      # Run the tests
      - run: bundle exec rake

default_job: &default_job
  working_directory: ~/administrate
  steps:
    - shared_steps
    # Run the tests against multiple versions of Rails
    - run: bundle exec appraisal install
    - run: bundle exec appraisal rake

jobs:
  ruby-25:
    <<: *default_job
    docker:
      - image: circleci/ruby:2.5.0-node-browsers
        environment:
          PGHOST: localhost
          PGUSER: administrate
          RAILS_ENV: test
      - image: postgres:10.1-alpine
        environment:
          POSTGRES_USER: administrate
          POSTGRES_DB: ruby25
          POSTGRES_PASSWORD: ""

  ruby-26:
    <<: *default_job
    docker:
      - image: circleci/ruby:2.6.3-node-browsers-legacy
        environment:
          PGHOST: localhost
          PGUSER: administrate
          RAILS_ENV: test
      - image: postgres:10.1-alpine
        environment:
          POSTGRES_USER: administrate
          POSTGRES_DB: ruby26
          POSTGRES_PASSWORD: ""

workflows:
  version: 2
  multiple-rubies:
    jobs:
      - ruby-26
      - ruby-25
```

{% endraw %}

### Complete example for GitHub Actions

```yaml
{% data reusables.actions.actions-not-certified-by-github-comment %}

{% data reusables.actions.actions-use-sha-pinning-comment %}

name: Containers

on: [push]

jobs:
  build:
    strategy:
      matrix:
        ruby: ['2.5', '2.6.3']
    runs-on: ubuntu-latest
    env:
      PGHOST: localhost
      PGUSER: administrate
      RAILS_ENV: test
    services:
      postgres:
        image: postgres:10.1-alpine
        env:
          POSTGRES_USER: administrate
          POSTGRES_DB: ruby25
          POSTGRES_PASSWORD: ""
        ports:
          - 5432:5432
        # Add a health check
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Setup Ruby
        uses: eregon/use-ruby-action@ec02537da5712d66d4d50a0f33b7eb52773b5ed1
        with:
          ruby-version: {% raw %}${{ matrix.ruby }}{% endraw %}
      - name: Cache dependencies
        uses: {% data reusables.actions.action-cache %}
        with:
          path: vendor/bundle
          key: administrate-{% raw %}${{ matrix.image }}-${{ hashFiles('Gemfile.lock') }}{% endraw %}
      - name: Install postgres headers
        run: |
          sudo apt-get update
          sudo apt-get install libpq-dev
      - name: Install dependencies
        run: bundle install --path vendor/bundle
      - name: Setup environment configuration
        run: cp .sample.env .env
      - name: Setup database
        run: bundle exec rake db:setup
      - name: Run tests
        run: bundle exec rake
      - name: Install appraisal
        run: bundle exec appraisal install
      - name: Run appraisal
        run: bundle exec appraisal rake
```

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-circleci.md
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

Jenkins and {% data variables.product.prodname_actions %} both allow you to create workflows that automatically build, test, publish, release, and deploy code. Jenkins and {% data variables.product.prodname_actions %} share some similarities in workflow configuration:

* Jenkins creates workflows using _Declarative Pipelines_, which are similar to {% data variables.product.prodname_actions %} workflow files.
* Jenkins uses _stages_ to run a collection of steps, while {% data variables.product.prodname_actions %} uses jobs to group one or more steps or individual commands.
* Jenkins and {% data variables.product.prodname_actions %} support container-based builds. For more information, see [AUTOTITLE](/actions/creating-actions/creating-a-docker-container-action).
* Steps or tasks can be reused and shared with the community. For more information, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions).

## Key differences

* Jenkins has two types of syntax for creating pipelines: Declarative Pipeline and Scripted Pipeline. {% data variables.product.prodname_actions %} uses YAML to create workflows and configuration files. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions).
* Jenkins deployments are typically self-hosted, with users maintaining the servers in their own data centers. {% data variables.product.prodname_actions %} offers a hybrid cloud approach by hosting its own runners that you can use to run jobs, while also supporting self-hosted runners. For more information, see [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners).

## Comparing capabilities

### Distributing your builds

Jenkins lets you send builds to a single build agent, or you can distribute them across multiple agents. You can also classify these agents according to various attributes, such as operating system types. Similarly, {% data variables.product.prodname_actions %} can send jobs to {% data variables.product.prodname_dotcom %}-hosted or self-hosted runners, and you can use labels to classify runners according to various attributes. For more information, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions#runners) and [AUTOTITLE](/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners).

### Using sections to organize pipelines

Jenkins splits its Declarative Pipelines into multiple sections. Similarly, {% data variables.product.prodname_actions %} organizes its workflows into separate sections. The table below compares Jenkins sections with the {% data variables.product.prodname_actions %} workflow.

| Jenkins sections | {% data variables.product.prodname_actions %} |
| ------------- | ------------- |
| [`agent`](https://jenkins.io/doc/book/pipeline/syntax/#agent) | [`jobs.<job_id>.runs-on`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on) <br> [`jobs.<job_id>.container`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainer) |
| [`post`](https://jenkins.io/doc/book/pipeline/syntax/#post) | None |
| [`stages`](https://jenkins.io/doc/book/pipeline/syntax/#stages) | [`jobs`](/actions/using-workflows/workflow-syntax-for-github-actions#jobs) |
| [`steps`](https://jenkins.io/doc/book/pipeline/syntax/#steps) | [`jobs.<job_id>.steps`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idsteps) |

## Using directives

Jenkins uses directives to manage _Declarative Pipelines_. These directives define the characteristics of your workflow and how it will execute. The table below demonstrates how these directives map to concepts within {% data variables.product.prodname_actions %}.

| Jenkins directives | {% data variables.product.prodname_actions %} |
| ------------- | ------------- |
| [`environment`](https://jenkins.io/doc/book/pipeline/syntax/#environment) | [`jobs.<job_id>.env`](/actions/using-workflows/workflow-syntax-for-github-actions#env) <br> [`jobs.<job_id>.steps[*].env`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsenv) |
| [`options`](https://jenkins.io/doc/book/pipeline/syntax/#options) | [`jobs.<job_id>.strategy`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstrategy) <br> [`jobs.<job_id>.strategy.fail-fast`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstrategyfail-fast) <br> [`jobs.<job_id>.timeout-minutes`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idtimeout-minutes) |
| [`parameters`](https://jenkins.io/doc/book/pipeline/syntax/#parameters) | [`inputs`](/actions/creating-actions/metadata-syntax-for-github-actions#inputs) <br> [`outputs`](/actions/creating-actions/metadata-syntax-for-github-actions#outputs-for-docker-container-and-javascript-actions) |
| [`triggers`](https://jenkins.io/doc/book/pipeline/syntax/#triggers) | [`on`](/actions/using-workflows/workflow-syntax-for-github-actions#on) <br> [`on.<event_name>.types`](/actions/using-workflows/workflow-syntax-for-github-actions#onevent_nametypes) <br> [`on.<push>.<branches\|tags>`](/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#onpushbranchestagsbranches-ignoretags-ignore) <br> [`on.<pull_request\|pull_request_target>.<branches\|branches-ignore>`](/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#onpull_requestpull_request_targetbranchesbranches-ignore) <br> [`on.<push\|pull_request\|pull_request_target>.paths`](/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore) |
| [`triggers { upstreamprojects() }`](https://jenkins.io/doc/book/pipeline/syntax/#triggers) | [`jobs.<job_id>.needs`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds) |
| [Jenkins cron syntax](https://jenkins.io/doc/book/pipeline/syntax/#cron-syntax) | [`on.schedule`](/actions/using-workflows/workflow-syntax-for-github-actions#onschedule) |
| [`stage`](https://jenkins.io/doc/book/pipeline/syntax/#stage) | [`jobs.<job_id>`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_id) <br> [`jobs.<job_id>.name`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idname) |
| [`tools`](https://jenkins.io/doc/book/pipeline/syntax/#tools) | [Specifications for {% data variables.product.prodname_dotcom %}-hosted runners](/actions/using-github-hosted-runners/about-github-hosted-runners#supported-software) |
| [`input`](https://jenkins.io/doc/book/pipeline/syntax/#input) | [`inputs`](/actions/creating-actions/metadata-syntax-for-github-actions#inputs) |
| [`when`](https://jenkins.io/doc/book/pipeline/syntax/#when) | [`jobs.<job_id>.if`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idif) |

## Using sequential stages

### Parallel job processing

Jenkins can run the `stages` and `steps` in parallel, while {% data variables.product.prodname_actions %} currently only runs jobs in parallel.

| Jenkins Parallel | {% data variables.product.prodname_actions %} |
| ------------- | ------------- |
| [`parallel`](https://jenkins.io/doc/book/pipeline/syntax/#parallel) | [`jobs.<job_id>.strategy.max-parallel`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstrategymax-parallel) |

### Matrix

Both {% data variables.product.prodname_actions %} and Jenkins let you use a matrix to define various system combinations.

| Jenkins | {% data variables.product.prodname_actions %} |
| ------------- | ------------- |
| [`axis`](https://jenkins.io/doc/book/pipeline/syntax/#matrix-axes) | [`strategy/matrix`](/actions/using-workflows/about-workflows#using-a-build-matrix) <br> [`context`](/actions/learn-github-actions/contexts) |
| [`stages`](https://jenkins.io/doc/book/pipeline/syntax/#matrix-stages) | [`steps-context`](/actions/learn-github-actions/contexts#steps-context) |
| [`excludes`](https://jenkins.io/doc/book/pipeline/syntax/#matrix-stages) | None |

### Using steps to execute tasks

Jenkins groups `steps` together in `stages`. Each of these steps can be a script, function, or command, among others. Similarly, {% data variables.product.prodname_actions %} uses `jobs` to execute specific groups of `steps`.

| Jenkins | {% data variables.product.prodname_actions %} |
| ------------- | ------------- |
| [`steps`](https://jenkins.io/doc/book/pipeline/syntax/#steps) | [`jobs.<job_id>.steps`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idsteps) |

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-jenkins.md
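The section mapping above can be sketched as a minimal workflow. This is an illustrative sketch, not taken from the docs: the job name, runner label, and commands are placeholders, and the checkout step is written literally rather than through the docs' reusable.

```yaml
# Jenkins:  agent { label 'linux' }          ->  runs-on
# Jenkins:  stages { stage('build') {...} }  ->  jobs
# Jenkins:  steps { sh 'make' }              ->  steps
jobs:
  build:                     # one Jenkins stage maps roughly to one job
    runs-on: ubuntu-latest   # replaces the agent section
    steps:                   # replaces the steps section
      - uses: actions/checkout@v4
      - run: make
```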
## Examples of common tasks

### Scheduling a pipeline to run with `cron`

#### Jenkins pipeline with `cron`

```yaml
pipeline {
  agent any
  triggers {
    cron('H/15 * * * 1-5')
  }
}
```

#### {% data variables.product.prodname_actions %} workflow with `cron`

```yaml
on:
  schedule:
    - cron: '*/15 * * * 1-5'
```

For more information about `schedule` events and accepted cron syntax, see [AUTOTITLE](/actions/reference/workflows-and-actions/events-that-trigger-workflows#schedule).

### Configuring environment variables in a pipeline

#### Jenkins pipeline with an environment variable

```yaml
pipeline {
  agent any
  environment {
    MAVEN_PATH = '/usr/local/maven'
  }
}
```

#### {% data variables.product.prodname_actions %} workflow with an environment variable

```yaml
jobs:
  maven-build:
    env:
      MAVEN_PATH: '/usr/local/maven'
```

### Building from upstream projects

#### Jenkins pipeline that builds from an upstream project

```yaml
pipeline {
  triggers {
    upstream(
      upstreamProjects: 'job1,job2',
      threshold: hudson.model.Result.SUCCESS
    )
  }
}
```

#### {% data variables.product.prodname_actions %} workflow that builds from an upstream project

```yaml
jobs:
  job1:
  job2:
    needs: job1
  job3:
    needs: [job1, job2]
```

### Building with multiple operating systems

#### Jenkins pipeline that builds with multiple operating systems

```yaml
pipeline {
  agent none
  stages {
    stage('Run Tests') {
      matrix {
        axes {
          axis {
            name: 'PLATFORM'
            values: 'macos', 'linux'
          }
        }
        agent { label "${PLATFORM}" }
        stages {
          stage('test') {
            tools { nodejs "node-20" }
            steps {
              dir("scripts/myapp") {
                sh(script: "npm install -g bats")
                sh(script: "bats tests")
              }
            }
          }
        }
      }
    }
  }
}
```

#### {% data variables.product.prodname_actions %} workflow that builds with multiple operating systems

```yaml
name: demo-workflow
on:
  push:
jobs:
  test:
    runs-on: {% raw %}${{ matrix.os }}{% endraw %}
    strategy:
      fail-fast: false
      matrix:
        os: [macos-latest, ubuntu-latest]
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - uses: {% data reusables.actions.action-setup-node %}
        with:
          node-version: 20
      - run: npm install -g bats
      - run: bats tests
        working-directory: ./scripts/myapp
```
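The directives table earlier maps Jenkins `when` to `jobs.<job_id>.if`, but no example of that mapping appears in this excerpt. A minimal illustrative sketch (the job name and branch are placeholders, not from the docs):

```yaml
# Jenkins:  when { branch 'main' }
jobs:
  deploy:
    if: github.ref == 'refs/heads/main'   # run this job only on the main branch
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying"
```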
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

This guide helps you migrate from Travis CI to {% data variables.product.prodname_actions %}. It compares their concepts and syntax, describes the similarities, and demonstrates their different approaches to common tasks.

## Before you start

Before starting your migration to {% data variables.product.prodname_actions %}, it would be useful to become familiar with how it works:

* For a quick example that demonstrates a {% data variables.product.prodname_actions %} job, see [AUTOTITLE](/actions/quickstart).
* To learn the essential {% data variables.product.prodname_actions %} concepts, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions).

## Comparing job execution

To give you control over when CI tasks are executed, a {% data variables.product.prodname_actions %} _workflow_ uses _jobs_ that run in parallel by default. Each job contains _steps_ that are executed in a sequence that you define. If you need to run setup and cleanup actions for a job, you can define steps in each job to perform these.

## Key similarities

{% data variables.product.prodname_actions %} and Travis CI share certain similarities, and understanding these ahead of time can help smooth the migration process.

### Using YAML syntax

Travis CI and {% data variables.product.prodname_actions %} both use YAML to create jobs and workflows, and these files are stored in the code's repository. For more information on how {% data variables.product.prodname_actions %} uses YAML, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions#create-an-example-workflow).

### Custom variables

Travis CI lets you set variables and share them between stages. Similarly, {% data variables.product.prodname_actions %} lets you define variables for a workflow. For more information, see [AUTOTITLE](/actions/learn-github-actions/variables).

### Default variables

Travis CI and {% data variables.product.prodname_actions %} both include default environment variables that you can use in your YAML files. For {% data variables.product.prodname_actions %}, you can see these listed in [AUTOTITLE](/actions/reference/variables-reference#default-environment-variables).

### Parallel job processing

Travis CI can use `stages` to run jobs in parallel. Similarly, {% data variables.product.prodname_actions %} runs `jobs` in parallel. For more information, see [AUTOTITLE](/actions/using-workflows/about-workflows#creating-dependent-jobs).

### Status badges

Travis CI and {% data variables.product.prodname_actions %} both support status badges, which let you indicate whether a build is passing or failing. For more information, see [AUTOTITLE](/actions/monitoring-and-troubleshooting-workflows/adding-a-workflow-status-badge).

### Using a matrix

Travis CI and {% data variables.product.prodname_actions %} both support a matrix, allowing you to perform testing using combinations of operating systems and software packages. For more information, see [AUTOTITLE](/actions/using-jobs/using-a-matrix-for-your-jobs). Below is an example comparing the syntax for each system.

#### Travis CI syntax for a matrix

{% raw %}

```yaml
matrix:
  include:
    - rvm: '2.5'
    - rvm: '2.6.3'
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for a matrix

{% raw %}

```yaml
jobs:
  build:
    strategy:
      matrix:
        ruby: ['2.5', '2.6.3']
```

{% endraw %}

### Targeting specific branches

Travis CI and {% data variables.product.prodname_actions %} both allow you to target your CI to a specific branch. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#onpushbranchestagsbranches-ignoretags-ignore). Below is an example of the syntax for each system.

#### Travis CI syntax for targeting specific branches

{% raw %}

```yaml
branches:
  only:
    - main
    - 'mona/octocat'
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for targeting specific branches

{% raw %}

```yaml
on:
  push:
    branches:
      - main
      - 'mona/octocat'
```

{% endraw %}

### Checking out submodules

Travis CI and {% data variables.product.prodname_actions %} both allow you to control whether submodules are included in the repository clone. Below is an example of the syntax for each system.

#### Travis CI syntax for checking out submodules

{% raw %}

```yaml
git:
  submodules: false
```

{% endraw %}

#### {% data variables.product.prodname_actions %} syntax for checking out submodules

```yaml
- uses: {% data reusables.actions.action-checkout %}
  with:
    submodules: false
```

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-travis-ci.md
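Travis CI can also attach per-row variables to a matrix; in {% data variables.product.prodname_actions %} this maps onto the `include` key, whose extra keys become available through the `matrix` context. A minimal illustrative sketch (the variable names and values are placeholders, not the docs' reusable example):

{% raw %}

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - node: 18
            site: "prod"   # custom value attached to this matrix row
          - node: 20
            site: "dev"
    steps:
      - run: echo "Building ${{ matrix.site }} with Node ${{ matrix.node }}"
```

{% endraw %}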
### Using environment variables in a matrix

Travis CI and {% data variables.product.prodname_actions %} can both add custom variables to a test matrix, which allows you to refer to the variable in a later step. In {% data variables.product.prodname_actions %}, you can use the `include` key to add custom environment variables to a matrix. {% data reusables.actions.matrix-variable-example %}

## Key features in {% data variables.product.prodname_actions %}

When migrating from Travis CI, consider the following key features in {% data variables.product.prodname_actions %}:

### Storing secrets

{% data variables.product.prodname_actions %} allows you to store secrets and reference them in your jobs. {% data variables.product.prodname_actions %} organizations can limit which repositories can access organization secrets. Deployment protection rules can require manual approval for a workflow to access environment secrets. For more information, see [AUTOTITLE](/actions/security-for-github-actions/security-guides/about-secrets).

### Sharing files between jobs and workflows

{% data variables.product.prodname_actions %} includes integrated support for artifact storage, allowing you to share files between jobs in a workflow. You can also save the resulting files and share them with other workflows. For more information, see [AUTOTITLE](/actions/learn-github-actions/essential-features-of-github-actions#sharing-data-between-jobs).

### Hosting your own runners

If your jobs require specific hardware or software, {% data variables.product.prodname_actions %} allows you to host your own runners and send your jobs to them for processing. {% data variables.product.prodname_actions %} also lets you use policies to control how these runners are accessed, granting access at the organization or repository level. For more information, see [AUTOTITLE](/actions/how-tos/managing-self-hosted-runners).

{% ifversion fpt or ghec %}

### Concurrent jobs and execution time

The concurrent jobs and workflow execution times in {% data variables.product.prodname_actions %} can vary depending on your {% data variables.product.company_short %} plan. For more information, see [AUTOTITLE](/actions/learn-github-actions/usage-limits-billing-and-administration).

{% endif %}

### Using different languages in {% data variables.product.prodname_actions %}

When working with different languages in {% data variables.product.prodname_actions %}, you can create a step in your job to set up your language dependencies. For more information about working with a particular language, see [AUTOTITLE](/actions/use-cases-and-examples/building-and-testing).

## Executing scripts

{% data variables.product.prodname_actions %} can use `run` steps to run scripts or shell commands. To use a particular shell, you can specify the `shell` type when providing the path to the script. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun). For example:

```yaml
steps:
  - name: Run build script
    run: ./.github/scripts/build.sh
    shell: bash
```

## Error handling in {% data variables.product.prodname_actions %}

When migrating to {% data variables.product.prodname_actions %}, there are different approaches to error handling that you might need to be aware of.

### Script error handling

{% data variables.product.prodname_actions %} stops a job immediately if one of the steps returns an error code. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference).
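As a sketch of the behavior described above: a step that exits nonzero fails the job, unless you opt out for that step with `continue-on-error`. The step names and script paths below are illustrative:

```yaml
steps:
  - name: Lint (advisory only)
    run: ./lint.sh            # a nonzero exit here would normally stop the job
    continue-on-error: true   # the job proceeds even if this step fails
  - name: Build
    run: ./build.sh           # a nonzero exit here fails the job
```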
### Job error handling

{% data variables.product.prodname_actions %} uses `if` conditionals to execute jobs or steps in certain situations. For example, you can run a step when another step results in a `failure()`. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#example-using-status-check-functions). You can also use [`continue-on-error`](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontinue-on-error) to prevent a workflow run from stopping when a job fails.

## Migrating syntax for conditionals and expressions

To run jobs under conditional expressions, Travis CI and {% data variables.product.prodname_actions %} share a similar `if` condition syntax. {% data variables.product.prodname_actions %} lets you use the `if` conditional to prevent a job or step from running unless a condition is met. For more information, see [AUTOTITLE](/actions/learn-github-actions/expressions).
This example demonstrates how an `if` conditional can control whether a step is executed:

```yaml
jobs:
  conditional:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This step runs with str equals 'ABC' and num equals 123"
        if: env.str == 'ABC' && env.num == 123
```

## Migrating phases to steps

Where Travis CI uses _phases_ to run _steps_, {% data variables.product.prodname_actions %} has _steps_ which execute _actions_. You can find prebuilt actions in the [{% data variables.product.prodname_marketplace %}](https://github.com/marketplace?type=actions), or you can create your own actions. For more information, see [AUTOTITLE](/actions/creating-actions). Below is an example of the syntax for each system.

### Travis CI syntax for phases and steps

{% raw %}

```yaml
language: python
python:
  - "3.7"
script:
  - python script.py
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for steps and actions

```yaml
jobs:
  run_python:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-setup-python %}
        with:
          python-version: '3.7'
          architecture: 'x64'
      - run: python script.py
```

## Caching dependencies

Travis CI and {% data variables.product.prodname_actions %} let you manually cache dependencies for later reuse. These examples demonstrate the cache syntax for each system.

### Travis CI syntax for caching

{% raw %}

```yaml
language: node_js
cache: npm
```

{% endraw %}

### GitHub Actions syntax for caching

```yaml
- name: Cache node modules
  uses: {% data reusables.actions.action-cache %}
  with:
    path: ~/.npm
    key: {% raw %}v1-npm-deps-${{ hashFiles('**/package-lock.json') }}{% endraw %}
    restore-keys: v1-npm-deps-
```

## Examples of common tasks

This section compares how {% data variables.product.prodname_actions %} and Travis CI perform common tasks.

### Configuring environment variables

You can create custom environment variables in a {% data variables.product.prodname_actions %} job.

#### Travis CI syntax for an environment variable

```yaml
env:
  - MAVEN_PATH="/usr/local/maven"
```

#### {% data variables.product.prodname_actions %} workflow with an environment variable

```yaml
jobs:
  maven-build:
    env:
      MAVEN_PATH: '/usr/local/maven'
```

### Building with Node.js

#### Travis CI for building with Node.js

{% raw %}

```yaml
install:
  - npm install
script:
  - npm run build
  - npm test
```

{% endraw %}

#### {% data variables.product.prodname_actions %} workflow for building with Node.js

```yaml
name: Node.js CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - name: Use Node.js
        uses: {% data reusables.actions.action-setup-node %}
        with:
          node-version: '16.x'
      - run: npm install
      - run: npm run build
      - run: npm test
```

## Next steps

To continue learning about the main features of {% data variables.product.prodname_actions %}, see [AUTOTITLE](/actions/learn-github-actions).
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

GitLab CI/CD and {% data variables.product.prodname_actions %} both allow you to create workflows that automatically build, test, publish, release, and deploy code. GitLab CI/CD and {% data variables.product.prodname_actions %} share some similarities in workflow configuration:

* Workflow configuration files are written in YAML and are stored in the code's repository.
* Workflows include one or more jobs.
* Jobs include one or more steps or individual commands.
* Jobs can run on either managed or self-hosted machines.

There are a few differences, and this guide will show you the important differences so that you can migrate your workflow to {% data variables.product.prodname_actions %}.

## Jobs

Jobs in GitLab CI/CD are very similar to jobs in {% data variables.product.prodname_actions %}. In both systems, jobs have the following characteristics:

* Jobs contain a series of steps or scripts that run sequentially.
* Jobs can run on separate machines or in separate containers.
* Jobs run in parallel by default, but can be configured to run sequentially.

You can run a script or a shell command in a job. In GitLab CI/CD, script steps are specified using the `script` key. In {% data variables.product.prodname_actions %}, all scripts are specified using the `run` key. Below is an example of the syntax for each system.

### GitLab CI/CD syntax for jobs

{% raw %}

```yaml
job1:
  variables:
    GIT_CHECKOUT: "true"
  script:
    - echo "Run your script here"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for jobs

```yaml
jobs:
  job1:
    steps:
      - uses: {% data reusables.actions.action-checkout %}
      - run: echo "Run your script here"
```

## Runners

Runners are machines on which the jobs run. Both GitLab CI/CD and {% data variables.product.prodname_actions %} offer managed and self-hosted variants of runners. In GitLab CI/CD, `tags` are used to run jobs on different platforms, while in {% data variables.product.prodname_actions %} it is done with the `runs-on` key. Below is an example of the syntax for each system.

### GitLab CI/CD syntax for runners

{% raw %}

```yaml
windows_job:
  tags:
    - windows
  script:
    - echo Hello, %USERNAME%!

linux_job:
  tags:
    - linux
  script:
    - echo "Hello, $USER!"
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for runners

{% raw %}

```yaml
windows_job:
  runs-on: windows-latest
  steps:
    - run: echo Hello, %USERNAME%!

linux_job:
  runs-on: ubuntu-latest
  steps:
    - run: echo "Hello, $USER!"
```

{% endraw %}

For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idruns-on).

## Docker images

Both GitLab CI/CD and {% data variables.product.prodname_actions %} support running jobs in a Docker image. In GitLab CI/CD, Docker images are defined with an `image` key, while in {% data variables.product.prodname_actions %} it is done with the `container` key. Below is an example of the syntax for each system.

### GitLab CI/CD syntax for Docker images

{% raw %}

```yaml
my_job:
  image: node:20-bookworm-slim
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for Docker images

{% raw %}

```yaml
jobs:
  my_job:
    container: node:20-bookworm-slim
```

{% endraw %}

For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainer).

## Condition and expression syntax

GitLab CI/CD uses `rules` to determine if a job will run for a specific condition. {% data variables.product.prodname_actions %} uses the `if` keyword to prevent a job from running unless a condition is met. Below is an example of the syntax for each system.

### GitLab CI/CD syntax for conditions and expressions

{% raw %}

```yaml
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```

{% endraw %}

### {% data variables.product.prodname_actions %} syntax for conditions and expressions

{% raw %}

```yaml
jobs:
  deploy_prod:
    if: contains( github.ref, 'master')
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy to production server"
```

{% endraw %}

Source: https://github.com/github/docs/blob/main//content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-gitlab-cicd.md
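Note that a `contains(github.ref, 'master')` check also matches any ref whose name merely contains the substring, such as `refs/heads/feature-master`. A stricter illustrative sketch comparing the ref exactly (the job name is a placeholder):

```yaml
jobs:
  deploy_prod:
    if: github.ref == 'refs/heads/master'   # exact match on the master branch
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy to production server"
```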
expressions {% raw %} ```yaml deploy\_prod: stage: deploy script: - echo "Deploy to production server" rules: - if: '$CI\_COMMIT\_BRANCH == "master"' ``` {% endraw %} ### {% data variables.product.prodname\_actions %} syntax for conditions and expressions {% raw %} ```yaml jobs: deploy\_prod: if: contains( github.ref, 'master') runs-on: ubuntu-latest steps: - run: echo "Deploy to production server" ``` {% endraw %} For more information, see [AUTOTITLE](/actions/learn-github-actions/expressions). ## Dependencies between Jobs Both GitLab CI/CD and {% data variables.product.prodname\_actions %} allow you to set dependencies for a job. In both systems, jobs run in parallel by default, but job dependencies in {% data variables.product.prodname\_actions %} can be specified explicitly with the `needs` key. GitLab CI/CD also has a concept of `stages`, where jobs in a stage run concurrently, but the next stage will start when all the jobs in the previous stage have completed. You can recreate this scenario in {% data variables.product.prodname\_actions %} with the `needs` key. Below is an example of the syntax for each system. The workflows start with two jobs named `build\_a` and `build\_b` running in parallel, and when those jobs complete, another job called `test\_ab` will run. Finally, when `test\_ab` completes, the `deploy\_ab` job will run. ### GitLab CI/CD syntax for dependencies between jobs {% raw %} ```yaml stages: - build - test - deploy build\_a: stage: build script: - echo "This job will run first." build\_b: stage: build script: - echo "This job will run first, in parallel with build\_a." test\_ab: stage: test script: - echo "This job will run after build\_a and build\_b have finished." 
deploy\_ab: stage: deploy script: - echo "This job will run after test\_ab is complete" ``` {% endraw %} ### {% data variables.product.prodname\_actions %} syntax for dependencies between jobs {% raw %} ```yaml jobs: build\_a: runs-on: ubuntu-latest steps: - run: echo "This job will be run first." build\_b: runs-on: ubuntu-latest steps: - run: echo "This job will be run first, in parallel with build\_a" test\_ab: runs-on: ubuntu-latest needs: [build\_a,build\_b] steps: - run: echo "This job will run after build\_a and build\_b have finished" deploy\_ab: runs-on: ubuntu-latest needs: [test\_ab] steps: - run: echo "This job will run after test\_ab is complete" ``` {% endraw %} For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob\_idneeds). ## Scheduling workflows Both GitLab CI/CD and {% data variables.product.prodname\_actions %} allow you to run workflows at a specific interval. In GitLab CI/CD, pipeline schedules are configured with the UI, while in {% data variables.product.prodname\_actions %} you can trigger a workflow on a scheduled interval with the "on" key. For more information, see [AUTOTITLE](/actions/using-workflows/events-that-trigger-workflows#scheduled-events). ## Variables and secrets GitLab CI/CD and {% data variables.product.prodname\_actions %} support setting variables in the pipeline or workflow configuration file, and creating secrets using the GitLab or {% data variables.product.github %} UI. For more information, see [AUTOTITLE](/actions/learn-github-actions/variables) and [AUTOTITLE](/actions/security-for-github-actions/security-guides/about-secrets). ## Caching GitLab CI/CD and {% data variables.product.prodname\_actions %} provide a method in the configuration file to manually cache workflow files. Below is an example of the syntax for each system. 
### GitLab CI/CD syntax for caching

{% raw %}
```yaml
image: node:latest

cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .npm/

before_script:
  - npm ci --cache .npm --prefer-offline

test_async:
  script:
    - node ./specs/start.js ./specs/async.spec.js
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for caching

```yaml
jobs:
  test_async:
    runs-on: ubuntu-latest
    steps:
      - name: Cache node modules
        uses: {% data reusables.actions.action-cache %}
        with:
          path: ~/.npm
          key: {% raw %}v1-npm-deps-${{ hashFiles('**/package-lock.json') }}{% endraw %}
          restore-keys: v1-npm-deps-
```

## Artifacts

Both GitLab CI/CD and {% data variables.product.prodname_actions %} can upload files and directories created by a job as artifacts. In {% data variables.product.prodname_actions %}, artifacts can be used to persist data across multiple jobs.
Below is an example of the syntax for each system.

### GitLab CI/CD syntax for artifacts

{% raw %}
```yaml
script:
artifacts:
  paths:
    - math-homework.txt
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for artifacts

```yaml
- name: Upload math result for job 1
  uses: {% data reusables.actions.action-upload-artifact %}
  with:
    name: homework
    path: math-homework.txt
```

For more information, see [AUTOTITLE](/actions/using-workflows/storing-workflow-data-as-artifacts).

## Databases and service containers

Both systems enable you to include additional containers for databases, caching, or other dependencies. In GitLab CI/CD, a container for the job is specified with the `image` key, while {% data variables.product.prodname_actions %} uses the `container` key. In both systems, additional service containers are specified with the `services` key.

Below is an example of the syntax for each system.
### GitLab CI/CD syntax for databases and service containers

{% raw %}
```yaml
container-job:
  variables:
    POSTGRES_PASSWORD: postgres
    # The hostname used to communicate with the
    # PostgreSQL service container
    POSTGRES_HOST: postgres
    # The default PostgreSQL port
    POSTGRES_PORT: 5432
  image: node:20-bookworm-slim
  services:
    - postgres
  script:
    # Performs a clean installation of all dependencies
    # in the `package.json` file
    - npm ci
    # Runs a script that creates a PostgreSQL client,
    # populates the client with data, and retrieves data
    - node client.js
  tags:
    - docker
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for databases and service containers

```yaml
jobs:
  container-job:
    runs-on: ubuntu-latest
    container: node:20-bookworm-slim
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
    steps:
      - name: Check out repository code
        uses: {% data reusables.actions.action-checkout %}
      # Performs a clean installation of all dependencies
      # in the `package.json` file
      - name: Install dependencies
        run: npm ci
      - name: Connect to PostgreSQL
        # Runs a script that creates a PostgreSQL client,
        # populates the client with data, and retrieves data
        run: node client.js
        env:
          # The hostname used to communicate with the
          # PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432
```

For more information, see [AUTOTITLE](/actions/using-containerized-services/about-service-containers).

*Source: https://github.com/github/docs/blob/main/content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-gitlab-cicd.md*
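In the {% data variables.product.prodname_actions %} service container example above, the job does not wait for PostgreSQL to be ready to accept connections. One way to handle this, as documented for service containers, is to pass Docker health-check flags through the `options` key so the runner waits until the health check passes before running the job's steps. The interval, timeout, and retry values below are illustrative, not prescriptive:

```yaml
services:
  postgres:
    image: postgres
    env:
      POSTGRES_PASSWORD: postgres
    # Wait until `pg_isready` succeeds before job steps start
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
```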
{% data reusables.actions.enterprise-github-hosted-runners %}

## Introduction

Azure Pipelines and {% data variables.product.prodname_actions %} both allow you to create workflows that automatically build, test, publish, release, and deploy code. Azure Pipelines and {% data variables.product.prodname_actions %} share some similarities in workflow configuration:

* Workflow configuration files are written in YAML and are stored in the code's repository.
* Workflows include one or more jobs.
* Jobs include one or more steps or individual commands.
* Steps or tasks can be reused and shared with the community.

For more information, see [AUTOTITLE](/actions/learn-github-actions/understanding-github-actions).

## Key differences

When migrating from Azure Pipelines, consider the following differences:

* Azure Pipelines supports a legacy _classic editor_, which lets you define your CI configuration in a GUI editor instead of creating the pipeline definition in a YAML file. {% data variables.product.prodname_actions %} uses YAML files to define workflows and does not support a graphical editor.
* Azure Pipelines allows you to omit some structure in job definitions. For example, if you only have a single job, you don't need to define the job and only need to define its steps. {% data variables.product.prodname_actions %} requires explicit configuration, and YAML structure cannot be omitted.
* Azure Pipelines supports _stages_ defined in the YAML file, which can be used to create deployment workflows. {% data variables.product.prodname_actions %} requires you to separate stages into separate YAML workflow files.
* On-premises Azure Pipelines build agents can be selected with capabilities. {% data variables.product.prodname_actions %} self-hosted runners can be selected with labels.

## Migrating jobs and steps

Jobs and steps in Azure Pipelines are very similar to jobs and steps in {% data variables.product.prodname_actions %}. In both systems, jobs have the following characteristics:

* Jobs contain a series of steps that run sequentially.
* Jobs run on separate virtual machines or in separate containers.
* Jobs run in parallel by default, but can be configured to run sequentially.

## Migrating script steps

You can run a script or a shell command as a step in a workflow. In Azure Pipelines, script steps can be specified using the `script` key, or with the `bash`, `powershell`, or `pwsh` keys. Scripts can also be specified as an input to the [Bash task](https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash?view=azure-devops) or the [PowerShell task](https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/powershell?view=azure-devops).

In {% data variables.product.prodname_actions %}, all scripts are specified using the `run` key. To select a particular shell, you can specify the `shell` key when providing the script. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun).

Below is an example of the syntax for each system.

### Azure Pipelines syntax for script steps

{% raw %}
```yaml
jobs:
- job: scripts
  pool:
    vmImage: 'windows-latest'
  steps:
  - script: echo "This step runs in the default shell"
  - bash: echo "This step runs in bash"
  - pwsh: Write-Host "This step runs in PowerShell Core"
  - task: PowerShell@2
    inputs:
      script: Write-Host "This step runs in PowerShell"
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for script steps

{% raw %}
```yaml
jobs:
  scripts:
    runs-on: windows-latest
    steps:
      - run: echo "This step runs in the default shell"
      - run: echo "This step runs in bash"
        shell: bash
      - run: Write-Host "This step runs in PowerShell Core"
        shell: pwsh
      - run: Write-Host "This step runs in PowerShell"
        shell: powershell
```
{% endraw %}

## Differences in script error handling

In Azure Pipelines, scripts can be configured to error if any output is sent to `stderr`. {% data variables.product.prodname_actions %} does not support this configuration. {% data variables.product.prodname_actions %} configures shells to "fail fast" whenever possible, which stops the script immediately if one of the commands in a script exits with an error code. In contrast, Azure Pipelines requires explicit configuration to exit immediately on an error.
For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference).

## Differences in the default shell on Windows

In Azure Pipelines, the default shell for scripts on Windows platforms is the Command shell (_cmd.exe_). In {% data variables.product.prodname_actions %}, the default shell for scripts on Windows platforms is PowerShell. PowerShell has several differences in built-in commands, variable expansion, and flow control.

If you're running a simple command, you might be able to run a Command shell script in PowerShell without any changes. But in most cases, you will either need to update your script with PowerShell syntax or instruct {% data variables.product.prodname_actions %} to run the script with the Command shell instead of PowerShell. You can do this by specifying `shell` as `cmd`.

Below is an example of the syntax for each system.

### Azure Pipelines syntax using CMD by default

{% raw %}
```yaml
jobs:
- job: run_command
  pool:
    vmImage: 'windows-latest'
  steps:
  - script: echo "This step runs in CMD on Windows by default"
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for specifying CMD

{% raw %}
```yaml
jobs:
  run_command:
    runs-on: windows-latest
    steps:
      - run: echo "This step runs in PowerShell on Windows by default"
      - run: echo "This step runs in CMD on Windows explicitly"
        shell: cmd
```
{% endraw %}

For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#using-a-specific-shell).
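A frequent porting pitfall behind the "variable expansion" difference mentioned above: cmd.exe expands environment variables as `%NAME%`, while PowerShell uses `$env:NAME`. A minimal sketch of what this looks like in a workflow (the `BUILD_DIR` variable name and path are invented for this example):

{% raw %}
```yaml
jobs:
  expand_vars:
    runs-on: windows-latest
    env:
      BUILD_DIR: C:\build
    steps:
      # PowerShell (the default shell on Windows runners) expands $env:NAME
      - run: echo "Building in $env:BUILD_DIR"
      # The same step ported to the Command shell uses %NAME% instead
      - run: echo Building in %BUILD_DIR%
        shell: cmd
```
{% endraw %}

A script that only uses `%NAME%` expansion will print the literal text unchanged under PowerShell, which is why such steps usually need either a rewrite or an explicit `shell: cmd`.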
## Migrating conditionals and expression syntax

Azure Pipelines and {% data variables.product.prodname_actions %} can both run steps conditionally. In Azure Pipelines, conditional expressions are specified using the `condition` key. In {% data variables.product.prodname_actions %}, conditional expressions are specified using the `if` key.

Azure Pipelines uses functions within expressions to execute steps conditionally. In contrast, {% data variables.product.prodname_actions %} uses an infix notation. For example, you must replace the `eq` function in Azure Pipelines with the `==` operator in {% data variables.product.prodname_actions %}.

Below is an example of the syntax for each system.

### Azure Pipelines syntax for conditional expressions

{% raw %}
```yaml
jobs:
- job: conditional
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo "This step runs with str equals 'ABC' and num equals 123"
    condition: and(eq(variables.str, 'ABC'), eq(variables.num, 123))
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for conditional expressions

{% raw %}
```yaml
jobs:
  conditional:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This step runs with str equals 'ABC' and num equals 123"
        if: ${{ env.str == 'ABC' && env.num == 123 }}
```
{% endraw %}

For more information, see [AUTOTITLE](/actions/learn-github-actions/expressions).

## Dependencies between jobs

Both Azure Pipelines and {% data variables.product.prodname_actions %} allow you to set dependencies for a job. In both systems, jobs run in parallel by default, but job dependencies can be specified explicitly. In Azure Pipelines, this is done with the `dependsOn` key. In {% data variables.product.prodname_actions %}, this is done with the `needs` key.

Below is an example of the syntax for each system. The workflows start a first job named `initial`, and when that job completes, two jobs named `fanout1` and `fanout2` will run. Finally, when those jobs complete, the job `fanin` will run.
### Azure Pipelines syntax for dependencies between jobs

{% raw %}
```yaml
jobs:
- job: initial
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: echo "This job will be run first."
- job: fanout1
  pool:
    vmImage: 'ubuntu-latest'
  dependsOn: initial
  steps:
  - script: echo "This job will run after the initial job, in parallel with fanout2."
- job: fanout2
  pool:
    vmImage: 'ubuntu-latest'
  dependsOn: initial
  steps:
  - script: echo "This job will run after the initial job, in parallel with fanout1."
- job: fanin
  pool:
    vmImage: 'ubuntu-latest'
  dependsOn: [fanout1, fanout2]
  steps:
  - script: echo "This job will run after fanout1 and fanout2 have finished."
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for dependencies between jobs

{% raw %}
```yaml
jobs:
  initial:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This job will be run first."
  fanout1:
    runs-on: ubuntu-latest
    needs: initial
    steps:
      - run: echo "This job will run after the initial job, in parallel with fanout2."
  fanout2:
    runs-on: ubuntu-latest
    needs: initial
    steps:
      - run: echo "This job will run after the initial job, in parallel with fanout1."
  fanin:
    runs-on: ubuntu-latest
    needs: [fanout1, fanout2]
    steps:
      - run: echo "This job will run after fanout1 and fanout2 have finished."
```
{% endraw %}

For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idneeds).

## Migrating tasks to actions

Azure Pipelines uses _tasks_, which are application components that can be re-used in multiple workflows. {% data variables.product.prodname_actions %} uses _actions_, which can be used to perform tasks and customize your workflow. In both systems, you can specify the name of the task or action to run, along with any required inputs as key/value pairs.

Below is an example of the syntax for each system.

### Azure Pipelines syntax for tasks

{% raw %}
```yaml
jobs:
- job: run_python
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      architecture: 'x64'
  - script: python script.py
```
{% endraw %}

### {% data variables.product.prodname_actions %} syntax for actions

```yaml
jobs:
  run_python:
    runs-on: ubuntu-latest
    steps:
      - uses: {% data reusables.actions.action-setup-python %}
        with:
          python-version: '3.7'
          architecture: 'x64'
      - run: python script.py
```

You can find actions that you can use in your workflow in [{% data variables.product.prodname_marketplace %}](https://github.com/marketplace?type=actions), or you can create your own actions. For more information, see [AUTOTITLE](/actions/creating-actions).

*Source: https://github.com/github/docs/blob/main/content/actions/tutorials/migrate-to-github-actions/manual-migrations/migrate-from-azure-pipelines.md*
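To give a sense of what "creating your own actions" looks like, here is a minimal composite-action sketch. The file path, action name, input name, and greeting are all invented for illustration; only the `action.yml` structure (`inputs`, `runs.using: "composite"`) follows the documented metadata syntax:

{% raw %}
```yaml
# .github/actions/greet/action.yml (hypothetical path)
name: 'Greet'
description: 'Print a greeting (illustrative example)'
inputs:
  who:
    description: 'Name to greet'
    required: false
    default: 'world'
runs:
  using: 'composite'
  steps:
    # Composite run steps must declare a shell explicitly
    - run: echo "Hello, ${{ inputs.who }}!"
      shell: bash
```
{% endraw %}

A workflow in the same repository could then reference it with `uses: ./.github/actions/greet`, passing `who` under `with:`.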
{% data reusables.actions.enterprise-github-hosted-runners %}

## About {% data variables.product.prodname_actions_importer %}

You can use {% data variables.product.prodname_actions_importer %} to plan and automatically migrate your supported CI/CD pipelines to {% data variables.product.prodname_actions %}.

{% data variables.product.prodname_actions_importer %} is distributed as a Docker container, and uses a [{% data variables.product.prodname_dotcom %} CLI](https://cli.github.com) extension to interact with the container.

Any workflow that is converted by {% data variables.product.prodname_actions_importer %} should be inspected for correctness before using it as a production workload. The goal is to achieve an 80% conversion rate for every workflow; however, the actual conversion rate will depend on the makeup of each individual pipeline that is converted.

## Supported CI platforms

You can use {% data variables.product.prodname_actions_importer %} to migrate from the following platforms:

* Azure DevOps
* Bamboo
* Bitbucket Pipelines
* CircleCI
* GitLab (both cloud and self-hosted)
* Jenkins
* Travis CI

## Prerequisites

{% data variables.product.prodname_actions_importer %} has the following requirements:

{% data reusables.actions.actions-importer-prerequisites %}

### Installing the {% data variables.product.prodname_actions_importer %} CLI extension

{% data reusables.actions.installing-actions-importer %}

### Updating the {% data variables.product.prodname_actions_importer %} CLI

To ensure you're running the latest version of {% data variables.product.prodname_actions_importer %}, you should regularly run the `update` command:

```bash
gh actions-importer update
```

### Authenticating at the command line

You must configure credentials that allow {% data variables.product.prodname_actions_importer %} to communicate with {% data variables.product.prodname_dotcom %} and your current CI server. You can configure these credentials using environment variables or a `.env.local` file. The environment variables can be configured in an interactive prompt, by running the following command:

```bash
gh actions-importer configure
```

## Using the {% data variables.product.prodname_actions_importer %} CLI

Use the subcommands of `gh actions-importer` to begin your migration to {% data variables.product.prodname_actions %}, including `audit`, `forecast`, `dry-run`, and `migrate`.

### Auditing your existing CI pipelines

The `audit` subcommand can be used to plan your CI/CD migration by analyzing your current CI/CD footprint. This analysis can be used to plan a timeline for migrating to {% data variables.product.prodname_actions %}.

To run an audit, use the following command to determine your available options:

```bash
$ gh actions-importer audit -h
Description:
  Plan your CI/CD migration by analyzing your current CI/CD footprint.
[...]
Commands:
  azure-devops  An audit will output a list of data used in an Azure DevOps instance.
  bamboo        An audit will output a list of data used in a Bamboo instance.
  circle-ci     An audit will output a list of data used in a CircleCI instance.
  gitlab        An audit will output a list of data used in a GitLab instance.
  jenkins       An audit will output a list of data used in a Jenkins instance.
  travis-ci     An audit will output a list of data used in a Travis CI instance.
```

### Forecasting usage

The `forecast` subcommand reviews historical pipeline usage to create a forecast of {% data variables.product.prodname_actions %} usage.

To run a forecast, use the following command to determine your available options:

```bash
$ gh actions-importer forecast -h
Description:
  Forecasts GitHub Actions usage from historical pipeline utilization.
[...]
Commands:
  azure-devops  Forecasts GitHub Actions usage from historical Azure DevOps pipeline utilization.
  bamboo        Forecasts GitHub Actions usage from historical Bamboo pipeline utilization.
  jenkins       Forecasts GitHub Actions usage from historical Jenkins pipeline utilization.
  gitlab        Forecasts GitHub Actions usage from historical GitLab pipeline utilization.
  circle-ci     Forecasts GitHub Actions usage from historical CircleCI pipeline utilization.
  travis-ci     Forecasts GitHub Actions usage from historical Travis CI pipeline utilization.
  github        Forecasts GitHub Actions usage from historical GitHub pipeline utilization.
```

### Testing the migration process

The `dry-run` subcommand can be used to convert a pipeline to its {% data variables.product.prodname_actions %} equivalent, and then write the workflow to your local filesystem.
To perform a dry run, use the following command to determine your available options:

```bash
$ gh actions-importer dry-run -h
Description:
  Convert a pipeline to a GitHub Actions workflow and output its yaml file.
[...]
Commands:
  azure-devops  Convert an Azure DevOps pipeline to a GitHub Actions workflow and output its yaml file.
  bamboo        Convert a Bamboo pipeline to GitHub Actions workflows and output its yaml file.
  circle-ci     Convert a CircleCI pipeline to GitHub Actions workflows and output the yaml file(s).
  gitlab        Convert a GitLab pipeline to a GitHub Actions workflow and output the yaml file.
  jenkins       Convert a Jenkins job to a GitHub Actions workflow and output its yaml file.
  travis-ci     Convert a Travis CI pipeline to a GitHub Actions workflow and output its yaml file.
```

### Migrating a pipeline to {% data variables.product.prodname_actions %}

The `migrate` subcommand can be used to convert a pipeline to its GitHub Actions equivalent and then create a pull request with the contents.

To run a migration, use the following command to determine your available options:

```bash
$ gh actions-importer migrate -h
Description:
  Convert a pipeline to a GitHub Actions workflow and open a pull request with the changes.
[...]
Commands:
  azure-devops  Convert an Azure DevOps pipeline to a GitHub Actions workflow and open a pull request with the changes.
  bamboo        Convert a Bamboo pipeline to GitHub Actions workflows and open a pull request with the changes.
  circle-ci     Convert a CircleCI pipeline to GitHub Actions workflows and open a pull request with the changes.
  gitlab        Convert a GitLab pipeline to a GitHub Actions workflow and open a pull request with the changes.
  jenkins       Convert a Jenkins job to a GitHub Actions workflow and open a pull request with the changes.
  travis-ci     Convert a Travis CI pipeline to a GitHub Actions workflow and open a pull request with the changes.
```

## Performing self-serve migrations using IssueOps

You can use {% data variables.product.prodname_actions %} and {% data variables.product.prodname_github_issues %} to run CLI commands for {% data variables.product.prodname_actions_importer %}. This allows you to migrate your CI/CD workflows without installing software on your local machine. This approach is especially useful for organizations that want to enable self-service migrations to {% data variables.product.prodname_actions %}.

Once IssueOps is configured, users can open an issue with the relevant template to migrate pipelines to {% data variables.product.prodname_actions %}. For more information about setting up self-serve migrations with IssueOps, see the [`actions/importer-issue-ops`](https://github.com/actions/importer-issue-ops) template repository.

## Using the {% data variables.product.prodname_actions_importer %} labs repository

The {% data variables.product.prodname_actions_importer %} labs repository contains platform-specific learning paths that teach you how to use {% data variables.product.prodname_actions_importer %} and how to approach migrations to {% data variables.product.prodname_actions %}. You can use this repository to learn how to use {% data variables.product.prodname_actions_importer %} to help plan, forecast, and automate your migration to {% data variables.product.prodname_actions %}.

To learn more, see the [GitHub Actions Importer labs repository](https://github.com/actions/importer-labs/tree/main#readme).
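As a concrete illustration of the `dry-run` subcommand described above, a run against a GitLab pipeline might look like the following. The `--namespace` and `--project` values are placeholders, and the exact flags vary by provider and version, so check `gh actions-importer dry-run <provider> -h` for the options your installation accepts:

```shell
# Convert one GitLab pipeline to a workflow file on disk
# (octo-org and octo-project are placeholder values)
gh actions-importer dry-run gitlab --namespace octo-org --project octo-project --output-dir tmp/dry-run
```

The converted workflow is written under the `--output-dir` path, where it can be reviewed before running `migrate` to open a pull request.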
## Legal notice

{% data reusables.actions.actions-importer-legal-notice %}

*Source: https://github.com/github/docs/blob/main/content/actions/tutorials/migrate-to-github-actions/automated-migrations/use-github-actions-importer.md*
## About migrating from Bamboo with GitHub Actions Importer

The instructions below will guide you through configuring your environment to use {% data variables.product.prodname_actions_importer %} to migrate Bamboo pipelines to {% data variables.product.prodname_actions %}.

### Prerequisites

* A Bamboo account or organization with projects and pipelines that you want to convert to {% data variables.product.prodname_actions %} workflows.
* Bamboo version 7.1.1 or greater.
* Access to create a Bamboo {% data variables.product.pat_generic %} for your account or organization.

{% data reusables.actions.actions-importer-prerequisites %}

### Limitations

There are some limitations when migrating from Bamboo to {% data variables.product.prodname_actions %} with {% data variables.product.prodname_actions_importer %}:

* {% data variables.product.prodname_actions_importer %} relies on the YAML specification generated by the Bamboo Server to perform migrations. When Bamboo does not support exporting something to YAML, the missing information is not migrated.
* Trigger conditions are unsupported. When {% data variables.product.prodname_actions_importer %} encounters a trigger with a condition, the condition is surfaced as a comment and the trigger is transformed without it.
* Bamboo plans with customized settings for storing artifacts are not transformed. Instead, artifacts are stored and retrieved using the [`upload-artifact`](https://github.com/actions/upload-artifact) and [`download-artifact`](https://github.com/actions/download-artifact) actions.
* Disabled plans must be disabled manually in the GitHub UI. For more information, see [AUTOTITLE](/actions/using-workflows/disabling-and-enabling-a-workflow).
* Disabled jobs are transformed with an `if: false` condition, which prevents them from running. You must remove this condition to re-enable the job.
* Disabled tasks are not transformed because they are not included in the exported plan when using the Bamboo API.
* Bamboo provides options to clean up build workspaces after a build is complete. These are not transformed because it is assumed that GitHub-hosted runners or ephemeral self-hosted runners will automatically handle this.
* The hanging build detection options are not transformed because there is no equivalent in {% data variables.product.prodname_actions %}. The closest option is `timeout-minutes` on a job, which can be used to set the maximum number of minutes to let a job run. For more information, see [AUTOTITLE](/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idtimeout-minutes).
* Pattern match labeling is not transformed because there is no equivalent in {% data variables.product.prodname_actions %}.
* All artifacts are transformed into an `actions/upload-artifact` step, regardless of whether they are `shared` or not, so they can be downloaded from any job in the workflow.
* Permissions are not transformed because there is no suitable equivalent in {% data variables.product.prodname_actions %}.
* If the Bamboo version is between 7.1.1 and 8.1.1, project and plan variables will not be migrated.

#### Manual tasks

Certain Bamboo constructs must be migrated manually. These include:

* Masked variables
* Artifact expiry settings

## Installing the {% data variables.product.prodname_actions_importer %} CLI extension

{% data reusables.actions.installing-actions-importer %}

## Configuring credentials

The `configure` CLI command is used to set required credentials and options for {% data variables.product.prodname_actions_importer %} when working with Bamboo and {% data variables.product.prodname_dotcom %}.

1. Create a {% data variables.product.prodname_dotcom %} {% data variables.product.pat_v1 %}. For more information, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic). Your token must have the `workflow` scope.

   After creating the token, copy it and save it in a safe location for later use.
1. Create a Bamboo {% data variables.product.pat_generic %}. For more information, see [{% data variables.product.pat_generic_title_case_plural %}](https://confluence.atlassian.com/bamboo/personal-access-tokens-976779873.html) in the Bamboo documentation. Your token must have the following permissions, depending on which resources will be transformed.

   | Resource Type | View | View Configuration | Edit |
   | :--- | :---: | :---: | :---: |
   | Build Plan | {% octicon "check" aria-label="Required" %} | {% octicon "check" aria-label="Required" %} | {% octicon "check" aria-label="Required" %} |
   | Deployment Project | {% octicon "check" aria-label="Required" %} | {% octicon "check" aria-label="Required" %} | {% octicon "x" aria-label="Not required" %} |
   | Deployment Environment | {% octicon "check" aria-label="Required" %} | {% octicon "x" aria-label="Not required" %} | {% octicon "x" aria-label="Not required" %} |
   After creating the token, copy it and save it in a safe location for later use.
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `configure` CLI command:

   ```shell
   gh actions-importer configure
   ```

   The `configure` command will prompt you for the following information:

   * For "Which CI providers are you configuring?", use the arrow keys to select `Bamboo`, press `Space` to select it, then press `Enter`.
   * For "{% data variables.product.pat_generic_caps %} for GitHub", enter the value of the {% data variables.product.pat_v1 %} that you created earlier, and press `Enter`.
   * For "Base url of the GitHub instance", {% ifversion ghes %}enter the URL for {% data variables.location.product_location_enterprise %}, and press `Enter`.{% else %}press `Enter` to accept the default value (`https://github.com`).{% endif %}
   * For "{% data variables.product.pat_generic_caps %} for Bamboo", enter the value for the Bamboo {% data variables.product.pat_generic %} that you created earlier, and press `Enter`.
   * For "Base url of the Bamboo instance", enter the URL for your Bamboo Server or Bamboo Data Center instance, and press `Enter`.

   An example of the `configure` command is shown below:

   ```shell
   $ gh actions-importer configure
   ✔ Which CI providers are you configuring?: Bamboo
   Enter the following values (leave empty to omit):
   ✔ {% data variables.product.pat_generic_caps %} for GitHub: ***************
   ✔ Base url of the GitHub instance: https://github.com
   ✔ {% data variables.product.pat_generic_caps %} for Bamboo: ********************
   ✔ Base url of the Bamboo instance: https://bamboo.example.com
   Environment variables successfully updated.
   ```

1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

   ```shell
   gh actions-importer update
   ```

   The output of the command should be similar to below:

   ```shell
   Updating ghcr.io/actions-importer/cli:latest...
   ghcr.io/actions-importer/cli:latest up-to-date
   ```

## Perform an audit of Bamboo

You can use the `audit` command to get a high-level view of all projects in a Bamboo organization.

The `audit` command performs the following steps:

1. Fetches all of the projects defined in a Bamboo organization.
1. Converts each pipeline to its equivalent {% data variables.product.prodname_actions %} workflow.
1. Generates a report that summarizes how complete and complex of a migration is possible with {% data variables.product.prodname_actions_importer %}.
### Running the audit command

To perform an audit of a Bamboo instance, run the following command in your terminal:

```shell
gh actions-importer audit bamboo --output-dir tmp/audit
```

### Inspecting the audit results

{% data reusables.actions.gai-inspect-audit %}

## Forecasting usage

You can use the `forecast` command to forecast potential {% data variables.product.prodname_actions %} usage by computing metrics from completed pipeline runs in your Bamboo instance.

### Running the forecast command

To perform a forecast of potential {% data variables.product.prodname_actions %} usage, run the following command in your terminal. By default, {% data variables.product.prodname_actions_importer %} includes the previous seven days in the forecast report.

```shell
gh actions-importer forecast bamboo --output-dir tmp/forecast_reports
```

### Forecasting a project

To limit the forecast to the plans and deployment environments associated with a project, you can use the `--project` option, where the value is set to a build project key. For example:

```shell
gh actions-importer forecast bamboo --project PAN --output-dir tmp/forecast_reports
```
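If seven days is too narrow a window, the forecast can be run over a longer range. The following is a sketch, not a definitive invocation: the `--start-date` option is an assumption, so confirm it with `gh actions-importer forecast bamboo --help` before relying on it. The snippet computes a date 30 days back (portable across GNU and BSD `date`) and prints the command to run:

```shell
# Compute a start date 30 days in the past; `date -d` works on GNU date
# (Linux), and the fallback `date -v` works on BSD date (macOS).
START_DATE=$(date -d "30 days ago" +%Y-%m-%d 2>/dev/null || date -v-30d +%Y-%m-%d)

# The --start-date flag is assumed here; verify it exists in your
# version of the importer before running the printed command.
echo gh actions-importer forecast bamboo --start-date "$START_DATE" --output-dir tmp/forecast_reports
```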
### Inspecting the forecast report

The `forecast_report.md` file in the specified output directory contains the results of the forecast.

Listed below are some key terms that can appear in the forecast report:

* The **job count** is the total number of completed jobs.
* The **pipeline count** is the number of unique pipelines used.
* **Execution time** describes the amount of time a runner spent on a job. This metric can be used to help plan for the cost of {% data variables.product.prodname_dotcom %}-hosted runners.
  * This metric is correlated to how much you should expect to spend in {% data variables.product.prodname_actions %}. This will vary depending on the hardware used for these minutes. You can use the [{% data variables.product.prodname_actions %} pricing calculator](https://github.com/pricing/calculator) to estimate the costs.
* **Queue time** metrics describe the amount of time a job spent waiting for a runner to be available to execute it.
* **Concurrent jobs** metrics describe the amount of jobs running at any given time. This metric can be used to define the number of runners you should configure.

## Perform a dry-run migration of a Bamboo pipeline

You can use the `dry-run` command to convert a Bamboo pipeline to an equivalent {% data variables.product.prodname_actions %} workflow. A dry-run creates the output files in a specified directory, but does not open a pull request to migrate the pipeline.
### Running a dry-run migration for a build plan

To perform a dry run of migrating your Bamboo build plan to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `:my_plan_slug` with the plan's project and plan key in the format `<project_key>-<plan_key>` (for example, `PAN-SCRIP`).

```shell
gh actions-importer dry-run bamboo build --plan-slug :my_plan_slug --output-dir tmp/dry-run
```

### Running a dry-run migration for a deployment project

To perform a dry run of migrating your Bamboo deployment project to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `:my_deployment_project_id` with the ID of the deployment project you are converting.

```shell
gh actions-importer dry-run bamboo deployment --deployment-project-id :my_deployment_project_id --output-dir tmp/dry-run
```

You can view the logs of the dry run and the converted workflow files in the specified output directory.

{% data reusables.actions.gai-custom-transformers-rec %}

## Perform a production migration of a Bamboo pipeline

You can use the `migrate` command to convert a Bamboo pipeline and open a pull request with the equivalent {% data variables.product.prodname_actions %} workflow.

### Running the migrate command for a build plan

To migrate a Bamboo build plan to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing the `target-url` value with the URL for your {% data variables.product.prodname_dotcom %} repository, and `:my_plan_slug` with the plan's project and plan key in the format `<project_key>-<plan_key>`.

```shell
gh actions-importer migrate bamboo build --plan-slug :my_plan_slug --target-url :target_url --output-dir tmp/migrate
```

The command's output includes the URL to the pull request that adds the converted workflow to your repository.
An example of a successful output is similar to the following:

```shell
$ gh actions-importer migrate bamboo build --plan-slug :PROJECTKEY-PLANKEY --target-url https://github.com/octo-org/octo-repo --output-dir tmp/migrate
[2022-08-20 22:08:20] Logs: 'tmp/migrate/log/actions-importer-20220916-014033.log'
[2022-08-20 22:08:20] Pull request: 'https://github.com/octo-org/octo-repo/pull/1'
```
### Running the migrate command for a deployment project

To migrate a Bamboo deployment project to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing the `target-url` value with the URL for your {% data variables.product.prodname_dotcom %} repository, and `:my_deployment_project_id` with the ID of the deployment project you are converting.

```shell
gh actions-importer migrate bamboo deployment --deployment-project-id :my_deployment_project_id --target-url :target_url --output-dir tmp/migrate
```

The command's output includes the URL to the pull request that adds the converted workflow to your repository. An example of a successful output is similar to the following:

```shell
$ gh actions-importer migrate bamboo deployment --deployment-project-id 123 --target-url https://github.com/octo-org/octo-repo --output-dir tmp/migrate
[2023-04-20 22:08:20] Logs: 'tmp/migrate/log/actions-importer-20230420-014033.log'
[2023-04-20 22:08:20] Pull request: 'https://github.com/octo-org/octo-repo/pull/1'
```

{% data reusables.actions.gai-inspect-pull-request %}

## Reference

This section contains reference information on environment variables, optional arguments, and supported syntax when using {% data variables.product.prodname_actions_importer %} to migrate from Bamboo.

### Using environment variables

{% data reusables.actions.gai-config-environment-variables %}

{% data variables.product.prodname_actions_importer %} uses the following environment variables to connect to your Bamboo instance:

* `GITHUB_ACCESS_TOKEN`: The {% data variables.product.pat_v1 %} used to create pull requests with a converted workflow (requires `repo` and `workflow` scopes).
* `GITHUB_INSTANCE_URL`: The URL to the target {% data variables.product.prodname_dotcom %} instance (for example, `https://github.com`).
* `BAMBOO_ACCESS_TOKEN`: The Bamboo {% data variables.product.pat_generic %} used to authenticate with your Bamboo instance.
* `BAMBOO_INSTANCE_URL`: The URL to the Bamboo instance (for example, `https://bamboo.example.com`).

These environment variables can be specified in a `.env.local` file that is loaded by {% data variables.product.prodname_actions_importer %} when it is run.

### Optional arguments

{% data reusables.actions.gai-optional-arguments-intro %}

#### `--source-file-path`

You can use the `--source-file-path` argument with the `dry-run` or `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from the Bamboo instance. The `--source-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source file path instead. For example:

```shell
gh actions-importer dry-run bamboo build --plan-slug IN-COM -o tmp/bamboo --source-file-path ./path/to/my/bamboo/file.yml
```

#### `--config-file-path`

You can use the `--config-file-path` argument with the `audit`, `dry-run`, and `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from the Bamboo instance. The `--config-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source files instead.

##### Audit example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified YAML configuration file to perform an audit.
```bash
gh actions-importer audit bamboo -o tmp/bamboo --config-file-path "./path/to/my/bamboo/config.yml"
```

To audit a Bamboo instance using a config file, the config file must be in the following format, and each `repository_slug` must be unique:

```yaml
source_files:
  - repository_slug: IN/COM
    path: path/to/one/source/file.yml
  - repository_slug: IN/JOB
    path: path/to/another/source/file.yml
```

##### Dry run example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified YAML configuration file as the source file to perform a dry run.

The repository slug is built using the `--plan-slug` option. The source file path is matched and pulled from the specified source file.

```bash
gh actions-importer dry-run bamboo build --plan-slug IN-COM -o tmp/bamboo --config-file-path "./path/to/my/bamboo/config.yml"
```

### Supported syntax for Bamboo pipelines

The following table shows the type of properties that {% data variables.product.prodname_actions_importer %} is currently able to convert.

| Bamboo | GitHub Actions | Status |
| :--- | :--- | ---: |
| `environments` | `jobs` | Supported |
| `environments.` | `jobs.` | Supported |
| `.artifacts` | `jobs..steps.actions/upload-artifact` | Supported |
| `.artifact-subscriptions` | `jobs..steps.actions/download-artifact` | Supported |
| `.docker` | `jobs..container` | Supported |
| `.final-tasks` | `jobs..steps.if` | Supported |
| `.requirements` | `jobs..runs-on` | Supported |
| `.tasks` | `jobs..steps` | Supported |
| `.variables` | `jobs..env` | Supported |
| `stages` | `jobs..needs` | Supported |
| `stages..final` | `jobs..if` | Supported |
| `stages..jobs` | `jobs` | Supported |
| `stages..jobs.` | `jobs.` | Supported |
| `stages..manual` | `jobs..environment` | Supported |
| `triggers` | `on` | Supported |
| `dependencies` | `jobs..steps.` | Partially Supported |
| `branches` | Not applicable | Unsupported |
| `deployment.deployment-permissions` | Not applicable | Unsupported |
| `environment-permissions` | Not applicable | Unsupported |
| `notifications` | Not applicable | Unsupported |
| `plan-permissions` | Not applicable | Unsupported |
| `release-naming` | Not applicable | Unsupported |
| `repositories` | Not applicable | Unsupported |

For more information about supported Bamboo concept and plugin mappings, see the [`github/gh-actions-importer` repository](https://github.com/github/gh-actions-importer/blob/main/docs/bamboo/index.md).

### Environment variable mapping

{% data variables.product.prodname_actions_importer %} uses the mapping in the table below to convert default Bamboo environment variables to the closest equivalent in {% data variables.product.prodname_actions %}.
| Bamboo | GitHub Actions |
| :--- | :--- |
| `bamboo.agentId` | {% raw %}`${{ github.runner_name }}`{% endraw %} |
| `bamboo.agentWorkingDirectory` | {% raw %}`${{ github.workspace }}`{% endraw %} |
| `bamboo.buildKey` | {% raw %}`${{ github.workflow }}-${{ github.job }}`{% endraw %} |
| `bamboo.buildNumber` | {% raw %}`${{ github.run_id }}`{% endraw %} |
| `bamboo.buildPlanName` | {% raw %}`${{ github.repository }}-${{ github.workflow }}-${{ github.job }}`{% endraw %} |
| `bamboo.buildResultKey` | {% raw %}`${{ github.workflow }}-${{ github.job }}-${{ github.run_id }}`{% endraw %} |
| `bamboo.buildResultsUrl` | {% raw %}`${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}`{% endraw %} |
| `bamboo.build.working.directory` | {% raw %}`${{ github.workspace }}`{% endraw %} |
| `bamboo.deploy.project` | {% raw %}`${{ github.repository }}`{% endraw %} |
| `bamboo.ManualBuildTriggerReason.userName` | {% raw %}`${{ github.actor }}`{% endraw %} |
| `bamboo.planKey` | {% raw %}`${{ github.workflow }}`{% endraw %} |
| `bamboo.planName` | {% raw %}`${{ github.repository }}-${{ github.workflow }}`{% endraw %} |
| `bamboo.planRepository.branchDisplayName` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `bamboo.planRepository..branch` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `bamboo.planRepository..branchName` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `bamboo.planRepository..name` | {% raw %}`${{ github.repository }}`{% endraw %} |
| `bamboo.planRepository..repositoryUrl` | {% raw %}`${{ github.server }}/${{ github.repository }}`{% endraw %} |
| `bamboo.planRepository..revision` | {% raw %}`${{ github.sha }}`{% endraw %} |
| `bamboo.planRepository..username` | {% raw %}`${{ github.actor }}`{% endraw %} |
| `bamboo.repository.branch.name` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `bamboo.repository.git.branch` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `bamboo.repository.git.repositoryUrl` | {% raw %}`${{ github.server }}/${{ github.repository }}`{% endraw %} |
| `bamboo.repository.pr.key` | {% raw %}`${{ github.event.pull_request.number }}`{% endraw %} |
| `bamboo.repository.pr.sourceBranch` | {% raw %}`${{ github.event.pull_request.head.ref }}`{% endraw %} |
| `bamboo.repository.pr.targetBranch` | {% raw %}`${{ github.event.pull_request.base.ref }}`{% endraw %} |
| `bamboo.resultsUrl` | {% raw %}`${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}`{% endraw %} |
| `bamboo.shortJobKey` | {% raw %}`${{ github.job }}`{% endraw %} |
| `bamboo.shortJobName` | {% raw %}`${{ github.job }}`{% endraw %} |
| `bamboo.shortPlanKey` | {% raw %}`${{ github.workflow }}`{% endraw %} |
| `bamboo.shortPlanName` | {% raw %}`${{ github.workflow }}`{% endraw %} |

> [!NOTE]
> Unknown variables are transformed to {% raw %}`${{ env. }}`{% endraw %} and must be replaced or added under `env` for proper operation. For example, `${bamboo.jira.baseUrl}` will become {% raw %}`${{ env.jira_baseUrl }}`{% endraw %}.
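Following the note above, a converted workflow that references an unknown variable only runs correctly once the value is supplied under `env`. A minimal sketch of a workflow fragment (the variable name and URL are illustrative):

```yaml
{% raw %}
# Hypothetical fragment of a converted workflow: the importer emitted
# ${{ env.jira_baseUrl }}, so define the value at the workflow level.
env:
  jira_baseUrl: https://jira.example.com
{% endraw %}
```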
### System Variables

System variables used in tasks are transformed to the equivalent bash shell variable and are assumed to be available. For example, `${system.}` will be transformed to `$variable_name`. We recommend you verify this to ensure proper operation of the workflow.

## Legal notice

{% data reusables.actions.actions-importer-legal-notice %}
## About migrating from CircleCI with GitHub Actions Importer

The instructions below will guide you through configuring your environment to use {% data variables.product.prodname_actions_importer %} to migrate CircleCI pipelines to {% data variables.product.prodname_actions %}.

### Prerequisites

* A CircleCI account or organization with projects and pipelines that you want to convert to {% data variables.product.prodname_actions %} workflows.
* Access to create a CircleCI personal API token for your account or organization.
{% data reusables.actions.actions-importer-prerequisites %}

### Limitations

There are some limitations when migrating from CircleCI to {% data variables.product.prodname_actions %} with {% data variables.product.prodname_actions_importer %}:

* Automatic caching in between jobs of different workflows is not supported.
* The `audit` command is only supported when you use a CircleCI organization account. The `dry-run` and `migrate` commands can be used with a CircleCI organization or user account.

#### Manual tasks

Certain CircleCI constructs must be migrated manually. These include:

* Contexts
* Project-level environment variables
* Unknown job properties
* Unknown orbs

## Installing the {% data variables.product.prodname_actions_importer %} CLI extension

{% data reusables.actions.installing-actions-importer %}

## Configuring credentials

The `configure` CLI command is used to set required credentials and options for {% data variables.product.prodname_actions_importer %} when working with CircleCI and {% data variables.product.prodname_dotcom %}.

1. Create a {% data variables.product.prodname_dotcom %} {% data variables.product.pat_v1 %}. For more information, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).

   Your token must have the `workflow` scope.

   After creating the token, copy it and save it in a safe location for later use.
1. Create a CircleCI personal API token. For more information, see [Managing API Tokens](https://circleci.com/docs/managing-api-tokens/#creating-a-personal-api-token) in the CircleCI documentation.

   After creating the token, copy it and save it in a safe location for later use.
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `configure` CLI command:

   ```shell
   gh actions-importer configure
   ```

   The `configure` command will prompt you for the following information:

   * For "Which CI providers are you configuring?", use the arrow keys to select `CircleCI`, press `Space` to select it, then press `Enter`.
   * For "{% data variables.product.pat_generic_caps %} for GitHub", enter the value of the {% data variables.product.pat_v1 %} that you created earlier, and press `Enter`.
   * For "Base url of the GitHub instance", {% ifversion ghes %}enter the URL for {% data variables.location.product_location_enterprise %}, and press `Enter`.{% else %}press `Enter` to accept the default value (`https://github.com`).{% endif %}
   * For "{% data variables.product.pat_generic_caps %} for CircleCI", enter the value for the CircleCI personal API token that you created earlier, and press `Enter`.
   * For "Base url of the CircleCI instance", press `Enter` to accept the default value (`https://circleci.com`).
   * For "CircleCI organization name", enter the name for your CircleCI organization, and press `Enter`.
   An example of the `configure` command is shown below:

   ```shell
   $ gh actions-importer configure
   ✔ Which CI providers are you configuring?: CircleCI
   Enter the following values (leave empty to omit):
   ✔ {% data variables.product.pat_generic_caps %} for GitHub: ***************
   ✔ Base url of the GitHub instance: https://github.com
   ✔ {% data variables.product.pat_generic_caps %} for CircleCI: ********************
   ✔ Base url of the CircleCI instance: https://circleci.com
   ✔ CircleCI organization name: mycircleciorganization
   Environment variables successfully updated.
   ```

1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

   ```shell
   gh actions-importer update
   ```
   The output of the command should be similar to below:

   ```shell
   Updating ghcr.io/actions-importer/cli:latest...
   ghcr.io/actions-importer/cli:latest up-to-date
   ```

## Perform an audit of CircleCI

You can use the `audit` command to get a high-level view of all projects in a CircleCI organization.

The `audit` command performs the following steps:

1. Fetches all of the projects defined in a CircleCI organization.
1. Converts each pipeline to its equivalent {% data variables.product.prodname_actions %} workflow.
1. Generates a report that summarizes how complete and complex of a migration is possible with {% data variables.product.prodname_actions_importer %}.

### Running the audit command

To perform an audit of a CircleCI organization, run the following command in your terminal:

```shell
gh actions-importer audit circle-ci --output-dir tmp/audit
```

### Inspecting the audit results

{% data reusables.actions.gai-inspect-audit %}

## Forecast potential {% data variables.product.prodname_actions %} usage

You can use the `forecast` command to forecast potential {% data variables.product.prodname_actions %} usage by computing metrics from completed pipeline runs in CircleCI.

### Running the forecast command

To perform a forecast of potential {% data variables.product.prodname_actions %} usage, run the following command in your terminal. By default, {% data variables.product.prodname_actions_importer %} includes the previous seven days in the forecast report.

```shell
gh actions-importer forecast circle-ci --output-dir tmp/forecast_reports
```

### Inspecting the forecast report

The `forecast_report.md` file in the specified output directory contains the results of the forecast.

Listed below are some key terms that can appear in the forecast report:

* The **job count** is the total number of completed jobs.
* The **pipeline count** is the number of unique pipelines used.
* **Execution time** describes the amount of time a runner spent on a job. This metric can be used to help plan for the cost of {% data variables.product.prodname_dotcom %}-hosted runners. This metric is correlated to how much you should expect to spend in {% data variables.product.prodname_actions %}. This will vary depending on the hardware used for these minutes. You can use the [{% data variables.product.prodname_actions %} pricing calculator](https://github.com/pricing/calculator) to estimate the costs.
* **Queue time** metrics describe the amount of time a job spent waiting for a runner to be available to execute it.
* **Concurrent jobs** metrics describe the amount of jobs running at any given time. This metric can be used to define the number of runners you should configure.

Additionally, these metrics are defined for each queue of runners in CircleCI. This is especially useful if there is a mix of hosted or self-hosted runners, or high or low spec machines, so you can see metrics specific to different types of runners.

## Perform a dry-run migration of a CircleCI pipeline

You can use the `dry-run` command to convert a CircleCI pipeline to an equivalent {% data variables.product.prodname_actions %} workflow. A dry-run creates the output files in a specified directory, but does not open a pull request to migrate the pipeline.

To perform a dry run of migrating your CircleCI project to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `my-circle-ci-project` with the name of your CircleCI project.

```shell
gh actions-importer dry-run circle-ci --output-dir tmp/dry-run --circle-ci-project my-circle-ci-project
```

You can view the logs of the dry run and the converted workflow files in the specified output directory.
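A quick way to review what a dry run produced is to list the generated workflow files. This is a sketch, not part of the importer itself: the exact layout under the output directory depends on your project, and `tmp/dry-run` matches the `--output-dir` used above.

```shell
# List any workflow files the dry run wrote under tmp/dry-run.
# mkdir -p keeps the command safe to run even before a dry run exists.
mkdir -p tmp/dry-run
find tmp/dry-run -type f \( -name '*.yml' -o -name '*.yaml' \) | sort
```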
{% data reusables.actions.gai-custom-transformers-rec %}

## Perform a production migration of a CircleCI pipeline

You can use the `migrate` command to convert a CircleCI pipeline and open a pull request with the equivalent {% data variables.product.prodname_actions %} workflow.
### Running the migrate command

To migrate a CircleCI pipeline to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing the `target-url` value with the URL for your {% data variables.product.prodname_dotcom %} repository, and `my-circle-ci-project` with the name of your CircleCI project.

```shell
gh actions-importer migrate circle-ci --target-url https://github.com/octo-org/octo-repo --output-dir tmp/migrate --circle-ci-project my-circle-ci-project
```

The command's output includes the URL to the pull request that adds the converted workflow to your repository. An example of a successful output is similar to the following:

```shell
$ gh actions-importer migrate circle-ci --target-url https://github.com/octo-org/octo-repo --output-dir tmp/migrate --circle-ci-project my-circle-ci-project
[2022-08-20 22:08:20] Logs: 'tmp/migrate/log/actions-importer-20220916-014033.log'
[2022-08-20 22:08:20] Pull request: 'https://github.com/octo-org/octo-repo/pull/1'
```

{% data reusables.actions.gai-inspect-pull-request %}

## Reference

This section contains reference information on environment variables, optional arguments, and supported syntax when using {% data variables.product.prodname_actions_importer %} to migrate from CircleCI.

### Using environment variables

{% data reusables.actions.gai-config-environment-variables %}

{% data variables.product.prodname_actions_importer %} uses the following environment variables to connect to your CircleCI instance:

* `GITHUB_ACCESS_TOKEN`: The {% data variables.product.pat_v1 %} used to create pull requests with a converted workflow (requires `repo` and `workflow` scopes).
* `GITHUB_INSTANCE_URL`: The URL to the target {% data variables.product.prodname_dotcom %} instance (for example, `https://github.com`).
* `CIRCLE_CI_ACCESS_TOKEN`: The CircleCI personal API token used to authenticate with your CircleCI instance.
* `CIRCLE_CI_INSTANCE_URL`: The URL to the CircleCI instance (for example, `https://circleci.com`). If the variable is left unset, `https://circleci.com` is used as the default value.
* `CIRCLE_CI_ORGANIZATION`: The organization name of your CircleCI instance.
* `CIRCLE_CI_PROVIDER`: The location where your pipeline's source file is stored (such as `github`). Currently, only {% data variables.product.prodname_dotcom %} is supported.
* `CIRCLE_CI_SOURCE_GITHUB_ACCESS_TOKEN` (Optional): The {% data variables.product.pat_v1 %} used to authenticate with your source {% data variables.product.prodname_dotcom %} instance (requires `repo` scope). If not provided, the value of `GITHUB_ACCESS_TOKEN` is used instead.
* `CIRCLE_CI_SOURCE_GITHUB_INSTANCE_URL` (Optional): The URL to the source {% data variables.product.prodname_dotcom %} instance. If not provided, the value of `GITHUB_INSTANCE_URL` is used instead.

These environment variables can be specified in a `.env.local` file that is loaded by {% data variables.product.prodname_actions_importer %} when it is run.

### Optional arguments

{% data reusables.actions.gai-optional-arguments-intro %}

#### `--source-file-path`

You can use the `--source-file-path` argument with the `forecast`, `dry-run`, or `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from source control. The `--source-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source file path instead. For example:

```shell
gh actions-importer dry-run circle-ci --output-dir ./output/ --source-file-path ./path/to/.circleci/config.yml
```

If you would like to supply multiple source files when running the `forecast` subcommand, you can use pattern matching in the file path value.
For example, `gh forecast --source-file-path ./tmp/previous_forecast/jobs/*.json` supplies {% data variables.product.prodname_actions_importer %} with any source files that match the `./tmp/previous_forecast/jobs/*.json` file path.

#### `--config-file-path`

You can use the `--config-file-path` argument with the `audit`, `dry-run`, and `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from source control. The `--config-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source files instead. The `--config-file-path` argument can also be used to specify which repository a converted composite action should be migrated to.

##### Audit example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified YAML configuration file to perform an audit.

```bash
gh actions-importer audit circle-ci --output-dir ./output/ --config-file-path ./path/to/circle-ci/config.yml
```

To audit a CircleCI instance using a config file, the config file must be in the following format, and each `repository_slug` must be unique:

```yaml
source_files:
  - repository_slug: circle-org-name/circle-project-name
    path: path/to/.circleci/config.yml
  - repository_slug: circle-org-name/some-other-circle-project-name
    path: path/to/.circleci/config.yml
```
##### Dry run example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified YAML configuration file as the source file to perform a dry run. The pipeline is selected by matching the `repository_slug` in the config file to the value of the `--circle-ci-organization` and `--circle-ci-project` options. The `path` is then used to pull the specified source file.

```bash
gh actions-importer dry-run circle-ci --circle-ci-project circle-org-name/circle-project-name --output-dir ./output/ --config-file-path ./path/to/circle-ci/config.yml
```

##### Specify the repository of converted composite actions

{% data variables.product.prodname_actions_importer %} uses the YAML file provided to the `--config-file-path` argument to determine the repository that converted composite actions are migrated to.

To begin, you should run an audit without the `--config-file-path` argument:

```bash
gh actions-importer audit circle-ci --output-dir ./output/
```

The output of this command will contain a file named `config.yml` that contains a list of all the composite actions that were converted by {% data variables.product.prodname_actions_importer %}. For example, the `config.yml` file may have the following contents:

```yaml
composite_actions:
  - name: my-composite-action.yml
    target_url: https://github.com/octo-org/octo-repo
    ref: main
```

You can use this file to specify which repository and ref a reusable workflow or composite action should be added to.
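For instance, to send a converted composite action to a shared repository instead of the one detected during the audit, you could edit the generated `config.yml` before the next run. A hypothetical edit (the repository URL and ref below are illustrative values, not output produced by {% data variables.product.prodname_actions_importer %}):

```yaml
composite_actions:
  - name: my-composite-action.yml
    # Hypothetical shared repository and branch chosen as the migration target:
    target_url: https://github.com/octo-org/shared-actions
    ref: importer-migration
```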
You can then use the `--config-file-path` argument to provide the `config.yml` file to {% data variables.product.prodname_actions_importer %}. For example, you can use this file when running a `migrate` command to open a pull request for each unique repository defined in the config file:

```bash
gh actions-importer migrate circle-ci --circle-ci-project my-project-name --output-dir output/ --config-file-path config.yml --target-url https://github.com/my-org/my-repo
```

#### `--include-from`

You can use the `--include-from` argument with the `audit` subcommand.

The `--include-from` argument specifies a file that contains a line-delimited list of repositories to include in the audit of a CircleCI organization. Any repositories that are not included in the file are excluded from the audit.

For example:

```bash
gh actions-importer audit circle-ci --output-dir ./output/ --include-from repositories.txt
```

The file supplied for this parameter must be a line-delimited list of repositories, for example:

```text
repository_one
repository_two
repository_three
```

### Supported syntax for CircleCI pipelines

The following table shows the type of properties that {% data variables.product.prodname_actions_importer %} is currently able to convert.
| CircleCI Pipelines | GitHub Actions | Status |
| :----------------- | :------------- | :----- |
| cron triggers | `on.schedule` | Supported |
| environment | `env`, `jobs.<job_id>.env`, `jobs.<job_id>.steps.env` | Supported |
| executors | `runs-on` | Supported |
| jobs | `jobs` | Supported |
| job | `jobs.<job_id>`, `jobs.<job_id>.name` | Supported |
| matrix | `jobs.<job_id>.strategy`, `jobs.<job_id>.strategy.matrix` | Supported |
| parameters | `env`, `workflow-dispatch.inputs` | Supported |
| steps | `jobs.<job_id>.steps` | Supported |
| when, unless | `jobs.<job_id>.if` | Supported |
| triggers | `on` | Supported |
| executors | `container`, `services` | Partially Supported |
| orbs | `actions` | Partially Supported |
| executors | self-hosted runners | Unsupported |
| setup | Not applicable | Unsupported |
| version | Not applicable | Unsupported |

For more information about supported CircleCI concept and orb mappings, see the [`github/gh-actions-importer` repository](https://github.com/github/gh-actions-importer/blob/main/docs/circle_ci/index.md).

### Environment variable mapping

{% data variables.product.prodname_actions_importer %} uses the mapping in the table below to convert default CircleCI environment variables to the closest equivalent in {% data variables.product.prodname_actions %}.
| CircleCI | GitHub Actions |
| :------- | :------------- |
| `CI` | {% raw %}`$CI`{% endraw %} |
| `CIRCLE_BRANCH` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `CIRCLE_JOB` | {% raw %}`${{ github.job }}`{% endraw %} |
| `CIRCLE_PR_NUMBER` | {% raw %}`${{ github.event.number }}`{% endraw %} |
| `CIRCLE_PR_REPONAME` | {% raw %}`${{ github.repository }}`{% endraw %} |
| `CIRCLE_PROJECT_REPONAME` | {% raw %}`${{ github.repository }}`{% endraw %} |
| `CIRCLE_SHA1` | {% raw %}`${{ github.sha }}`{% endraw %} |
| `CIRCLE_TAG` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `CIRCLE_USERNAME` | {% raw %}`${{ github.actor }}`{% endraw %} |
| `CIRCLE_WORKFLOW_ID` | {% raw %}`${{ github.run_number }}`{% endraw %} |
| `CIRCLE_WORKING_DIRECTORY` | {% raw %}`${{ github.workspace }}`{% endraw %} |
| `<< pipeline.id >>` | {% raw %}`${{ github.workflow }}`{% endraw %} |
| `<< pipeline.number >>` | {% raw %}`${{ github.run_number }}`{% endraw %} |
| `<< pipeline.project.git_url >>` | `$GITHUB_SERVER_URL/$GITHUB_REPOSITORY` |
| `<< pipeline.project.type >>` | `github` |
| `<< pipeline.git.tag >>` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `<< pipeline.git.branch >>` | {% raw %}`${{ github.ref }}`{% endraw %} |
| `<< pipeline.git.revision >>` | {% raw %}`${{ github.event.pull_request.head.sha }}`{% endraw %} |
| `<< pipeline.git.base_revision >>` | {% raw %}`${{ github.event.pull_request.base.sha }}`{% endraw %} |

## Legal notice

{% data reusables.actions.actions-importer-legal-notice %}
## About migrating from Jenkins with GitHub Actions Importer

The instructions below will guide you through configuring your environment to use {% data variables.product.prodname_actions_importer %} to migrate Jenkins pipelines to {% data variables.product.prodname_actions %}.

### Prerequisites

* A Jenkins account or organization with pipelines and jobs that you want to convert to {% data variables.product.prodname_actions %} workflows.
* Access to create a Jenkins personal API token for your account or organization.
{% data reusables.actions.actions-importer-prerequisites %}

### Limitations

There are some limitations when migrating from Jenkins to {% data variables.product.prodname_actions %} with {% data variables.product.prodname_actions_importer %}. For example, you must migrate the following constructs manually:

* Mandatory build tools
* Scripted pipelines
* Secrets
* Self-hosted runners
* Unknown plugins

For more information on manual migrations, see [AUTOTITLE](/actions/migrating-to-github-actions/manually-migrating-to-github-actions/migrating-from-jenkins-to-github-actions).

## Installing the {% data variables.product.prodname_actions_importer %} CLI extension

{% data reusables.actions.installing-actions-importer %}

## Configuring credentials

The `configure` CLI command is used to set required credentials and options for {% data variables.product.prodname_actions_importer %} when working with Jenkins and {% data variables.product.prodname_dotcom %}.

1. Create a {% data variables.product.prodname_dotcom %} {% data variables.product.pat_v1 %}. For more information, see [AUTOTITLE](/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic).

   Your token must have the `workflow` scope.

   After creating the token, copy it and save it in a safe location for later use.
1. Create a Jenkins API token.
   For more information, see [Authenticating scripted clients](https://www.jenkins.io/doc/book/system-administration/authenticating-scripted-clients/) in the Jenkins documentation.

   After creating the token, copy it and save it in a safe location for later use.
1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `configure` CLI command:

   ```shell
   gh actions-importer configure
   ```

   The `configure` command will prompt you for the following information:

   * For "Which CI providers are you configuring?", use the arrow keys to select `Jenkins`, press `Space` to select it, then press `Enter`.
   * For "{% data variables.product.pat_generic_caps %} for GitHub", enter the value of the {% data variables.product.pat_v1 %} that you created earlier, and press `Enter`.
   * For "Base url of the GitHub instance", {% ifversion ghes %}enter the URL for {% data variables.location.product_location_enterprise %}, and press `Enter`.{% else %}press `Enter` to accept the default value (`https://github.com`).{% endif %}
   * For "{% data variables.product.pat_generic_caps %} for Jenkins", enter the value for the Jenkins personal API token that you created earlier, and press `Enter`.
   * For "Username of Jenkins user", enter your Jenkins username and press `Enter`.
   * For "Base url of the Jenkins instance", enter the URL of your Jenkins instance, and press `Enter`.

   An example of the `configure` command is shown below:

   ```shell
   $ gh actions-importer configure
   ✔ Which CI providers are you configuring?: Jenkins
   Enter the following values (leave empty to omit):
   ✔ {% data variables.product.pat_generic_caps %} for GitHub: ***************
   ✔ Base url of the GitHub instance: https://github.com
   ✔ {% data variables.product.pat_generic_caps %} for Jenkins: ***************
   ✔ Username of Jenkins user: admin
   ✔ Base url of the Jenkins instance: https://localhost
   Environment variables successfully updated.
   ```

1. In your terminal, run the {% data variables.product.prodname_actions_importer %} `update` CLI command to connect to {% data variables.product.prodname_registry %} {% data variables.product.prodname_container_registry %} and ensure that the container image is updated to the latest version:

   ```shell
   gh actions-importer update
   ```

   The output of the command should be similar to below:

   ```shell
   Updating ghcr.io/actions-importer/cli:latest...
   ghcr.io/actions-importer/cli:latest up-to-date
   ```

## Perform an audit of Jenkins

You can use the `audit` command to get a high-level view of all pipelines in a Jenkins server.
The `audit` command performs the following steps:

1. Fetches all of the projects defined in a Jenkins server.
1. Converts each pipeline to its equivalent {% data variables.product.prodname_actions %} workflow.
1. Generates a report that summarizes how complete and complex of a migration is possible with {% data variables.product.prodname_actions_importer %}.

### Running the audit command

To perform an audit of a Jenkins server, run the following command in your terminal:

```shell
gh actions-importer audit jenkins --output-dir tmp/audit
```

### Inspecting the audit results

{% data reusables.actions.gai-inspect-audit %}

## Forecast potential build runner usage

You can use the `forecast` command to forecast potential {% data variables.product.prodname_actions %} usage by computing metrics from completed pipeline runs in your Jenkins server.

### Prerequisites for running the forecast command

In order to run the `forecast` command against a Jenkins instance, you must install the [`paginated-builds` plugin](https://plugins.jenkins.io/paginated-builds) on your Jenkins server. This plugin allows {% data variables.product.prodname_actions_importer %} to efficiently retrieve historical build data for jobs that have a large number of builds. Because Jenkins does not provide a method to retrieve paginated build data, using this plugin prevents timeouts from the Jenkins server that can occur when fetching a large amount of historical data. The `paginated-builds` plugin is open source, and exposes a REST API endpoint to fetch build data in pages, rather than all at once.

To install the `paginated-builds` plugin:

1. On your Jenkins instance, navigate to `https://<your-jenkins-instance>/pluginManager/available`.
1. Search for the `paginated-builds` plugin.
1. Check the box on the left and select **Install without restart**.
### Running the forecast command

To perform a forecast of potential {% data variables.product.prodname_actions %} usage, run the following command in your terminal. By default, {% data variables.product.prodname_actions_importer %} includes the previous seven days in the forecast report.

```shell
gh actions-importer forecast jenkins --output-dir tmp/forecast
```

### Inspecting the forecast report

The `forecast_report.md` file in the specified output directory contains the results of the forecast.

Listed below are some key terms that can appear in the forecast report:

* The **job count** is the total number of completed jobs.
* The **pipeline count** is the number of unique pipelines used.
* **Execution time** describes the amount of time a runner spent on a job. This metric can be used to help plan for the cost of {% data variables.product.prodname_dotcom %}-hosted runners.
  * This metric is correlated to how much you should expect to spend in {% data variables.product.prodname_actions %}. This will vary depending on the hardware used for these minutes. You can use the [{% data variables.product.prodname_actions %} pricing calculator](https://github.com/pricing/calculator) to estimate the costs.
* **Queue time** metrics describe the amount of time a job spent waiting for a runner to be available to execute it.
* **Concurrent jobs** metrics describe the amount of jobs running at any given time. This metric can be used to define the number of runners you should configure.

Additionally, these metrics are defined for each queue of runners in Jenkins. This is especially useful if there is a mix of hosted or self-hosted runners, or high or low spec machines, so you can see metrics specific to different types of runners.

## Perform a dry-run migration of a Jenkins pipeline

You can use the `dry-run` command to convert a Jenkins pipeline to its equivalent {% data variables.product.prodname_actions %} workflow.
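To make the job-count and execution-time terms concrete, here is a minimal shell sketch of how such totals could be derived from per-job start and end timestamps. The build records below are hypothetical, and this is not how the Importer itself computes the report:

```shell
# Hypothetical completed-build records: "<start> <end>" in epoch seconds.
builds="100 160
200 290
300 420"

# Job count is the number of completed jobs; execution time is the
# sum of (end - start) across those jobs.
total=0
count=0
while read -r start end; do
  total=$(( total + end - start ))
  count=$(( count + 1 ))
done <<EOF
$builds
EOF

echo "job count: $count"
echo "total execution time: ${total}s"
```

The durations here are 60, 90, and 120 seconds, so the sketch reports 3 jobs and 270 seconds of execution time.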
### Running the dry-run command

You can use the `dry-run` command to convert a Jenkins pipeline to an equivalent {% data variables.product.prodname_actions %} workflow. A dry run creates the output files in a specified directory, but does not open a pull request to migrate the pipeline.

To perform a dry run of migrating your Jenkins pipelines to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing `my-jenkins-project` with the URL of your Jenkins job.

```shell
gh actions-importer dry-run jenkins --source-url my-jenkins-project --output-dir tmp/dry-run
```

### Inspecting the converted workflows

You can view the logs of the dry run and the converted workflow files in the specified output directory.

{% data reusables.actions.gai-custom-transformers-rec %}

## Perform a production migration of a Jenkins pipeline

You can use the `migrate` command to convert a Jenkins pipeline and open a pull request with the equivalent {% data variables.product.prodname_actions %} workflow.

### Running the migrate command

To migrate a Jenkins pipeline to {% data variables.product.prodname_actions %}, run the following command in your terminal, replacing the `target-url` value with the URL for your {% data variables.product.github %} repository, and `my-jenkins-project` with the URL for your Jenkins job.

```shell
gh actions-importer migrate jenkins --target-url https://github.com/:owner/:repo --output-dir tmp/migrate --source-url my-jenkins-project
```

The command's output includes the URL to the pull request that adds the converted workflow to your repository.
An example of a successful output is similar to the following:

```shell
$ gh actions-importer migrate jenkins --target-url https://github.com/octo-org/octo-repo --output-dir tmp/migrate --source-url http://localhost:8080/job/monas_dev_work/job/monas_freestyle
[2022-08-20 22:08:20] Logs: 'tmp/migrate/log/actions-importer-20220916-014033.log'
[2022-08-20 22:08:20] Pull request: 'https://github.com/octo-org/octo-repo/pull/1'
```

{% data reusables.actions.gai-inspect-pull-request %}

## Reference

This section contains reference information on environment variables, optional arguments, and supported syntax when using {% data variables.product.prodname_actions_importer %} to migrate from Jenkins.

### Using environment variables

{% data reusables.actions.gai-config-environment-variables %}

{% data variables.product.prodname_actions_importer %} uses the following environment variables to connect to your Jenkins instance:

* `GITHUB_ACCESS_TOKEN`: The {% data variables.product.pat_v1 %} used to create pull requests with a converted workflow (requires `repo` and `workflow` scopes).
* `GITHUB_INSTANCE_URL`: The URL to the target {% data variables.product.prodname_dotcom %} instance (for example, `https://github.com`).
* `JENKINS_ACCESS_TOKEN`: The Jenkins API token used to view Jenkins resources.

  > [!NOTE]
  > This token requires access to all jobs that you want to migrate or audit. In cases where a folder or job does not inherit access control lists from its parent, you must grant explicit permissions or full admin privileges.

* `JENKINS_USERNAME`: The username of the user account that created the Jenkins API token.
* `JENKINS_INSTANCE_URL`: The URL of the Jenkins instance.
* `JENKINSFILE_ACCESS_TOKEN` (Optional): The API token used to retrieve the contents of a `Jenkinsfile` stored in the build repository. This requires the `repo` scope. If this is not provided, the `GITHUB_ACCESS_TOKEN` will be used instead.
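A hypothetical `.env.local` file covering the variables above might look like the following. Every value is a placeholder, not a real credential:

```shell
# Hypothetical .env.local contents; replace each value with your own.
GITHUB_ACCESS_TOKEN=ghp_replace_with_your_token
GITHUB_INSTANCE_URL=https://github.com
JENKINS_ACCESS_TOKEN=replace_with_your_jenkins_api_token
JENKINS_USERNAME=admin
JENKINS_INSTANCE_URL=https://jenkins.example.com
```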
These environment variables can be specified in a `.env.local` file that is loaded by {% data variables.product.prodname_actions_importer %} when it is run.

### Using optional arguments

{% data reusables.actions.gai-optional-arguments-intro %}

#### `--source-file-path`

You can use the `--source-file-path` argument with the `forecast`, `dry-run`, or `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from source control. The `--source-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source file path instead. You can use this option for Jenkinsfile and multibranch pipelines.

If you would like to supply multiple source files when running the `forecast` subcommand, you can use pattern matching in the file path value. For example, `gh forecast --source-file-path ./tmp/previous_forecast/jobs/*.json` supplies {% data variables.product.prodname_actions_importer %} with any source files that match the `./tmp/previous_forecast/jobs/*.json` file path.
##### Jenkinsfile pipeline example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified Jenkinsfile as the source file to perform a dry run.

```shell
gh actions-importer dry-run jenkins --output-dir path/to/output/ --source-file-path path/to/Jenkinsfile --source-url :url_to_jenkins_job
```

#### `--config-file-path`

You can use the `--config-file-path` argument with the `audit`, `dry-run`, and `migrate` subcommands.

By default, {% data variables.product.prodname_actions_importer %} fetches pipeline contents from source control. The `--config-file-path` argument tells {% data variables.product.prodname_actions_importer %} to use the specified source files instead.

When you use the `--config-file-path` option with the `dry-run` or `migrate` subcommands, {% data variables.product.prodname_actions_importer %} matches the repository slug to the job represented by the `--source-url` option to select the pipeline. It uses the `config-file-path` to pull the specified source file.

##### Audit example

In this example, {% data variables.product.prodname_actions_importer %} uses the specified YAML configuration file to perform an audit.
```shell
gh actions-importer audit jenkins --output-dir path/to/output/ --config-file-path path/to/jenkins/config.yml
```

To audit a Jenkins instance using a config file, the config file must be in the following format, and each `repository_slug` value must be unique:

```yaml
source_files:
  - repository_slug: pipeline-name
    path: path/to/Jenkinsfile
  - repository_slug: multi-branch-pipeline-name
    branches:
      - branch: main
        path: path/to/Jenkinsfile
      - branch: node
        path: path/to/Jenkinsfile
```

### Supported syntax for Jenkins pipelines

The following tables show the type of properties {% data variables.product.prodname_actions_importer %} is currently able to convert. For more details about how Jenkins pipeline syntax aligns with {% data variables.product.prodname_actions %}, see [AUTOTITLE](/actions/migrating-to-github-actions/manually-migrating-to-github-actions/migrating-from-jenkins-to-github-actions).

For information about supported Jenkins plugins, see the [`github/gh-actions-importer` repository](https://github.com/github/gh-actions-importer/blob/main/docs/jenkins/index.md).
#### Supported syntax for Freestyle pipelines

| Jenkins | GitHub Actions | Status |
| :------ | :------------- | :----- |
| docker template | `jobs.<job_id>.container` | Supported |
| build | `jobs` | Partially supported |
| build environment | `env` | Partially supported |
| build triggers | `on` | Partially supported |
| general | `runners` | Partially supported |

#### Supported syntax for Jenkinsfile pipelines

| Jenkins | GitHub Actions | Status |
| :------ | :------------- | :----- |
| docker | `jobs.<job_id>.container` | Supported |
| stage | `jobs.<job_id>` | Supported |
| agent | `runners` | Partially supported |
| environment | `env` | Partially supported |
| stages | `jobs` | Partially supported |
| steps | `jobs.<job_id>.steps` | Partially supported |
| triggers | `on` | Partially supported |
| when | `jobs.<job_id>.if` | Partially supported |
| inputs | `inputs` | Unsupported |
| matrix | `jobs.<job_id>.strategy.matrix` | Unsupported |
| options | `jobs.<job_id>.strategy` | Unsupported |
| parameters | `inputs` | Unsupported |

### Environment variables syntax

{% data variables.product.prodname_actions_importer %} uses the mapping in the table below to convert default Jenkins environment variables to the closest equivalent in {% data variables.product.prodname_actions %}.
| Jenkins | GitHub Actions |
| :------ | :------------- |
| `${BUILD_ID}` | `{% raw %}${{ github.run_id }}{% endraw %}` |
| `${BUILD_NUMBER}` | `{% raw %}${{ github.run_id }}{% endraw %}` |
| `${BUILD_TAG}` | `{% raw %}${{ github.workflow }}-${{ github.run_id }}{% endraw %}` |
| `${BUILD_URL}` | `{% raw %}${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}{% endraw %}` |
| `${JENKINS_URL}` | `{% raw %}${{ github.server_url }}{% endraw %}` |
| `${JOB_NAME}` | `{% raw %}${{ github.workflow }}{% endraw %}` |
| `${WORKSPACE}` | `{% raw %}${{ github.workspace }}{% endraw %}` |

## Legal notice

{% data reusables.actions.actions-importer-legal-notice %}