---
layout: docs
page_title: Client count calculation
description: |-
Technical overview of client count calculations in Vault
---
# Client count calculation
Vault provides usage telemetry for the number of clients based on the number of
unique entity assignments within a Vault cluster over a given billing period:
- Standard entity assignments based on authentication method for active entities.
- Constructed entity assignments for active non-entity tokens, including batch
tokens created by performance standby nodes.
- Certificate entity assignments for ACME connections.
- Secrets being synced to at least one sync destination.
```markdown
CLIENT_COUNT_PER_CLUSTER = UNIQUE_STANDARD_ENTITIES +
UNIQUE_CONSTRUCTED_ENTITIES +
UNIQUE_CERTIFICATE_ENTITIES +
UNIQUE_SYNCED_SECRETS
```
Vault does not aggregate or de-duplicate clients across clusters, but all logs
and precomputed reports are included in DR replication.
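To see how the formula components add up for a single cluster, you can read the
activity endpoint directly. This is a minimal sketch that assumes a token with
read access to `sys/internal/counters/activity`; the exact response fields vary
by Vault version.
```shell-session
$ vault read -format=json sys/internal/counters/activity
```
The response includes per-namespace breakdowns as well as cluster-wide totals
for the queried billing period.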
## How Vault tracks clients
Each time a client authenticates, Vault checks whether the corresponding entity
ID has already been recorded in the client log as active for the current month:
- **If no record exists**, Vault adds an entry for the entity ID.
- If a record exists but the entity was last active **prior to the current month**,
Vault adds a new entry to the client record for the entity ID.
- If a record exists and the entity was last active **within the current month**,
Vault does not add a new entry to the client record for the entity ID.
For example:
- Two non-entity tokens under the same namespace, with the same alias name and
  policy assignment, receive the same entity assignment and are only counted
  **once**.
- Two authentication requests from a single ACME client for the same certificate
identifiers from different mounts receive the same entity assignments and
are counted **once**.
- An application authenticating with AppRole receives the same entity assignment
  every time and is only counted **once**.
At the **end of each month**, Vault pre-computes reports for each cluster on the
number of active entities, per namespace, for each time period within the
configured retention period. By de-duplicating records from the current month
against records for the previous month, Vault ensures entities that remain
active within every calendar month are only counted once for the year.
The deduplication process has two additional consequences:
1. Detailed reporting lags by 1 month at the start of the billing period.
1. Billing period reports that include the current month must use an
approximation for the number of new clients in the current month.
## How Vault approximates current-month client count
Vault approximates client count for the current month using a
[hyperloglog algorithm](https://en.wikipedia.org/wiki/HyperLogLog) that looks
at the difference between the cardinalities of:
- the number of clients across the **entire** billing period, and
- the number of clients across the billing period **excluding** clients from the current month.
The approximation algorithm uses the
[axiomhq](https://github.com/axiomhq/hyperloglog) library with fourteen
registers and sparse representations (when applicable). The multiset for the
calculation is the total number of clients within a billing period, and the
accuracy estimate for the approximation decreases as the difference between the
number of clients in the current month and the number of clients in the billing
period increases.
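Because the current-month value is an estimate rather than a fully
de-duplicated count, you can inspect the in-progress number for the partial
month separately. A minimal sketch, assuming a token with read access to the
counters endpoints:
```shell-session
$ vault read sys/internal/counters/activity/monthly
```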
### Testing verification for client count approximations
Given `CM` as the number of clients for the current month and `BP` as the number
of clients in the billing period, we found that the approximation becomes
increasingly imprecise as:
- the difference between `BP` and `CM` increases,
- the value of `CM` approaches zero, and
- the number of months in the billing period increases.
The maximum observed error rate
(`ER = (FOUND_NEW_CLIENTS / EXPECTED_NEW_CLIENTS)`) was 30% for 10,000 clients
or less, with an error rate of 5% to 10% in the average case.
For the purposes of predictive analysis, the following tables list a random
sample of the values we found during testing for `CM`, `BP`, and `ER`.
<Tabs>
<Tab heading="Single-month tests">
| Current month (`CM`) | Billing period (`BP`) | Error rate (`ER`) |
| :-----------------: | :------------------: | :---------------: |
| 7 | 10 | 0% |
| 20 | 600 | 0% |
| 20 | 1000 | 0% |
| 20 | 6000 | 10% |
| 20 | 10000 | 10% |
| 200 | 600 | 0% |
| 200 | 10000 | 7% |
| 400 | 6000 | 5% |
| 2000 | 10000 | 4% |
</Tab>
<Tab heading="Multi-month / multi-segment tests">
| Current month (`CM`) | Billing period (`BP`) | Error rate (`ER`) |
| :-----------------: | :------------------: | :---------------: |
| 20 | 15 | 0% |
| 20 | 100 | 0% |
| 20 | 1000 | 0% |
| 20 | 10000 | 30% |
| 200 | 10000 | 6% |
| 2000 | 10000 | 2% |
</Tab>
</Tabs>
## Resource costs for client computation
In addition to the storage used for storing the pre-computed reports, each
active entity in the client log consumes a few bytes of storage. As a safety
measure against runaway storage growth, Vault limits the number of entity
records to 656,000 per month, but typical storage costs are much less.
On average, 1000 monthly active entities require 3.0 MiB of storage capacity
over the default 48-month retention period.
@include "content-footer-title.mdx"
<Tabs>
<Tab heading="Related concepts">
<ul>
<li>
<a href="/vault/docs/concepts/client-count/">Clients and entities</a>
</li>
<li>
<a href="/vault/docs/concepts/client-count/faq">Client count FAQ</a>
</li>
</ul>
</Tab>
<Tab heading="Related API docs">
<ul>
<li>
<a href="/vault/api-docs/system/internal-counters#client-count">Client Count API</a>
</li>
<li>
<a href="/vault/api-docs/system/internal-counters">Internal counters API</a>
</li>
</ul>
</Tab>
<Tab heading="Related tutorials">
<ul>
<li>
<a href="/vault/tutorials/monitoring/usage-metrics">
Vault Usage Metrics in Vault UI
</a>
</li>
<li>
<a href="/vault/tutorials/monitoring/usage-metrics">KMIP Client metrics</a>
</li>
</ul>
</Tab>
<Tab heading="Other resources">
<ul>
<li>
<a href="https://github.com/axiomhq/hyperloglog#readme">Accuracy estimates for the axiomhq hyperloglog library</a>
</li>
<li>
Blog post: <a href="https://www.hashicorp.com/blog/onboarding-applications-to-vault-using-terraform-a-practical-guide">
Onboarding Applications to Vault Using Terraform: A Practical Guide
</a>
</li>
</ul>
</Tab>
</Tabs>
---
layout: docs
page_title: Clients and entities
description: |-
Technical overview covering the concept of clients, entities, and entity IDs
in Vault
---
# Clients and entities
Anything that connects and authenticates to Vault to accomplish a task is a
**client**. For example, a user logging into a cluster to manage policies or a
machine-based system (application or cloud service) requesting a database token
are both considered clients.
![Vault Client Workflows](https://www.datocms-assets.com/2885/1617325020-valult-client-workflows.png)
While there are many different potential clients, the most common are:
1. **Human users** interacting directly with Vault.
1. **Applications and microservices**.
1. **Servers and platforms** like VMs, Docker containers, or Kubernetes pods.
1. **Orchestrators** like Nomad, Terraform, Ansible, ACME, and other continuous
integration / continuous delivery (CI/CD) pipelines.
1. **Vault agents and proxies** that act on behalf of an application or
microservice.
## Identity and entity assignment
Authorized clients can connect to Vault with a variety of authentication methods.
Authorization source | AuthN method
-------------------------- | ---------------------------------
Externally managed or SSO | Active Directory, LDAP, OIDC, JWT, GitHub, username+password
Platform- or server-based | Kubernetes, AWS, GCP, Azure, Cert, Cloud Foundry
Self | AppRole, tokens with no associated authN path or role
![Vault client types](https://www.datocms-assets.com/2885/1617325030-vault-clients.png)
When a client authenticates, Vault assigns a unique identifier
(**client entity**) in the [Vault identity system](/vault/docs/secrets/identity)
based on the authentication method used or a previously assigned alias.
**Entity aliases** let clients authenticate with multiple methods but still be
associated with a single policy, share resources, and count as the same entity,
regardless of the authentication method used for a particular session.
## Standard entity assignments
@include "authn-names.mdx"
Each authentication method has a unique ID string that corresponds to a client
entity used for telemetry. For example, a microservice authenticating with
AppRole takes the associated role ID as the entity. If you are running at scale
and have multiple copies of the microservice using the same role ID, the full
set of instances will share the same identifier.
As a result, it is critical that you configure different clients
(microservices, humans, applications, services, platforms, servers, or pipelines)
in a way that results in distinct clients having unique identifiers. For example,
the role IDs should be different **between** two microservices, MicroserviceA and
MicroserviceB, even if the **specific instances** of MicroserviceA and
MicroserviceB share a common role ID.
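For example, a minimal sketch of keeping role IDs distinct per microservice,
assuming the AppRole method is enabled at `auth/approle` and that the `svc-a`
and `svc-b` policies already exist:
```shell-session
# One role per microservice so each service gets its own role ID
$ vault write auth/approle/role/microservice-a token_policies="svc-a"
$ vault write auth/approle/role/microservice-b token_policies="svc-b"

# Every instance of a given service authenticates with that service's role ID
$ vault read auth/approle/role/microservice-a/role-id
$ vault read auth/approle/role/microservice-b/role-id
```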
## Entity assignment with ACME
Vault treats all ACME connections that authenticate under the same certificate
identifier (domain) as the same **certificate entity** for client count
calculations.
For example:
- ACME client requests (from the same server or separate servers) for the same
certificate identifier (a unique combination of CN, DNS, SANS and IP SANS)
are treated as the same entity.
- If an ACME client makes a request for `a.test.com`, and subsequently makes a new
request for `b.test.com` and `*.test.com` then two distinct entities will be created,
one for `a.test.com` and another for the combination of `b.test.com` and `*.test.com`.
- Overlap of certificate identifiers from different ACME clients will be treated
  as the same entity. For example, if client 1 requests `a.test.com` and client 2
  requests `a.test.com`, a single entity is created for both requests.
## Secret sync clients
Vault can automatically update secrets in external destinations with [secret sync](/vault/docs/sync).
A secret that gets synced to one or more destinations is considered a **secret
sync client** for client count calculations.
Note that:
- Each synced secret is counted distinctly based on the path and namespace of
the secret. If you have secrets at path `kv1/secret` and `kv2/secret`
which are both synced, then two distinct secret syncs will be counted.
- A secret can be synced to multiple different destinations, and it will still
only be counted as one secret sync. If `kv/secret` is synced to both Azure Key
  Vault and AWS Secrets Manager, this will be counted as only one secret sync
client.
- Secret sync clients are only created after you create an association between a
secret and a store. If you create `kv/secret` and do not associate this secret
with any destinations, it will not be counted as a secret sync client.
- Secret sync clients are registered in Vault's client counting system so long
as the sync is active. If you create `kv/secret` and associate it with a
destination in January, update the secret in May, and then delete the secret
in September, Vault will consider this client as having been seen throughout
the entire period of January through September.
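As an illustration, the following sketch associates a KV secret with a sync
destination, which is the point at which a secret sync client is registered.
The destination type (`aws-sm`) and name (`my-dest`) are placeholders for a
destination you have already configured:
```shell-session
$ vault write sys/sync/destinations/aws-sm/my-dest/associations/set \
    mount="kv" \
    secret_name="secret"
```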
## Entity assignment with namespaces
A namespace represents an isolated, logical space within a single Vault
cluster and is typically used for administrative purposes.
When a client authenticates **within a given namespace**, Vault assigns the same
client entity to activities within any child namespaces because the namespaces
exist within the same larger scope.
When a client authenticates **across namespace boundaries**, Vault treats the
single client as two distinct entities because the client is operating
across different scopes with different policy assignments and resources.
For example:
- Different requests under parent and child namespaces from a single client
authenticated under the **parent** namespace are assigned **the same entity
ID**. All the client activities occur **within** the boundaries of the
namespace referenced in the original authentication request.
- Different requests under parent and child namespaces from a single client
authenticated under the **child** namespace are assigned **different entity
IDs**. Some of the client activities occur **outside** the boundaries of the
namespace referenced in the original authentication request.
- Requests by the same client to two different namespaces, NAMESPACE<sub>A</sub>
and NAMESPACE<sub>B</sub> are assigned **different entity IDs**.
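The following sketch illustrates the difference. The `parent` and
`parent/child` namespaces, the `userpass` mounts, and the `shared` KV mount are
placeholders:
```shell-session
# Authenticate once under the parent namespace
$ vault login -namespace=parent -method=userpass username=app1

# Requests into the child namespace with the resulting token stay within the
# original scope and map to the same entity ID
$ vault kv get -namespace=parent/child shared/app1

# A separate login performed directly under the child namespace produces a
# different entity ID for the same client
$ vault login -namespace=parent/child -method=userpass username=app1
```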
## Entity assignment with non-entity tokens
Vault uses tokens as the core method for authentication. You can use tokens to
authenticate directly, or use token [auth methods](/vault/docs/concepts/auth)
to dynamically generate tokens based on external identities.
When clients authenticate with the [token auth method](/vault/docs/auth/token)
**without** a client identity, the result is a **non-entity token**. For example,
a service might use the token authentication method to create a token for a user
whose explicit identity is unknown.
Ultimately, non-entity tokens trace back to a particular client or purpose so
Vault assigns unique entity IDs to non-entity tokens based on a combination of
the:
- assigned entity alias name (if present),
- associated policies, and
- namespace under which the token was created.
In **rare** cases, tokens may be created outside of the Vault identity system
**without** an associated entity or identity. Vault treats every unaffiliated
token as a unique client for production usage. We strongly discourage the use of
unaffiliated tokens and recommend that you always associate a token with an
entity alias and token role.
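A minimal sketch of that pattern; the role name `batch-jobs` and alias name
`batch-runner` are placeholders of your own choosing:
```shell-session
# Restrict the token role to a known entity alias
$ vault write auth/token/roles/batch-jobs \
    allowed_entity_aliases="batch-runner"

# Tokens created against the role with that alias share a single client entity
$ vault token create -role=batch-jobs -entity-alias=batch-runner
```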
<Note title="Behavior change in Vault 1.9+">
As of Vault 1.9, all non-entity tokens with the same namespace and policy
assignments are treated as the same client entity. Prior to Vault 1.9, every
non-entity token was treated as a unique client entity, which drastically
inflated telemetry around client count.
If you are using Vault 1.8 or earlier, and need to address client count
inflation without upgrading, we recommend creating a
[token role](/vault/api-docs/auth/token#create-update-token-role) with
allowable entity aliases and assigning all tokens to an appropriate
[role and entity alias name](/vault/api-docs/auth/token#create-token) before
using them.
</Note>
@include "content-footer-title.mdx"
<Tabs>
<Tab heading="Related concepts">
<ul>
<li>
<a href="/vault/docs/concepts/client-count/counting">Client count calculation</a>
</li>
<li>
<a href="/vault/docs/concepts/client-count/faq">Client count FAQ</a>
</li>
</ul>
</Tab>
<Tab heading="Related tutorials">
<ul>
<li>
<a href="/vault/tutorials/auth-methods/identity">Identity: Entities and Groups</a>
</li>
<li>
<a href="/vault/tutorials/enterprise/namespaces">Secure Multi-Tenancy with Namespaces</a>
</li>
</ul>
</Tab>
<Tab heading="Other resources">
<ul>
<li>
Article: <a href="https://www.hashicorp.com/identity-based-security-and-low-trust-networks">
Identity-based Security and Low-trust Networks
</a>
</li>
</ul>
</Tab>
</Tabs>
---
layout: docs
page_title: Vault usage metrics
description: |-
Learn how to discover the number of Vault clients for each namespace in Vault.
---
# Vault usage metrics
Client calculation and sizing can be complex to compute when you have multiple
namespaces and auth mounts. The **Vault Usage Metrics** dashboard in the Vault
UI provides this information and lets you filter the data by namespace and/or
auth mount. You can also use the Vault CLI or API to query the usage metrics.
## Enable usage metrics
Usage metrics are enabled by default for Vault Enterprise and HCP Vault
Dedicated. If you are running Vault Community Edition, you must enable usage
metrics manually because they are disabled by default.
<Tabs>
<Tab heading="Web UI" group="ui">
1. Open a web browser to access the Vault UI, and sign in.
1. Select **Client Count** from the left navigation menu.
1. Select **Configuration**.
1. Select **Edit configuration**.
![Edit configuration](/img/ui-usage-metrics-config.png)
1. Select the toggle for **Usage data collection** so that the text reads **Data
collection is on**.
<Tip title="Retention period">
The retention period sets the number of months for which Vault will maintain
activity logs to track active clients. (Default: 48 months)
</Tip>
1. Click **Save** to apply the changes.
1. Click **Continue** in the confirmation dialog to enable usage metrics tracking.
</Tab>
<Tab heading="CLI command" group="cli">
```shell-session
$ vault write sys/internal/counters/config enabled=enable
```
Valid values for `enabled` parameter are: `default`, `enable`, and `disable`.
<Tip title="Retention period">
By default, Vault maintains activity logs to track
active clients for 24 months. If you wish to change the retention period, use
the `retention_months` parameter.
</Tip>
**Example:**
```shell-session
$ vault write sys/internal/counters/config \
enabled=enable \
retention_months=12
```
</Tab>
<Tab heading="API call using cURL" group="api">
```shell-session
$ curl --header "X-Vault-Token: <TOKEN>" \
--request POST \
--data '{"enabled": "enable"}' \
$VAULT_ADDR/v1/sys/internal/counters/config
```
Valid values for `enabled` parameter are: `default`, `enable`, and `disable`.
<Tip title="Retention period">
By default, Vault maintains activity logs to track
active clients for 24 months. If you wish to change the retention period, use
the `retention_months` parameter.
</Tip>
**Example:**
```shell-session
$ curl --header "X-Vault-Token: <TOKEN>" \
--request POST \
--data '{"enabled": "enable", "retention_months": 12}' \
$VAULT_ADDR/v1/sys/internal/counters/config
```
</Tab>
</Tabs>
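Whichever method you use, you can confirm the resulting settings by reading
the configuration back:
```shell-session
$ vault read sys/internal/counters/config
```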
## Usage metrics dashboard
1. Sign into Vault UI. The **Client count** section displays the total number of
clients for the current billing period.
1. Select **Details**.
![Vault UI default dashboard example](/img/ui-client-count.png)
1. Examine the **Vault Usage Metrics** dashboard to learn your Vault usage.
![Example Vault Usage Metrics dashboard view](/img/ui-usage-metrics-1.png)
#### Usage metrics data categories
- **Running client total** is the primary metric on which pricing is based.
It is the sum of entity clients (or distinct entities) and non-entity clients.
- **Entity clients** (distinct entities) are representations of a particular
user, client, or application that belongs to a defined Vault entity. If you
are unfamiliar with Vault entities, refer to the [Identity: Entities and
Groups](/vault/tutorials/auth-methods/identity) tutorial.
- **Non-entity clients** are clients without an entity attached.
This is because some customers or workflows might avoid using entity-creating
authentication methods and instead depend on token creation through the Vault
API. Refer to [understanding non-entity
tokens](/vault/docs/concepts/client-count#understanding-non-entity-tokens)
to learn more.
<Note>
The non-entity client count excludes `root` tokens.
</Note>
- **Secret sync clients** are secrets that Vault syncs to at least one external
  destination. Refer to the
  [documentation](/vault/docs/concepts/client-count#secret-sync-clients) for
  more details.
- **ACME clients** are ACME connections, where all connections that authenticate
  under the same certificate identifier (domain) count as a single certificate
  entity for client count calculations. Refer to the
  [documentation](/vault/docs/concepts/client-count#entity-assignment-with-acme)
  for more details.
![ACME clients example](/img/ui-usage-metrics-acme.png)
## Select a data range
Under the **Client counting period**, select **Edit** to query the data for
a different billing period.
![Query](/img/ui-usage-metrics-period.png)
Keep in mind that Vault begins collecting data when the feature is enabled. For
example, if you enabled the usage metrics in March of 2023, you cannot query
data in January of 2023.
Vault will return metrics from March of 2023 through the most recent full month.
## Filter by namespaces
If you have [namespaces](/vault/docs/enterprise/namespaces), the dashboard
displays the top ten namespaces by total clients.
![Namespace attribution example](/img/ui-usage-metrics-namespace.png)
Use the **Filters** to view the metrics data of a specific namespace.
![Filter by namespace](/img/ui-usage-metrics-filter.png)
## Mount attribution
Clients are also graphed per auth mount. The **Mount attribution** section
displays the top auth mounts by client usage, which lets you identify the auth
mount with the highest number of total clients in the given billing period. You
can filter for auth mounts within a namespace, or view auth mounts across
namespaces. Mount attribution is available even if you are not using
namespaces.
![Usage metrics by mount attribution](/img/ui-usage-metrics-mounts.png)
## Query usage metrics via CLI
Retrieve the usage metrics for the current billing period.
```shell-session
$ vault operator usage
```
**Example output:**
<CodeBlockConfig hideClipboard>
```plaintext
Period start: 2024-03-01T00:00:00Z
Period end: 2024-10-31T23:59:59Z
Namespace path Entity Clients Non-Entity clients Secret syncs ACME clients Active clients
-------------- -------------- ------------------ ------------ ------------ --------------
[root] 86 114 0 0 200
education/ 31 31 0 0 62
education/certification/ 18 25 0 0 43
education/training/ 192 197 0 0 389
finance/ 18 26 0 0 44
marketing/ 28 17 0 0 45
test-ns-1-with-namespace-length-over-18-characters/ 84 75 0 0 159
test-ns-1/ 59 66 0 0 125
test-ns-2-with-namespace-length-over-18-characters/ 58 46 0 0 104
test-ns-2/ 56 47 0 0 103
Total 630 644 0 0 1274
```
</CodeBlockConfig>
The output shows client usage metrics for each namespace.
### Filter by namespace
You can narrow the scope to the `education` namespace and its child namespaces.
```shell-session
$ vault operator usage -namespace education
Period start: 2024-03-01T00:00:00Z
Period end: 2024-10-31T23:59:59Z
Namespace path Entity Clients Non-Entity clients Secret syncs ACME clients Active clients
-------------- -------------- ------------------ ------------ ------------ --------------
education/ 31 31 0 0 62
education/certification/ 18 25 0 0 43
education/training/ 192 197 0 0 389
Total 241 253 0 0 494
```
### Query with a time frame
To query the client usage metrics for June 2024, set the start time to June 1,
2024 (`2024-06-01T00:00:00Z`) and the end time to June 30, 2024
(`2024-06-30T23:59:59Z`). The start and end times must be RFC3339 timestamps or
Unix epoch times.
```shell-session
$ vault operator usage \
-start-time=2024-06-01T00:00:00Z \
-end-time=2024-06-30T23:59:59Z
```
**Example output:**
<CodeBlockConfig hideClipboard>
```plaintext
Period start: 2024-06-01T00:00:00Z
Period end: 2024-06-30T23:59:59Z
Namespace path Entity Clients Non-Entity clients Secret syncs ACME clients Active clients
-------------- -------------- ------------------ ------------ ------------ --------------
[root] 10 16 0 0 26
education/ 7 1 0 0 8
education/certification/ 2 4 0 0 6
education/training/ 37 30 0 0 67
finance/ 3 6 0 0 9
marketing/ 2 2 0 0 4
test-ns-1-with-namespace-length-over-18-characters/ 6 9 0 0 15
test-ns-1/ 9 12 0 0 21
test-ns-2-with-namespace-length-over-18-characters/ 5 5 0 0 10
test-ns-2/ 9 7 0 0 16
Total 90 92 0 0 182
```
</CodeBlockConfig>
## Export the metrics data
You can export the metrics data by clicking on the **Export attribution data**
button.
![Metrics UI](/img/ui-usage-metrics-export.png)
This downloads the usage metrics data to your local drive in comma-separated
values (`.csv`) or JSON format.
## API
- Refer to the
[`sys/internal/counters`](/vault/api-docs/system/internal-counters#client-count)
page to retrieve the client count using the API.
- [Activity export API](/vault/api-docs/system/internal-counters#activity-export) to
export the activity log.
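As an illustration, the following request exports activity data for June 2024.
Treat it as a sketch; the supported query parameters (such as `format`) and the
response fields are documented in the API reference linked above and may vary
by Vault version.
```shell-session
$ curl --header "X-Vault-Token: <TOKEN>" \
    "$VAULT_ADDR/v1/sys/internal/counters/activity/export?start_time=2024-06-01T00:00:00Z&end_time=2024-06-30T23:59:59Z&format=json"
```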
---
layout: docs
page_title: Deprecation notices
description: >-
Deprecation announcements, updates, and migration plans for Vault.
---
# Deprecation notices
Vault implements a multi-phased approach to deprecations to provide users with
advance warning, minimize business disruptions, and allow for the safe handling
of data affected by a feature removal.
<Highlight title="Have questions?">
If you have questions or concerns about a deprecated feature, please create a
topic on the [Vault community forum](https://discuss.hashicorp.com/c/vault/30)
or raise a ticket with your support team.
</Highlight>
<a id="announcements" />
## Recent announcements
<Tabs>
<Tab heading="DEPRECATED">
<EnterpriseAlert product="vault">
The Vault Support Team can provide <b>limited</b> help with a deprecated feature.
Limited support includes troubleshooting solutions and workarounds but does not
include software patches or bug fixes. Refer to
the <a href="https://support.hashicorp.com/hc/en-us/articles/360021185113-Support-Period-and-End-of-Life-EOL-Policy">HashiCorp Support Policy</a> for
more information on the product support timeline.
</EnterpriseAlert>
@include 'deprecation/ruby-client-library.mdx'
@include 'deprecation/active-directory-secrets-engine.mdx'
</Tab>
<Tab heading="PENDING REMOVAL">
@include 'deprecation/vault-agent-api-proxy.mdx'
@include 'deprecation/aws-field-change.mdx'
@include 'deprecation/centrify-auth-method.mdx'
</Tab>
<Tab heading="REMOVED">
@include 'deprecation/duplicative-docker-images.mdx'
@include 'deprecation/azure-password-policy.mdx'
</Tab>
</Tabs>
<a id="phases" />
## Deprecation phases
The lifecycle of a Vault feature or plugin includes 4 phases:
- **supported** - generally available (GA), functioning as expected, and under
active maintenance
- **deprecated** - marked for removal in a future release
- **pending removal** - support ended or replaced by another feature
- **removed** - end of lifecycle
### Deprecated ((#deprecated))
"Deprecated" is the first phase of the deprecation process and indicates that
the feature is marked for removal in a future release. When you upgrade Vault,
newly deprecated features will begin alerting that the feature is deprecated:
- Built-in authentication and secrets plugins log `Warn`-level messages on
unseal.
- All deprecated features log `Warn`-level messages.
- All `POST`, `GET`, and `LIST` endpoints associated with the feature return
warnings in response data.
Built-in Vault authentication and secrets plugins also expose their deprecation
status through the Vault CLI and Vault API.
CLI command | API endpoint
---------------------------------------------------------------------------- | --------------
N/A | [`/sys/plugins/catalog`](/vault/api-docs/system/plugins-catalog)
[`vault plugin info auth <PLUGIN_NAME>`](/vault/docs/commands/plugin/info) | [`/sys/plugins/catalog/auth/:name`](/vault/api-docs/system/plugins-catalog#list-plugins-1)
[`vault plugin info secret <PLUGIN_NAME>`](/vault/docs/commands/plugin/info) | [`/sys/plugins/catalog/secret/:name`](/vault/api-docs/system/plugins-catalog#list-plugins-1)
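For example, to check a built-in auth plugin from the CLI (the Centrify auth
method appears under pending removal above), run the following command. The
exact output fields depend on your Vault version, but deprecated builtins
report their deprecation status:
```shell-session
$ vault plugin info auth centrify
```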
### Pending removal
"Pending removal" is the second phase of the deprecation process and indicates
that the feature behavior is fundamentally altered in the following ways:
- Built-in authentication and secrets plugins log `Error`-level messages and
cause an immediate shutdown of the Vault core.
- All features pending removal fail and log `Error`-level messages.
- All CLI commands and API endpoints associated with the feature fail and return
errors.
<Warning title="Use with caution">
In critical situations, you may be able to override the pending removal behavior with the
[`VAULT_ALLOW_PENDING_REMOVAL_MOUNTS`](/vault/docs/commands/server#vault_allow_pending_removal_mounts)
environment variable, which forces Vault to treat some features that are pending
removal as if they were still only deprecated.
</Warning>
### Removed
"Removed" is the last phase of the deprecation process and indicates that the
feature is no longer supported and no longer exists within Vault.
## Migrate from deprecated features
Features in the "pending removal" and "removed" phases will fail, log errors,
and, for built-in authentication or secret plugins, cause an immediate shutdown
of the Vault core.
To migrate away from deprecated features and successfully upgrade to newer
Vault versions, you must eliminate the deprecated features:
1. Downgrade Vault to a previous version if necessary.
1. Replace any "Removed" or "Pending removal" feature with the recommended
alternative.
1. Upgrade to the latest desired version.
---
layout: docs
page_title: debug - Command
description: |-
The "debug" command monitors a Vault server, probing information about
it for a certain duration.
---
# debug
The `debug` command starts a process that monitors a Vault server, probing
information about it for a certain duration.
Gathering information about the state of the Vault cluster often requires the
operator to access all necessary information via various API calls and terminal
commands. The `debug` command aims to provide a simple workflow that produces a
consistent output to help operators retrieve and share information about the
server in question.
The `debug` command honors the same variables that the base command
accepts, such as the token stored via a previous login or the environment
variables `VAULT_TOKEN` and `VAULT_ADDR`. The token used determines the
permissions and, in turn, the information that `debug` may be able to collect.
The address specified determines the target server that will be probed against.
If the command is interrupted, the information collected up until that
point gets persisted to an output directory.
## Permissions
Regardless of whether a particular target is provided, the ability for `debug`
to fetch data for the target depends on the token provided. Some targets, such
as `server-status`, query unauthenticated endpoints, which means they can be
queried at any time. Other targets require the token to have ACL permissions to
query the matching endpoint in order to get a proper response. Any errors
encountered during capture due to permissions or otherwise will be logged in the
index file.
The following policy can be used for generating debug packages with all targets:
```hcl
path "auth/token/lookup-self" {
capabilities = ["read"]
}
path "sys/pprof/*" {
capabilities = ["read"]
}
path "sys/config/state/sanitized" {
capabilities = ["read"]
}
path "sys/monitor" {
capabilities = ["read"]
}
path "sys/host-info" {
capabilities = ["read"]
}
path "sys/in-flight-req" {
capabilities = ["read"]
}
```
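For example, assuming you saved the policy above to a local file named
`debug-policy.hcl` (an illustrative name), you could register the policy and
create a token for running `vault debug` as follows:
```shell-session
$ vault policy write debug-policy ./debug-policy.hcl
$ vault token create -policy=debug-policy
```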
## Capture targets
The `-target` flag can be specified multiple times to capture specific
information when debug is running. By default, it captures all information.
| Target | Description |
| :------------------- | :-------------------------------------------------------------------------------- |
| `config` | Sanitized version of the configuration state. |
| `host` | Information about the instance running the server, such as CPU, memory, and disk. |
| `metrics` | Telemetry information. |
| `pprof` | Runtime profiling data, including heap, CPU, goroutine, and trace profiling. |
| `replication-status` | Replication status. |
| `server-status` | Health and seal status. |
Note that the `config`, `host`, `metrics`, and `pprof` targets are only queried
on active and performance standby nodes because the information
pertains to the local node and the request should not be forwarded.
Additionally, host information is not available on the OpenBSD platform due to
library limitations in fetching the data without enabling `cgo`.
[Enterprise] Telemetry can be gathered from a DR Secondary active node via the
`metrics` target if [unauthenticated_metrics_access](/vault/docs/configuration/listener/tcp#unauthenticated_metrics_access) is enabled.
## Output layout
The output of the bundled information, once decompressed, is contained within a
single directory. Each target, with the exception of profiling data, is captured
in a single file. For each of those targets, the collection is represented as a
JSON array, with each entry captured at each interval as a JSON object.
```shell-session
$ tree vault-debug-2019-10-15T21-44-49Z/
vault-debug-2019-10-15T21-44-49Z/
├── 2019-10-15T21-44-49Z
│ ├── goroutine.prof
│ ├── heap.prof
│ ├── profile.prof
│ └── trace.out
├── 2019-10-15T21-45-19Z
│ ├── goroutine.prof
│ ├── heap.prof
│ ├── profile.prof
│ └── trace.out
├── 2019-10-15T21-45-49Z
│ ├── goroutine.prof
│ ├── heap.prof
│ ├── profile.prof
│ └── trace.out
├── 2019-10-15T21-46-19Z
│ ├── goroutine.prof
│ ├── heap.prof
│ ├── profile.prof
│ └── trace.out
├── 2019-10-15T21-46-49Z
│ ├── goroutine.prof
│ └── heap.prof
├── config.json
├── host_info.json
├── index.json
├── metrics.json
├── replication_status.json
└── server_status.json
```
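When compression is enabled (the default), `debug` writes the package as a
gzipped tarball that you extract before browsing. The archive name below is
illustrative; the real name is derived from the capture timestamp:
```shell-session
$ tar -xzf vault-debug-2019-10-15T21-44-49Z.tar.gz
$ less vault-debug-2019-10-15T21-44-49Z/index.json
```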
## Examples
Start debug using reasonable defaults:
```shell-session
$ vault debug
```
Start debug with different duration, intervals, and metrics interval values, and
skip compression:
```shell-session
$ vault debug -duration=1m -interval=10s -metrics-interval=5s -compress=false
```
Start debug with specific targets:
```shell-session
$ vault debug -target=host -target=metrics
```
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Command options
- `-compress` `(bool: true)` - Toggles whether to compress the output package.
The default is true.
- `-duration` `(int or time string: "2m")` - Duration to run the command. The
default is 2m0s.
- `-interval` `(int or time string: "30s")` - The polling interval at which to
collect profiling data and server state. The default is 30s.
- `-log-format` `(string: "standard")` - Log format to capture when the "log"
target is specified. Supported values are "standard" and "json". The default is
"standard".
- `-metrics-interval` `(int or time string: "10s")` - The polling interval at
which to collect metrics data. The default is 10s.
- `-output` `(string)` - Specifies the output path for the debug package. Defaults
to a time-based generated file name.
- `-target` `(string: all targets)` - Target to capture, defaulting to all if
none specified. This can be specified multiple times to capture multiple
targets. Available targets are: config, host, metrics, pprof,
replication-status, server-status.
---
layout: docs
page_title: events - Command
description: |-
The "events" command interacts with the Vault events notifications subsystem.
---
# events
<EnterpriseAlert product="vault" />
Use the `events` command to get a real-time display of
[event notifications](/vault/docs/concepts/events) generated by Vault and to subscribe to Vault
event notifications. Note that `events subscribe` runs indefinitely and will not exit on
its own unless it encounters an unexpected error. Similar to `tail -f` in the
Unix world, you must terminate the process from the command line to end the
`events` command.
Specify the desired event types (also called "topics") as a glob pattern. To
match against multiple event types, use `*` as a wildcard. The command returns
serialized JSON objects in the default protobuf JSON serialization format with
one line per event received.
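Because each event notification arrives as a single JSON object per line, you
can pipe the stream through a JSON processor for readability. For example,
assuming `jq` is installed locally:
```shell-session
$ vault events subscribe 'kv-v2/data-write' | jq .
```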
## Examples
Subscribe to all event notifications:
```shell-session
$ vault events subscribe '*'
```
Subscribe to all KV event notifications:
```shell-session
$ vault events subscribe 'kv*'
```
Subscribe to all `kv-v2/data-write` event notifications:
```shell-session
$ vault events subscribe kv-v2/data-write
```
Subscribe to all KV event notifications in the current and `ns1` namespaces for the secret `secret/data/foo` that do not involve writing data:
```shell-session
$ vault events subscribe -namespaces=ns1 -filter='data_path == secret/data/foo and operation != "data-write"' 'kv*'
```
## Usage
`events subscribe` supports the following flags in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Options
- `-timeout` `(duration: "")` - Close the WebSocket automatically after the
specified duration.
- `-namespaces` `(string)` - Additional **child** namespaces for the
subscription. Repeat the flag to add additional namespace patterns to the
subscription request. Vault automatically prepends the issuing namespace for
the request to the provided namespace. For example, if you include
`-namespaces=ns2` on a request made in the `ns1` namespace, Vault will attempt
to subscribe you to event notifications under the `ns1/ns2` and `ns1` namespaces. You can
use the `*` character to include wildcards in the namespace pattern. By
default, Vault will only subscribe to event notifications in the requesting namespace.
<Note>
To subscribe to event notifications across multiple namespaces, you must provide a root
token or a token associated with appropriate policies across all the targeted
namespaces. Refer to
the <a href="/vault/tutorials/enterprise/namespaces">Secure multi-tenancy with
namespaces</a> tutorial for configuring your Vault instance appropriately.
</Note>
- `-filter` `(string: "")` - Filter expression used to select event notifications to be sent
through the WebSocket.
Refer to the [Filter expressions](/vault/docs/concepts/filtering) guide for a complete
list of filtering options and an explanation on how Vault evaluates filter expressions.
The following values are available in the filter expression:
- `event_type`: the event type, e.g., `kv-v2/data-write`.
- `operation`: the operation name that caused the event notification, e.g., `write`.
- `source_plugin_mount`: the mount of the plugin that produced the event notification,
e.g., `secret/`
- `data_path`: the API path that can be used to access the data of the secret related to the event notification, e.g., `secret/data/foo`
- `namespace`: the path of the namespace that created the event notification, e.g., `ns1/`
The filter string is empty by default. Unfiltered subscription requests match to
all event notifications that the requestor has access to for the target event type. When the
filter string is not empty, Vault applies the filter conditions after the policy
checks to narrow the event notifications provided in the response.
Filters can be straightforward path matches, like
`data_path == secret/data/foo`, which tells Vault to send only event
notifications that refer to the `secret/data/foo` secret through the WebSocket,
or more complex statements that exclude specific operations. For example:
```
data_path == secret/data/foo and operation != write
```
---
layout: docs
page_title: Use a custom token helper
description: >-
The Vault CLI supports external token helpers to help simplify retrieving,
setting and erasing authentication tokens.
---
# Use a custom token helper
A **token helper** is a program or script that saves, retrieves, or erases a
saved authentication token.
By default, the Vault CLI includes a token helper that caches tokens from any
enabled authentication backend in a `~/.vault-token` file. You can customize
the caching behavior with a custom token helper.
## Step 1: Script your helper
Your token helper must accept a single command-line argument:
Argument | Action
-------- | ------
`get` | Fetch and print a cached authentication token to `stdout`
`store` | Read an authentication token from `stdin` and save it in a secure location
`erase` | Delete a cached authentication token
You can manage the authentication tokens in whatever way you prefer, but your
helper must adhere to the following output requirements:
- Limit `stdout` writes to token strings.
- Write all error messages to `stderr`.
- Write all non-error and non-token output to `syslog` or a log file.
- Return the status code `0` on success.
- Return non-zero status codes for errors.
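The sketch below shows a minimal helper that satisfies this contract by caching
a single token in a hypothetical `~/.vault-helper-token` file; a production
helper would normally store tokens more carefully, as in the full example later
on this page:
```shell-session
#!/usr/bin/env bash
TOKEN_CACHE="${HOME}/.vault-helper-token"
case "$1" in
  get)
    # Print the cached token to stdout if one exists; print nothing otherwise
    [ -f "${TOKEN_CACHE}" ] && cat "${TOKEN_CACHE}"
    ;;
  store)
    # Read the token from stdin and save it with owner-only permissions
    umask 077
    cat > "${TOKEN_CACHE}"
    ;;
  erase)
    rm -f "${TOKEN_CACHE}"
    ;;
  *)
    # Errors go to stderr with a non-zero exit code
    echo "Usage: $0 {get|store|erase}" >&2
    exit 1
    ;;
esac
exit 0
```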
## Step 2: Configure Vault
To configure a custom token helper, edit (or create) a CLI configuration file
called `.vault` under your home directory and set the `token_helper` parameter
with the **fully qualified path** to your new helper:
<Tabs>
<Tab heading="Linux shell" group="nix">
```
echo 'token_helper = "/path/to/token/helper.sh"' >> ${HOME}/.vault
```
</Tab>
<Tab heading="Powershell" group="ps">
Make sure to write the file with a plain-text encoding such as `ascii` or Vault
may complain about invalid characters when reading the configuration file:
```powershell
'token_helper = "\\path\\to\\token\\helper.ps1"' | `
Out-File -FilePath ${env:USERPROFILE}/.vault -Encoding ascii -Append
```
</Tab>
</Tabs>
<Tip>
Make sure the script is executable by the Vault binary.
</Tip>
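On Linux or macOS, for example, you can mark the helper as executable with
`chmod` (the path shown is the illustrative one from the configuration example
above):
```shell-session
$ chmod +x /path/to/token/helper.sh
```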
## Example token helper
The following token helper manages tokens in a JSON file in the home directory
called `.vault_tokens`.
The helper relies on the `$VAULT_ADDR` environment variable to store and
retrieve tokens from different Vault servers.
<CodeTabs>
```shell-session
#!/bin/bash
function write_error(){ >&2 echo $@; }
# Customize the hash key for tokens. Currently, we remove the strings
# 'http://', 'https://', '.', and ':' from the passed address (the Vault address
# environment variable by default) because jq has trouble with special
# characters in JSON field names
function createHashKey {
local key=""
if [[ -z "${1}" ]] ; then key="${VAULT_ADDR}"
else key="${1}"
fi
# We index the token according to the Vault server address by default so
# return an error if the address is empty
if [[ -z "${key}" ]] ; then
write_error "Error: VAULT_ADDR environment variable unset."
exit 100
fi
key=${key//"https://"/""}
key=${key//"http://"/""}
key=${key//"."/"_"}
key=${key//":"/"_"}
echo "addr-${key}"
}
TOKEN_FILE="${HOME}/.vault_token"
KEY=$(createHashKey)
TOKEN="null"
# If the token file does not exist, create it
if [ ! -f ${TOKEN_FILE} ] ; then
echo "{}" > ${TOKEN_FILE}
fi
case "${1}" in
"get")
# Read the current JSON data and pull the token associated with ${KEY}
TOKEN=$(cat ${TOKEN_FILE} | jq --arg key "${KEY}" -r '.[$key]')
# If the token != to the string "null", print the token to stdout
# jq returns "null" if the key was not found in the JSON data
if [ ! "${TOKEN}" == "null" ] ; then
echo "${TOKEN}"
fi
exit 0
;;
"store")
# Get the token from stdin
read TOKEN
# Read the current JSON data and add a new entry
JSON=$(
jq \
--arg key "${KEY}" \
--arg token "${TOKEN}" \
'.[$key] = $token' ${TOKEN_FILE}
)
;;
"erase")
# Read the current JSON data and remove the entry if it exists
JSON=$(
jq \
--arg key "${KEY}" \
--arg token "${TOKEN}" \
'del(.[$key])' ${TOKEN_FILE}
)
;;
*)
# change to stderr for real code
write_error "Error: Provide a valid command: get, store, or erase."
exit 101
esac
# Update the JSON file and return success
echo $JSON | jq "." > ${TOKEN_FILE}
exit 0
```
```powershell
<#
.Synopsis
Vault token helper script
.INPUTS
Positional/command line argument: get, store, erase
.OUTPUTS
get: prints a cached authentication token to stdout (if it exists)
store: no output, updates the token cache
erase: no output, updates the token cache
#>
<#
.Synopsis
CreateHashKey
.DESCRIPTION
Customize the hash key for tokens. Currently, we remove the strings
'https://', '.', and ':' from the passed address (Vault address environment by
default) variable to simplify the hash key string
#>
function CreateHashKey {
Param($address = "${env:VAULT_ADDR}")
# We index the token according to the Vault server address by default so
# return an error if the address is empty
if ( -not $address) {
Write-Error "[Missing value] env:VAULT_ADDR currently unset."
exit 101
}
$key = ${address}.Replace("/","").Replace(".","_").Replace(":","_")
return ${key}.Replace("http_", "addr-")
}
<#
.Synopsis
GetTokenCache
.DESCRIPTION
Read in or create a new token cache and initialize the hash
#>
function GetTokenCache {
Param($filename)
# Read the JSON file (token cache) and initialize the hash data or create an
# empty hash if the file does not exist yet
if ( Get-Item -Path "${filename}" -ErrorAction SilentlyContinue ) {
$fileData = (Get-Content "${filename}" -Raw | ConvertFrom-Json -AsHashtable)
} else {
$fileData = (Write-Output "{}" | ConvertFrom-Json -AsHashtable)
}
return $fileData
}
<#
.Synopsis
UpdateTokenCache
.DESCRIPTION
Write the token hash out to the cache
#>
function UpdateTokenCache {
Param($filename, $fileData)
$jsonData = ($fileData | ConvertTo-Json)
# Convert the hash to JSON and update the token cache
$jsonData | Out-File -Encoding ascii "${filename}"
return
}
$tokenFile = "${env:USERPROFILE}/.vault_token"
$hashData = (GetTokenCache "${tokenFile}")
$key = (CreateHashKey)
$token = $null
switch -Exact -CaseSensitive (${args}[0]) {
"get" {
# Print the token to stdout and return success
Write-Output ${hashData}.${key}
exit 0
}
"store" {
$token = Read-Host
# Add the new token to the hash
$hashData["${key}"] = "${token}"
}
"erase" {
# Erase the token entry if it exists
if ($hashData.ContainsKey("${key}") ) {
$hashData.Remove("${key}")
}
}
Default {
# The argument was invalid so return an error
Write-Error "[Invalid argument] Command must be: get, store, or erase."
exit 102
}
}
# Update the token cache and return success
UpdateTokenCache ${tokenFile} ${hashData}
exit 0
```
```ruby
#!/usr/bin/env ruby
require 'json'
# We index the token according to the Vault server address
# so the VAULT_ADDR variable is required
unless ENV['VAULT_ADDR']
  STDERR.puts "No VAULT_ADDR environment variable set. Set it and run me again!"
  exit 100
end
# If the token file does not exist, create and initialize the hashmap
begin
  tokens = JSON.parse(File.read("#{ENV['HOME']}/.vault_tokens"))
rescue Errno::ENOENT
  # file doesn't exist so create a blank hash for it
  tokens = {}
end
# Get the first command line argument
case ARGV.first
when 'get'
  # Write the token to stdout if it exists
  print tokens[ENV['VAULT_ADDR']] if tokens[ENV['VAULT_ADDR']]
  exit 0
when 'store'
  # Read the token from stdin
  tokens[ENV['VAULT_ADDR']] = STDIN.read
when 'erase'
  # Delete the token entry if it exists
  tokens.delete(ENV['VAULT_ADDR'])
end
# Update the token file
File.open("#{ENV['HOME']}/.vault_tokens", 'w') { |file| file.write(tokens.to_json) }
```
</CodeTabs>
---
layout: docs
page_title: Vault CLI usage
description: >-
Technical reference for the Vault CLI
---
# Vault CLI
The Vault CLI is a static binary that wraps the Vault API. While every CLI
command maps directly to one or more APIs internally, not every endpoint is
exposed publicly and not every API endpoint has a corresponding CLI command.
## Usage
<CodeBlockConfig hideClipboard>
```shell-session
$ vault <command> [subcommand(s)] [flag(s)] [command-argument(s)]
$ vault <command> [subcommand(s)] [-help | -h]
```
</CodeBlockConfig>
<Tip>
Use the `-help` flag with any command to see a description of the command and a
list of supported options and flags.
</Tip>
The Vault CLI returns different exit codes depending on whether and where an
error occurred:
- **`0`** - Success
- **`1`** - Local/terminal error (invalid flags, failed validation, wrong
numbers of arguments, etc.)
- **`2`** - Remote/server error (API failures, bad TLS, incorrect API
parameters, etc.)
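For example, you can inspect the exit code of the most recent CLI command from
your shell to distinguish local errors from server-side failures:
```shell-session
$ vault token lookup
$ echo $?
```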
### Authenticating to Vault
Unauthenticated users can use CLI commands with the `--help` flag, but must use
[`vault login`](/vault/docs/commands/login) or set the
[`VAULT_TOKEN`](/vault/docs/commands#standard-vault_token) environment variable
to use the CLI.
The CLI uses a token helper to cache access tokens after authenticating with
`vault login`. The default file for cached tokens is `~/.vault-token`, and
deleting the file forcibly logs the user out of Vault.
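For example, after logging in with the userpass auth method (the username below
is illustrative), the cached token is readable from the default helper file:
```shell-session
$ vault login -method=userpass username=operator
$ cat ~/.vault-token
```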
If you prefer to use a custom token helper,
[you can create your own](/vault/docs/commands/token-helper) and configure the
CLI to use it.
### Passing command arguments
Command arguments include any relevant configuration settings and
command-specific options. Command options pass input data as `key=value` pairs,
which you can provide inline, as a `stdin` stream, or from a local file.
<Tabs>
<Tab heading="Inline">
To pass input inline with the command, use the `<option-name>=<value>` syntax:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault audit enable file file_path="/var/log/vault.log"
```
</CodeBlockConfig>
</Tab>
<Tab heading="stdin">
To pass input from `stdin`, use `-` as a stand-in for the entire `key=value`
pair or a specific option value.
To pipe the option and value, use a JSON object with the option name and value:
<CodeBlockConfig hideClipboard>
```shell-session
$ echo -n '{"file_path":"/var/log/vault.log"}' | vault audit enable file -
```
</CodeBlockConfig>
To pipe the option value by itself, provide the option name inline:
<CodeBlockConfig hideClipboard>
```shell-session
$ echo -n "/var/log/vault.log" | vault audit enable file file_path=-
```
</CodeBlockConfig>
</Tab>
<Tab heading="Local file">
To pass data as a file, use `@` as a stand-in for the entire
`<option-name>=<value>` pair or a specific option value.
To pass the option and value, use a JSON file:
<CodeBlockConfig hideClipboard>
```shell-session
data.json:
{
"file_path":"/var/log/vault.log"
}
$ vault audit enable file @data.json
```
</CodeBlockConfig>
To pass the option value by itself, use the option name inline and pass the
value as text:
<CodeBlockConfig hideClipboard>
```shell-session
data.txt:
/var/log/vault.log
$ vault audit enable file file_path=@data.txt
```
</CodeBlockConfig>
If you use `@` as part of an argument **name** in `<option-name>=<value>`
format, Vault treats the `@` as part of the key name, rather than a file
reference. As a result, Vault does not support filenames that include the `=`
character.
<Note title="Escape literal '@' values">
To keep Vault from parsing values that begin with a literal `@`, escape the
value with a backslash (`\`):
<CodeBlockConfig hideClipboard>
```shell-session
$ vault login -method userpass \
username="mitchellh" \
password="\@badpasswordisbad"
```
</CodeBlockConfig>
</Note>
</Tab>
</Tabs>
### Calling API endpoints
To invoke an API endpoint with the Vault CLI, you can use one of the following
CLI commands with the associated endpoint path:
CLI command | HTTP verbs
-------------- | -------------
`vault read` | `GET`
`vault write` | `PUT`, `POST`
`vault delete` | `DELETE`
`vault list` | `LIST`
For example, to call the UpsertLDAPGroup endpoint,
`/auth/ldap/groups/{group-name}` to create a new LDAP group called `admin`:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write /auth/ldap/groups/admin policies="admin,default"
```
</CodeBlockConfig>
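Similarly, assuming the LDAP auth method is mounted at the default path, you can
read the group back or list all groups with the corresponding commands:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault read auth/ldap/groups/admin
$ vault list auth/ldap/groups
```
</CodeBlockConfig>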
<Tip title="Core plugins have dedicated commands">
You can use `read`, `write`, `delete`, or `list` with the relevant paths for
any valid API endpoint, but some plugins are central to the functionality
of Vault and have dedicated CLI commands:
- [`vault kv`](/vault/docs/commands/kv)
- [`vault transit`](/vault/docs/commands/transit)
- [`vault transform`](/vault/docs/commands/transform)
- [`vault token`](/vault/docs/commands/token)
</Tip>
## Enable autocomplete
The CLI does not autocomplete commands by default. To enable autocomplete for
flags, subcommands, and arguments (where supported), use the
`-autocomplete-install` flag and **restart your shell session**:
```shell-session
$ vault -autocomplete-install
```
To use autocomplete, press `<tab>` while typing a command to show a list of
available completions. Or, use the `-<tab>` flag to show available flag
completions for the current command.
<Tip>
If you have configured the `VAULT_*` environment variables needed to connect to
your Vault instance, the autocomplete feature automatically queries the Vault
server and returns helpful argument suggestions.
</Tip>
## Configure environment variables
You can use environment variables to configure the CLI globally. Some
configuration settings have a corresponding CLI flag to configure a specific
command.
For example, `export VAULT_ADDR='http://localhost:8200'` sets the
address of your Vault server globally, while
`-address='http://someotherhost:8200'` overrides the value for a specific
command.
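For example, a global default with a per-command override looks like this:
```shell-session
$ export VAULT_ADDR='http://localhost:8200'
$ vault status -address='http://someotherhost:8200'
```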
---
@include 'global-settings/all-env-variables.mdx'
## Troubleshoot CLI errors
If you run into errors when executing a particular CLI command, the following
flags and commands can help you track down the problem.
### Confirm you are using the right endpoint or command
If a command behaves differently than expected or you need details about a
specific endpoint, you can use the
[`vault path-help`](/vault/docs/commands/path-help) command to see the help text
for a given endpoint path.
For example, to see the help for `sys/mounts`:
```shell-session
$ vault path-help sys/mounts
Request: mounts
Matching Route: ^mounts$
List the currently mounted backends.
## DESCRIPTION
This path responds to the following HTTP methods.
GET /
Lists all the mounted secret backends.
GET /<mount point>
Get information about the mount at the specified path.
POST /<mount point>
Mount a new secret backend to the mount point in the URL.
POST /<mount point>/tune
Tune configuration parameters for the given mount point.
DELETE /<mount point>
Unmount the specified mount point.
```
### Construct the associated cURL command
To determine if the problem exists with the structure of your CLI command or the
associated endpoint, you can use the `-output-curl-string` flag:
For example, to test that a `vault write` command to create a new user is not
failing due to an issue with the `/auth/userpass/users/{username}` endpoint, use
the generated cURL command to call the endpoint directly:
```shell-session
$ vault write -output-curl-string auth/userpass/users/bob password="long-password"
curl -X PUT -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" -d '{"password":"long-password"}' http://127.0.0.1:8200/v1/auth/userpass/users/bob
```
### Construct the required Vault policy
To determine if the problem relates to insufficient permissions, you can use the
`-output-policy` flag to construct a minimal Vault policy that grants the
permissions needed to execute the relevant command.
For example, to confirm you have permission to write a secret to the `kv`
plugin, mounted at `kv/secret`, use `-output-policy` then confirm you have the
capabilities listed:
```shell-session
$ vault kv put -output-policy kv/secret value=itsasecret
path "kv/data/secret" {
capabilities = ["create", "update"]
}
```
---
layout: docs
page_title: server - Command
description: |-
The "server" command starts a Vault server that responds to API requests. By
default, Vault will start in a "sealed" state. The Vault cluster must be
initialized before use.
---
# server
The `server` command starts a Vault server that responds to API requests. By
default, Vault will start in a "sealed" state. The Vault cluster must be
initialized before use, usually by the `vault operator init` command. Each Vault
server must also be unsealed using the `vault operator unseal` command or the
API before the server can respond to requests.
For more information, please see:
- [`operator init` command](/vault/docs/commands/operator/init) for information
on initializing a Vault server.
- [`operator unseal` command](/vault/docs/commands/operator/unseal) for
information on providing unseal keys.
- [Vault configuration](/vault/docs/configuration) for the syntax and
various configuration options for a Vault server.
## Examples
Start a server with a configuration file:
```shell-session
$ vault server -config=/etc/vault/config.hcl
```
Run in "dev" mode with a custom initial root token:
```shell-session
$ vault server -dev -dev-root-token-id="root"
```
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Command options
- `-config` `(string: "")` - Path to a configuration file or directory of
configuration files. This flag can be specified multiple times to load
multiple configurations. If the path is a directory, all files which end in
.hcl or .json are loaded.
- `-log-level` ((#\_log_level)) `(string: "info")` - Log verbosity level. Supported values (in
order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can
also be specified via the `VAULT_LOG_LEVEL` environment variable.
- `-log-format` ((#\_log_format)) `(string: "standard")` - Log format. Supported values
are `standard` and `json`. This can also be specified via the
`VAULT_LOG_FORMAT` environment variable.
- `-log-file` ((#\_log_file)) - The absolute path where Vault should save log
messages in addition to other, existing outputs like journald / stdout. Paths
that end with a path separator use the default file name, `vault.log`. Paths
that do not end with a file extension use the default `.log` extension. If the
log file rotates, Vault appends the current timestamp to the file name
at the time of rotation. For example:
| `log-file` | Full log file | Rotated log file |
|-------------------------|-------------------------|-------------------------------------|
| `/var/log` | `/var/log/vault.log` | `/var/log/vault-{timestamp}.log` |
| `/var/log/my-diary` | `/var/log/my-diary.log` | `/var/log/my-diary-{timestamp}.log` |
| `/var/log/my-diary.txt` | `/var/log/my-diary.txt` | `/var/log/my-diary-{timestamp}.txt` |
- `-log-rotate-bytes` ((#\_log_rotate_bytes)) - Specifies the number of
bytes that should be written to a log before it needs to be rotated. Unless specified,
there is no limit to the number of bytes that can be written to a log file.
- `-log-rotate-duration` ((#\_log_rotate_duration)) - Specifies the maximum
duration a log should be written to before it needs to be rotated. Must be a duration
value such as 30s. Defaults to 24h.
- `-log-rotate-max-files` ((#\_log_rotate_max_files)) - Specifies the maximum
number of older log file archives to keep. Defaults to 0 (no files are ever deleted).
Set to -1 to discard old log files when a new one is created. The example at the
end of this list of options shows several of the logging flags used together.
- `-experiment` `(string array: [])` - The name of an experiment to enable for this node.
This flag can be specified multiple times to enable multiple experiments. Experiments
should NOT be used in production, and the associated APIs may have backwards incompatible
changes between releases. Additional experiments can also be specified via the
`VAULT_EXPERIMENTS` environment variable as a comma-separated list, or via the
[`experiments`](/vault/docs/configuration#experiments) config key.
- `-pprof-dump-dir` `(string: "")` - Directory where the generated profiles are
created. Vault does not generate profiles when `pprof-dump-dir` is unset.
Use `pprof-dump-dir` temporarily during debugging sessions. Do not use
`pprof-dump-dir` in regular production processes.
- `VAULT_ALLOW_PENDING_REMOVAL_MOUNTS` `(bool: false)` - (environment variable)
Allow Vault to be started with builtin engines which have the `Pending Removal`
deprecation state. This is a temporary stopgap in place in order to perform an
upgrade and disable these engines. Once these engines are marked `Removed` (in
the next major release of Vault), the environment variable will no longer work
and a downgrade must be performed in order to remove the offending engines. For
more information, see the [deprecation faq](/vault/docs/deprecation/faq/#q-what-are-the-phases-of-deprecation).
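For example, the following invocation (with illustrative paths) combines several
of the logging options described above:
```shell-session
$ vault server -config=/etc/vault/config.hcl \
    -log-level=debug \
    -log-file=/var/log/vault/ \
    -log-rotate-duration=12h \
    -log-rotate-max-files=7
```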
### Dev options
- `-dev` `(bool: false)` - Enable development mode. In this mode, Vault runs
in-memory and starts unsealed. As the name implies, do not run "dev" mode in
production.
- `-dev-tls` `(bool: false)` - Enable TLS development mode. In this mode, Vault runs
in-memory and starts unsealed with a generated TLS CA, certificate and key.
As the name implies, do not run "dev" mode in production.
- `-dev-tls-cert-dir` `(string: "")` - Directory where generated TLS files are created if `-dev-tls` is specified. If left unset, files are generated in a temporary directory.
- `-dev-listen-address` `(string: "127.0.0.1:8200")` - Address to bind to in
"dev" mode. This can also be specified via the `VAULT_DEV_LISTEN_ADDRESS`
environment variable.
- `-dev-root-token-id` `(string: "")` - Initial root token. This only applies
when running in "dev" mode. This can also be specified via the
`VAULT_DEV_ROOT_TOKEN_ID` environment variable.
_Note:_ The token ID should not start with the `s.` prefix.
- `-dev-no-store-token` `(string: "")` - Do not persist the dev root token to
the token helper (usually the local filesystem) for use in future requests.
The token will only be displayed in the command output.
- `-dev-plugin-dir` `(string: "")` - Directory from which plugins are allowed to be loaded. Only applies in "dev" mode; Vault automatically registers all the plugins in the provided directory.
---
layout: docs
page_title: login - Command
description: |-
The "login" command authenticates users or machines to Vault using the
provided arguments. A successful authentication results in a Vault token -
conceptually similar to a session token on a website.
---
# login
The `login` command authenticates users or machines to Vault using the provided
arguments. A successful authentication results in a Vault token - conceptually
similar to a session token on a website. By default, this token is cached on the
local machine for future requests.
The `-method` flag allows using other auth methods, such as userpass,
github, or cert. For these, additional "K=V" pairs may be required. For more
information about the list of configuration parameters available for a given
auth method, use the "vault auth help TYPE" command. You can also use "vault
auth list" to see the list of enabled auth methods.
If an auth method is enabled at a non-standard path, the `-method`
flag still refers to the canonical type, but the `-path` flag refers to the
enabled path.
If the authentication is requested with response wrapping (via `-wrap-ttl`),
the returned token is automatically unwrapped unless:
- The `-token-only` flag is used, in which case this command will output
  the wrapping token, as shown in the sketch below.
- The `-no-store` flag is used, in which case this command will output the
details of the wrapping token.
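As a quick sketch (the auth method and username are placeholders), a wrapped
login that captures only the wrapping token might look like:

```shell-session
# Illustrative only: assumes the userpass method is enabled
$ WRAPPING_TOKEN=$(vault login -method=userpass -wrap-ttl=5m -token-only username=my-username)
```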
## Examples
By default, login uses a "token" method and reads from stdin:
```shell-session
$ vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token s.nDj4BB2tK8NaFffwBZBxyIa1
token_accessor ZuaObqdTeCHZ4oa9HWmdQJuZ
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]
```
Alternatively, the token may be provided as a command line argument (note that
this may be captured by shell history or process listings):
```shell-session
$ vault login s.3jnbMAKl1i4YS3QoKdbHzGXq
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token s.3jnbMAKl1i4YS3QoKdbHzGXq
token_accessor 7Uod1Rm0ejUAz77Oh7SxpAM0
token_duration 767h59m49s
token_renewable true
token_policies ["admin" "default"]
identity_policies []
policies ["admin" "default"]
```
To login with a different method, use `-method`:
```shell-session
$ vault login -method=userpass username=my-username
Password (will be hidden):
Success! You are now authenticated. The token information below is already
stored in the token helper. You do NOT need to run "vault login" again. Future
requests will use this token automatically.
Key Value
--- -----
token s.2y4SU3Sk46dK3p2Y8q2jSBwL
token_accessor 8J125x9SZyB76MI9uF2jSJZf
token_duration 768h
token_renewable true
token_policies ["default"]
identity_policies []
policies ["default"]
token_meta_username my-username
```
~> Notice that the command option (`-method=userpass`) precedes the command
argument (`username=my-username`).
If a github auth method was enabled at the path "github-prod":
```shell-session
$ vault login -method=github -path=github-prod
Success! You are now authenticated. The token information below is already
stored in the token helper. You do NOT need to run "vault login" again. Future
requests will use this token automatically.
Key Value
--- -----
token s.2f3c5L1MHtnqbuNCbx90utmC
token_accessor JLUIXJ6ltUftTt2UYRl2lTAC
token_duration 768h
token_renewable true
token_policies ["default"]
identity_policies []
policies ["default"]
token_meta_org hashicorp
token_meta_username my-username
```
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-field` `(string: "")` - Print only the field with the given name, in the format
specified in the `-format` directive. The result will not have a trailing
newline making it ideal for piping to other processes.
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
### Command options
- `-method` `(string: "token")` - Type of authentication to use such as
"userpass" or "ldap". Note this corresponds to the TYPE, not the enabled path.
Use -path to specify the path where the authentication is enabled.
- `-no-print` `(bool: false)` - Do not display the token. The token will
still be stored to the configured token helper. The default is false.
- `-no-store` `(bool: false)` - Do not persist the token to the token helper
(usually the local filesystem) after authentication for use in future
requests. The token will only be displayed in the command output.
- `-path` `(string: "")` - Remote path in Vault where the auth method
is enabled. This defaults to the TYPE of method (e.g. userpass -> userpass/).
- `-token-only` `(bool: false)` - Output only the token with no verification.
  This flag is a shortcut for "-field=token -no-store". Setting those
  flags to other values will have no effect.
---
layout: docs
page_title: write - Command
description: |-
The "write" command writes data to Vault at the given path. The data can be
credentials, secrets, configuration, or arbitrary data. The specific behavior
of this command is determined at the thing mounted at the path.
---
# write
The `write` command writes data to Vault at the given path (wrapper command for
HTTP PUT or POST). The data can be credentials, secrets, configuration, or
arbitrary data. The specific behavior of the `write` command is determined by
the thing mounted at the path.
Data is specified as "**key=value**" pairs on the command line. If the value begins
with an "**@**", then it is loaded from a file. If the value for a key is "**-**", Vault
will read the value from stdin rather than the command line.
Some API fields require more advanced structures such as maps. These cannot
directly be represented on the command line. However, direct control of the
request parameters can be achieved by using `-` as the only data argument.
This causes `vault write` to read a JSON blob containing all request parameters
from stdin. This argument will be ignored if used in conjunction with any
"key=value" pairs.
For a full list of examples and paths, please see the documentation that
corresponds to the secrets engines in use.
## Examples
Store an arbitrary secret in the token's cubbyhole:
```shell-session
$ vault write cubbyhole/git-credentials username="student01" password="p@$$w0rd"
```
Create a new encryption key in the transit secrets engine:
```shell-session
$ vault write -force transit/keys/my-key
```
The `-force` flag allows the write operation without input data. (See [command
options](#command-options).)
Upload an AWS IAM policy from a file on disk:
```shell-session
$ vault write aws/roles/ops policy=@policy.json
```
Configure access to Consul by providing an access token:
```shell-session
$ echo $MY_TOKEN | vault write consul/config/access token=-
```
Set role-level TTL values for a user named "alice" so the generated lease has a
default TTL of 8 hours (28800 seconds) and maximum TTL of 12 hours
(43200 seconds):
```shell-session
$ VAULT_TOKEN=$VAULT_TOKEN vault write /auth/userpass/users/alice \
token_ttl="8h" token_max_ttl="12h"
```
### API versus CLI
Create a token with TTL set to 8 hours, limited to 3 uses, and attach `admin`
and `secops` policies.
```shell-session
$ vault write auth/token/create policies="admin" policies="secops" ttl=8h num_uses=3
```
Equivalent cURL command for this operation:
```shell-session
$ tee request_payload.json -<<EOF
{
"policies": ["admin", "secops"],
"ttl": "8h",
"num_uses": 3
}
EOF
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
--request POST \
--data @request_payload.json \
$VAULT_ADDR/v1/auth/token/create
```
The `vault write` command simplifies the API call.
Since token management is a common task, the Vault CLI provides a
[`token`](/vault/docs/commands/token) command with a
[`create`](/vault/docs/commands/token/create) subcommand that simplifies
token creation. Use the `vault token create` command with options to set the token
TTL, policies, and use limit.
```shell-session
$ vault token create -policy=admin -policy=secops -ttl=8h -use-limit=3
```
-> **Syntax:** The command options start with `-` (e.g. `-ttl`) while API path
parameters do not (e.g. `ttl`). You always set the API parameters after the path
you are invoking.
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-field` `(string: "")` - Print only the field with the given name, in the format
specified in the `-format` directive. The result will not have a trailing
newline making it ideal for piping to other processes.
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
### Command options
- `-force` `(bool: false)` - Allow the operation to continue with no key=value
pairs. This allows writing to keys that do not need or expect data. This is
aliased as `-f`. | vault | layout docs page title write Command description The write command writes data to Vault at the given path The data can be credentials secrets configuration or arbitrary data The specific behavior of this command is determined at the thing mounted at the path write The write command writes data to Vault at the given path wrapper command for HTTP PUT or POST The data can be credentials secrets configuration or arbitrary data The specific behavior of the write command is determined at the thing mounted at the path Data is specified as key value pairs on the command line If the value begins with an then it is loaded from a file If the value for a key is Vault will read the value from stdin rather than the command line Some API fields require more advanced structures such as maps These cannot directly be represented on the command line However direct control of the request parameters can be achieved by using as the only data argument This causes vault write to read a JSON blob containing all request parameters from stdin This argument will be ignored if used in conjunction with any key value pairs For a full list of examples and paths please see the documentation that corresponds to the secrets engines in use Examples Store an arbitrary secrets in the token s cubbyhole shell session vault write cubbyhole git credentials username student01 password p w0rd Create a new encryption key in the transit secrets engine shell session vault write force transit keys my key The force flag allows the write operation without input data See command options command options Upload an AWS IAM policy from a file on disk shell session vault write aws roles ops policy policy json Configure access to Consul by providing an access token shell session echo MY TOKEN vault write consul config access token Set role level TTL values for a user named alice so the generated lease has a default TTL of 8 hours 28800 seconds and maximum TTL of 12 hours 43200 seconds shell session VAULT TOKEN VAULT TOKEN vault write auth userpass users alice token ttl 8h token max ttl 12h API versus CLI Create a token with TTL set to 8 hours limited to 3 uses and attach admin and secops policies shell session vault write auth token create policies admin policies secops ttl 8h num uses 3 Equivalent cURL command for this operation shell session tee request payload json EOF policies admin secops ttl 8h num uses 3 EOF curl header X Vault Token VAULT TOKEN request POST data request payload json VAULT ADDR v1 auth token create The vault write command simplifies the API call Since token management is a common task Vault CLI provides a token vault docs commands token command with create vault docs commands token create subcommand The CLI command simplifies the token creation Use the vault create command with options to set the token TTL policies and use limit shell session vault token create policy admin policy secops ttl 8h use limit 3 Syntax The command options start with e g ttl while API path parameters do not e g ttl You always set the API parameters after the path you are invoking Usage The following flags are available in addition to the standard set of flags vault docs commands included on all commands Output options field string Print only the field with the given name in the format specified in the format directive The result will not have a trailing newline making it ideal for piping to other processes format string table Print the output in the given format Valid formats are table json or yaml This can also be 
---
layout: docs
page_title: ssh - Command
description: |-
The "ssh" command establishes an SSH connection with the target machine using
credentials obtained from an SSH secrets engine.
---
# ssh
The `ssh` command establishes an SSH connection with the target machine.
This command uses one of the SSH secrets engines to authenticate and
automatically establish an SSH connection to a host. This operation requires
that the SSH secrets engine is mounted and configured.
The user must have `ssh` installed locally - this command will exec out to it
with the proper commands to provide an "SSH-like" consistent experience.
## Examples
SSH using the OTP mode (requires [sshpass](https://linux.die.net/man/1/sshpass)
for full automation):
```shell-session
$ vault ssh -mode=otp -role=my-role user@1.2.3.4
```
SSH using the CA mode:
```shell-session
$ vault ssh -mode=ca -role=my-role user@1.2.3.4
```
SSH using CA mode with host key verification:
```shell-session
$ vault ssh \
-mode=ca \
-role=my-role \
-host-key-mount-point=host-signer \
-host-key-hostnames=example.com \
user@example.com
```
For step-by-step guides and instructions for each of the available SSH
auth methods, please see the corresponding [SSH secrets
engine](/vault/docs/secrets/ssh).
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-field` `(string: "")` - Print only the field with the given name, in the format
specified in the `-format` directive. The result will not have a trailing
newline making it ideal for piping to other processes.
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
### SSH options
- `-mode` `(string: "")` - Name of the authentication mode (ca, dynamic, otp).
- `-mount-point` `(string: "ssh/")` - Mount point to the SSH secrets engine.
- `-no-exec` `(bool: false)` - Print the generated credentials, but do not
  establish a connection (see the sketch after this list).
- `-role` `(string: "")` - Name of the role to use to generate the key.
- `-strict-host-key-checking` `(string: "")` - Value to use for the SSH
configuration option "StrictHostKeyChecking". The default is ask. This can
also be specified via the `VAULT_SSH_STRICT_HOST_KEY_CHECKING` environment
variable.
- `-user-known-hosts-file` `(string: "~/.ssh/known_hosts")` - Value to use for
the SSH configuration option "UserKnownHostsFile". This can also be specified
via the `VAULT_SSH_USER_KNOWN_HOSTS_FILE` environment variable.
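For example, a minimal sketch (the role and target host are placeholders) that
only prints the generated credential as JSON without connecting:

```shell-session
$ vault ssh -mode=otp -role=my-role -no-exec -format=json user@1.2.3.4
```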
### CA mode options
- `-host-key-hostnames` `(string: "*")` - List of hostnames to delegate for the
CA. The default value allows all domains and IPs. This is specified as a
comma-separated list of values. This can also be specified via the
`VAULT_SSH_HOST_KEY_HOSTNAMES` environment variable.
- `-host-key-mount-point` `(string: "")` - Mount point to the SSH
secrets engine where host keys are signed. When given a value, Vault will
generate a custom "known_hosts" file with delegation to the CA at the provided
mount point to verify the SSH connection's host keys against the provided CA.
By default, host keys are validated against the user's local "known_hosts"
file. This flag forces strict host key checking and ignores a custom user
known hosts file. This can also be specified via the
`VAULT_SSH_HOST_KEY_MOUNT_POINT` environment variable.
- `-private-key-path` `(string: "~/.ssh/id_rsa")` - Path to the SSH private key
to use for authentication. This must be the corresponding private key to
`-public-key-path`.
- `-public-key-path` `(string: "~/.ssh/id_rsa.pub")` - Path to the SSH public
key to send to Vault for signing. | vault | layout docs page title ssh Command description The ssh command establishes an SSH connection with the target machine using credentials obtained from an SSH secrets engine ssh The ssh command establishes an SSH connection with the target machine This command uses one of the SSH secrets engines to authenticate and automatically establish an SSH connection to a host This operation requires that the SSH secrets engine is mounted and configured The user must have ssh installed locally this command will exec out to it with the proper commands to provide an SSH like consistent experience Examples SSH using the OTP mode requires sshpass https linux die net man 1 sshpass for full automation shell session vault ssh mode otp role my role user 1 2 3 4 SSH using the CA mode shell session vault ssh mode ca role my role user 1 2 3 4 SSH using CA mode with host key verification shell session vault ssh mode ca role my role host key mount point host signer host key hostnames example com user example com For step by step guides and instructions for each of the available SSH auth methods please see the corresponding SSH secrets engine vault docs secrets ssh Usage The following flags are available in addition to the standard set of flags vault docs commands included on all commands Output options field string Print only the field with the given name in the format specified in the format directive The result will not have a trailing newline making it ideal for piping to other processes format string table Print the output in the given format Valid formats are table json or yaml This can also be specified via the VAULT FORMAT environment variable SSH options mode string Name of the authentication mode ca dynamic otp mount point string ssh Mount point to the SSH secrets engine no exec bool false Print the generated credentials but do not establish a connection role string Name of the role to use to generate the key strict host key checking string Value to use for the SSH configuration option StrictHostKeyChecking The default is ask This can also be specified via the VAULT SSH STRICT HOST KEY CHECKING environment variable user known hosts file string ssh known hosts Value to use for the SSH configuration option UserKnownHostsFile This can also be specified via the VAULT SSH USER KNOWN HOSTS FILE environment variable CA mode options host key hostnames string List of hostnames to delegate for the CA The default value allows all domains and IPs This is specified as a comma separated list of values This can also be specified via the VAULT SSH HOST KEY HOSTNAMES environment variable host key mount point string Mount point to the SSH secrets engine where host keys are signed When given a value Vault will generate a custom known hosts file with delegation to the CA at the provided mount point to verify the SSH connection s host keys against the provided CA By default host keys are validated against the user s local known hosts file This flag forces strict key host checking and ignores a custom user known hosts file This can also be specified via the VAULT SSH HOST KEY MOUNT POINT environment variable private key path string ssh id rsa Path to the SSH private key to use for authentication This must be the corresponding private key to public key path public key path string ssh id rsa pub Path to the SSH public key to send to Vault for signing |
---
layout: docs
page_title: token create - Command
description: |-
The "token create" command creates a new token that can be used for
authentication. This token will be created as a child of the currently
authenticated token. The generated token will inherit all policies and
permissions of the currently authenticated token unless you explicitly define
a subset list policies to assign to the token.
---
# token create
The `token create` command creates a new token that can be used for
authentication. This token will be created as a child of the currently
authenticated token. The generated token will inherit all policies and
permissions of the currently authenticated token unless you explicitly define a
subset list of policies to assign to the token.
A ttl can also be associated with the token. If a ttl is not associated with the
token, then it cannot be renewed. If a ttl is associated with the token, it will
expire after that amount of time unless it is renewed.
Metadata associated with the token (specified with `-metadata`) is written to
the audit log when the token is used.
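For example, a minimal sketch attaching illustrative metadata (the key names
and values are placeholders):

```shell-session
$ vault token create -policy=my-policy -metadata=purpose=ci -metadata=team=platform
```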
If a role is specified, the role may override parameters specified here.
## Examples
Create a token attached to specific policies:
```shell-session
$ vault token create -policy=my-policy -policy=other-policy
Key Value
--- -----
token 95eba8ed-f6fc-958a-f490-c7fd0eda5e9e
token_accessor 882d4a40-3796-d06e-c4f0-604e8503750b
token_duration 768h
token_renewable true
token_policies [default my-policy other-policy]
```
Create a periodic token:
```shell-session
$ vault token create -period=30m
Key Value
--- -----
token fdb90d58-af87-024f-fdcd-9f95039e353a
token_accessor 4cd9177c-034b-a004-c62d-54bc56c0e9bd
token_duration 30m
token_renewable true
token_policies [my-policy]
```
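As an additional illustrative sketch (not one of the original examples), create
a short-lived batch token using the `-type` flag described below:

```shell-session
$ vault token create -type=batch -policy=my-policy -ttl=30m
```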
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-field` `(string: "")` - Print only the field with the given name. Specifying
this option will take precedence over other formatting directives. The result
will not have a trailing newline making it ideal for piping to other processes.
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
### Command options
- `-display-name` `(string: "")` - Name to associate with this token. This is a
non-sensitive value that can be used to help identify created secrets (e.g.
prefixes).
- `-entity-alias` `(string: "")` - Name of the entity alias to associate with
during token creation. Only works in combination with the -role argument, and the
entity alias used must be listed in allowed_entity_aliases. If this has been
specified, the entity will not be inherited from the parent.
- `-explicit-max-ttl` `(duration: "")` - Explicit maximum lifetime for the
token. Unlike normal TTLs, the maximum TTL is a hard limit and cannot be
exceeded. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `-id` `(string: "")` - Value for the token. By default, this is an
auto-generated value. Specifying this value requires sudo permissions.
- `-metadata` `(k=v: "")` - Arbitrary key=value metadata to associate with the
token. This metadata will show in the audit log when the token is used. This
can be specified multiple times to add multiple pieces of metadata.
- `-no-default-policy` `(bool: false)` - Detach the "default" policy from the
policy set for this token.
- `-orphan` `(bool: false)` - Create the token with no parent. This prevents the
token from being revoked when the token which created it expires. Setting this
value requires sudo permissions.
- `-period` `(duration: "")` - If specified, every renewal will use the given
period. Periodic tokens do not expire as long as they are actively being
renewed (unless `-explicit-max-ttl` is also provided). Setting this value
requires sudo permissions. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `-policy` `(string: "")` - Name of a policy to associate with this token. This
can be specified multiple times to attach multiple policies.
- `-renewable` `(bool: true)` - Allow the token to be renewed up to its maximum
TTL.
- `-role` `(string: "")` - Name of the role to create the token against.
Specifying -role may override other arguments. The locally authenticated Vault
token must have permission for `auth/token/create/<role>`.
- `-ttl` `(duration: "")` - Initial TTL to associate with the token. Token
renewals may be able to extend beyond this value, depending on the configured
maximum TTLs. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `-type` `(string: "service")` - The type of token to create. Can be "service" or "batch".
- `-use-limit` `(int: 0)` - Number of times this token can be used. After the
last use, the token is automatically revoked. By default, tokens can be used
an unlimited number of times until their expiration.
- `-wrap-ttl` `(duration: "")` - Wraps the response in a cubbyhole token with the
requested TTL. The response is available via the "vault unwrap" command. The TTL
is specified as a numeric string with suffix like "30s" or "5m". This can also be
specified via the `VAULT_WRAP_TTL` environment variable. | vault | layout docs page title token create Command description The token create command creates a new token that can be used for authentication This token will be created as a child of the currently authenticated token The generated token will inherit all policies and permissions of the currently authenticated token unless you explicitly define a subset list policies to assign to the token token create The token create command creates a new token that can be used for authentication This token will be created as a child of the currently authenticated token The generated token will inherit all policies and permissions of the currently authenticated token unless you explicitly define a subset list policies to assign to the token A ttl can also be associated with the token If a ttl is not associated with the token then it cannot be renewed If a ttl is associated with the token it will expire after that amount of time unless it is renewed Metadata associated with the token specified with metadata is written to the audit log when the token is used If a role is specified the role may override parameters specified here Examples Create a token attached to specific policies shell session vault token create policy my policy policy other policy Key Value token 95eba8ed f6fc 958a f490 c7fd0eda5e9e token accessor 882d4a40 3796 d06e c4f0 604e8503750b token duration 768h token renewable true token policies default my policy other policy Create a periodic token shell session vault token create period 30m Key Value token fdb90d58 af87 024f fdcd 9f95039e353a token accessor 4cd9177c 034b a004 c62d 54bc56c0e9bd token duration 30m token renewable true token policies my policy Usage The following flags are available in addition to the standard set of flags vault docs commands included on all commands Output options field string Print only the field with the given name Specifying this option will take precedence over other formatting directives The result will not have a trailing newline making it ideal for piping to other processes format string table Print the output in the given format Valid formats are table json or yaml This can also be specified via the VAULT FORMAT environment variable Command options display name string Name to associate with this token This is a non sensitive value that can be used to help identify created secrets e g prefixes entity alias string Name of the entity alias to associate with during token creation Only works in combination with role argument and used entity alias must be listed in allowed entity aliases If this has been specified the entity will not be inherited from the parent explicit max ttl duration Explicit maximum lifetime for the token Unlike normal TTLs the maximum TTL is a hard limit and cannot be exceeded Uses duration format strings vault docs concepts duration format id string Value for the token By default this is an auto generated value Specifying this value requires sudo permissions metadata k v Arbitrary key value metadata to associate with the token This metadata will show in the audit log when the token is used This can be specified multiple times to add multiple pieces of metadata no default policy bool false Detach the default policy from the policy set for this token orphan bool false Create the token with no parent This prevents the token from being revoked when the token which created it expires Setting this value requires sudo permissions period duration If specified every renewal will use 
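As a hedged sketch of the wrapping workflow (the policy name and wrapping token
are placeholders), the recipient of the wrapping token retrieves the real token
with `vault unwrap`:

```shell-session
$ vault token create -policy=my-policy -wrap-ttl=60s
# The recipient of the single-use wrapping token then unwraps it:
$ vault unwrap <wrapping-token>
```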
---
layout: docs
page_title: kv metadata - Command
description: |-
The "kv metadata" command has subcommands for interacting with the metadata
endpoint in Vault's key-value store.
---
# kv metadata
~> **NOTE:** This is a [KV version 2](/vault/docs/secrets/kv/kv-v2) secrets
engine command, and is not available for KV version 1.
The `kv metadata` command has subcommands for interacting with the metadata and
versions for the versioned secrets (KV version 2 secrets engine) at the
specified path.
## Usage
```text
Usage: vault kv metadata <subcommand> [options] [args]
# ...
Subcommands:
delete Deletes all versions and metadata for a key in the KV store
get Retrieves key metadata from the KV store
put Sets or updates key settings in the KV store
```
### kv metadata delete
The `kv metadata delete` command deletes all versions and metadata for the
provided key.
#### Examples
Deletes all versions and metadata of the key "creds":
```shell-session
$ vault kv metadata delete -mount=secret creds
Success! Data deleted (if it existed) at: secret/metadata/creds
```
### kv metadata get
The `kv metadata get` command retrieves the metadata of the versioned secrets at
the given key name. If no key exists with that name, an error is returned.
#### Examples
Retrieves the metadata of the key name, "creds":
```shell-session
$ vault kv metadata get -mount=secret creds
=== Metadata Path ===
secret/metadata/creds
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2019-06-28T15:53:30.395814Z
current_version 5
delete_version_after 0s
max_versions 0
oldest_version 0
updated_time 2019-06-28T16:01:47.40064Z
====== Version 1 ======
Key Value
--- -----
created_time 2019-06-28T15:53:30.395814Z
deletion_time n/a
destroyed false
====== Version 2 ======
Key Value
--- -----
created_time 2019-06-28T16:01:36.676912Z
deletion_time n/a
destroyed false
...
```
### kv metadata put
The `kv metadata put` command can be used to create a blank key in the KV v2
secrets engine or to update key configuration for a specified key.
#### Examples
Create a key in the KV v2 with no data at the key "creds":
```shell-session
$ vault kv metadata put -mount=secret creds
Success! Data written to: secret/metadata/creds
```
Set the maximum number of versions to keep for the key "creds":
```shell-session
$ vault kv metadata put -mount=secret -max-versions=5 creds
Success! Data written to: secret/metadata/creds
```
**NOTE:** If not set, the backend’s configured max version is used. Once a key
has more than the configured allowed versions, the oldest version will be
permanently deleted.
Require Check-and-Set for the key "creds":
```shell-session
$ vault kv metadata put -mount=secret -cas-required creds
```
**NOTE:** When check-and-set is required, the key will require the `cas`
parameter to be set on all write requests. Otherwise, the backend’s
configuration will be used.
Set the length of time before a version is deleted for the key "creds":
```shell-session
$ vault kv metadata put -mount=secret -delete-version-after="3h25m19s" creds
```
**NOTE:** If not set, the backend's configured Delete-Version-After is used. If
set to a duration greater than the backend's, the backend's Delete-Version-After
setting will be used. Any changes to the Delete-Version-After setting will only
be applied to new versions.
#### Output options
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
#### Subcommand options
- `-cas-required` `(bool: false)` - If true the key will require the cas
parameter to be set on all write requests. If false, the backend’s
configuration will be used. The default is false.
- `-max-versions` `(int: 0)` - The number of versions to keep per key. If not
set, the backend’s configured max version is used. Once a key has more than the
configured allowed versions, the oldest version will be permanently deleted.
- `-delete-version-after` `(string: "0s")` - Set the `delete-version-after` value
to a duration to specify the `deletion_time` for all new versions written to
this key. If not set, the backend's `delete_version_after` will be used. If
the value is greater than the backend's `delete_version_after`, the backend's
`delete_version_after` will be used. Accepts [duration format strings](/vault/docs/concepts/duration-format).
- `-custom-metadata` `(string: "")` - Specifies a key-value pair for the
  `custom_metadata` field. This can be specified multiple times to add multiple
  pieces of metadata (see the sketch after this list).
- `-mount` `(string: "")` - Specifies the path where the KV backend is mounted.
If specified, the next argument will be interpreted as the secret path. If
this flag is not specified, the next argument will be interpreted as the
combined mount path and secret path, with /data/ automatically inserted for
KV v2 secrets. | vault | layout docs page title kv metadata Command description The kv metadata command has subcommands for interacting with the metadata endpoint in Vault s key value store kv metadata NOTE This is a KV version 2 vault docs secrets kv kv v2 secrets engine command and not available for Version 1 The kv metadata command has subcommands for interacting with the metadata and versions for the versioned secrets KV version 2 secrets engine at the specified path Usage text Usage vault kv metadata subcommand options args Subcommands delete Deletes all versions and metadata for a key in the KV store get Retrieves key metadata from the KV store put Sets or updates key settings in the KV store kv metadata delete The kv metadata delete command deletes all versions and metadata for the provided key Examples Deletes all versions and metadata of the key creds shell session vault kv metadata delete mount secret creds Success Data deleted if it existed at secret metadata creds kv metadata get The kv metadata get command retrieves the metadata of the versioned secrets at the given key name If no key exists with that name an error is returned Examples Retrieves the metadata of the key name creds shell session vault kv metadata get mount secret creds Metadata Path secret metadata creds Metadata Key Value cas required false created time 2019 06 28T15 53 30 395814Z current version 5 delete version after 0s max versions 0 oldest version 0 updated time 2019 06 28T16 01 47 40064Z Version 1 Key Value created time 2019 06 28T15 53 30 395814Z deletion time n a destroyed false Version 2 Key Value created time 2019 06 28T16 01 36 676912Z deletion time n a destroyed false kv metadata put The kv metadata put command can be used to create a blank key in the KV v2 secrets engine or to update key configuration for a specified key Examples Create a key in the KV v2 with no data at the key creds shell session vault kv metadata put mount secret creds Success Data written to secret metadata creds Set the maximum number of versions to keep for the key creds shell session vault kv metadata put mount secret max versions 5 creds Success Data written to secret metadata creds NOTE If not set the backend s configured max version is used Once a key has more than the configured allowed versions the oldest version will be permanently deleted Require Check and Set for the key creds shell session vault kv metadata put mount secret cas required creds NOTE When check and set is required the key will require the cas parameter to be set on all write requests Otherwise the backend s configuration will be used Set the length of time before a version is deleted for the key creds shell session vault kv metadata put mount secret delete version after 3h25m19s creds NOTE If not set the backend s configured Delete Version After is used If set to a duration greater than the backend s the backend s Delete Version After setting will be used Any changes to the Delete Version After setting will only be applied to new versions Output options format string table Print the output in the given format Valid formats are table json or yaml This can also be specified via the VAULT FORMAT environment variable Subcommand options cas required bool false If true the key will require the cas parameter to be set on all write requests If false the backend s configuration will be used The default is false max versions int 0 The number of versions to keep per key If not set the backend s configured max version is used Once a key has more than the configured 
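For example, a minimal sketch of setting custom metadata on the key "creds"
(the key/value pairs are illustrative):

```shell-session
$ vault kv metadata put -mount=secret -custom-metadata=owner=ops -custom-metadata=env=prod creds
```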
---
layout: docs
page_title: kv - Command
description: |-
The "kv" command groups subcommands for interacting with Vault's key/value
secret engine.
---
# kv
The `kv` command groups subcommands for interacting with Vault's key/value
secrets engine (both [KV version 1](/vault/docs/secrets/kv/kv-v1) and [KV
version 2](/vault/docs/secrets/kv/kv-v2)).
## Syntax
Option flags for a given subcommand are provided after the subcommand, but before the arguments.
The path to where the secrets engine is mounted can be indicated with the `-mount` flag, such as `vault kv get -mount=secret creds`.
The deprecated path-like syntax can also be used (e.g. `vault kv get secret/creds`), but this should be avoided
for KV v2, because it is not actually the full API path to the secret
(secret/data/foo) and may cause confusion.
~> A `flag provided but not defined: -mount` error means you are using an older version of Vault before the
mount flag syntax was introduced. Upgrade to at least Vault 1.11, or refer to previous versions of the docs
which only use the old syntax to refer to the mount path.
## Mount flag syntax (KV)
All `kv` commands can alternatively refer to the path to the KV secrets engine using a flag-based syntax like `$ vault kv get -mount=secret password`
instead of `$ vault kv get secret/password`. The mount flag syntax was created to mitigate confusion caused by the fact that for KV v2 secrets,
their full path (used in policies and raw API calls) actually contains a nested `/data/` element (e.g. `secret/data/password`) which can be easily overlooked when using
the above KV v1-like syntax `secret/password`. To avoid this confusion, all KV-specific docs pages will use the `-mount` flag.
## Exit codes
The Vault CLI aims to be consistent and well-behaved unless documented
otherwise.
- Local errors such as incorrect flags, failed validations, or wrong numbers
of arguments return an exit code of 1.
- Any remote errors such as API failures, bad TLS, or incorrect API parameters
  return an exit code of 2.
Some commands override this default where it makes sense. These commands
document this anomaly.
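As a hedged illustration (the paths are placeholders and the exact error output
varies), the exit code can be inspected with `$?`:

```shell-session
# Local error: a missing required argument exits 1
$ vault kv get -mount=secret
$ echo $?
1

# Remote error: the server rejects the request, so the CLI exits 2
$ vault kv get -mount=secret some-forbidden-path
$ echo $?
2
```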
## Examples
Create or update the key named "creds" in the KV version 2 enabled at "secret"
with the value "passcode=my-long-passcode":
```shell-session
$ vault kv put -mount=secret creds passcode=my-long-passcode
== Secret Path ==
secret/data/creds
======= Metadata =======
Key Value
--- -----
created_time 2022-06-15T20:14:17.107852Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
Read this value back:
```shell-session
$ vault kv get -mount=secret creds
== Secret Path ==
secret/data/creds
======= Metadata =======
Key Value
--- -----
created_time 2022-06-15T20:14:17.107852Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
passcode my-long-passcode
```
Get metadata for the key named "creds":
```shell-session
$ vault kv metadata get -mount=secret creds
=== Metadata Path ===
secret/metadata/creds
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2022-06-15T20:14:17.107852Z
current_version 1
custom_metadata <nil>
delete_version_after 0s
max_versions 0
oldest_version 0
updated_time 2022-06-15T20:14:17.107852Z
====== Version 1 ======
Key Value
--- -----
created_time 2022-06-15T20:14:17.107852Z
deletion_time n/a
destroyed false
```
Get a specific version of the key named "creds":
```shell-session
$ vault kv get -mount=secret -version=1 creds
== Secret Path ==
secret/data/creds
======= Metadata =======
Key Value
--- -----
created_time 2022-06-15T20:14:17.107852Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
passcode my-long-passcode
```
## Usage
```text
Usage: vault kv <subcommand> [options] [args]
# ...
Subcommands:
delete Deletes versions in the KV store
destroy Permanently removes one or more versions in the KV store
enable-versioning Turns on versioning for a KV store
get Retrieves data from the KV store
list List data or secrets
metadata Interact with Vault's Key-Value storage
patch Sets or updates data in the KV store without overwriting
put Sets or updates data in the KV store
rollback Rolls back to a previous version of data
undelete Undeletes versions in the KV store
```
For more information, examples, and usage about a subcommand, click on the name
of the subcommand in the sidebar. | vault | layout docs page title kv Command description The kv command groups subcommands for interacting with Vault s key value secret engine kv The kv command groups subcommands for interacting with Vault s key value secrets engine both KV version 1 vault docs secrets kv kv v1 and KV Version 2 vault docs secrets kv kv v2 Syntax Option flags for a given subcommand are provided after the subcommand but before the arguments The path to where the secrets engine is mounted can be indicated with the mount flag such as vault kv get mount secret creds The deprecated path like syntax can also be used e g vault kv get secret creds but this should be avoided for KV v2 because it is not actually the full API path to the secret secret data foo and may cause confusion A flag provided but not defined mount error means you are using an older version of Vault before the mount flag syntax was introduced Upgrade to at least Vault 1 11 or refer to previous versions of the docs which only use the old syntax to refer to the mount path Mount flag syntax KV All kv commands can alternatively refer to the path to the KV secrets engine using a flag based syntax like vault kv get mount secret password instead of vault kv get secret password The mount flag syntax was created to mitigate confusion caused by the fact that for KV v2 secrets their full path used in policies and raw API calls actually contains a nested data element e g secret data password which can be easily overlooked when using the above KV v1 like syntax secret password To avoid this confusion all KV specific docs pages will use the mount flag Exit codes The Vault CLI aims to be consistent and well behaved unless documented otherwise Local errors such as incorrect flags failed validations or wrong numbers of arguments return an exit code of 1 Any remote errors such as API failures bad TLS or incorrect API parameters return an exit status of 2 Some commands override this default where it makes sense These commands document this anomaly Examples Create or update the key named creds in the KV version 2 enabled at secret with the value passcode my long passcode shell session vault kv put mount secret creds passcode my long passcode Secret Path secret data creds Metadata Key Value created time 2022 06 15T20 14 17 107852Z custom metadata nil deletion time n a destroyed false version 1 Read this value back shell session vault kv get mount secret creds Secret Path secret data creds Metadata Key Value created time 2022 06 15T20 14 17 107852Z custom metadata nil deletion time n a destroyed false version 1 Data Key Value passcode my long passcode Get metadata for the key named creds shell session vault kv metadata get mount secret creds Metadata Path secret metadata creds Metadata Key Value cas required false created time 2022 06 15T20 14 17 107852Z current version 1 custom metadata nil delete version after 0s max versions 0 oldest version 0 updated time 2022 06 15T20 14 17 107852Z Version 1 Key Value created time 2022 06 15T20 14 17 107852Z deletion time n a destroyed false Get a specific version of the key named creds shell session vault kv get mount secret version 1 creds Secret Path secret data creds Metadata Key Value created time 2022 06 15T20 14 17 107852Z custom metadata nil deletion time n a destroyed false version 1 Data Key Value passcode my long passcode Usage text Usage vault kv subcommand options args Subcommands delete Deletes versions in the KV store destroy Permanently removes one or more versions in the KV 
---
layout: docs
page_title: auth tune - Command
description: |-
The "auth tune" command tunes the configuration options for the auth method at
the given PATH.
---
# auth tune
The `auth tune` command tunes the configuration options for the auth method at
the given PATH.
<Note>
The argument corresponds to the **path** where the auth method is
enabled, not the auth **type**.
</Note>
## Examples
Before tuning the auth method configuration, view the current configuration of the
auth method enabled at `github/`.
```shell-session
$ vault read sys/auth/github/tune
Key Value
--- -----
default_lease_ttl 768h
description n/a
force_no_cache false
max_lease_ttl 768h
token_type default-service
```
The default lease for the auth method enabled at `github/` is currently set to
768 hours. Tune this value to 72 hours.
```shell-session
$ vault auth tune -default-lease-ttl=72h github/
Success! Tuned the auth method at: github/
```
Verify the updated configuration.
<CodeBlockConfig highlight="1,4">
```shell-session
$ vault read sys/auth/github/tune
Key Value
--- -----
default_lease_ttl 72h
description n/a
force_no_cache false
max_lease_ttl 768h
token_type default-service
```
</CodeBlockConfig>
To restore the system default, set the value to `-1`.
```shell-session
$ vault auth tune -default-lease-ttl=-1 github/
Success! Tuned the auth method at: github/
```
Verify the updated configuration.
<CodeBlockConfig highlight="1,4">
```shell-session
$ vault read sys/auth/github/tune
Key Value
--- -----
default_lease_ttl 768h
description n/a
force_no_cache false
max_lease_ttl 768h
token_type default-service
```
</CodeBlockConfig>
You can specify multiple audit non-hmac request keys.
```shell-session
$ vault auth tune -audit-non-hmac-request-keys=value1 -audit-non-hmac-request-keys=value2 github/
Success! Tuned the auth method at: github/
```
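The following sketch is not part of the original examples; the description text and flag combination are illustrative. It shows how several of the tuning flags documented under Usage can be combined in a single call against the `github/` mount used above:
```shell-session
$ vault auth tune \
    -description="GitHub auth for the engineering org" \
    -token-type=batch \
    -listing-visibility=unauth \
    github/
Success! Tuned the auth method at: github/
```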
### Enable user lockout
The user lockout feature is only supported for the
[userpass](/vault/docs/auth/userpass), [ldap](/vault/docs/auth/ldap), and
[approle](/vault/docs/auth/approle) auth methods.
Tune the `userpass/` auth method to lock out the user after 10 failed login
attempts within 10 minutes.
```shell-session
$ vault auth tune -user-lockout-threshold=10 -user-lockout-duration=10m userpass/
Success! Tuned the auth method at: userpass/
```
View the current configuration of the auth method enabled at `userpass/`.
<CodeBlockConfig highlight="1,11-13">
```shell-session
$ vault read sys/auth/userpass/tune
Key Value
--- -----
default_lease_ttl 768h
description n/a
force_no_cache false
max_lease_ttl 768h
token_type default-service
user_lockout_counter_reset_duration 0s
user_lockout_disable false
user_lockout_duration 10m
user_lockout_threshold 10
```
</CodeBlockConfig>
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
- `-allowed-response-headers` `(string: "")` - response header values that the auth
method will be allowed to set.
- `-audit-non-hmac-request-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the request data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-audit-non-hmac-response-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the response data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-default-lease-ttl` `(duration: "")` - The default lease TTL for this auth
method. If unspecified, this defaults to the Vault server's globally
configured default lease TTL, or a previously configured value for the auth
method.
- `-description` `(string: "")` - Specifies the description of the auth method.
This overrides the current stored value, if any.
- `-listing-visibility` `(string: "")` - The flag to toggle whether to show the
mount in the UI-specific listing endpoint. Valid values are `"unauth"` or `"hidden"`.
Passing empty string leaves the current setting unchanged.
- `-max-lease-ttl` `(duration: "")` - The maximum lease TTL for this auth
method. If unspecified, this defaults to the Vault server's globally
configured [maximum lease TTL](/vault/docs/configuration#max_lease_ttl), or a
previously configured value for the auth method. This value is allowed to
override the server's global max TTL; it can be longer or shorter.
- `-passthrough-request-headers` `(string: "")` - request header values that will
be sent to the auth method. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-token-type` `(string: "")` - Specifies the type of tokens that should be
returned by the auth method.
- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to
this mount with trailing slashes will have those slashes trimmed.
Necessary for some standards based APIs handled by Vault.
- `-plugin-version` `(string: "")` - Configures the semantic version of the plugin
to use. The new version will not start running until the mount is
[reloaded](/vault/docs/commands/plugin/reload).
- `-user-lockout-threshold` `(string: "")` - Specifies the number of failed login attempts
after which the user is locked out. User lockout feature was added in Vault 1.13.
- `-user-lockout-duration` `(duration: "")` - Specifies the duration for which a user will be locked out.
User lockout feature was added in Vault 1.13.
- `-user-lockout-counter-reset-duration` `(duration: "")` - Specifies the duration after which the lockout
counter is reset with no failed login attempts. User lockout feature was added in Vault 1.13.
- `-user-lockout-disable` `(bool: false)` - Disables the user lockout feature if set to true. User lockout feature was added in Vault 1.13.
---
layout: docs
page_title: secrets enable - Command
description: |-
The "secrets enable" command enables a secrets engine at a given path. If an
secrets engine already exists at the given path, an error is returned. After
the secrets engine is enabled, it usually needs configuration. The
configuration varies by secrets engine.
---
# secrets enable
The `secrets enable` command enables a secrets engine at a given path. If a
secrets engine already exists at the given path, an error is returned. After the
secrets engine is enabled, it usually needs configuration. The configuration
varies by secrets engine.
By default, secrets engines are enabled at the path corresponding to their TYPE,
but users can customize the path using the `-path` option.
Some secrets engines persist data, some act as data pass-through, and some
generate dynamic credentials. The secrets engine will likely require
configuration after it is mounted. For details on the specific configuration
options, please see the [secrets engine
documentation](/vault/docs/secrets).
## Examples
Enable the AWS secrets engine at "aws/":
```shell-session
$ vault secrets enable aws
Success! Enabled the aws secrets engine at: aws/
```
Enable the SSH secrets engine at ssh-prod/:
```shell-session
$ vault secrets enable -path=ssh-prod ssh
```
Enable the database secrets engine with an explicit maximum TTL of 30m:
```shell-session
$ vault secrets enable -max-lease-ttl=30m database
```
Enable a custom plugin (after it is registered in the plugin registry):
```shell-session
$ vault secrets enable -path=my-secrets my-plugin
```
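The following is an illustrative sketch, not from the original examples; the path and description are hypothetical. It enables a local-only KV mount using the `-local`, `-path`, and `-description` flags documented under Usage:
```shell-session
$ vault secrets enable -local -path=scratch -description="Local-only scratch KV" kv
Success! Enabled the kv secrets engine at: scratch/
```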
For more information on the specific configuration options and paths, please see
the [secrets engine](/vault/docs/secrets) documentation.
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
- `-audit-non-hmac-request-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the request data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
An example of this is provided in the [tune section](/vault/docs/commands/secrets/tune).
- `-audit-non-hmac-response-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the response data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-default-lease-ttl` `(duration: "")` - The default lease TTL for this secrets
engine. If unspecified, this defaults to the Vault server's globally
configured default lease TTL.
- `-description` `(string: "")` - Human-friendly description for the purpose of
this engine.
- `-force-no-cache` `(bool: false)` - Force the secrets engine to disable
caching. If unspecified, this defaults to the Vault server's globally
configured cache settings. This does not affect caching of the underlying
encrypted data storage.
- `-local` `(bool: false)` - Mark the secrets engine as local-only. Local
engines are not replicated or removed by replication.
- `-max-lease-ttl` `(duration: "")` The maximum lease TTL for this secrets
engine. If unspecified, this defaults to the Vault server's globally
configured maximum lease TTL.
- `-path` `(string: "")` Place where the secrets engine will be accessible. This
must be unique across all secrets engines. This defaults to the "type" of the
secrets engine.
!> **Case-sensitive:** The path where you enable secrets engines is case-sensitive. For
example, the KV secrets engine enabled at `kv/` and `KV/` are treated as two
distinct instances of the KV secrets engine.
- `-passthrough-request-headers` `(string: "")` - request header values that will
be sent to the secrets engine. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-allowed-response-headers` `(string: "")` - response header values that the secrets
engine will be allowed to set. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-allowed-managed-keys` `(string: "")` - Managed key name(s) that the mount
in question is allowed to access. Note that multiple keys may be specified
by providing this option multiple times, each time with 1 key.
- `-delegated-auth-accessors` `(string: "")` - An authorized accessor the auth
backend can delegate authentication to. To allow multiple accessors, provide
the `delegated-auth-accessors` multiple times, each time with 1 accessor.
- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to
this mount with trailing slashes will have those slashes trimmed.
Necessary for some standards based APIs handled by Vault.
- `-plugin-version` `(string: "")` - Configures the semantic version of the plugin
to use. If unspecified, implies the built-in or any matching unversioned plugin
that may have been registered.
---
layout: docs
page_title: secrets tune - Command
description: |-
The "secrets tune" command tunes the configuration options for the secrets engine at the given PATH.
---
# secrets tune
The `secrets tune` command tunes the configuration options for the secrets
engine at the given PATH. The argument corresponds to the PATH where the secrets
engine is enabled, not the type.
## Examples
Before tuning the secret mount, view the current configuration of the
mount enabled at "pki/":
```shell-session
$ vault read sys/mounts/pki/tune
Key Value
--- -----
default_lease_ttl 12h
description Example PKI mount
force_no_cache false
max_lease_ttl 24h
```
Tune the default lease TTL, and exclude `common_name` and `serial_number` from being HMAC'd in the audit log for the PKI secrets engine:
```shell-session
$ vault secrets tune -default-lease-ttl=18h -audit-non-hmac-request-keys=common_name -audit-non-hmac-response-keys=serial_number pki/
Success! Tuned the secrets engine at: pki/
$ vault read sys/mounts/pki/tune
Key Value
--- -----
audit_non_hmac_request_keys [common_name]
audit_non_hmac_response_keys [serial_number]
default_lease_ttl 18h
description Example PKI mount
force_no_cache false
max_lease_ttl 24h
```
Specify multiple audit non-hmac request keys:
```shell-session
$ vault secrets tune -audit-non-hmac-request-keys=common_name -audit-non-hmac-request-keys=ttl pki/
```
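As a further sketch that is not part of the original examples (the TTL value is arbitrary), tune the maximum lease TTL using a duration string as described under Usage:
```shell-session
$ vault secrets tune -max-lease-ttl=87600h pki/
Success! Tuned the secrets engine at: pki/
```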
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
- `-allowed-response-headers` `(string: "")` - response header values that the
secrets engine will be allowed to set. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-audit-non-hmac-request-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the request data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-audit-non-hmac-response-keys` `(string: "")` - Key that will not be HMAC'd
by audit devices in the response data object. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-default-lease-ttl` `(duration: "")` - The default lease TTL for this secrets
engine. If unspecified, this defaults to the Vault server's globally
configured default lease TTL, or a previously configured value for the secrets
engine. Uses [duration format strings](/vault/docs/concepts/duration-format).
- `-description` `(string: "")` - Specifies the description of the mount.
This overrides the current stored value, if any.
- `-listing-visibility` `(string: "")` - The flag to toggle whether to show the
mount in the UI-specific listing endpoint. Valid values are `"unauth"` or `"hidden"`.
Passing empty string leaves the current setting unchanged.
- `-max-lease-ttl` `(duration: "")` - The maximum lease TTL for this secrets
engine. If unspecified, this defaults to the Vault server's globally
configured [maximum lease TTL](/vault/docs/configuration#max_lease_ttl), or a
previously configured value for the secrets engine. This value is allowed to
override the server's global max TTL; it can be longer or shorter.
Uses [duration format strings](/vault/docs/concepts/duration-format).
- `-passthrough-request-headers` `(string: "")` - request header values that will
be sent to the secrets engine. Note that multiple keys may be
specified by providing this option multiple times, each time with 1 key.
- `-allowed-managed-keys` `(string: "")` - Managed key name(s) that the mount
in question is allowed to access. Note that multiple keys may be specified
by providing this option multiple times, each time with 1 key.
- `-delegated-auth-accessors` `(string: "")` - An authorized accessor the auth
backend can delegate authentication to. To allow multiple accessors, provide
the `delegated-auth-accessors` multiple times, each time with 1 accessor.
- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to
this mount with trailing slashes will have those slashes trimmed.
Necessary for some standards based APIs handled by Vault.
- `-plugin-version` `(string: "")` - Configures the semantic version of the plugin
to use. The new version will not start running until the mount is
[reloaded](/vault/docs/commands/plugin/reload).
---
layout: docs
page_title: "agent - Vault CLI"
description: >-
Use vault agent to start an instance of Vault Agent.
---
# `vault agent`
Start an instance of Vault Agent.
<CodeBlockConfig hideClipboard>
```shell-session
$ vault agent -config <config_file>
$ vault agent [-help | -h]
```
</CodeBlockConfig>
## Description
`vault agent` starts an instance of Vault Agent, which automatically
authenticates and fetches secrets for client applications.
<Tip title="Related API endpoints">
**None**
</Tip>
## Command arguments
None.
## Command options
None.
## Command flags
<br />
@include 'cli/agent/flags/config.mdx'
<br /><hr /><br />
@include 'cli/agent/flags/exit-after-auth.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-file.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-format.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-level.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-rotate-bytes.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-rotate-duration.mdx'
<br /><hr /><br />
@include 'cli/shared/flags/log-rotate-max-files.mdx'
## Standard flags
<br />
@include 'cli/standard-settings/all-standard-flags-but-format.mdx'
## Examples
Start Vault Agent with a single configuration file:
```shell-session
$ vault agent -config=/etc/vault/agent/config.hcl
```
Start Vault Agent with two discrete configuration files:
```shell-session
$ vault agent \
-config=/etc/vault/agent/base-config.hcl \
-config=/etc/vault/agent/auto-auth-config.hcl
```
Start Vault Agent with a set of configuration files under the `/etc/vault/agent/config-files/` directory:
```shell-session
$ vault agent -config=/etc/vault/agent/config-files/
```
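Hypothetical example, not from the original docs: authenticate, render templates once, and exit instead of running as a daemon, using the `-exit-after-auth` and `-log-level` flags listed above:
```shell-session
$ vault agent \
    -config=/etc/vault/agent/config.hcl \
    -log-level=debug \
    -exit-after-auth
```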
---
layout: docs
page_title: "agent generate-config - Vault CLI"
description: >-
Use vault agent generate-config to generate a basic Vault Agent configuration
file from secrets plugin data.
---
# `agent generate-config`
Use secrets plugin data to generate a basic
[configuration file](/vault/docs/agent-and-proxy/agent#configuration-file-options)
for running Vault Agent in [process supervisor mode](/vault/docs/agent-and-proxy/agent/process-supervisor).
<CodeBlockConfig hideClipboard>
```shell-session
$ vault agent generate-config -type <config_file_type> [options] [<file_path>]
```
</CodeBlockConfig>
## Description
`agent generate-config` composes configuration details for Vault Agent
based on the configuration `type` and writes a local configuration file for
running Vault agent in process supervisor mode.
<Tip title="Related API endpoints">
- None
</Tip>
### Limitations and warnings
Limitations:
- Plugin support limited to KV plugins.
- Configuration type limited to environment variable templates.
<Warning title="Not appropriate for production">
The file created by `agent generate-config` includes an `auto_auth` section
configured to use the `token_file` authentication method.
Token files are convenient for local testing, but **are not** appropriate for
production use. Refer to the full list of Vault Agent
[auto-auth methods](/vault/docs/agent-and-proxy/autoauth/methods) for available
production-ready authentication methods.
</Warning>
## Arguments
<br />
@include 'cli/agent/args/file_path.mdx'
## Options
None.
## Command flags
<br />
@include 'cli/agent/flags/exec.mdx'
<br /><hr /><br />
@include 'cli/agent/flags/path.mdx'
<br /><hr /><br />
@include 'cli/agent/flags/type.mdx'
## Global flags
<br />
@include 'cli/standard-settings/all-standard-flags-but-format.mdx'
## Examples
Generate an environment variable template configuration for the `foo` secrets
plugin:
```shell-session
$ vault agent generate-config \
-type="env-template" \
-exec="./my-app arg1 arg2" \
-path="secret/foo"
Command output
```
Generate an environment variable template configuration for more than one
secrets plugin:
```shell-session
$ vault agent generate-config -type="env-template" \
-exec="./my-app arg1 arg2" \
-path="secret/foo" \
-path="secret/bar" \
-path="secret/my-app/*"
```
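As a sketch (the output file name is hypothetical), pass an explicit output path as the positional `file_path` argument and then point Vault Agent at the generated file:
```shell-session
$ vault agent generate-config \
    -type="env-template" \
    -exec="./my-app arg1 arg2" \
    -path="secret/foo" \
    my-agent-config.hcl

$ vault agent -config=my-agent-config.hcl
```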
---
layout: docs
page_title: "audit enable - Vault CLI"
description: >-
Create and enable a new audit device to capture log data from Vault.
---
# `audit enable`
Enable a new audit device.
<CodeBlockConfig hideClipboard>
```shell-session
$ vault audit enable [flags] <device_type> [options] [<config_argument=value>...]
$ vault audit enable [-help | -h]
```
</CodeBlockConfig>
## Description
`audit enable` creates and enables an audit device at the given path or returns
an error if an audit device already exists at the given path. The device
configuration parameters depend on the audit device type.
<Tip title="Related API endpoints">
EnableAuditDevice - [`POST:/sys/audit/{mount-path}`](/vault/api-docs/system/audit#enable-audit-device)
</Tip>
## Command arguments
@include 'cli/audit/args/device_type.mdx'
Each audit device type also has a set of configuration arguments:
<Tabs>
<Tab heading="File">
<CodeBlockConfig hideClipboard>
```shell-session
$ vault audit enable [flags] file [options] \
file_path=<path/to/log/file> \
[mode=<file_permissions>]
```
</CodeBlockConfig>
<br />
@include 'cli/audit/args/file/file_path.mdx'
<br /><hr /><br />
@include 'cli/audit/args/file/mode.mdx'
</Tab>
<Tab heading="Socket">
<CodeBlockConfig hideClipboard>
```shell-session
$ vault audit enable [flags] socket [options] \
[address=<server_address>] \
[socket_type=<protocol>] \
[write_timeout=<wait_time>]
```
</CodeBlockConfig>
<br />
@include 'cli/audit/args/socket/address.mdx'
<br /><hr /><br />
@include 'cli/audit/args/socket/socket_type.mdx'
<br /><hr /><br />
@include 'cli/audit/args/socket/write_timeout.mdx'
</Tab>
<Tab heading="Syslog">
<CodeBlockConfig hideClipboard>
```shell-session
$ vault audit enable [flags] syslog [options] \
[facility=<process_entry_source>] \
[tag=<program_entry_source>]
```
</CodeBlockConfig>
<br />
@include 'cli/audit/args/syslog/facility.mdx'
<br /><hr /><br />
@include 'cli/audit/args/syslog/tag.mdx'
</Tab>
</Tabs>
## Command options
<br />
@include 'cli/audit/options/elide_list_responses.mdx'
<br /><hr /><br />
@include 'cli/audit/options/exclude.mdx'
<br /><hr /><br />
@include 'cli/audit/options/fallback.mdx'
<br /><hr /><br />
@include 'cli/audit/options/filter.mdx'
<br /><hr /><br />
@include 'cli/audit/options/format.mdx'
<br /><hr /><br />
@include 'cli/audit/options/hmac_accessor.mdx'
<br /><hr /><br />
@include 'cli/audit/options/log_raw.mdx'
<br /><hr /><br />
@include 'cli/audit/options/prefix.mdx'
## Command flags
<br />
@include 'cli/audit/flags/description.mdx'
<br /><hr /><br />
@include 'cli/audit/flags/local.mdx'
<br /><hr /><br />
@include 'cli/audit/flags/path.mdx'
## Standard flags
<br />
@include 'cli/standard-settings/all-standard-flags-but-format.mdx'
## Examples
Enable a `file` type audit device at the default path, `file/`:
```shell-session
$ vault audit enable file file_path=/tmp/my-file.txt
Success! Enabled the file audit device at: file/
```
Enable a `file` type audit device at the path, `audit/file`:
```shell-session
$ vault audit enable -path=audit/file file file_path=/tmp/my-file.txt
Success! Enabled the file audit device at: audit/file/
```
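Illustrative sketch, not from the original examples (the `tag` and `facility` values are placeholders): enable a `syslog` type audit device using the configuration arguments from the Syslog tab above:
```shell-session
$ vault audit enable syslog tag="vault" facility="AUTH"
Success! Enabled the syslog audit device at: syslog/
```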
| vault | layout docs page title audit enable Vault CLI description Create and enable a new audit device to capture log data from Vault audit enable Enable a new audit device CodeBlockConfig hideClipboard shell session vault audit enable flags device type options config argument value vault audit enable help h CodeBlockConfig Description audit enable creates and enables an audit device at the given path or returns an error if an audit device already exists at the given path The device configuration parameters depend on the audit device type Tip title Related API endpoints EnableAuditDevice POST sys audit mount path vault api docs system audit enable audit device Tip Command arguments include cli audit args device type mdx Each audit device type also has a set of configuration arguments Tabs Tab heading File CodeBlockConfig hideClipboard shell session vault audit enable flags file options file path path to log file mode file permissions CodeBlockConfig br include cli audit args file file path mdx br hr br include cli audit args file mode mdx Tab Tab heading Socket CodeBlockConfig hideClipboard shell session vault audit enable flags socket options address server address socket type protocol write timeout wait time CodeBlockConfig br include cli audit args socket address mdx br hr br include cli audit args socket socket type mdx br hr br include cli audit args socket write timeout mdx Tab Tab heading Syslog CodeBlockConfig hideClipboard shell session vault audit enable flags syslog options facility process entry source tag program entry source CodeBlockConfig br include cli audit args syslog facility mdx br hr br include cli audit args syslog tag mdx Tab Tabs Command options br include cli audit options elide list responses mdx br hr br include cli audit options exclude mdx br hr br include cli audit options fallback mdx br hr br include cli audit options filter mdx br hr br include cli audit options format mdx br hr br include cli audit options hmac accessor mdx br hr br include cli audit options log raw mdx br hr br include cli audit options prefix mdx Command flags br include cli audit flags description mdx br hr br include cli audit flags local mdx br hr br include cli audit flags path mdx Standard flags br include cli standard settings all standard flags but format mdx Examples Enable a file type audit device at the default path file shell session vault audit enable file file path tmp my file txt Success Enabled the file audit device at file Enable a file type audit device at the path audit file shell session vault audit enable path audit file file file path tmp my file txt Success Enabled the file audit device at audit file |
---
layout: docs
page_title: operator migrate - Command
description: >-
The "operator migrate" command copies data between storage backends to
facilitate
migrating Vault between configurations. It operates directly at the storage
level, with no decryption involved.
---
# operator migrate
The `operator migrate` command copies data between storage backends to facilitate
migrating Vault between configurations. It operates directly at the storage
level, with no decryption involved. Keys in the destination storage backend will
be overwritten, and the destination should _not_ be initialized prior to the
migrate operation. The source data is not modified, with the exception of a small lock
key added during migration.
This is intended to be an offline operation to ensure data consistency, and Vault
will not allow starting the server if a migration is in progress.
## Examples
Migrate all keys:
```shell-session
$ vault operator migrate -config migrate.hcl
2018-09-20T14:23:23.656-0700 [INFO ] copied key: data/core/seal-config
2018-09-20T14:23:23.657-0700 [INFO ] copied key: data/core/wrapping/jwtkey
2018-09-20T14:23:23.658-0700 [INFO ] copied key: data/logical/fd1bed89-ffc4-d631-00dd-0696c9f930c6/31c8e6d9-2a17-d98f-bdf1-aa868afa1291/archive/metadata
2018-09-20T14:23:23.660-0700 [INFO ] copied key: data/logical/fd1bed89-ffc4-d631-00dd-0696c9f930c6/31c8e6d9-2a17-d98f-bdf1-aa868afa1291/metadata/5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P
...
```
Migration is done in a consistent, sorted order. If the migration is halted or
exits before completion (e.g. due to a connection error with a storage backend),
it may be resumed from an arbitrary key prefix:
```shell-session
$ vault operator migrate -config migrate.hcl -start "data/logical/fd"
```
## Configuration
The `operator migrate` command uses a dedicated configuration file to specify the source
and destination storage backends. The format of the storage stanzas is identical
to that used to [configure Vault](/vault/docs/configuration/storage),
with the only difference being that two stanzas are required: `storage_source` and `storage_destination`.
```hcl
storage_source "mysql" {
username = "user1234"
password = "secret123!"
database = "vault"
}
storage_destination "consul" {
address = "127.0.0.1:8500"
path = "vault/"
}
```
## Migrating to integrated raft storage
### Example configuration
The below configuration will migrate away from Consul storage to integrated
raft storage. The raft data will be stored on the local filesystem in the
defined `path`. `node_id` can optionally be set to identify this node.
[cluster_addr](/vault/docs/configuration#cluster_addr) must be set to the
cluster hostname of this node. For more configuration options see the [raft
storage configuration documentation](/vault/docs/configuration/storage/raft).
If the original configuration uses "raft" for `ha_storage`, a different
`path` needs to be declared for the path in `storage_destination` and the new
configuration for the node post-migration.
```hcl
storage_source "consul" {
address = "127.0.0.1:8500"
path = "vault"
}
storage_destination "raft" {
path = "/path/to/raft/data"
node_id = "raft_node_1"
}
cluster_addr = "http://127.0.0.1:8201"
```
### Run the migration
Vault will need to be offline during the migration process. First, stop Vault.
Then, run the migration on the server you wish to become the new Vault node.
```shell-session
$ vault operator migrate -config migrate.hcl
2018-09-20T14:23:23.656-0700 [INFO ] copied key: data/core/seal-config
2018-09-20T14:23:23.657-0700 [INFO ] copied key: data/core/wrapping/jwtkey
2018-09-20T14:23:23.658-0700 [INFO ] copied key: data/logical/fd1bed89-ffc4-d631-00dd-0696c9f930c6/31c8e6d9-2a17-d98f-bdf1-aa868afa1291/archive/metadata
2018-09-20T14:23:23.660-0700 [INFO ] copied key: data/logical/fd1bed89-ffc4-d631-00dd-0696c9f930c6/31c8e6d9-2a17-d98f-bdf1-aa868afa1291/metadata/5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P
...
```
After migration has completed, the data is stored on the local file system. To
use the new storage backend with Vault, update Vault's configuration file as
described in the [raft storage configuration
documentation](/vault/docs/configuration/storage/raft). Then start and unseal the
Vault server.
### Join additional nodes
After migration the raft cluster will only have a single node. Additional peers
should be joined to this node.
If the cluster was previously HA-enabled using "raft" as the `ha_storage`, the
nodes will have to re-join to the migrated node before unsealing.
## Usage
The following flags are available for the `operator migrate` command.
- `-config` `(string: <required>)` - Path to the migration configuration file.
- `-start` `(string: "")` - Migration starting key prefix. Only keys at or after this value will be copied.
- `-reset` - Reset the migration lock. A lock file is added during migration to prevent
starting the Vault server or another migration. The `-reset` option can be used to
remove a stale lock file if present.
- `-max-parallel` `(int: 10)` - Allows the operator to specify the maximum number of lightweight threads (goroutines)
which may be used to migrate data in parallel. This can potentially speed up migration on slower backends at
the cost of more resources (e.g. CPU, memory). Permitted values range from `1` (synchronous) to the maximum value
for an `integer`. If not supplied, a default of `10` parallel goroutines will be used.
~> Note: The maximum number of concurrent requests handled by a storage backend is ultimately governed by the
storage backend configuration setting, which enforces a maximum number of concurrent requests (`max_parallel`).
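For illustration only (not from the original docs; the parallelism value is arbitrary), a halted migration could be resumed with increased parallelism by combining the flags above:
```shell-session
$ vault operator migrate -config migrate.hcl -start "data/logical/fd" -max-parallel=20
```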
---
layout: docs
page_title: operator import - Command
description: >-
The "operator import" command imports secrets from external systems
into Vault.
---
# operator import
@include 'alerts/enterprise-only.mdx'
@include 'alerts/alpha.mdx'
The `operator import` command imports secrets from external systems into Vault.
Secrets with the same name at the same storage path will be overwritten upon import.
<Note title="Imports can be long-running processes">
You can write import plans that read from as many sources as you want. The
amount of data migrated from each source depends on the filters applied and the
dataset available. Be mindful of the time needed to read from each source,
apply any filters, and store the data in Vault.
</Note>
## Examples
Read the config file `import.hcl` to generate a new import plan:
```shell-session
$ vault operator import -config import.hcl plan
```
Output:
<CodeBlockConfig hideClipboard>
-----------
Import plan
-----------
The following namespaces are missing:
* ns-1/
The following mounts are missing:
* ns-1/mount-1
Secrets to be imported to the destination "my-dest-1":
* secret-1
* secret-2
</CodeBlockConfig>
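As an illustrative sketch using the flags and arguments documented under Usage below (output omitted), apply the plan and automatically create any missing namespaces and KV mounts:
```shell-session
$ vault operator import -config import.hcl -auto-create apply
```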
## Configuration
The `operator import` command uses a dedicated configuration file to specify the source,
destination, and mapping rules. To learn more about these types and secrets importing in
general, refer to the [Secrets Import documentation](/vault/docs/import).
```hcl
source_gcp {
name = "my-gcp-source-1"
credentials = "@/path/to/service-account-key.json"
}
destination_vault {
name = "my-dest-1"
address = "http://127.0.0.1:8200/"
token = "root"
namespace = "ns-1"
mount = "mount-1"
}
mapping_passthrough {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
}
```
## Usage
### Arguments
- `plan` - Executes a read-only operation to let operators preview the secrets to import based on the configuration file.
- `apply` - Executes the import operations to read the specified secrets from the source and write them into Vault.
Apply first executes a plan, then asks the user to approve the results before performing the actual import.
### Flags
The `operator import` command accepts the following flags:
- `-config` `(string: "import.hcl")` - Path to the import configuration HCL file. The default path is `import.hcl`.
- `-auto-approve` `(bool: <false>)` - Automatically responds "yes" to all user-input prompts for the `apply` command.
- `-auto-create` `(bool: <false>)` - Automatically creates any missing namespaces and KVv2 mounts when
running the `apply` command.
- `-log-level` ((#\_log_level)) `(string: "info")` - Log verbosity level. Supported values (in
  order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. You can also set log-level with the `VAULT_LOG_LEVEL` environment variable.
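As a hedged illustration of how these flags combine, the following `apply` run reuses the `import.hcl` plan from the earlier example, creates any missing namespaces and KVv2 mounts, and skips the interactive approval prompt:
```shell-session
$ vault operator import -config import.hcl -auto-create -auto-approve apply
```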
---
layout: docs
page_title: operator init - Command
description: |-
The "operator init" command initializes a Vault server. Initialization is the
process by which Vault's storage backend is prepared to receive data. Since
Vault servers share the same storage backend in HA mode, you only need to
initialize one Vault to initialize the storage backend.
---
# operator init
The `operator init` command initializes a Vault server. Initialization is the
process by which Vault's storage backend is prepared to receive data. Since
Vault servers share the same storage backend in HA mode, you only need to
initialize one Vault to initialize the storage backend.
This command cannot be run against an already-initialized Vault cluster.
During initialization, Vault generates a root key, which is stored in the storage backend alongside all other Vault data. The root key itself is encrypted and requires an _unseal key_ to decrypt it.
The default Vault configuration uses [Shamir's Secret Sharing](https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing) to split the root key into a configured number of shards (referred to as key shares, or unseal keys). A certain threshold of shards is required to reconstruct the root key, which is then used to decrypt Vault's encryption key.
Refer to the [Seal/Unseal](/vault/docs/concepts/seal#seal-unseal) documentation for further details.
## Examples
Start initialization with the default options:
```shell-session
$ vault operator init
```
Initialize, but encrypt the unseal keys with pgp keys:
```shell-session
$ vault operator init \
-key-shares=3 \
-key-threshold=2 \
-pgp-keys="keybase:hashicorp,keybase:jefferai,keybase:sethvargo"
```
Initialize Auto Unseal with a non-default threshold and number of recovery keys, and encrypt the recovery keys with pgp keys:
```shell-session
$ vault operator init \
-recovery-shares=7 \
-recovery-threshold=4 \
-recovery-pgp-keys="keybase:jeff,keybase:chris,keybase:brian,keybase:calvin,keybase:matthew,keybase:vishal,keybase:nick"
```
Encrypt the initial root token using a pgp key:
```shell-session
$ vault operator init -root-token-pgp-key="keybase:hashicorp"
```
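The encrypted root token is printed as a base64-encoded PGP message. A minimal sketch of recovering it, assuming the matching private key is available in a local GPG keyring (the token value below is a placeholder):
```shell-session
$ echo "<base64-encrypted-root-token>" | base64 --decode | gpg --decrypt
```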
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-format` `(string: "")` - Print the output in the given format. Valid formats
are "table", "json", or "yaml". The default is table. This can also be
specified via the `VAULT_FORMAT` environment variable.
### Common options
- `-key-shares` `(int: 5)` - Number of key shares to split the generated root
key into. This is the number of "unseal keys" to generate. This is aliased as
`-n`.
- `-key-threshold` `(int: 3)` - Number of key shares required to reconstruct the
root key. This must be less than or equal to -key-shares. This is aliased as
`-t`.
- `-pgp-keys` `(string: "...")` - Comma-separated list of paths to files on disk
containing public PGP keys OR a comma-separated list of Keybase usernames
using the format `keybase:<username>`. When supplied, the generated unseal
keys will be encrypted and base64-encoded in the order specified in this list.
  The number of entries must match -key-shares, unless -stored-shares is used.
- `-root-token-pgp-key` `(string: "")` - Path to a file on disk containing a
binary or base64-encoded public PGP key. This can also be specified as a
Keybase username using the format `keybase:<username>`. When supplied, the
generated root token will be encrypted and base64-encoded with the given
public key.
- `-status` `(bool: false)` - Print the current initialization status. An exit
code of 0 means the Vault is already initialized. An exit code of 1 means an
error occurred. An exit code of 2 means the Vault is not initialized.
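  Because `-status` reports through the exit code, it is convenient in provisioning scripts. A minimal sketch, assuming `VAULT_ADDR` points at the target server:

  ```shell-session
  $ vault operator init -status > /dev/null
  $ echo $?    # 0 = initialized, 1 = error, 2 = not initialized
  ```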
### Consul options
- `-consul-auto` `(bool: false)` - Perform automatic service discovery using
Consul in HA mode. When all nodes in a Vault HA cluster are registered with
Consul, enabling this option will trigger automatic service discovery based on
the provided -consul-service value. When Consul is Vault's HA backend, this
functionality is automatically enabled. Ensure the proper Consul environment
variables are set (CONSUL_HTTP_ADDR, etc). When only one Vault server is
discovered, it will be initialized automatically. When more than one Vault
server is discovered, they will each be output for selection. The default is
false.
- `-consul-service` `(string: "vault")` - Name of the service in Consul under
which the Vault servers are registered.
### HSM and KMS options
- `-recovery-pgp-keys` `(string: "...")` - Behaves like `-pgp-keys`, but for the
recovery key shares. This is only available with [Auto Unseal](/vault/docs/concepts/seal#auto-unseal) seals (HSM, KMS and Transit seals).
- `-recovery-shares` `(int: 5)` - Number of key shares to split the recovery key
into. This is only available with [Auto Unseal](/vault/docs/concepts/seal#auto-unseal) seals (HSM, KMS and Transit seals).
- `-recovery-threshold` `(int: 3)` - Number of key shares required to
reconstruct the recovery key. This is only available with [Auto Unseal](/vault/docs/concepts/seal#auto-unseal) seals (HSM, KMS and Transit seals).
- `-stored-shares` `(int: 0)` - Number of unseal keys to store on an HSM. This
must be equal to `-key-shares`.
-> **Recovery keys:** Refer to the
[Seal/Unseal](/vault/docs/concepts/seal#recovery-key) documentation to learn more
about recovery keys.
---
layout: docs
page_title: operator raft - Command
description: >-
The "operator raft" command is used to interact with the integrated Raft storage backend.
---
# operator raft
This command groups subcommands for operators to manage the Integrated Storage Raft backend.
```text
Usage: vault operator raft <subcommand> [options] [args]
This command groups subcommands for operators interacting with the Vault
integrated Raft storage backend. Most users will not need to interact with these
commands. Here are a few examples of the Raft operator commands:
Subcommands:
join Joins a node to the Raft cluster
list-peers Returns the Raft peer set
remove-peer Removes a node from the Raft cluster
snapshot Restores and saves snapshots from the Raft cluster
```
## join
This command is used to join a new node as a peer to the Raft cluster. In order
to join, there must be at least one existing member of the cluster. If the Shamir
seal is in use, unseal keys must be supplied before or after the
join process, depending on whether raft is used exclusively for HA.
If raft is used for `storage`, the node must be joined before unsealing and the
`leader-api-addr` argument must be provided. If raft is used for `ha_storage`,
the node must be first unsealed before joining and the `leader-api-addr` must
_not_ be provided.
```text
Usage: vault operator raft join [options] <leader-api-addr>
Join the current node as a peer to the Raft cluster by providing the address
of the Raft leader node.
$ vault operator raft join "http://127.0.0.2:8200"
```
The `join` command also allows operators to specify cloud auto-join configuration
instead of a static IP address or hostname. When provided, Vault will attempt to
automatically discover and resolve potential leader addresses based on the provided
auto-join configuration.
Vault uses go-discover to support the auto-join functionality. Please see the
go-discover
[README](https://github.com/hashicorp/go-discover/blob/master/README.md) for
details on the format.
By default, Vault will attempt to reach discovered peers using HTTPS and port 8200.
Operators may override these through the `--auto-join-scheme` and `--auto-join-port`
CLI flags respectively.
```text
Usage: vault operator raft join [options] <auto-join-configuration>
Join the current node as a peer to the Raft cluster by providing cloud auto-join
metadata configuration.
$ vault operator raft join "provider=aws region=eu-west-1 ..."
```
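As a hedged sketch, discovered peers that listen on a non-default port can be reached by overriding the defaults described above (the AWS tag values are placeholders):
```shell-session
$ vault operator raft join \
    --auto-join-scheme=https \
    --auto-join-port=8202 \
    "provider=aws tag_key=vault-cluster tag_value=primary"
```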
### Parameters
The following flags are available for the `operator raft join` command.
- `-leader-ca-cert` `(string: "")` - CA cert to communicate with Raft leader.
- `-leader-client-cert` `(string: "")` - Client cert to authenticate to Raft leader.
- `-leader-client-key` `(string: "")` - Client key to authenticate to Raft leader.
- `-non-voter` `(bool: false) (enterprise)` - This flag is used to make the
server not participate in the Raft quorum, and have it only receive the data
replication stream. This can be used to add read scalability to a cluster in
  cases where a high volume of reads to servers is needed. The default is false.
See [`retry_join_as_non_voter`](/vault/docs/configuration/storage/raft#retry_join_as_non_voter)
for the equivalent config option when using `retry_join` stanzas instead.
- `-retry` `(bool: false)` - Continuously retry joining the Raft cluster upon
failures. The default is false.
~> **Note:** Please be aware that the content (not the path to the file) of the certificate or key is expected for these parameters: `-leader-ca-cert`, `-leader-client-cert`, `-leader-client-key`.
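For example, one way to satisfy this is with shell command substitution, reading the PEM contents from files on disk (the paths and leader address are placeholders):
```shell-session
$ vault operator raft join \
    -leader-ca-cert="$(cat /path/to/ca.pem)" \
    -leader-client-cert="$(cat /path/to/client.pem)" \
    -leader-client-key="$(cat /path/to/client-key.pem)" \
    https://active-node.example.com:8200
```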
## list-peers
This command is used to list the full set of peers in the Raft cluster.
```text
Usage: vault operator raft list-peers
Provides the details of all the peers in the Raft cluster.
$ vault operator raft list-peers
```
### Example output
```json
{
...
"data": {
"config": {
"index": 62,
"servers": [
{
"address": "127.0.0.2:8201",
"leader": true,
"node_id": "node1",
"protocol_version": "3",
"voter": true
},
{
"address": "127.0.0.4:8201",
"leader": false,
"node_id": "node3",
"protocol_version": "3",
"voter": true
}
]
}
}
}
```
Use the output of `list-peers` to ensure that your cluster is in an expected state.
If you've removed a server using `remove-peer`, the server should no longer be
listed in the `list-peers` output. If you've added a server using `join` or
through `retry_join`, check the `list-peers` output to see that it has been added
to the cluster and (if the node has not been added as a non-voter)
it has been promoted to a voter.
## remove-peer
This command is used to remove a node from being a peer to the Raft cluster. In
certain cases where a peer may be left behind in the Raft configuration even
though the server is no longer present and known to the cluster, this command
can be used to remove the failed server so that it no longer affects the Raft
quorum.
```text
Usage: vault operator raft remove-peer <server_id>
Removes a node from the Raft cluster.
$ vault operator raft remove-peer node1
```
<Note>
Once a node is removed, its Raft data needs to be deleted before it may be joined back into an existing cluster. This requires shutting down the Vault process, deleting the data, then restarting the Vault process on the removed node.
</Note>
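A minimal sketch of that procedure on the removed node, assuming a systemd unit named `vault`, a configured raft storage `path` of `/opt/vault/data`, and a reachable leader address (all placeholders for your own setup):
```shell-session
$ systemctl stop vault
$ rm -rf /opt/vault/data/*      # clear the configured raft storage path
$ systemctl start vault
$ vault operator raft join https://active-node.example.com:8200
```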
## snapshot
This command groups subcommands for operators interacting with the snapshot
functionality of the integrated Raft storage backend. There are 2 subcommands
supported: `save` and `restore`.
```text
Usage: vault operator raft snapshot <subcommand> [options] [args]
This command groups subcommands for operators interacting with the snapshot
functionality of the integrated Raft storage backend.
Subcommands:
restore Installs the provided snapshot, returning the cluster to the state defined in it
save Saves a snapshot of the current state of the Raft cluster into a file
```
### snapshot save
Takes a snapshot of the Vault data. The snapshot can be used to restore Vault to
the point in time when a snapshot was taken.
```text
Usage: vault operator raft snapshot save <snapshot_file>
Saves a snapshot of the current state of the Raft cluster into a file.
$ vault operator raft snapshot save raft.snap
```
~> **Note:** Snapshot is not supported when Raft is used only for `ha_storage`.
### snapshot restore
Restores a snapshot of Vault data taken with `vault operator raft snapshot save`.
```text
Usage: vault operator raft snapshot restore <snapshot_file>
Installs the provided snapshot, returning the cluster to the state defined in it.
$ vault operator raft snapshot restore raft.snap
```
### snapshot inspect
Inspects a snapshot file taken from a Vault Raft cluster and prints a table showing the number of keys and the amount of space used.
```text
Usage: vault operator raft snapshot inspect <snapshot_file>
```
For example:
```shell-session
$ vault operator raft snapshot inspect raft.snap
```
## autopilot
This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend. There are 3 subcommands
supported: `get-config`, `set-config` and `state`.
For a more detailed overview of autopilot features, see the [concepts page](/vault/docs/concepts/integrated-storage/autopilot).
```text
Usage: vault operator raft autopilot <subcommand> [options] [args]
This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend.
Subcommands:
get-config Returns the configuration of the autopilot subsystem under integrated storage
set-config Modify the configuration of the autopilot subsystem under integrated storage
state Displays the state of the raft cluster under integrated storage as seen by autopilot
```
### autopilot state
Displays the state of the raft cluster under integrated storage as seen by
autopilot. It shows whether autopilot thinks the cluster is healthy or not.
State includes a list of all servers by nodeID and IP address.
```text
Usage: vault operator raft autopilot state
Displays the state of the raft cluster under integrated storage as seen by autopilot.
$ vault operator raft autopilot state
```
#### Example output
```text
Healthy: true
Failure Tolerance: 1
Leader: vault_1
Voters:
vault_1
vault_2
vault_3
Servers:
vault_1
Name: vault_1
Address: 127.0.0.1:8201
Status: leader
Node Status: alive
Healthy: true
Last Contact: 0s
Last Term: 3
Last Index: 61
Version: 1.17.3
Node Type: voter
vault_2
Name: vault_2
Address: 127.0.0.1:8203
Status: voter
Node Status: alive
Healthy: true
Last Contact: 564.765375ms
Last Term: 3
Last Index: 61
Version: 1.17.3
Node Type: voter
vault_3
Name: vault_3
Address: 127.0.0.1:8205
Status: voter
Node Status: alive
Healthy: true
Last Contact: 3.814017875s
Last Term: 3
Last Index: 61
Version: 1.17.3
Node Type: voter
```
The "Failure Tolerance" of a cluster is the number of nodes in the cluster that could
fail gradually without causing an outage.
When verifying the health of your cluster, check the following fields of each server:
- Healthy: whether Autopilot considers this node healthy or not
- Status: the voting status of the node. This will be `voter`, `leader`, or [`non-voter`](/vault/docs/concepts/integrated-storage#non-voting-nodes-enterprise-only).
- Last Index: the index of the last applied Raft log. This should be close to the "Last Index" value of the leader.
- Version: the version of Vault running on the server
- Node Type: the type of node. On CE, this will always be `voter`. See below for an explanation of Enterprise node types.
Vault Enterprise will include additional output related to automated upgrades, optimistic failure tolerance, and redundancy zones.
#### Example Vault enterprise output
```text
Redundancy Zones:
a
Servers: vault_1, vault_2, vault_5
Voters: vault_1
Failure Tolerance: 2
b
Servers: vault_3, vault_4
Voters: vault_3
Failure Tolerance: 1
Upgrade Info:
Status: await-new-voters
Target Version: 1.17.5
Target Version Voters:
Target Version Non-Voters: vault_5
Other Version Voters: vault_1, vault_3
Other Version Non-Voters: vault_2, vault_4
Redundancy Zones:
a
Target Version Voters:
Target Version Non-Voters: vault_5
Other Version Voters: vault_1
Other Version Non-Voters: vault_2
b
Target Version Voters:
Target Version Non-Voters:
Other Version Voters: vault_3
Other Version Non-Voters: vault_4
```
"Optimistic Failure Tolerance" describes the number of healthy active and
back-up voting servers that can fail gradually without causing an outage.
@include 'autopilot/node-types.mdx'
### autopilot get-config
Returns the configuration of the autopilot subsystem under integrated storage.
```text
Usage: vault operator raft autopilot get-config
Returns the configuration of the autopilot subsystem under integrated storage.
$ vault operator raft autopilot get-config
```
### autopilot set-config
Modify the configuration of the autopilot subsystem under integrated storage.
```text
Usage: vault operator raft autopilot set-config [options]
Modify the configuration of the autopilot subsystem under integrated storage.
$ vault operator raft autopilot set-config -server-stabilization-time 10s
```
Flags applicable to this command are the following:
- `cleanup-dead-servers` `(bool: false)` - Controls whether to remove dead servers from
the Raft peer list periodically or when a new server joins. This requires that
`min-quorum` is also set.
- `last-contact-threshold` `(string: "10s")` - Limit on the amount of time a server can
go without leader contact before being considered unhealthy.
- `dead-server-last-contact-threshold` `(string: "24h")` - Limit on the amount of time
a server can go without leader contact before being considered failed. This
takes effect only when `cleanup_dead_servers` is set. When adding new nodes
to your cluster, the `dead_server_last_contact_threshold` needs to be larger
than the amount of time that it takes to load a Raft snapshot, otherwise the
newly added nodes will be removed from your cluster before they have finished
loading the snapshot and starting up. If you are using an [HSM](/vault/docs/enterprise/hsm), your
`dead_server_last_contact_threshold` needs to be larger than the response
time of the HSM.
<Warning>
We strongly recommend keeping `dead_server_last_contact_threshold` at a high
duration, such as a day, since setting it too low could result in removal of nodes
that aren't actually dead.
</Warning>
- `max-trailing-logs` `(int: 1000)` - Number of entries in the Raft Log that a server
can be behind before being considered unhealthy. If this value is too low,
it can cause the cluster to lose quorum if a follower falls behind. This
value only needs to be increased from the default if you have a very high
write load on Vault and you see that it takes a long time to promote new
servers to becoming voters. This is an unlikely scenario and most users
should not modify this value.
- `min-quorum` `(int)` - The minimum number of servers that should always be
present in a cluster. Autopilot will not prune servers below this number.
**There is no default for this value** and it should be set to the expected
number of voters in your cluster when `cleanup_dead_servers` is set as `true`.
Use the [quorum size guidance](/vault/docs/internals/integrated-storage#quorum-size-and-failure-tolerance)
to determine the proper minimum quorum size for your cluster.
- `server-stabilization-time` `(string: "10s")` - Minimum amount of time a server must be in a healthy state before it
can become a voter. Until that happens, it will be visible as a peer in the cluster, but as a non-voter, meaning it
won't contribute to quorum.
- `disable-upgrade-migration` `(bool: false)` - Controls whether to disable automated
  upgrade migrations, an Enterprise-only feature.
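A hedged example that combines several of these flags, with values chosen purely for illustration:
```shell-session
$ vault operator raft autopilot set-config \
    -cleanup-dead-servers=true \
    -min-quorum=3 \
    -dead-server-last-contact-threshold=24h \
    -server-stabilization-time=30s
```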
---
layout: docs
page_title: operator rekey - Command
description: |-
The "operator rekey" command generates a new set of unseal keys. This can
optionally change the total number of key shares or the required threshold of
those key shares to reconstruct the root key. This operation is zero
downtime, but it requires the Vault is unsealed and a quorum of existing
unseal keys are provided.
---
# operator rekey
The `operator rekey` command generates a new set of unseal keys. This can
optionally change the total number of key shares or the required threshold of
those key shares to reconstruct the root key. This operation is zero downtime,
but it requires the Vault is unsealed and a quorum of existing unseal keys is
provided.
An unseal key may be provided directly on the command line as an argument to the
command. If the key is specified as "-", the command will read from stdin. If a TTY
is available, the command will prompt for text.
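For example, once a rekey has been initialized, each key holder can supply their unseal key share non-interactively over stdin (a hedged sketch; the environment variable is a placeholder and the nonce comes from the `-init` output):
```shell-session
$ echo "$UNSEAL_KEY_SHARE" | vault operator rekey -nonce="..." -
```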
## Examples
Initialize a rekey:
```shell-session
$ vault operator rekey \
-init \
-key-shares=15 \
-key-threshold=9
```
Initialize a rekey when Auto Unseal is used for the Vault cluster:
```shell-session
$ vault operator rekey \
-target=recovery \
-init \
-key-shares=15 \
-key-threshold=9
```
Initialize a rekey and activate the verification process:
```shell-session
$ vault operator rekey \
-init \
-key-shares=15 \
-key-threshold=9 \
-verify
```
Rekey and encrypt the resulting unseal keys with PGP:
```shell-session
$ vault operator rekey \
-init \
-key-shares=3 \
-key-threshold=2 \
-pgp-keys="keybase:hashicorp,keybase:jefferai,keybase:sethvargo"
```
Rekey an Auto Unseal vault and encrypt the resulting recovery keys with PGP:
```shell-session
$ vault operator rekey \
-target=recovery \
-init \
    -pgp-keys=keybase:grahamhashicorp \
    -key-shares=1 \
    -key-threshold=1
```
Store encrypted PGP keys in Vault's core:
```shell-session
$ vault operator rekey \
-init \
-pgp-keys="..." \
-backup
```
Retrieve backed-up unseal keys:
```shell-session
$ vault operator rekey -backup-retrieve
```
Delete backed-up unseal keys:
```shell-session
$ vault operator rekey -backup-delete
```
Perform the verification of the rekey using the verification nonce:
```shell-session
$ vault operator rekey -verify -nonce="..."
```
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
### Command options
- `-cancel` `(bool: false)` - Reset the rekeying progress. This will discard any submitted unseal keys
or configuration. The default is false.
- `-init` `(bool: false)` - Initialize the rekeying operation. This can only be
done if no rekeying operation is in progress. Customize the new number of key
  shares and key threshold using the `-key-shares` and `-key-threshold` flags.
- `-key-shares` `(int: 5)` - Number of key shares to split the generated root
  key into. This is the number of "unseal keys" to generate. This is aliased as
  `-n`.
- `-key-threshold` `(int: 3)` - Number of key shares required to reconstruct the
root key. This must be less than or equal to -key-shares. This is aliased as
`-t`.
- `-nonce` `(string: "")` - Nonce value provided at initialization. The same
nonce value must be provided with each unseal key.
- `-pgp-keys` `(string: "...")` - Comma-separated list of paths to files on disk
containing public PGP keys OR a comma-separated list of Keybase usernames
using the format `keybase:<username>`. When supplied, the generated unseal
keys will be encrypted and base64-encoded in the order specified in this list.
- `-status` `(bool: false)` - Print the status of the current attempt without
providing an unseal key. The default is false.
- `-target` `(string: "barrier")` - Target for rekeying. "recovery" only applies
when HSM support is enabled or using [Auto Unseal](/vault/docs/concepts/seal#auto-unseal).
- `-verify` `(bool: false)` - Indicate during the phase `-init` that the
verification process is activated for the rekey. Along with `-nonce` option
it indicates that the nonce given is for the verification process.
### Backup options
- `-backup` `(bool: false)` - Store a backup of the current PGP encrypted unseal
keys in Vault's core. The encrypted values can be recovered in the event of
failure or discarded after success. See the -backup-delete and
\-backup-retrieve options for more information. This option only applies when
the existing unseal keys were PGP encrypted.
- `-backup-delete` `(bool: false)` - Delete any stored backup unseal keys.
- `-backup-retrieve` `(bool: false)` - Retrieve the backed-up unseal keys. This
option is only available if the PGP keys were provided and the backup has not
  been deleted.
---
layout: docs
page_title: operator diagnose - Command
description: |-
"vault operator diagnose" is a new operator-centric command, focused on providing a clear description
of what is working in Vault, and what is not working. The command focuses on why Vault cannot serve requests,
but will also warn on configurations or statuses that it deems to be unsafe in some way.
---
# operator diagnose
The operator diagnose command should be used primarily when vault is down or
partially inoperative. The command can be used safely regardless of the state
vault is in, but may return meaningless results for some of the test cases if the
vault server is already running.
Note: if you run the diagnose command proactively, either before a server
starts or while a server is operational, please consult the documentation
on the individual checks below to see which checks are returning false error
messages or warnings.
## Usage
The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.
### Output options
- `-format` `(string: "table")` - Print the output in the given format. Valid
formats are "table", "json", or "yaml". This can also be specified via the
`VAULT_FORMAT` environment variable.
#### Output layout
The operator diagnose command will output a set of lines in the CLI.
Each line will begin with a prefix in brackets. These are:
- `[ success ]` - Denotes that the check was successful.
- `[ warning ]` - Denotes that the check has passed, but that there may be potential
issues to look into that may relate to the issues vault is experiencing. Diagnose warns
frequently. These warnings are meant to serve as starting points in the debugging process.
- `[ failure ]` - Denotes that the check has failed. Failures are critical issues in the eyes
of the diagnose command.
In addition to these prefixed lines, there may be output lines that are not prefixed, but are
color-coded purple. These are advice lines from Diagnose, and are meant to offer general guidance
on how to go about fixing potential warnings or failures that may arise.
Warn or fail prefixes in nested checks will bubble up to the parent if the prefix supersedes the
parent prefix. Fail supersedes warn, and warn supersedes ok. For example, if the TLS checks under
the Storage check fail, the `[ failure ]` prefix will bubble up to the Storage check.
### Command options
- `-config` `(string: "")` - The path to the vault configuration file used by
the vault server on startup.
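For example, a typical invocation against a server configuration file (the path is a placeholder):
```shell-session
$ vault operator diagnose -config=/etc/vault.d/vault.hcl
```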
### Diagnose checks
The following section details the various checks that Diagnose runs. Check names in documentation
will be separated by slashes to denote that they are nested, when applicable. For example, a check
documented as `A / B` will show up as `B` in the `operator diagnose` output, and will be nested
(indented) under `A`.
#### Vault diagnose
`Vault Diagnose` is the top level check that contains the rest of the checks. It reports the
overall status of its nested checks.
#### Check operating system / check open file limit
`Check Open File Limit` verifies that the open file limit value is set high enough for vault
to run effectively. We recommend setting these limits to at least 1024768.
This check will be skipped on openbsd, arm, and windows.
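As a hedged sketch, the effective limit can be inspected on a Linux host before running diagnose (the systemd unit name is a placeholder):
```shell-session
$ ulimit -n                                      # soft limit for the current shell
$ systemctl show vault --property=LimitNOFILE    # limit applied to the vault service
```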
#### Check operating system / check disk usage
`Check Disk Usage` will report disk usage for each partition. For each partition on a prod host,
we recommend having at least 5% of the partition free to use, and at least 1 GB of space.
This check will be skipped on openbsd and arm.
#### Parse configuration
`Parse Configuration` will check the vault server config file for syntax errors. It will check
for extra values in the configuration file, repeated stanzas, and stanzas that do not belong
in the configuration file (for example a "tcpp" listener as opposed to a tcp listener).
Currently, the `storage` stanza is not checked.
#### Check storage / create storage backend
`Create Storage Backend` ensures that the storage stanza configured in the vault server config
has enough information to create a storage object internally. Common errors will have to do
with misconfigured fields in the storage stanza.
#### Check storage / check consul TLS
`Check Consul TLS` verifies TLS information included in the storage stanza if the storage type
is consul. If a certificate chain is provided, Diagnose parses the root, intermediate, and leaf
certificates, and checks each one for correctness.
#### Check storage / check consul direct storage access
`Check Consul Direct Storage Access` is a consul-specific check that ensures Vault is not accessing
the consul server directly, but rather through a local agent.
#### Check storage / check raft folder permissions
`Check Raft Folder Permissions` computes the permissions on the raft folder, checks that a boltDB file
has been initialized within the folder previously, and ensures that the folder is not too permissive, but
at the same time has enough permissions to be used. The raft folder should not have `other` permissions, but
should have `group rw` or `owner rw`, depending on different setups. This check also warns if it detects a
symlink being used.
Note that this check will warn that a raft file has not been created if diagnose is run without any
pre-existing server runs.
This check will be skipped on windows.
#### Check storage / check raft folder ownership
`Check Raft Folder Ownership` ensures that vault does not need to run as root to access the boltDB folder.
Note that this check will warn that a raft file has not been created if diagnose is run without any
pre-existing server runs.
This check will be skipped on windows.
#### Check storage / check for raft quorum
`Check For Raft Quorum` uses the FSM to ensure that there were an odd number of voters in the raft quorum when
vault was last running.
Note that this check will warn that there are 0 voters if diagnose is run without any pre-existing server runs.
#### Check storage / check storage access
`Check Storage Access` will try to write a dud value, named `diagnose/latency/<uuid>`, to storage.
Ensure that there is no important data at this location before running diagnose, as this check
will overwrite that data. This check will then try to list and read the value it wrote to ensure
the name and value is as expected.
`Check Storage Access` will warn if any operation takes longer than 100ms, and error out if the
entire check takes longer than 30s.
#### Check service discovery / check consul service discovery TLS
`Check Consul Service Discovery TLS` verifies TLS information included in the service discovery
stanza if the storage type is consul. If a certificate chain is provided, Diagnose parses
the root, intermediate, and leaf certificates, and checks each one for correctness.
#### Check service discovery / check consul direct service discovery
`Check Consul Direct Service Discovery` is a consul-specific check that ensures Vault
is not accessing the consul server directly, but rather through a local agent.
#### Create Vault server configuration seals
`Create Vault Server Configuration Seals` creates seals from the vault configuration
stanza and verifies they can be initialized and finalized.
#### Check transit seal TLS
`Check Transit Seal TLS` checks the TLS client certificate, key, and CA certificate
provided in a transit seal stanza (if one exists) for correctness.
#### Create core configuration / initialize randomness for core
`Initialize Randomness for Core` ensures that vault has access to the randReader that
the vault core uses.
#### HA storage
This check and any nested checks will be the same as the `Check Storage` checks.
The only difference is that the checks here will be run on whatever is specified in the
`ha_storage` section of the vault configuration, as opposed to the `storage` section.
#### Determine redirect address
Ensures that one of the `VAULT_API_ADDR`, `VAULT_REDIRECT_ADDR`, or `VAULT_ADVERTISE_ADDR`
environment variables is set, or that the redirect address is specified in the vault
configuration.
#### Check cluster address
Parses the cluster address from the `VAULT_CLUSTER_ADDR` environment variable, or from the
redirect address or cluster address specified in the vault configuration, and checks that
the address is of the form `host:port`.
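For example, both addresses can be supplied through environment variables before Vault starts; the hostnames and ports below are illustrative, and the same values can instead be set with `api_addr` and `cluster_addr` in the configuration file:

```shell-session
$ export VAULT_API_ADDR="https://vault-node-1.example.com:8200"
$ export VAULT_CLUSTER_ADDR="https://vault-node-1.example.com:8201"
```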
#### Check core creation
`Check Core Creation` verifies the logical configuration checks that vault does when it
creates a core object. These are runtime checks, meaning any errors thrown by this diagnose
test will also be thrown by the vault server itself when it is run.
#### Check for autoloaded license
`Check For Autoloaded License` is an enterprise diagnose check, which verifies that vault
has access to a valid autoloaded license that will not expire in the next 30 days.
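An autoloaded license is typically provided through the `license_path` configuration option or the `VAULT_LICENSE_PATH` / `VAULT_LICENSE` environment variables; the path below is illustrative:

```shell-session
$ export VAULT_LICENSE_PATH=/etc/vault.d/vault.hclic
```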
#### Start listeners / check listener TLS
`Check Listener TLS` verifies the server certificate file and key are valid and matching.
It also checks the client CA file, if one is provided, for a valid certificate, and performs
the standard runtime listener checks on the listener configuration stanza, such as verifying
that the minimum and maximum TLS versions are within the bounds of what vault supports.
Like all the other Diagnose TLS checks, it will warn if any of the certificates provided are
set to expire within the next month.
#### Start listeners / create listeners
`Create Listeners` uses the listener configuration to initialize the listeners, erroring with
a server error if anything goes wrong.
#### Check autounseal encryption
`Check Autounseal Encryption` will initialize the barrier using the seal stanza, if the seal
type is not a shamir seal, and use it to encrypt and decrypt a dud value.
#### Check server before runtime
`Check Server Before Runtime` achieves parity with the server run command, running through
the runtime code checks before the server is initialized to ensure that nothing fails.
This check will never fail without another diagnose check failing.
---
layout: docs
page_title: pki health-check - Command
description: |-
The "pki health-check" command verifies the health of the given PKI secrets
engine mount against an optional configuration.
---
# pki health-check
The `pki health-check` command verifies the health of the given PKI secrets
engine mount against an optional configuration.
This runs with the permissions of the given token, reading various APIs from
the mount and `/sys` against the given Vault server.
Mounts need to be specified with any namespaces prefixed in the path, e.g.,
`ns1/pki`.
## Examples
Performs a basic health check against the `pki-root` mount:
```shell-session
$ vault pki health-check pki-root/
```
Configuration can be specified using the `-health-config` flag:
```shell-session
$ vault pki health-check -health-config=mycorp-root.json pki-root/
```
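The configuration file maps health check names to parameter overrides. The sketch below is illustrative only, re-using the file name from the example above and parameter names documented in the checks further down; consult the health check list for the authoritative parameters:

```shell-session
$ cat mycorp-root.json
{
  "ca_validity_period": {
    "root_expiry_critical": "180d"
  },
  "hardware_backed_root": {
    "enabled": true
  }
}
```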
Using the `-list` flag will show the list of health checks that will be run
against this mount, along with any known configuration values (and their
defaults):
```shell-session
$ vault pki health-check -list pki-root/
```
## Usage
The following flags are unique to this command:
- `-default-disabled` - When specified, results in all health checks being
disabled by default unless enabled by the configuration file explicitly.
The default is `false`, meaning all default-enabled health checks will run.
- `-health-config` `(string: "")` - Path to JSON configuration file to
modify health check execution and parameters.
- `-list` - When specified, no health checks are run, but all known health
checks are printed. Still requires a positional mount argument. The default
is `false`, meaning no listing is printed and health checks will execute.
- `-return-indicator` `(string: "default")` - Behavior of the return value
(exit code) of this command:
- `permission`, for exiting with a non-zero code when the tool lacks
permissions or has a version mismatch with
the server;
- `critical`, for exiting with a non-zero code when a check returns a
critical status in addition to the above;
- `warning`, for exiting with a non-zero status when a check returns a
warning status in addition to the above;
- `informational`, for exiting with a non-zero status when a check
returns an informational status in addition to the above;
- `default`, for the default behavior based on severity of message
and only returning a zero exit status when all checks have passed
and no execution errors have occurred.
This command respects the `-format` parameter to control the presentation of
output sent to stdout. Fatal errors that prevent health checks from executing
may not follow this formatting.
## Return status and output
This command returns the following exit codes:
- `0` - Everything is good.
- `1` - Usage error (check CLI parameters).
- `2` - Informational message from a health check.
- `3` - Warning message from a health check.
- `4` - Critical message from a health check.
- `5` - A version mismatch between health check and Vault Server occurred,
preventing one or more health checks from being fully run.
- `6` - A permission denied message was returned from Vault Server for
one or more health checks.
Note that an exit code of `5` (due to a version mismatch) is not necessarily
fatal to the health check. For example, the `crl_validity_period` health
check will return an invalid version warning when run against Vault 1.11 as
no Delta CRL exists for this version of Vault, but this will not impact its
ability to check the complete CRL.
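Because severity is communicated through the exit code, scripts can branch on `$?` directly; the mount name below is illustrative:

```shell-session
$ vault pki health-check pki-root/
$ echo "health check exit code: $?"
```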
Each health check outputs one or more results in a list. This list contains a
mapping of keys (`status`, `status_code`, `endpoint`, and `message`) to
values returned by the health check. An endpoint may occur in more than
one health check and is not necessarily guaranteed to exist on the server
(e.g., using wildcards to indicate all matching paths have the same
result). Tabular form elides the status code, as this is meant to be
consumed programmatically.
These correspond to the following health check status values:
- status `not_applicable` / status code `0`: exit code `0`.
- status `ok` / status code `1`: exit code `0`.
- status `informational` / status code `2`: exit code `2`.
- status `warning` / status code `3`: exit code `3`.
- status `critical` / status code `4`: exit code `4`.
- status `invalid_version` / status code `5`: exit code `5`.
- status `insufficient_permissions` / status code `6`: exit code `6`.
## Health checks
The following health checks are currently implemented. More health checks may
be added in future releases and may default to being enabled.
### CA validity period
**Name**: `ca_validity_period`
**Accessed APIs**:
- `LIST /issuers` (unauthenticated)
- `READ /issuer/:issuer_ref/json` (unauthenticated)
**Config Parameters**:
- `root_expiry_critical` `(duration: 182d)` - for a duration within which the root's lifetime is considered critical
- `intermediate_expiry_critical` `(duration: 30d)` - for a duration within which the intermediate's lifetime is considered critical
- `root_expiry_warning` `(duration: 365d)` - for a duration within which the root's lifetime is considered warning
- `intermediate_expiry_warning` `(duration: 60d)` - for a duration within which the intermediate's lifetime is considered warning
- `root_expiry_informational` `(duration: 730d)` - for a duration within which the root's lifetime is considered informational
- `intermediate_expiry_informational` `(duration: 180d)` - for a duration within which the intermediate's lifetime is considered informational
This health check will check each issuer in the mount for validity status, returning a list. If a CA expires within the next 30 days, the result will be critical. If a root CA expires within the next 12 months or an intermediate CA within the next 2 months, the result will be a warning. If a root CA expires within 24 months or an intermediate CA within 6 months, the result will be informational.
**Remediation steps**:
1. Perform a [CA rotation operation](/vault/docs/secrets/pki/rotation-primitives)
to check for CAs that are about to expire.
1. Migrate from expiring CAs to new CAs.
1. Delete any expired CAs with one of the following options:
- Run [tidy](/vault/api-docs/secret/pki#tidy) manually with `vault write <mount>/tidy tidy_expired_issuers=true`.
- Use the Vault API to call [delete issuer](/vault/api-docs/secret/pki#delete-issuer).
### CRL validity period
**Name**: `crl_validity_period`
**Accessed APIs**:
- `LIST /issuers` (unauthenticated)
- `READ /config/crl` (optional)
- `READ /issuer/:issuer_ref/crl` (unauthenticated)
- `READ /issuer/:issuer_ref/crl/delta` (unauthenticated)
**Config Parameters**:
- `crl_expiry_pct_critical` `(int: 95)` - the percentage of validity period after which a CRL should be considered critically close to expiry
- `delta_crl_expiry_pct_critical` `(int: 95)` - the percentage of validity period after which a Delta CRL should be considered critically close to expiry
This health check checks each issuer's CRL for validity status, returning a list. Unlike CAs, where a date-based duration makes sense because of the effort required to rotate them successfully, CRLs are much easier to rotate, so a percentage-based approach makes sense. If the chosen percentage exceeds that of the `grace_period` from the CRL configuration, an informational message will be issued rather than OK.
For informational purposes, it reads the CRL config and suggests enabling CRL auto-rebuild if it is not already enabled.
**Remediation steps**:
Use `vault write` to enable CRL auto-rebuild:
```shell-session
$ vault write <mount>/config/crl auto_rebuild=true
```
### Hardware-Backed root certificate
**Name**: `hardware_backed_root`
**APIs**:
- `LIST /issuers` (unauthenticated)
- `READ /issuer/:issuer_ref`
- `READ /key/:key_ref`
**Config Parameters**:
- `enabled` `(boolean: false)` - defaults to not being run.
This health check checks issuers for root CAs backed by software keys. While Vault is secure, for production root certificates, we'd recommend the additional integrity of KMS-backed keys. This is an informational check only. When all roots are KMS-backed, we'll return OK; when no issuers are roots, we'll return not applicable.
Read more about hardware-backed keys in [Vault Enterprise Managed Keys](/vault/docs/enterprise/managed-keys).
### Root certificate issued Non-CA leaves
**Name**: `root_issued_leaves`
**APIs**:
- `LIST /issuers` (unauthenticated)
- `READ /issuer/:issuer_ref/pem` (unauthenticated)
- `LIST /certs`
- `READ /certs/:serial` (unauthenticated)
**Config Parameters**:
- `certs_to_fetch` `(int: 100)` - a quantity of leaf certificates to fetch to see if any leaves have been issued by a root directly.
This health check verifies whether a proper CA hierarchy is in use. We do this by fetching `certs_to_fetch` leaf certificates (configurable) and checking whether each is a non-issuer leaf that was signed directly by a root issuer in this mount. If one is found, we'll issue a warning about this, and recommend setting up an intermediate CA.
**Remediation steps**:
1. Restrict the use of `sign`, `sign-verbatim`, `issue`, and ACME APIs against
the root issuer.
1. Create an intermediary issuer in a different mount.
1. Have the root issuer sign the new intermediary issuer.
1. Issue new leaf certificates using the intermediary issuer.
### Role allows implicit localhost issuance
**Name**: `role_allows_localhost`
**APIs**:
- `LIST /roles`
- `READ /roles/:name`
**Config Parameters**: (none)
Checks whether any roles exist that allow implicit localhost based issuance
(`allow_localhost=true`) with a non-empty `allowed_domains` value.
**Remediation steps**:
1. Set `allow_localhost` to `false` for all roles.
1. Update the `allowed_domains` field with an explicit list of allowed
localhost-like domains.
### Role allows Glob-Based wildcard issuance
**Name**: `role_allows_glob_wildcards`
**APIs**:
- `LIST /roles`
- `READ /roles/:name`
**Config Parameters**:
- `allowed_roles` `(list: nil)` - an allow-list of roles to ignore.
Check each role to see whether or not it allows wildcard issuance **and** glob
domains. Wildcards and globs can interact and result in nested wildcards among
other (potentially dangerous) quirks.
**Remediation steps**:
1. Split any role that needs both of `allow_glob_domains` and `allow_wildcard_certificates` to be true into two roles.
1. Continue splitting roles until both of the following are true for all roles:
- The role has `allow_glob_domains` **or** `allow_wildcard_certificates`, but
not both.
- Roles with `allow_glob_domains` **and** `allow_wildcard_certificates` are
the only roles required for **all** SANs on the certificate.
1. Add the roles that allow glob domains and wildcards to `allowed_roles` so
Vault ignores them in future checks.
### Role sets `no_store=false` and performance
**Name**: `role_no_store_false`
**APIs**:
- `LIST /roles`
- `READ /roles/:name`
- `LIST /certs`
- `READ /config/crl`
**Config Parameters**:
- `allowed_roles` `(list: nil)` - an allow-list of roles to ignore.
Checks each role to see whether `no_store` is set to `false`.
<Warning>
Vault will provide warnings and performance will suffer if you store a large
number of certificates (`no_store` set to `false`) without temporal CRL
auto-rebuilding.
</Warning>
**Remediation steps**:
1. Update non-ACME roles with `no_store=true` (see the sketch after this list). **NOTE**: Roles used for
   ACME issuance must have `no_store` set to `false`.
1. Set your certificate lifetimes as short as possible.
1. Use [BYOC revocations](/vault/api-docs/secret/pki#revoke-certificate) to
revoke certificates as needed.
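As a sketch of step 1, existing roles can be updated in place with `vault patch`, assuming your Vault version supports PATCH on PKI roles (otherwise rewrite the role with `vault write` and its full parameter set); the mount and role names are placeholders:

```shell-session
$ vault patch <mount>/roles/<role> no_store=true
```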
### Accessibility of audit information
**Name**: `audit_visibility`
**APIs**:
- `READ /sys/mounts/:mount/tune`
**Config Parameters**:
- `ignored_parameters` `(list: nil)` - a list of parameters to ignore their HMAC status.
This health check checks whether audit information is accessible to log consumers, validating whether our list of safe and unsafe audit parameters is generally followed. Any findings present are informational responses.
**Remediation steps**:
Use `vault secrets tune` to set the desired audit parameters:
```shell-session
$ vault secrets tune \
-audit-non-hmac-response-keys=certificate \
-audit-non-hmac-response-keys=issuing_ca \
-audit-non-hmac-response-keys=serial_number \
-audit-non-hmac-response-keys=error \
-audit-non-hmac-response-keys=ca_chain \
-audit-non-hmac-request-keys=certificate \
-audit-non-hmac-request-keys=issuer_ref \
-audit-non-hmac-request-keys=common_name \
-audit-non-hmac-request-keys=alt_names \
-audit-non-hmac-request-keys=other_sans \
-audit-non-hmac-request-keys=ip_sans \
-audit-non-hmac-request-keys=uri_sans \
-audit-non-hmac-request-keys=ttl \
-audit-non-hmac-request-keys=not_after \
-audit-non-hmac-request-keys=serial_number \
-audit-non-hmac-request-keys=key_type \
-audit-non-hmac-request-keys=private_key_format \
-audit-non-hmac-request-keys=managed_key_name \
-audit-non-hmac-request-keys=managed_key_id \
-audit-non-hmac-request-keys=ou \
-audit-non-hmac-request-keys=organization \
-audit-non-hmac-request-keys=country \
-audit-non-hmac-request-keys=locality \
-audit-non-hmac-request-keys=province \
-audit-non-hmac-request-keys=street_address \
-audit-non-hmac-request-keys=postal_code \
-audit-non-hmac-request-keys=permitted_dns_domains \
-audit-non-hmac-request-keys=policy_identifiers \
-audit-non-hmac-request-keys=ext_key_usage_oids \
-audit-non-hmac-request-keys=csr \
<mount>
```
### ACL policies allow problematic endpoints
**Name**: `policy_allow_endpoints`
**APIs**:
- `LIST /sys/policy`
- `READ /sys/policy/:name`
**Config Parameters**:
- `allowed_policies` `(list: nil)` - a list of policies to allow-list for access to insecure APIs.
This health check checks whether unsafe access to APIs (such as `sign-intermediate`, `sign-verbatim`, and `sign-self-issued`) is allowed. Any findings are a critical result and should be rectified by the administrator or explicitly allowed.
### Allow If-Modified-Since requests
**Name**: `allow_if_modified_since`
**APIs**:
- `READ /sys/internal/ui/mounts`
**Config Parameters**: (none)
This health check verifies whether the `If-Modified-Since` header has been added to `passthrough_request_headers` and whether the `Last-Modified` header has been added to `allowed_response_headers`. This is an informational message if neither has been configured, or a warning if only one has been configured.
**Remediation steps**:
1. Update `allowed_response_headers` and `passthrough_request_headers` for all
policies with `vault secrets tune`:
```shell-session
$ vault secrets tune \
-passthrough-request-headers="If-Modified-Since" \
-allowed-response-headers="Last-Modified" \
<mount>
```
1. Update ACME-specific headers with `vault secrets tune` (if you are using ACME):
```shell-session
$ vault secrets tune \
-passthrough-request-headers="If-Modified-Since" \
-allowed-response-headers="Last-Modified" \
-allowed-response-headers="Replay-Nonce" \
-allowed-response-headers="Link" \
-allowed-response-headers="Location" \
<mount>
```
### Auto-Tidy disabled
**Name**: `enable_auto_tidy`
**APIs**:
- `READ /config/auto-tidy`
**Config Parameters**:
- `interval_duration_critical` `(duration: 7d)` - the maximum allowed interval_duration to hit critical threshold.
- `interval_duration_warning` `(duration: 2d)` - the maximum allowed interval_duration to hit a warning threshold.
- `pause_duration_critical` `(duration: 1s)` - the maximum allowed pause_duration to hit a critical threshold.
- `pause_duration_warning` `(duration: 200ms)` - the maximum allowed pause_duration to hit a warning threshold.
This health check verifies that auto-tidy is enabled with sane defaults for `interval_duration` and `pause_duration`. A disabled auto-tidy is reported as informational, since enabling it is a best practice but not strictly required; findings about `interval_duration` or `pause_duration` are reported as critical or warning.
**Remediation steps**
Use `vault write` to enable auto-tidy with the recommended defaults:
```shell-session
$ vault write <mount>/config/auto-tidy \
enabled=true \
tidy_cert_store=true \
tidy_revoked_certs=true \
tidy_acme=true \
tidy_revocation_queue=true \
tidy_cross_cluster_revoked_certs=true \
tidy_revoked_cert_issuer_associations=true
```
### Tidy hasn't run
**Name**: `tidy_last_run`
**APIs**:
- `READ /tidy-status`
**Config Parameters**:
- `last_run_critical` `(duration: 7d)` - the critical delay threshold between when tidy should have last run.
- `last_run_warning` `(duration: 2d)` - the warning delay threshold between when tidy should have last run.
This health check verifies that tidy has run within the last run window. Findings can be critical or warning alerts, because an overdue tidy can start to seriously impact Vault's performance.
**Remediation steps**:
1. Schedule a manual run of tidy with `vault write`:
```shell-session
$ vault write <mount>/tidy \
tidy_cert_store=true \
tidy_revoked_certs=true \
tidy_acme=true \
tidy_revocation_queue=true \
tidy_cross_cluster_revoked_certs=true \
tidy_revoked_cert_issuer_associations=true
```
1. Review the tidy status endpoint, `vault read <mount>/tidy-status` for
additional information.
1. Re-configure auto-tidy based on the log information and results of your
manual run.
### Too many certificates
**Name**: `too_many_certs`
**APIs**:
- `READ /tidy-status`
- `LIST /certs`
**Config Parameters**:
- `count_critical` `(int: 250000)` - the critical threshold at which there are too many certs.
- `count_warning` `(int: 50000)` - the warning threshold at which there are too many certs.
This health check verifies that this cluster has a reasonable number of certificates. Ideally this would be fetched from tidy's status or a new metric reporting format, but as a fallback when tidy hasn't run, a list operation will be performed instead.
**Remediation steps**:
1. Verify that tidy ran recently with `vault read`:
```shell-session
$ vault read <mount>/tidy-status
```
1. Schedule a manual run of tidy with `vault write`:
```shell-session
$ vault write <mount>/tidy \
tidy_cert_store=true \
tidy_revoked_certs=true \
tidy_acme=true \
tidy_revocation_queue=true \
tidy_cross_cluster_revoked_certs=true \
tidy_revoked_cert_issuer_associations=true
```
1. Enable `auto-tidy`.
1. Make sure that you are not renewing certificates too soon. Certificate
lifetimes should reflect the expected usage of the certificate. If the TTL is
set appropriately, most certificates renew at approximately 2/3 of their
lifespan.
1. Consider setting the `no_store` field for all roles to `true` and use [BYOC revocations](/vault/api-docs/secret/pki#revoke-certificate) to avoid storage.
### Enable ACME issuance
**Name**: `enable_acme_issuance`
**APIs**:
- `READ /config/acme`
- `READ /config/cluster`
- `LIST /issuers` (unauthenticated)
- `READ /issuer/:issuer_ref/json` (unauthenticated)
**Config Parameters**: (none)
This health check verifies that ACME is enabled within a mount that contains an intermediary issuer, as this is considered a best-practice to support a self-rotating PKI infrastructure.
Review the [ACME Certificate Issuance](/vault/api-docs/secret/pki#acme-certificate-issuance)
API documentation to learn about enabling ACME support in Vault.
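As a sketch, enabling ACME typically means pointing the mount's cluster configuration at externally reachable URLs and then turning the feature on; the URLs and mount name below are illustrative, and the linked API documentation is the authoritative reference:

```shell-session
$ vault write <mount>/config/cluster \
    path="https://vault.example.com:8200/v1/<mount>" \
    aia_path="https://vault.example.com:8200/v1/<mount>"

$ vault write <mount>/config/acme enabled=true
```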
### ACME response headers
**Name**: `allow_acme_headers`
**APIs**:
- `READ /sys/internal/ui/mounts`
**Config Parameters**: (none)
This health check verifies that the `Replay-Nonce`, `Link`, and `Location` headers have been added to `allowed_response_headers` when the ACME feature is enabled. The ACME protocol will not work if these headers are not added to the mount.
**Remediation steps**:
Use `vault secrets tune` to add the missing headers to `allowed_response_headers`:
```shell-session
$ vault secrets tune \
-allowed-response-headers="Last-Modified" \
-allowed-response-headers="Replay-Nonce" \
-allowed-response-headers="Link" \
-allowed-response-headers="Location" \
<mount>
```
---
layout: docs
page_title: Upgrading to Vault 0.9.0 - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 0.9.0. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 0.9.0 compared to the most recent release. Please read it carefully.
### PKI root generation (Since 0.8.1)
Calling [`pki/root/generate`][generate-root] when a CA cert/key already exists will now return a
`204` instead of overwriting an existing root. If you want to recreate the
root, first run a delete operation on `pki/root` (requires `sudo` capability),
then generate it again.
### Token period in AWS IAM auth (Since 0.8.2)
In prior versions of Vault, if authenticating via AWS IAM and requesting a
periodic token, the period was not properly respected. This could lead to
tokens expiring unexpectedly, or a token lifetime being longer than expected.
Upon token renewal with Vault 0.8.2 the period will be properly enforced.
### SSH CLI parameters (Since 0.8.2)
`vault ssh` users should supply `-mode` and `-role` to reduce the number of API
calls. A future version of Vault will mark these optional values as required.
Failure to supply `-mode` or `-role` will result in a warning.
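For example, with the OTP mode (the role name and target host are illustrative):

```shell-session
$ vault ssh -mode=otp -role=otp_key_role ubuntu@192.0.2.10
```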
### Vault plugin init (Since 0.8.2)
Vault plugins will first briefly run a restricted version of the plugin to
fetch metadata, and then lazy-load the plugin on first request to prevent
crash/deadlock of Vault during the unseal process. Plugins will need to be
built with the latest changes in order for them to run properly.
### Policy input format standardization (Since 0.8.3)
For all built-in authentication backends, policies can now be specified as a
comma-delimited string or an array if using JSON as API input; on read,
policies will be returned as an array; and the `default` policy will not be
forcefully added to policies saved in configurations. Please note that the
`default` policy will continue to be added to generated tokens, however, rather
than backends adding `default` to the given set of input policies (in some
cases, and not in others), the stored set will reflect the user-specified set.
### PKI `sign-self-issued` modifies `Issuer` in generated certificates (Since 0.8.3)
In 0.8.2 the endpoint would not modify the Issuer in the generated certificate,
leaving the output self-issued. Although theoretically valid, in practice
crypto stacks were unhappy validating paths containing such certs. As a result,
`sign-self-issued` now encodes the signing CA's Subject DN into the Issuer DN
of the generated certificate.
### `sys/raw` requires enabling (Since 0.8.3)
While the `sys/raw` endpoint can be extremely useful in break-glass or support
scenarios, it is also extremely dangerous. As of now, a configuration file
option `raw_storage_endpoint` must be set in order to enable this API endpoint.
Once set, the available functionality has been enhanced slightly; it now
supports listing and decrypting most of Vault's core data structures, except
for the encryption keyring itself.
### `generic` is now `kv` (Since 0.8.3)
To better reflect its actual use, the `generic` backend is now `kv`. Using
`generic` will still work for backwards compatibility.
### HSM users need to specify new config options (In 0.9)
When using Vault with an HSM, a new parameter is required: `hmac_key_label`.
This performs a similar function to `key_label` but for the HMAC key Vault will
use. Vault will generate a suitable key if this value is specified and
`generate_key` is set true. See [the seal configuration page][pkcs11-seal] for
more information.
### API HTTP client behavior (In 0.9)
When calling `NewClient` the API no longer modifies the provided
client/transport. In particular this means it will no longer enable redirection
limiting and HTTP/2 support on custom clients. It is suggested that if you want
to make changes to an HTTP client that you use one created by `DefaultConfig`
as a starting point.
### AWS EC2 client nonce behavior (In 0.9)
The client nonce generated by the backend that gets returned along with the
authentication response will be audited in plaintext. If this is undesired, the
clients can choose to supply a custom nonce to the login endpoint. From now on,
the custom nonce set by the client will not be returned with the authentication
response, and hence will not be audit logged.
### AWS auth role options (In 0.9)
The API will now error when trying to create or update a role with the
mutually-exclusive options `disallow_reauthentication` and
`allow_instance_migration`.
### SSH CA role read changes (In 0.9)
When reading back a role from the `ssh` backend, the TTL/max TTL values will
now be an integer number of seconds rather than a string. This better matches
the API elsewhere in Vault.
### SSH role list changes (In 0.9)
When listing roles from the `ssh` backend via the API, the response data will
additionally return a `key_info` map that will contain a map of each key with a
corresponding object containing the `key_type`.
### More granularity in audit logs (In 0.9)
Audit request and response entries are still in RFC3339 format but now have a
granularity of nanoseconds.
[generate-root]: /vault/api-docs/secret/pki#generate-root
[pkcs11-seal]: /vault/docs/configuration/seal/pkcs11
When calling NewClient the API no longer modifies the provided client transport In particular this means it will no longer enable redirection limiting and HTTP 2 support on custom clients It is suggested that if you want to make changes to an HTTP client that you use one created by DefaultConfig as a starting point AWS EC2 client nonce behavior In 0 9 The client nonce generated by the backend that gets returned along with the authentication response will be audited in plaintext If this is undesired the clients can choose to supply a custom nonce to the login endpoint The custom nonce set by the client will from now on not be returned back with the authentication response and hence not audit logged AWS auth role options In 0 9 The API will now error when trying to create or update a role with the mutually exclusive options disallow reauthentication and allow instance migration SSH CA role read changes In 0 9 When reading back a role from the ssh backend the TTL max TTL values will now be an integer number of seconds rather than a string This better matches the API elsewhere in Vault SSH role list changes In 0 9 When listing roles from the ssh backend via the API the response data will additionally return a key info map that will contain a map of each key with a corresponding object containing the key type More granularity in audit logs In 0 9 Audit request and response entries are still in RFC3339 format but now have a granularity of nanoseconds generate root vault api docs secret pki generate root pkcs11 seal vault docs configuration seal pkcs11 |
vault page title Upgrading to Vault 1 13 x Guides This page contains the list of deprecations and important or breaking changes for Vault 1 13 x Please read it carefully layout docs Overview | ---
layout: docs
page_title: Upgrading to Vault 1.13.x - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 1.13.x. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 1.13.x compared to 1.12. Please read it carefully.
## Changes
@include 'consul-dataplane-upgrade-note.mdx'
### Undo logs
Vault 1.13 introduced changes to add extra resiliency to log shipping with undo logs. These logs can help prevent several Merkle syncs from occurring due to rapid key changes in the primary Merkle tree as the secondary tries to synchronize. For integrated storage users, upgrading Vault to 1.13 will enable this feature by default. For Consul storage users, Consul also needs to be upgraded to 1.14 to use this feature.
### User lockout
As of version 1.13, Vault will stop trying to validate user credentials if the
user submits multiple invalid credentials in quick succession. During lockout,
Vault ignores requests from the barred user rather than responding with a
permission denied error.
User lockout is enabled by default with a lockout threshold of 5 attempts, a
lockout duration of 15 minutes, and a counter reset window of 15 minutes.
For more information, refer to the [User lockout](/vault/docs/concepts/user-lockout)
overview.
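As a quick way to inspect and clear lockouts during an upgrade, you can use the
`sys/locked-users` endpoints. The sketch below assumes a `userpass` auth mount;
the mount accessor and alias name are illustrative:

```shell-session
# List users that are currently locked out
$ vault read sys/locked-users

# Unlock a specific user by mount accessor and alias identifier
$ vault write -f sys/locked-users/auth_userpass_12345678/unlock/alice
```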
### Active directory secrets engine deprecation
The Active Directory (AD) secrets engine has been deprecated as of the Vault 1.13 release.
We will continue to support the AD secrets engine in maintenance mode for six major Vault
releases. Maintenance mode means that we will fix bugs and security issues but will not add
new features. For additional information, see the [deprecation table](/vault/docs/deprecation)
and [migration guide](/vault/docs/secrets/ad/migration-guide).
### AliCloud auth role parameter
The AliCloud auth plugin will now require the `role` parameter on login. This
has always been documented as a required field but the requirement will now be
enforced.
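For example, a CLI login that supplies the role explicitly might look like the
following sketch, where the role name and identity request values are placeholders:

```shell-session
$ vault write auth/alicloud/login \
    role=dev-role \
    identity_request_url="$IDENTITY_REQUEST_URL" \
    identity_request_headers="$IDENTITY_REQUEST_HEADERS"
```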
### Mounts associated with removed builtin plugins will result in core shutdown on upgrade
As of 1.13.0 Standalone (logical) DB Engines and the AppId Auth Method have been
marked with the `Removed` status. Any attempt to unseal Vault with
mounts backed by one of these builtin plugins will result in an immediate
shutdown of the Vault core.
-> **NOTE** If an external plugin with the same name and type as a deprecated
builtin is deregistered, any subsequent unseal will still complete, but the
affected auth backend will be unusable and Vault will emit a corresponding ERROR log.
```shell-session
$ vault plugin register -sha256=c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app-id
Success! Registered plugin: app-id
$ vault auth enable -plugin-name=app-id plugin
Success! Enabled app-id auth method at: app-id/
$ vault auth list -detailed | grep "app-id"
app-id/ app-id auth_app-id_3a8f2e24 system system default-service replicated false false map[] n/a 0018263c-0d64-7a70-fd5c-50e05c5f5dc3 n/a n/a c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 n/a
$ vault plugin deregister auth app-id
Success! Deregistered plugin (if it was registered): app-id
$ vault plugin list -detailed | grep "app-id"
app-id auth v1.13.0+builtin.vault removed
$ curl --header "X-Vault-Token: $VAULT_TOKEN" --request POST http://127.0.0.2:8200/v1/sys/seal
$ vault operator unseal <key1>
...
$ vault operator unseal <key2>
...
$ vault operator unseal <key3>
...
$ grep "app-id" /path/to/vault.log
[ERROR] core: skipping deprecated auth entry: name=app-id path=app-id/ error="mount entry associated with removed builtin"
[ERROR] core: skipping initialization for nil auth backend: path=app-id/ type=app-id version="v1.13.0+builtin.vault"
```
The remediation for affected mounts is to downgrade to the previously-used version of Vault
and replace any `Removed` feature with the
[preferred alternative
feature](/vault/docs/deprecation/faq#q-what-should-i-do-if-i-use-mount-filters-appid-or-any-of-the-standalone-db-engines).
For more information on the phases of deprecation, see the [Deprecation Notices
FAQ](/vault/docs/deprecation/faq#q-what-are-the-phases-of-deprecation).
#### Impacted versions
Affects upgrading from any version of Vault to 1.13.x. All other upgrade paths
are unaffected.
### Application of Sentinel Role Governing Policies (RGPs) via identity groups
@include 'application-of-sentinel-rgps-via-identity-groups.mdx'
## Known issues
@include 'tokenization-rotation-persistence.mdx'
@include 'known-issues/ocsp-redirect.mdx'
@include 'known-issues/1_13-reload-census-panic-standby.mdx'
### PKI revocation request forwarding
If a revocation request for a certificate that is present locally comes in to a
standby or performance secondary node, the request will not be correctly
forwarded to the active node of that cluster.
As a workaround, submit revocation requests to the active node only.
### STS credentials do not return a lease_duration
Vault 1.13.0 introduced a change to the AWS Secrets Engine such that it no longer creates leases for STS credentials due
to the fact that they cannot be revoked or renewed. As part of this change, a bug was introduced which causes `lease_duration`
to always return zero. This prevents the Vault Agent from refreshing STS credentials and may introduce undesired behavior
for anything that relies on a non-zero `lease_duration`.
For applications that can control what value to look for, the `ttl` value in the response can be used to know when to
request STS credentials next.
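For example, assuming an AWS secrets mount at `aws/` and an STS-type role named
`my-sts-role` (both illustrative), the `ttl` field can be read directly from the response:

```shell-session
$ vault read -field=ttl aws/creds/my-sts-role
```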
An additional workaround for users rendering STS credentials via the Vault Agent is to set the
`static-secret-render-interval` for a template using the credentials. Setting this configuration to 15 minutes
accommodates the default minimum duration of an STS token and overrides the default render interval of 5 minutes.
#### Impacted versions
Affects Vault 1.13.0 only.
### LDAP pagination issue
There was a regression introduced in 1.13.2 relating to LDAP maximum page sizes, resulting in
an error `no LDAP groups found in groupDN [...] only policies from locally-defined groups available`. The issue
occurs when upgrading a Vault instance that has an existing LDAP auth configuration.
As a workaround, disable paged searching using the following:
```shell-session
vault write auth/ldap/config max_page_size=-1
```
#### Impacted versions
Affects Vault 1.13.2.
### PKI Cross-Cluster revocation requests and unified CRL/OCSP
When revoking certificates on a cluster that doesn't own the
certificate, writing the revocation request will fail with
a message like `error persisting cross-cluster revocation request`.
Similar errors will appear in the log for failure to write
unified CRL and unified delta CRL WAL entries.
As a workaround, submit revocation requests to the cluster which
issued the certificate, or use BYOC revocation. Use cluster-local
OCSP and CRLs until this is resolved.
#### Impacted versions
Affects Vault 1.13.0 to 1.13.2. Fixed in 1.13.3.
On upgrade, all local revocations will be synchronized between
clusters; revocation requests are not persisted when failing to
write cross-cluster.
### Slow startup time when storing PKI certificates
There was a regression introduced in 1.13.0 where Vault is slow to start because the
PKI secret engine performs a list operation on the stored certificates. If a large number
of certificates are stored, this can cause long start times on active and standby nodes.
There is currently no workaround for this other than limiting the number of certificates stored
in Vault via the [PKI tidy](/vault/api-docs/secret/pki#tidy) endpoint or using the `no_store`
flag for [PKI roles](/vault/api-docs/secret/pki#createupdate-role).
#### Impacted versions
Affects Vault 1.13.0+
@include 'perf-standby-token-create-forwarding-failure.mdx'
@include 'known-issues/update-primary-data-loss.mdx'
@include 'pki-double-migration-bug.mdx'
@include 'known-issues/update-primary-addrs-panic.mdx'
@include 'known-issues/transit-managed-keys-panics.mdx'
@include 'known-issues/internal-error-namespace-missing-policy.mdx'
@include 'known-issues/ephemeral-loggers-memory-leak.mdx'
@include 'known-issues/sublogger-levels-unchanged-on-reload.mdx'
@include 'known-issues/expiration-metrics-fatal-error.mdx'
@include 'known-issues/perf-secondary-many-mounts-deadlock.mdx'
---
layout: docs
page_title: Upgrading Plugins - Guides
description: These are general upgrade instructions for Vault plugins.
---
# Upgrading Vault plugins
## Plugin upgrade procedure
The following procedures detail steps for upgrading a plugin that has been mounted
at a path on a running server. The steps are the same whether the plugin being
upgraded is built-in or external.
~> [Plugin versioning](/vault/docs/plugins#plugin-versioning) was introduced
with Vault 1.12.0, so if your Vault server is on 1.11.x or earlier, see the
[1.11.x version of this page](/vault/docs/v1.11.x/upgrading/plugins)
for plugin upgrade instructions.
### Upgrading auth and secrets plugins
The process is nearly identical for auth and secret plugins. If you are upgrading
an auth plugin, just replace all usages of `secrets` or `secret` with `auth`.
1. [Register][plugin_registration] the first version of your plugin to the catalog.
Skip this step if your initial plugin is built-in or already registered.
```shell-session
$ vault plugin register \
-sha256=<SHA256 Hex value of the plugin binary> \
secret \
my-secret-plugin
Success! Registered plugin: my-secret-plugin
```
1. [Mount][plugin_management] the plugin. Skip this step if your initial plugin
is already mounted.
```shell-session
$ vault secrets enable my-secret-plugin
Success! Enabled the my-secret-plugin secrets engine at: my-secret-plugin/
```
1. Register a second version of your plugin. You **must** use the same plugin
type and name (the last two arguments) as the plugin being upgraded. This is
true regardless of whether the plugin being upgraded is built-in or external.
```shell-session
$ vault plugin register \
-sha256=<SHA256 Hex value of the plugin binary> \
-command=my-secret-plugin-1.0.1 \
-version=v1.0.1 \
secret \
my-secret-plugin
Success! Registered plugin: my-secret-plugin
```
1. Set the new version as the cluster's pinned version.
```shell-session
$ vault write sys/plugins/pins/secret/my-secret-plugin version=v1.0.1
```
1. Trigger a global [plugin reload](/vault/docs/commands/plugin/reload) to
reload all instances of the plugin.
```shell-session
$ vault plugin reload -type=secret -plugin=my-secret-plugin -scope=global
Success! Reloading plugin: my-secret-plugin, reload_id: 98b1e875-4217-745d-07f2-93d14219fb3c
```
1. **Optional:** Check the "Running Version" field to verify the new version is
running:
```shell-session
$ vault secrets list -detailed
```
Until the reload step, the mount will still run the first version of `my-secret-plugin`. When
the reload is triggered, Vault will kill `my-secret-plugin`’s process and start the
new plugin process for `my-secret-plugin` version 1.0.1.
### Upgrading database plugins
1. [Register][plugin_registration] the first version of your plugin to the catalog.
Skip this step if your initial plugin is built-in or already registered.
```shell-session
   $ vault plugin register \
-sha256=<SHA256 Hex value of the plugin binary> \
database \
my-db-plugin
Success! Registered plugin: my-db-plugin
```
1. [Mount][plugin_management] the plugin. Skip this step if your initial plugin
is already mounted.
```shell-session
$ vault secrets enable database
$ vault write database/config/my-db \
plugin_name=my-db-plugin \
# ...
Success! Data written to: database/config/my-db
```
1. Register a second version of your plugin. You **must** use the same plugin
type and name (the last two arguments) as the plugin being upgraded. This is
true regardless of whether the plugin being upgraded is built-in or external.
```shell-session
$ vault plugin register \
-sha256=<SHA256 Hex value of the plugin binary> \
-command=my-db-plugin-1.0.1 \
-version=v1.0.1 \
database \
my-db-plugin
Success! Registered plugin: my-db-plugin
```
1. Set the new version as the cluster's pinned version.
```shell-session
$ vault write sys/plugins/pins/database/my-db-plugin version=v1.0.1
```
1. Trigger a global [plugin reload](/vault/docs/commands/plugin/reload) to
reload all instances of the plugin.
```shell-session
$ vault plugin reload -type=database -plugin=my-db-plugin -scope=global
Success! Reloading plugin: my-db-plugin, reload_id: 98b1e875-4217-745d-07f2-93d14219fb3c
```
1. **Optional:** Verify the current version of the running plugin:
```shell-session
$ vault read database/config/my-db
```
Until the reload step, the mount will still run the first version of `my-db-plugin`. When
the reload is triggered, Vault will kill `my-db-plugin`’s process and start the
new plugin process for `my-db-plugin` version 1.0.1.
### Downgrading plugins
Plugin downgrades follow the same procedure as upgrades. You can use the
`vault plugin list` command to check what plugin versions are available to downgrade to:
```shell-session
$ vault plugin list secret
Name Version
---- -------
ad v0.14.0+builtin
alicloud v0.13.0+builtin
aws v1.12.0+builtin.vault
azure v0.14.0+builtin
cassandra v1.12.0+builtin.vault
consul v1.12.0+builtin.vault
gcp v0.14.0+builtin
gcpkms v0.13.0+builtin
kv v0.13.3+builtin
ldap v1.12.0+builtin.vault
mongodb v1.12.0+builtin.vault
mongodbatlas v0.8.0+builtin
mssql v1.12.0+builtin.vault
mysql v1.12.0+builtin.vault
nomad v1.12.0+builtin.vault
openldap v0.9.0+builtin
pki v1.12.0+builtin.vault
postgresql v1.12.0+builtin.vault
rabbitmq v1.12.0+builtin.vault
ssh v1.12.0+builtin.vault
terraform v0.6.0+builtin
totp v1.12.0+builtin.vault
transit v1.12.0+builtin.vault
```
### Additional upgrade notes
* As mentioned earlier, disabling existing mounts will wipe the existing data.
* Overwriting an existing version in the catalog will affect all uses of that
plugin version. So if you have 5 different Azure Secrets mounts using v1.0.0,
they'll all start using the new binary if you overwrite it. We recommend
treating plugin versions in the catalog as immutable, much like version control
tags.
* Each plugin has its own data within Vault storage. While it is rare for HashiCorp
maintained plugins to update their storage schema, it is up to plugin authors
to manage schema upgrades and downgrades. Check the plugin release notes for
any unsupported upgrade or downgrade transitions, especially before moving to
a new major version or downgrading.
* Existing Vault [leases](/vault/docs/concepts/lease) and [tokens](/vault/docs/concepts/tokens)
are generally unaffected by plugin upgrades and reloads. This is because the lifecycle
of leases and tokens is handled by core systems within Vault. The plugin itself only
handles renewal and revocation of them when it’s requested by those core systems.
[plugin_reload_api]: /vault/api-docs/system/plugins-reload
[plugin_registration]: /vault/docs/plugins/plugin-architecture#plugin-registration
[plugin_management]: /vault/docs/plugins/plugin-management#enabling-disabling-external-plugins
---
layout: docs
page_title: Upgrade to Vault 1.15.x - Guides
description: |-
Deprecations, important or breaking changes, and remediation recommendations
for anyone upgrading to 1.15.x from Vault 1.14.x.
---
# Overview
The Vault 1.15.x upgrade guide contains information on deprecations, important
or breaking changes, and remediation recommendations for anyone upgrading from
Vault 1.14. **Please read carefully**.
## Consul service registration
As of version 1.15, `service_tags` supplied to Vault for the purpose of [Consul
service registration](/vault/docs/configuration/service-registration/consul#service_tags)
will be **case-sensitive**.
In previous versions of Vault, tags were converted to lowercase, which led to issues,
for example when tags contained Traefik rules that use case-sensitive method names
such as `Host()`.
If you previously used Consul service registration tags ignoring case, or relied
on the lowercase tags created by Vault, then this change may cause unexpected behavior.
Please audit your Consul storage stanza to ensure that you either:
* Manually convert your `service_tags` to lowercase if required
* Ensure that any system that relies on the tags is aware of the new case-preserving behavior
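For example, assuming the Consul CLI is available on a node in the datacenter,
you can confirm which tags Vault actually registered after the upgrade:

```shell-session
$ consul catalog services -tags
```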
## Rollback metrics
Vault no longer measures and reports the metrics `vault.rollback.attempts.{MOUNTPOINT}` and `vault.route.rollback.{MOUNTPOINT}` by default. The new default metrics are `vault.rollback.attempts`
and `vault.route.rollback`, which **do not** contain the mount point in the metric name.
To continue measuring `vault.rollback.attempts.{MOUNTPOINT}` and
`vault.route.rollback.{MOUNTPOINT}`, you must explicitly enable mount-specific
metrics in the `telemetry` stanza of your Vault configuration with the
[`add_mount_point_rollback_metrics`](/vault/docs/configuration/telemetry#add_mount_point_rollback_metrics)
option.
## Application of Sentinel Role Governing Policies (RGPs) via identity groups
@include 'application-of-sentinel-rgps-via-identity-groups.mdx'
## Docker image no longer contains `curl`
As of 1.15.13 and later, the `curl` binary is no longer included in the published Docker container
images for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the
container, consider one of the following strategies:
### Create a wrapper container image
Use the HashiCorp image as a base image to create a new container image with `curl` installed.
```Dockerfile
FROM hashicorp/vault-enterprise
RUN apk add curl
```
**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.
### Install it at runtime dynamically
When running the image as root (not recommended), you can install it at runtime dynamically by using the `apk` package manager:
```shell-session
docker exec <CONTAINER-ID> apk add curl
```
```shell-session
kubectl exec -ti <NAME> -- apk add curl
```
When running the image as non-root without privilege escalation (recommended), you can use existing
tools to install a static binary of `curl` into the `vault` user's home directory:
```shell-session
docker exec <CONTAINER-ID> wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
```shell-session
kubectl exec -ti <NAME> -- wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
**NOTE:** When using this option, you'll want to verify that the static binary comes from a trusted source.
## Known issues and workarounds
@include 'known-issues/1_15-auto-upgrade.mdx'
@include 'known-issues/transit-managed-keys-panics.mdx'
@include 'known-issues/transit-managed-keys-sign-fails.mdx'
@include 'known-issues/aws-auth-panics.mdx'
@include 'known-issues/ui-collapsed-navbar.mdx'
@include 'known-issues/1_15-audit-file-sighup-does-not-trigger-reload.mdx'
@include 'known-issues/internal-error-namespace-missing-policy.mdx'
@include 'known-issues/ephemeral-loggers-memory-leak.mdx'
@include 'known-issues/sublogger-levels-unchanged-on-reload.mdx'
@include 'known-issues/kv2-url-change.mdx'
@include 'known-issues/expiration-metrics-fatal-error.mdx'
@include 'known-issues/1_15_audit-use-of-log-raw-applies-to-all-devices.mdx'
@include 'known-issues/1_15_openldap-rotate-credentials.mdx'
@include 'known-issues/perf-secondary-many-mounts-deadlock.mdx'
@include 'known-issues/1_15-audit-panic-handling-with-eventlogger.mdx'
@include 'known-issues/ocsp-redirect.mdx'
@include 'known-issues/1_15-audit-vault-enterprise-perf-standby-logs-all-headers.mdx'
@include 'known-issues/perf-standbys-revert-to-standby.mdx'
@include 'known-issues/1_13-reload-census-panic-standby.mdx'
@include 'known-issues/autopilot-upgrade-upgrade-version.mdx'
@include 'known-issues/config_listener_proxy_protocol_behavior_issue.mdx'
@include 'known-issues/duplicate-identity-groups.mdx'
@include 'known-issues/manual-entity-merge-does-not-persist.mdx'
---
layout: docs
page_title: Upgrade to Vault 1.17.x - Guides
description: |-
Deprecations, important or breaking changes, and remediation recommendations
for anyone upgrading to 1.17.x from Vault 1.16.x.
---
# Overview
The Vault 1.17.x upgrade guide contains information on deprecations, important
or breaking changes, and remediation recommendations for anyone upgrading from
Vault 1.16. **Please read carefully**.
## Important changes
<a id="audit-headers" />
### Allowed audit headers now have unremovable defaults
The [config auditing API endpoint](/vault/api-docs/system/config-auditing#create-update-audit-request-header)
tells Vault to log incoming request headers (when present) in the audit log.
Previously, Vault only logged headers that were explicitly configured for
logging. As of version 1.17, Vault automatically logs a predefined set of
[default headers](/vault/docs/audit#default-headers). By default, the header
values are not HMACed. You must explicitly configure the
[HMAC setting](/vault/api-docs/system/config-auditing#hmac) for each of the
default headers if required.
Refer to the
[audit request headers documentation](/vault/docs/audit#audit-request-headers)
for more information.
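For example, to HMAC the value of one of the default headers (the header name
below is illustrative), update its entry through the same endpoint:

```shell-session
$ vault write sys/config/auditing/request-headers/correlation-id hmac=true
```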
<a id="pki-truncate" />
### PKI sign-intermediate now truncates notAfter field to signing issuer
Prior to 1.17.x, Vault allowed the calculated sign-intermediate `notAfter` field
to go beyond the signing issuer `notAfter` field. The extended value led to a
CA chain that would not validate properly. As of 1.17.x, Vault truncates the
intermediary `notAfter` value to the signing issuer `notAfter` if the calculated
field is greater.
#### How to opt out
You can use the new `enforce_leaf_not_after_behavior` flag on the
sign-intermediate API along with the `leaf_not_after_behavior` flag for the
signing issuer to opt out of the truncating behavior.
When you set `enforce_leaf_not_after_behavior` to true, the sign-intermediate
API uses the `leaf_not_after_behavior` value configured for the signing issuer
to control the truncation behavior. Setting the issuer `leaf_not_after_behavior`
field to `permit` and `enforce_leaf_not_after_behavior` to true restores the
legacy behavior.
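A minimal sketch of opting out, assuming a PKI mount at `pki/` and a signing
issuer named `my-root` (both illustrative):

```shell-session
# Allow certificates signed by this issuer to extend past its notAfter
$ vault patch pki/issuer/my-root leaf_not_after_behavior=permit

# Honor the issuer's setting when signing the intermediate CSR
$ vault write pki/root/sign-intermediate \
    csr=@intermediate.csr \
    common_name="example.com Intermediate CA" \
    enforce_leaf_not_after_behavior=true
```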
<a id="request-limiter" />
### Request limiter deprecation
Vault 1.16.0 included an experimental request limiter. The limiter was disabled
by default. Further testing indicated that an alternative approach improves
performance and reduces risk for many workloads. Vault 1.17.0 includes a
new [adaptive overload
protection](/vault/docs/concepts/adaptive-overload-protection) feature that
prevents outages when Vault is overwhelmed by write requests. Adaptive overload
protection is a beta feature in 1.17.0 and is disabled by default.
The beta request limiter will be removed from Vault entirely in a later release.
### JWT auth login requires bound audiences on the role
The `bound_audiences` parameter of "jwt" roles is **mandatory** if the JWT contains an audience
(which is more often than not the case), and **must** match at least one of
the JWT's associated `aud` claims. The `aud` claim can be a single string
or a list of strings as per [RFC 7519 Section 4.1.3](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.3).
If the JWT's `aud` claim is not set, then the role's `bound_audiences`
parameter is not required.
Users may not be able to log into Vault if the JWT role is configured
incorrectly. For additional details, refer to the
[JWT auth method (API)](/vault/api-docs/auth/jwt) documentation.
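A minimal sketch of a role whose `bound_audiences` value matches the JWT's
`aud` claim; the role name and audience value are illustrative:

```shell-session
$ vault write auth/jwt/role/my-role \
    role_type="jwt" \
    user_claim="sub" \
    bound_audiences="https://vault.example.com" \
    token_policies="default"
```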
### Activity log changes
#### Default activity log querying period
As of 1.17.9 and later, the field `default_report_months` can no longer be configured or read. Any previously set values
will be ignored by the system.
Attempts to modify `default_report_months` through the
[/sys/internal/counters/config](/vault/api-docs/system/internal-counters#update-the-client-count-configuration)
endpoint will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* default_report_months is deprecated: defaulting to billing start time
```
</CodeBlockConfig>
The `current_billing_period` toggle for `/sys/internal/counters/activity` is also deprecated, as this will be set
to true by default.
Attempts to set `current_billing_period` will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period
```
</CodeBlockConfig>
### Auto-rolled billing start date
As of 1.17.3 and later, the billing start date (license start date if not configured) rolls over to the latest billing year at the end of the last cycle.
@include 'auto-roll-billing-start.mdx'
@include 'auto-roll-billing-start-example.mdx'
### Docker image no longer contains `curl`
As of 1.17.3 and later, the `curl` binary is no longer included in the published Docker container
images for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the
container, consider one of the following strategies:
#### Create a wrapper container image
Use the HashiCorp image as a base image to create a new container image with `curl` installed.
```Dockerfile
FROM hashicorp/vault-enterprise
RUN apk add curl
```
**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.
#### Install it at runtime dynamically
When running the image as root (not recommended), you can install it at runtime dynamically by using the `apk` package manager:
```shell-session
docker exec <CONTAINER-ID> apk add curl
```
```shell-session
kubectl exec -ti <NAME> -- apk add curl
```
When running the image as non-root without privilege escalation (recommended), you can use existing
tools to install a static binary of `curl` into the `vault` user's home directory:
```shell-session
docker exec <CONTAINER-ID> wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
```shell-session
kubectl exec -ti <NAME> -- wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
**NOTE:** When using this option, you'll want to verify that the static binary comes from a trusted source.
### Product usage reporting
As of 1.17.9, Vault will collect anonymous product usage metrics for HashiCorp. This information will be collected
alongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual
reports if manual reporting is preferred.
See the main page for [Vault product usage metrics reporting](/vault/docs/enterprise/license/product-usage-reporting) for
more details, and information about opt-out.
## Known issues and workarounds
@include 'known-issues/1_17_audit-log-hmac.mdx'
@include 'known-issues/ocsp-redirect.mdx'
@include 'known-issues/agent-and-proxy-excessive-cpu-1-17.mdx'
@include 'known-issues/config_listener_proxy_protocol_behavior_issue.mdx'
@include 'known-issues/transit-input-on-cmac-response.mdx'
@include 'known-issues/dangling-entity-aliases-in-memory.mdx'
@include 'known-issues/duplicate-identity-groups.mdx'
@include 'known-issues/manual-entity-merge-does-not-persist.mdx'
@include 'known-issues/aws-auth-external-id.mdx'
@include 'known-issues/sync-activation-flags-cache-not-updated.mdx'
@include 'known-issues/duplicate-hsm-key.mdx'
---
layout: docs
page_title: Upgrading to Vault 1.10.x - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 1.10.x. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 1.10.x compared to 1.9. Please read it carefully.
## SSH secrets engine
The new default value of `algorithm_signer` for SSH CA roles has been changed
to `rsa-sha2-256` from `ssh-rsa`. Existing roles will be migrated to
explicitly specify the `algorithm_signer=ssh-rsa` for RSA keys if they used
the implicit (empty) default, but newly created roles will use the new default
value (preferring a literal `default` which presently uses `rsa-sha2-256`).
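If you want to confirm which signing algorithm an existing role will use after the upgrade, you can read the role and inspect its `algorithm_signer` field. A minimal sketch, assuming an SSH CA mount at `ssh/` and a role named `my-role` (both placeholder names):
```shell-session
$ vault read -field=algorithm_signer ssh/roles/my-role
```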
## Etcd v2 API no longer supported
Support for the Etcd v2 API is removed in Vault 1.10. The Etcd v2 API
was deprecated with the release of [Etcd v3.5](https://etcd.io/blog/2021/announcing-etcd-3.5/),
and will be decommissioned in a forthcoming Etcd release.
Users of the `etcd` storage backend with the etcdv2 API that are
upgrading to Vault 1.10 should [migrate](/vault/docs/commands/operator/migrate)
Vault storage to an Etcd v3 cluster prior to upgrading to Vault 1.10.
All storage migrations should have
[backups](/vault/docs/concepts/storage#backing-up-vault-s-persisted-data)
taken prior to migration.
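As a rough illustration of the migration path, the `operator migrate` command takes a config file with `storage_source` and `storage_destination` blocks. The sketch below assumes a local Etcd endpoint and the default `vault/` path; adjust both to match your environment and run the migration while Vault is offline:
```shell-session
$ cat > migrate.hcl <<'EOF'
storage_source "etcd" {
  address  = "http://127.0.0.1:2379"
  etcd_api = "v2"
  path     = "vault/"
}

storage_destination "etcd" {
  address  = "http://127.0.0.1:2379"
  etcd_api = "v3"
  path     = "vault/"
}
EOF

$ vault operator migrate -config=migrate.hcl
```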
## OTP generation process
Customers passing in OTPs during the process of generating root tokens must modify
the OTP generation to include an additional 2 characters before upgrading so that the
OTP can be xor-ed with the encoded root token. This change was implemented as a result
of the change in the prefix from `s.` to `hvs.` for service tokens.
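If you rely on Vault to generate the OTP rather than producing your own, the CLI already generates one of the correct length for the server version. A minimal sketch:
```shell-session
$ vault operator generate-root -generate-otp
```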
## New error response for requests to perf standbys lagging behind active node
The introduction of [Server Side Consistent Tokens](/vault/docs/faq/ssct) means that
when issuing a request to a perf standby right after having obtained a token (e.g.
via login), if the token and its lease haven't yet been replicated to the perf
standby, an HTTP 412 error will be returned. Before 1.10.0 this typically would've
resulted in a 400, 403, or 50x response.
## Token format change
Token prefixes were updated to be more easily identifiable.
- Service tokens that previously started with `s.` now start with `hvs.`
- Batch tokens that previously started with `b.` now start with `hvb.`
- Recovery tokens that previously started with `r.` now start with `hvr.`
Additionally, non-root service tokens are now longer than before. Previously, service tokens
were 26 characters; they now have a minimum of 95 characters. However, existing tokens will
still work.
Refer to the [Server Side Consistent Token FAQ](/vault/docs/faq/ssct) for details.
## OIDC provider built-in resources
In Vault 1.9, the [OIDC identity provider](/vault/docs/secrets/identity/oidc-provider) feature
was released as a tech preview. In Vault 1.10, built-in resources were introduced to the
OIDC provider system to reduce configuration steps and enhance usability.
The following built-in resources are included in each Vault namespace starting with Vault
1.10:
- A `default` OIDC provider that's usable by all client applications
- A `default` key for signing and verification of ID tokens
- An `allow_all` assignment which authorizes all Vault entities to authenticate via a
client application
If you created an [OIDC provider](/vault/api-docs/secret/identity/oidc-provider#create-or-update-a-provider)
with the name `default`, [key](/vault/api-docs/secret/identity/tokens#create-a-named-key) with the
name `default`, or [assignment](/vault/api-docs/secret/identity/oidc-provider#create-or-update-an-assignment)
with the name `allow_all` using the Vault 1.9 tech preview, the installation of these built-in
resources will be skipped. We _strongly recommend_ that you delete any existing resources
that have naming collisions before upgrading to Vault 1.10. Failing to delete resources with
naming collisions could result in unexpected default behavior. Additionally, we recommend reading
the corresponding details in the OIDC provider [concepts](/vault/docs/concepts/oidc-provider) document
to understand how the built-in resources are used in the system.
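To check for naming collisions from a 1.9 tech-preview configuration before you upgrade, you can list the existing providers, keys, and assignments. This is a sketch and assumes your token has list permission on the `identity/oidc` paths:
```shell-session
$ vault list identity/oidc/provider
$ vault list identity/oidc/key
$ vault list identity/oidc/assignment
```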
## Known issues
@include 'raft-retry-join-failure.mdx'
@include 'raft-panic-old-tls-key.mdx'
@include 'tokenization-rotation-persistence.mdx'
### Errors returned by perf standbys lagging behind active node with consul storage
The introduction of [Server Side Consistent Tokens](/vault/docs/faq/ssct) means that
when issuing a request to a perf standby right after having obtained a token (e.g.
via login), if the token and its lease haven't yet been replicated to the perf
standby, an HTTP 412 error will be returned. Before 1.10.0 this wouldn't have
resulted in the client seeing errors with Consul storage.
### Single Vault follower restart causes election even with established quorum
We now support Server Side Consistent Tokens (see [Replication](/vault/docs/configuration/replication) and [Vault Eventual Consistency](/vault/docs/enterprise/consistency)), which introduces a new token format that can only be used on nodes running Vault 1.10 or higher. This new format is enabled by default upon upgrading to the new version. Old-format tokens can be read by Vault 1.10, but new-format Vault 1.10 tokens cannot be read by older Vault versions.
For more details, see the [Server Side Consistent Tokens FAQ](/vault/docs/faq/ssct).
Since service tokens are always created on the leader, as long as the leader is not upgraded before performance standbys, service tokens will be of the old format and still be usable during the upgrade process. However, the usual upgrade process we recommend can't be relied upon to always upgrade the leader last. Due to this known [issue](https://github.com/hashicorp/vault/issues/14153), a Vault cluster using Integrated Storage may result in a leader not being upgraded last, and this can trigger a re-election. This re-election can cause the upgraded node to become the leader, resulting in the newly created tokens on the leader to be unusable on nodes that have not yet been upgraded. Note that this issue does not impact Vault Community Edition users.
We will have a fix for this issue in Vault 1.10.3. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. We recommend that you plan for a maintenance window to upgrade.
### Limited policy shows unhelpful message in UI after mounting a secret engine
When a user has a policy that allows creating a secret engine but not reading it, the user sees the message "n is undefined" after successful creation instead of a permissions error. We will have a fix for this issue in an upcoming minor release.
### Adding/Modifying Duo MFA method for enterprise MFA triggers a panic error
When adding or modifying a Duo MFA method for step-up Enterprise MFA using the `sys/mfa/method/duo` endpoint, a panic gets triggered due to a missing schema field. We will have a fix for this in Vault 1.10.1. Until this issue is fixed, avoid making any changes to your Duo configuration if you are upgrading Vault to v1.10.0.
### Sign in to UI using OIDC auth method results in an error
Signing in to the Vault UI using an OIDC auth mount listed in the "tabs" of the form will result
in the following error: "Authentication failed: role with oidc role_type is not allowed".
The auth mounts listed in the "tabs" of the form are those that have [listing_visibility](/vault/api-docs/system/auth#listing_visibility-1)
set to `unauth`.
There is a workaround for this error that will allow you to sign in to Vault using the OIDC
auth method. Select the "Other" tab instead of selecting the specific OIDC auth mount tab.
From there, select "OIDC" from the "Method" select box and proceed to sign in to Vault.
### Login MFA not enforced after restart
A serious bug was identified in the Login MFA feature introduced in 1.10.0:
[#15108](https://github.com/hashicorp/vault/issues/15108).
Upon restart, Vault is not populating its in-memory MFA data structures based
on what is found in storage. Although Vault is persisting to storage MFA methods
and login enforcement configs populated via /identity/mfa, they will effectively
disappear after the process is restarted.
We plan to issue a new 1.10.3 release to address this soon. We recommend delaying
any rollouts of Login MFA until that release.
---
layout: docs
page_title: Upgrading Vault - Guides
description: |-
These are general upgrade instructions for Vault for both non-HA and HA
setups. Please ensure that you also read the version-specific upgrade notes.
---
# Upgrading Vault
These are general upgrade instructions for Vault for both non-HA and HA setups.
_Please ensure that you also read any version-specific upgrade notes which can be
found in the sidebar._
!> **Important:** Always back up your data before upgrading! Vault does not
make backward-compatibility guarantees for its data store. Simply replacing the
newly-installed Vault binary with the previous version will not cleanly
downgrade Vault, as upgrades may perform changes to the underlying data
structure that make the data incompatible with a downgrade. If you need to roll
back to a previous version of Vault, you should roll back your data store as
well.
Vault upgrades are designed such that large jumps (e.g., 1.3.10 -> 1.7.x) are
supported. The upgrade notes for each intervening version must be reviewed. The
upgrade notes may describe additional steps or configuration to update before,
during, or after the upgrade.
We also recommend you consult the
[deprecation notices](/vault/docs/deprecation). The notice page includes
a comprehensive list of deprecated features and the Vault versions where
the feature was removed or is scheduled to be removed.
@include 'versions.mdx'
## Integrated storage autopilot
Vault 1.11 introduced [automated
upgrades](/vault/docs/concepts/integrated-storage/autopilot#automated-upgrades) as
part of the Integrated Storage Autopilot feature. If your Vault environment is
configured to use Integrated Storage, consider leveraging this new feature to
upgrade your Vault environment.
-> **Tutorial:** Refer to the [Automate Upgrades with Vault
Enterprise](/vault/tutorials/raft/raft-upgrade-automation)
tutorial for more details.
## Agent
The Vault Agent is an API client of the Vault Server. Vault APIs are almost
always backwards compatible. When they are not, this is called out in the
upgrade guide for the new Vault version, and there is a lengthy deprecation
period. The Vault Agent version can lag behind the Vault Server version, though
we recommend keeping all Vault instances up to date with the most recent minor Vault version
to the extent possible.
## Testing the upgrade
It's always a good idea to try to ensure that the upgrade will be successful in
your environment. The ideal way to do this is to take a snapshot of your data
and load it into a test cluster. However, if you are issuing secrets to third
party resources (cloud credentials, database credentials, etc.) ensure that you
do not allow external network connectivity during testing, in case credentials
expire. This prevents the test cluster from trying to revoke these resources
along with the non-test cluster.
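For clusters using Integrated Storage, one way to capture data for a test cluster is a Raft snapshot, which you can then restore into an isolated environment. A minimal sketch (the file name is arbitrary):
```shell-session
$ vault operator raft snapshot save backup.snap
```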
## Upgrading from Community to Enterprise editions
Upgrades to Vault Enterprise follow the same steps as Community edition upgrades, except that the Vault Enterprise binary is used and, when applicable, the license file is [applied](/vault/api-docs/system/license#install-license). The Enterprise binary and license file can be obtained through your HashiCorp sales team.
## Non-HA installations
Upgrading non-HA installations of Vault is as simple as replacing the Vault
binary with the new version and restarting Vault. Any upgrade tasks that can be
performed for you will be taken care of when Vault is unsealed.
Always use `SIGINT` or `SIGTERM` to properly shut down Vault.
Be sure to also read and follow any instructions in the version-specific
upgrade notes.
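As an illustration only, assuming a systemd-managed installation with the binary at `/usr/local/bin/vault` (paths and service name will vary), a non-HA upgrade typically looks like the following; if Vault is not auto-unsealed, you will also need to unseal it after the restart:
```shell-session
$ sudo systemctl stop vault                 # sends SIGTERM for a clean shutdown
$ sudo cp vault-new /usr/local/bin/vault    # replace the binary with the new version
$ sudo systemctl start vault
$ vault status                              # confirm the new version and seal status
```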
## HA installations
The recommended upgrade procedure depends on the version of Vault you're currently on and the storage backend of Vault. If you're currently running on Vault 1.11 or later with Integrated Storage and you have Autopilot enabled, you should let Autopilot do the upgrade for you, as that's easier and
less prone to human error. Please refer to our [automated
upgrades](/vault/docs/concepts/integrated-storage/autopilot#automated-upgrades) documentation for information on this feature and our
[Automate Upgrades with Vault
Enterprise](/vault/tutorials/raft/raft-upgrade-automation)
tutorial for more details.
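If you are unsure whether your cluster can use Autopilot automated upgrades, you can inspect the Autopilot state and configuration on a cluster running Integrated Storage. This sketch assumes a token with permission on the `sys/storage/raft/autopilot` paths:
```shell-session
$ vault operator raft autopilot state
$ vault operator raft autopilot get-config
```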
If you're currently on a version of Vault before 1.11, if you've chosen to opt out of the Autopilot automated upgrade features when running Vault 1.11 or later with Integrated Storage, or if you are running Vault with another storage backend such as Consul, refer to our [Vault HA upgrades Pre 1.11/Without Autopilot Upgrade Automation](/vault/docs/upgrading/vault-ha-upgrade) documentation for more details. Please note that this upgrade procedure also applies if you are upgrading Vault from pre-1.11 to post-1.11.
## Enterprise replication installations
<Note>
Prior to any upgrade, be sure to also read and follow any instructions in the
version-specific upgrade notes which are found in the navigation menu for this
documentation.
</Note>
Upgrading Vault Enterprise clusters which participate in [Enterprise
Replication](/vault/docs/enterprise/replication) requires the following basic
order of operations:
- **Upgrade the replication secondary instances first** using appropriate
guidance from the previous sections
- Verify functionality of each secondary instance after upgrading
- When satisfied with functionality of upgraded secondary instances, upgrade
the primary instance
<Note>
It is not safe to replicate from a newer version of Vault to an older version.
When upgrading replicated clusters, ensure that upstream clusters are always on
older versions of Vault than downstream clusters.
</Note>
Here is an example of upgrading four replicated Vault clusters:
![Upgrading multiple replicated clusters](/img/vault-replication-upgrade.png)
In the above scenario, the ideal upgrade procedure would be as follows,
verifying functionality after each cluster upgrade.
1. Upgrade Clusters B and D. These clusters have no downstream clusters, so they
should be upgraded first, but the ordering of B vs D does not matter.
2. Upgrade Cluster C, which now has an upgraded downstream cluster (Cluster D).
As an HA cluster, Cluster C should also use the HA upgrade process.
3. Finally, upgrade Cluster A. All clusters downstream of A will already be
upgraded. It should be upgraded last, as it is a Performance Primary and a DR
Primary.
---
layout: docs
page_title: Upgrading to Vault 1.12.x - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 1.12.x. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 1.12.x compared to 1.11. Please read it carefully.
## Changes
### Supported storage backends
Vault Enterprise will now perform a supported storage check at startup. There is no impact on Vault Community Edition users.
@include 'ent-supported-storage.mdx'
@include 'consul-dataplane-upgrade-note.mdx'
### External plugin loading
Vault 1.12.0 introduced a change to how external plugins are loaded. Prior to
Vault 1.12.0, plugins were lazy-loaded on startup: plugin processes were killed
after a successful mount and then respawned when a request was routed to them.
Vault 1.12.0 introduced automatic mutual TLS for secrets and auth plugins, so
they are no longer lazy-loaded on startup.
## Known issues
### Pinning to builtin plugin versions may cause failure on upgrade
1.12.0 introduced plugin versions, and with it, the ability to explicitly specify
the builtin version of a plugin when mounting an auth, database or secrets plugin.
For example, `vault auth enable -plugin-version=v1.12.0+builtin.vault approle`. If
there are any mounts where the _builtin_ version was explicitly specified in this way,
Vault may fail to start on upgrading to 1.12.1 due to the specified version no
longer being available.
To check whether a mount path is affected, read the tune information, or the
database config. The affected plugins are `snowflake-database-plugin@v0.6.0+builtin`
and any plugins with `+builtin.vault` metadata in their version.
In this example, the first two mounts are affected because `plugin_version` is
explicitly set and is one of the affected versions. The third mount is not
affected because it only has `+builtin` metadata, and is not the Snowflake
database plugin. All mounts where the version is omitted, or the plugin is
external (regardless of whether the version is specified) are unaffected.
-> **NOTE:** Make sure you use Vault CLI 1.12.0 or later to check mounts.
```shell-session
$ vault read sys/auth/approle/tune
Key Value
--- -----
...
plugin_version v1.12.0+builtin.vault
$ vault read database/config/snowflake
Key Value
--- -----
...
plugin_name snowflake-database-plugin
plugin_version v0.6.0+builtin
$ vault read sys/auth/kubernetes/tune
Key Value
--- -----
...
plugin_version v0.14.0+builtin
```
As it is not currently possible to unset the plugin version, there are 3 possible
remediations if you have any affected mounts:
* Upgrade Vault directly to 1.12.2 once released
* Upgrade to an external version of the plugin before upgrading to 1.12.1 (see the sketch after this list);
* Using the [tune API](/vault/api-docs/system/auth#tune-auth-method) for auth methods
* Using the [tune API](/vault/api-docs/system/mounts#tune-mount-configuration) for secrets plugins
* Or using the [configure connection](/vault/api-docs/secret/databases#configure-connection)
API for database plugins
* Unmount and remount the path without a version specified before upgrading to 1.12.1.
**Note:** This will delete all data and leases associated with the mount.
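For example, to move an affected AppRole mount onto an external build of the plugin before upgrading, you could register the external binary and then pin the mount to it. This is a hypothetical sketch: the plugin binary name, SHA-256 value, and version shown here are placeholders, not real artifacts:
```shell-session
$ vault plugin register -sha256=<SHA256> -version=v1.13.0 \
    auth vault-plugin-auth-approle
$ vault write sys/auth/approle/tune plugin_version=v1.13.0
```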
The bug was introduced by commit
https://github.com/hashicorp/vault/commit/c36330f4c713b886a8a23c08cbbd862a7c530fc8.
#### Impacted versions
Affects upgrading from 1.12.0 to 1.12.1. All other upgrade paths are unaffected.
1.12.2 will introduce a fix that enables upgrades from affected deployments of
1.12.0.
### Mounts associated with deprecated builtin plugins will result in core shutdown on upgrade
As of 1.12.0, Standalone (logical) DB Engines and the AppId Auth Method have been
marked with the `Pending Removal` status. Any attempt to unseal Vault with
mounts backed by one of these builtin plugins will result in an immediate
shutdown of the Vault core.
-> **NOTE** In the event that an external plugin with the same name and type as
a deprecated builtin is deregistered, any subsequent unseal of Vault will also
result in a core shutdown.
```shell-session
$ vault plugin register -sha256=c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app-id
Success! Registered plugin: app-id
$ vault auth enable -plugin-name=app-id plugin
Success! Enabled app-id auth method at: app-id/
$ vault auth list -detailed
app-id/ app-id auth_app-id_3a8f2e24 system system default-service replicated false false map[] n/a 0018263c-0d64-7a70-fd5c-50e05c5f5dc3 n/a n/a c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 n/a
$ vault plugin deregister auth app-id
Success! Deregistered plugin (if it was registered): app-id
$ vault plugin list -detailed | grep "app-id"
app-id auth v1.12.0+builtin.vault pending removal
```
The remediation for affected mounts is to set the
[VAULT_ALLOW_PENDING_REMOVAL_MOUNTS](/vault/docs/commands/server#vault_allow_pending_removal_mounts)
environment variable and replace any `Pending Removal` feature with the
[preferred alternative
feature](/vault/docs/deprecation/faq#q-what-should-i-do-if-i-use-mount-filters-appid-or-any-of-the-standalone-db-engines).
For more information on the phases of deprecation, see the [Deprecation Notices
FAQ](/vault/docs/deprecation/faq#q-what-are-the-phases-of-deprecation).
#### Impacted versions
Affects upgrading from any version of Vault to 1.12.x. All other upgrade paths
are unaffected.
### `vault plugin list` fails when audit logging is enabled
If audit logging is enabled, Vault will fail to audit the response from any
calls to the [`GET /v1/sys/plugins/catalog`](/vault/api-docs/system/plugins-catalog#list-plugins)
endpoint, which causes the whole request to fail and return a 500 internal
server error. From the CLI, this looks like the following:
```shell-session
$ vault plugin list
Error listing available plugins: data from server response is empty
```
It will produce errors in Vault Server's logs such as:
```text
2022-11-30T20:04:22.397Z [ERROR] audit: panic during logging: request_path=sys/plugins/catalog error="reflect: reflect.Value.Set using value obtained using unexported field"
2022-11-30T20:04:22.398Z [ERROR] core: failed to audit response: request_path=sys/plugins/catalog
error=
| 1 error occurred:
| * panic generating audit log
|
```
As a workaround, [listing plugins by type](/vault/api-docs/system/plugins-catalog#list-plugins-1)
will succeed:
* `vault list sys/plugins/catalog/auth`
* `vault list sys/plugins/catalog/database`
* `vault list sys/plugins/catalog/secret`
The bug was introduced by commit
https://github.com/hashicorp/vault/commit/76165052e54f884ed0aa2caa496083dc84ad1c19.
#### Impacted versions
Affects versions 1.12.0, 1.12.1, and 1.12.2. A fix will be released in 1.12.3.
### PKI OCSP GET requests return malformed request responses
If an OCSP GET request contains a '+' character, Vault returns a malformed request response
instead of processing the request, due to a double-decoding issue within the handler.
As a workaround, use OCSP POST requests, which are unaffected.
#### Impacted versions
Affects version 1.12.3. A fix will be released in 1.12.4.
@include 'tokenization-rotation-persistence.mdx'
@include 'known-issues/ocsp-redirect.mdx'
### LDAP pagination issue
There was a regression introduced in 1.12.6 relating to LDAP maximum page sizes, resulting in
an error `no LDAP groups found in groupDN [...] only policies from locally-defined groups available`. The issue
occurs when upgrading Vault with an instance that has an existing LDAP Auth configuration.
As a workaround, disable paged searching using the following:
```shell-session
vault write auth/ldap/config max_page_size=-1
```
#### Impacted versions
Affects Vault 1.12.6.
### Slow startup time when storing PKI certificates
There was a regression introduced in 1.12.0 where Vault is slow to start because the
PKI secret engine performs a list operation on the stored certificates. If a large number
of certificates are stored this can cause long start times on active and standby nodes.
There is currently no workaround for this other than limiting the number of certificates stored
in Vault via the [PKI tidy](/vault/api-docs/secret/pki#tidy) operation or using the `no_store`
flag for [PKI roles](/vault/api-docs/secret/pki#createupdate-role).
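For example, you can trigger a tidy operation to remove expired certificates from storage. A minimal sketch, assuming a PKI mount at `pki/`; tune `safety_buffer` to your renewal and revocation windows:
```shell-session
$ vault write pki/tidy tidy_cert_store=true tidy_revoked_certs=true safety_buffer="72h"
```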
#### Impacted versions
Affects Vault 1.12.0+
@include 'pki-double-migration-bug.mdx'
---
layout: docs
page_title: Upgrade to Vault 1.16.x - Guides
description: |-
Deprecations, important or breaking changes, and remediation recommendations
for anyone upgrading to 1.16.x from Vault 1.15.x.
---
# Overview
The Vault 1.16.x upgrade guide contains information on deprecations, important
or breaking changes, and remediation recommendations for anyone upgrading from
Vault 1.15. **Please read carefully**.
## Important changes
### External plugin variables take precedence over system variables ((#external-plugin-variables))
Vault gives precedence to plugin environment variables over system environment
variables when loading external plugins. The behavior for builtin plugins and
plugins that do not specify additional environment variables is unaffected.
For example, if you register an external plugin with `SOURCE=child` in the
[env](/vault/api-docs/system/plugins-catalog#env) parameter but the main Vault
process already has `SOURCE=parent` defined, the plugin process starts
with `SOURCE=child`.
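For illustration, a plugin-level variable is set at registration time through the catalog's `env` parameter; in this sketch, `myplugin` and the SHA-256 value are placeholders:
```shell-session
$ vault plugin register -sha256=<SHA256> -env="SOURCE=child" secret myplugin
```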
Refer to the [plugin management](/vault/docs/plugins/plugin-management) page for
more details on plugin environment variables.
<Highlight title="Avoid conflicts with containerized plugins">
Containerized plugins do not inherit system-defined environment variables. As
a result, containerized plugins cannot have conflicts with Vault environment
variables.
</Highlight>
#### How to opt out
To opt out of the precedence change, set the
`VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` environment variable to `true` for the
main Vault process:
```shell-session
$ export VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING=true
```
Setting `VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` to `true` tells Vault to:
1. prioritize environment variables from the Vault server environment whenever
the system detects a variable conflict.
1. report on plugin variable conflicts during the unseal process by printing
warnings for plugins with conflicting environment variables or logging an
informational entry when there are no conflicts.
For example, assume you set `VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` to `true`
and have an environment variable `SOURCE=parent`.
If you register an external plugin called `myplugin` with `SOURCE=child`, the
plugin process starts with `SOURCE=parent` and Vault reports a conflict for
`myplugin`.
### LDAP auth login changes
Users cannot log in using LDAP unless the LDAP plugin is configured
with a `userdn` value scoped to an organizational unit (OU) where the
user resides.
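For illustration, a `userdn` scoped to an OU might look like the following; the URL and DNs are placeholders, and because this write replaces the LDAP config, include the rest of your existing LDAP settings in the same command:
```shell-session
$ vault write auth/ldap/config \
    url="ldaps://ldap.example.com" \
    userdn="ou=Users,dc=example,dc=com" \
    groupdn="ou=Groups,dc=example,dc=com"
```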
### LDAP auth entity alias names no longer include upndomain
The `userattr` field on the LDAP auth config is now used as the entity alias.
Prior to 1.16, the LDAP auth method would detect if `upndomain` was configured
on the mount and then use `<cn>@<upndomain>` as the entity alias value.
If this is not configured correctly, users may not have the
correct policies attached to their tokens when logging in.
#### How to opt out
To opt out of the entity alias change, update the `userattr` field on the config:
```
userattr="userprincipalname"
```
Refer to the [LDAP auth method (API)](/vault/api-docs/auth/ldap) page for
more details on the configuration.
### Secrets Sync now requires setting a one-time flag before use
To use the Secrets Sync feature, you must first activate it with a one-time
operation that sets an activation flag. The feature remains gated until a Vault
operator triggers the flag. More information can be found in the
[secrets sync documentation](/vault/docs/sync#activating-the-feature).
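For reference, a sketch of triggering the activation flag from the CLI (requires a sufficiently privileged token):
```shell-session
$ vault write -f sys/activation-flags/secrets-sync/activate
```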
### Activity log changes
#### Default activity log querying period
As of 1.16.13 and later, the field `default_report_months` can no longer be configured or read. Any previously set values
will be ignored by the system.
Attempts to modify `default_report_months` through the
[/sys/internal/counters/config](/vault/api-docs/system/internal-counters#update-the-client-count-configuration)
endpoint will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* default_report_months is deprecated: defaulting to billing start time
```
</CodeBlockConfig>
The `current_billing_period` toggle for `/sys/internal/counters/activity` is also deprecated, as this will be set
true by default.
Attempts to set `current_billing_period` will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period
```
</CodeBlockConfig>
### Auto-rolled billing start date
As of 1.16.7 and later, the billing start date (license start date if not configured) automatically rolls over to the latest billing year at the end of the last cycle.
@include 'auto-roll-billing-start.mdx'
@include 'auto-roll-billing-start-example.mdx'
### Docker image no longer contains `curl`
As of 1.16.7 and later, the `curl` binary is no longer included in the published Docker container
images for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the
container, consider one of the following strategies:
#### Create a wrapper container image
Use the HashiCorp image as a base image to create a new container image with `curl` installed.
```Dockerfile
FROM hashicorp/vault-enterprise
RUN apk add curl
```
**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.
#### Install it at runtime dynamically
When running the image as root (not recommended), you can install it at runtime dynamically by using the `apk` package manager:
```shell-session
docker exec <CONTAINER-ID> apk add curl
```
```shell-session
kubectl exec -ti <NAME> -- apk add curl
```
When running the image as non-root without privilege escalation (recommended) you can use existing
tools to install a static binary of `curl` into the `vault` user's home directory:
```shell-session
docker exec <CONTAINER-ID> wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
```shell-session
kubectl exec -ti <NAME> -- wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
**NOTE:** When using this option you'll want to verify that the static binary comes from a trusted source.
### Product usage reporting
As of 1.16.13, Vault will collect anonymous product usage metrics for HashiCorp. This information will be collected
alongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual
reports if manual reporting is preferred.
See the main page for [Vault product usage metrics reporting](/vault/docs/enterprise/license/product-usage-reporting) for
more details, and information about opt-out.
## Known issues and workarounds
@include 'known-issues/1_17_audit-log-hmac.mdx'
@include 'known-issues/1_16-jwt_auth_bound_audiences.mdx'
@include 'known-issues/1_16-jwt_auth_config.mdx'
@include 'known-issues/1_16-ldap_auth_login_anonymous_group_search.mdx'
@include 'known-issues/1_16-ldap_auth_login_missing_entity_alias.mdx'
@include 'known-issues/1_16-default-policy-needs-to-be-updated.mdx'
@include 'known-issues/1_16-default-lcq-pre-1_9-upgrade.mdx'
@include 'known-issues/ocsp-redirect.mdx'
@include 'known-issues/1_16_azure-secrets-engine-client-id.mdx'
@include 'known-issues/perf-standbys-revert-to-standby.mdx'
@include 'known-issues/1_13-reload-census-panic-standby.mdx'
@include 'known-issues/autopilot-upgrade-upgrade-version.mdx'
@include 'known-issues/1_16_secrets-sync-chroot-activation.mdx'
@include 'known-issues/config_listener_proxy_protocol_behavior_issue.mdx'
@include 'known-issues/dangling-entity-aliases-in-memory.mdx'
@include 'known-issues/duplicate-identity-groups.mdx'
@include 'known-issues/manual-entity-merge-does-not-persist.mdx'
@include 'known-issues/duplicate-hsm-key.mdx'
system Attempts to modify default report months through the sys internal counters config vault api docs system internal counters update the client count configuration endpoint will result in the following warning from Vault CodeBlockConfig hideClipboard shell session WARNING The following warnings were returned from Vault default report months is deprecated defaulting to billing start time CodeBlockConfig The current billing period toggle for sys internal counters activity is also deprecated as this will be set true by default Attempts to set current billing period will result in the following warning from Vault CodeBlockConfig hideClipboard shell session WARNING The following warnings were returned from Vault current billing period is deprecated unless otherwise specified all requests will default to the current billing period CodeBlockConfig Auto rolled billing start date As of 1 16 7 and later the billing start date license start date if not configured automatically rolls over to the latest billing year at the end of the last cycle include auto roll billing start mdx include auto roll billing start example mdx Docker image no longer contains curl As of 1 16 7 and later the curl binary is no longer included in the published Docker container images for Vault and Vault Enterprise If your workflow depends on curl being available in the container consider one of the following strategies Create a wrapper container image Use the HashiCorp image as a base image to create a new container image with curl installed Dockerfile FROM hashicorp vault enterprise RUN apk add curl NOTE While this is the preferred option it will require managing your own registry and rebuilding new images Install it at runtime dynamically When running the image as root not recommended you can install it at runtime dynamically by using the apk package manager shell session docker exec CONTAINER ID apk add curl shell session kubectl exec ti NAME apk add curl When running the image as non root without privilege escalation recommended you can use existing tools to install a static binary of curl into the vault users home directory shell session docker exec CONTAINER ID wget https github com moparisthebest static curl releases latest download curl amd64 O home vault curl chmod x home vault curl shell session kubectl exec ti NAME wget https github com moparisthebest static curl releases latest download curl amd64 O home vault curl chmod x home vault curl NOTE When using this option you ll want to verify that the static binary comes from a trusted source Product usage reporting As of 1 16 13 Vault will collect anonymous product usage metrics for HashiCorp This information will be collected alongside client activity data and will be sent automatically if automated reporting is configured or added to manual reports if manual reporting is preferred See the main page for Vault product usage metrics reporting vault docs enterprise license product usage reporting for more details and information about opt out Known issues and workarounds include known issues 1 17 audit log hmac mdx include known issues 1 16 jwt auth bound audiences mdx include known issues 1 16 jwt auth config mdx include known issues 1 16 ldap auth login anonymous group search mdx include known issues 1 16 ldap auth login missing entity alias mdx include known issues 1 16 default policy needs to be updated mdx include known issues 1 16 default lcq pre 1 9 upgrade mdx include known issues ocsp redirect mdx include known issues 1 16 azure secrets engine client id mdx 
include known issues perf standbys revert to standby mdx include known issues 1 13 reload census panic standby mdx include known issues autopilot upgrade upgrade version mdx include known issues 1 16 secrets sync chroot activation mdx include known issues config listener proxy protocol behavior issue mdx include known issues dangling entity aliases in memory mdx include known issues duplicate identity groups mdx include known issues manual entity merge does not persist mdx include known issues duplicate hsm key mdx |
---
layout: docs
page_title: Upgrade to Vault 1.18.x - Guides
description: |-
Deprecations, important or breaking changes, and remediation recommendations
for anyone upgrading to 1.18.x from Vault 1.17.x.
---
# Overview
The Vault 1.18.x upgrade guide contains information on deprecations, important
or breaking changes, and remediation recommendations for anyone upgrading from
Vault 1.17. **Please read carefully**.
## Important changes
### Activity Log Changes
#### Default Activity Log Querying Period
The field `default_report_months` can no longer be configured or read. Any previously set values
will be ignored by the system.
Attempts to modify `default_report_months` through the
[/sys/internal/counters/config](/vault/api-docs/system/internal-counters#update-the-client-count-configuration)
endpoint will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* default_report_months is deprecated: defaulting to billing start time
```
</CodeBlockConfig>
The `current_billing_period` toggle for `/sys/internal/counters/activity` is also deprecated, as it is now set to
true by default.
Attempts to set `current_billing_period` will result in the following warning from Vault:
<CodeBlockConfig hideClipboard>
```shell-session
WARNING! The following warnings were returned from Vault:
* current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period
```
</CodeBlockConfig>
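After upgrading, clients can simply omit the deprecated parameter; for example, a plain read of the
activity endpoint (a sketch, no extra flags required) defaults to the current billing period:

```shell-session
$ vault read sys/internal/counters/activity
```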
### Docker image no longer contains `curl`
The `curl` binary is no longer included in the published Docker container images for Vault and Vault
Enterprise. If your workflow depends on `curl` being available in the container, consider one of the
following strategies:
#### Create a wrapper container image
Use the HashiCorp image as a base image to create a new container image with `curl` installed.
```Dockerfile
FROM hashicorp/vault-enterprise
RUN apk add curl
```
**NOTE:** While this is the preferred option, it requires managing your own registry and rebuilding new images.
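For example, a minimal sketch of building and smoke-testing such a wrapper image (the image tag is illustrative):

```shell-session
$ docker build -t vault-enterprise-with-curl .
$ docker run --rm --entrypoint curl vault-enterprise-with-curl --version
```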
#### Install it at runtime dynamically
When running the image as root (not recommended), you can install it at runtime dynamically by using the `apk` package manager:
```shell-session
docker exec <CONTAINER-ID> apk add curl
```
```shell-session
kubectl exec -ti <NAME> -- apk add curl
```
When running the image as non-root without privilege escalation (recommended), you can use existing
tools to install a static binary of `curl` into the `vault` user's home directory:
```shell-session
docker exec <CONTAINER-ID> wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
```shell-session
kubectl exec -ti <NAME> -- wget https://github.com/moparisthebest/static-curl/releases/latest/download/curl-amd64 -O /home/vault/curl && chmod +x /home/vault/curl
```
**NOTE:** When using this option you'll want to verify that the static binary comes from a trusted source.
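As a basic sanity check, you might compare the binary's checksum against the value published by the
upstream project before trusting it (a sketch; the expected digest must come from the release page):

```shell-session
$ docker exec <CONTAINER-ID> sha256sum /home/vault/curl
```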
### Request limiter configuration removal
Vault 1.16.0 included an experimental request limiter. The limiter was disabled
by default with an opt-in `request_limiter` configuration.
Further testing indicated that an alternative approach improves performance and
reduces risk for many workloads. Vault 1.17.0 included a new [adaptive overload
protection](/vault/docs/concepts/adaptive-overload-protection) feature that
prevents outages when Vault is overwhelmed by write requests.
Adaptive overload protection was a beta feature in 1.17.0.
As of Vault 1.18.0, the adaptive overload protection feature for writes is
now GA and enabled by default for the integrated storage backend.
The beta `request_limiter` configuration stanza is officially removed in Vault 1.18.0.
Vault will output two types of warnings if the `request_limiter` stanza is
detected in your Vault config.
1. A UI warning message printed to `stderr`:
```text
WARNING: Request Limiter configuration is no longer supported; overriding server configuration to disable
```
2. A log line with level `WARN`, appearing in Vault's logs:
```text
... [WARN] unknown or unsupported field request_limiter found in configuration at config.hcl:22:1
```
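Before upgrading, you can confirm whether any of your configuration files still contain the removed
stanza; a simple search such as the following (configuration path is illustrative) makes the cleanup
easy to verify:

```shell-session
$ grep -rn 'request_limiter' /etc/vault.d/
```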
### Product usage reporting
As of 1.18.2, Vault will collect anonymous product usage metrics for HashiCorp. This information will be collected
alongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual
reports if manual reporting is preferred.
See the main page for [Vault product usage metrics reporting](/vault/docs/enterprise/license/product-usage-reporting) for
more details, and information about opt-out.
## Known issues and workarounds
@include 'known-issues/duplicate-hsm-key.mdx'
---
layout: docs
page_title: Upgrading to Vault 0.10.0 - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 0.10.0. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 0.10.0 compared to 0.9.0. Please read it carefully.
## Changes since 0.9.6
### Database plugin compatibility
The database plugin interface was enhanced to support some additional
functionality related to root credential rotation and supporting templated
URL strings. The changes were made in a backwards-compatible way and all
builtin plugins were updated with the new features. Custom plugins not built
into Vault will need to be upgraded to support templated URL strings and
root rotation. Additionally, the Initialize method was deprecated in favor
of a new Init method that supports passing configuration modifications made in
the plugin back to the primary data store.
### Removal of returned secret information
For a long time Vault has returned configuration given to various secret
engines and auth methods with secret values (such as secret API keys or
passwords) still intact, and with a warning to the user on write that anyone
with read access could see the secret. This was mostly done to make it easy for
tools like Terraform to judge whether state had drifted. However, it also feels
quite un-Vault-y to do this and we've never felt very comfortable doing so. In
0.10 we have gone through and removed this behavior from the various backends;
fields which contained secret values are simply no longer returned on read. We
are working with the Terraform team to make changes to their provider to
accommodate this as best as possible, and users of other tools may have to make
adjustments, but in the end we felt that the ends did not justify the means and
we needed to prioritize security over operational convenience.
### LDAP auth method case sensitivity
We now treat usernames and groups configured locally for policy assignment in a
case insensitive fashion by default. Existing configurations will continue to
work as they do now; however, the next time a configuration is written
`case_sensitive_names` will need to be explicitly set to `true`.
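For example, to keep case-sensitive matching the next time you update the mount configuration, include
the flag explicitly (the other parameters shown are illustrative placeholders for your existing settings):

```shell-session
$ vault write auth/ldap/config \
    url="ldap://ldap.example.com" \
    userdn="ou=Users,dc=example,dc=com" \
    case_sensitive_names=true
```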
### TTL handling moved to core
All lease TTL handling has been centralized within the core of Vault to ensure
consistency across all backends. Since this was previously delegated to
individual backends, there may be some slight differences in TTLs generated
from some backends.
### Default `secret/` mount is deprecated
In 0.12 we will stop mounting `secret/` by default at initialization time (it
will still be available in `dev` mode).
## Full list since 0.9.0
### Change to AWS role output
The AWS authentication backend now allows binds for inputs as either a
comma-delimited string or a string array. However, to keep consistency with
input and output, when reading a role the binds will now be returned as string
arrays rather than strings.
### Change to AWS IAM auth ARN prefix matching
In order to prefix-match IAM role and instance profile ARNs in AWS auth
backend, you now must explicitly opt-in by adding a `*` to the end of the ARN.
Existing configurations will be upgraded automatically, but when writing a new
role configuration the updated behavior will be used.
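For example, a newly written role that should prefix-match must now spell out the trailing wildcard
(the account ID, role name, and policy are illustrative):

```shell-session
$ vault write auth/aws/role/dev-apps \
    auth_type=iam \
    bound_iam_principal_arn="arn:aws:iam::123456789012:role/dev-*" \
    policies="dev"
```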
### Backwards compatible CLI changes
This upgrade guide is typically reserved for breaking changes, however it
is worth calling out that the CLI interface to Vault has been completely
revamped while maintaining backwards compatibility. This could lead to
potential confusion while browsing the latest version of the Vault
documentation on vaultproject.io.
All previous CLI commands should continue to work and are backwards
compatible in almost all cases.
Documentation for previous versions of Vault can be accessed using
the GitHub interface by browsing tags (eg [0.9.1 website tree](https://github.com/hashicorp/vault/tree/v0.9.1/website)) or by
[building the Vault website locally](https://github.com/hashicorp/vault/tree/v0.9.1/website#running-the-site-locally).
### `sys/health` DR secondary reporting
The `replication_dr_secondary` bool returned by `sys/health` could be
misleading since it would be `false` both when a cluster was not a DR secondary
but also when the node is a standby in the cluster and has not yet fully
received state from the active node. This could cause health checks on LBs to
decide that the node was acceptable for traffic even though DR secondaries
cannot handle normal Vault traffic. (In other words, the bool could only convey
"yes" or "no" but not "not sure yet".) This has been replaced by
`replication_dr_mode` and `replication_perf_mode` which are string values that
convey the current state of the node; a value of `disabled` indicates that
replication is disabled or the state is still being discovered. As a result, an
LB check can positively verify that the node is both not `disabled` and is not
a DR secondary, and avoid sending traffic to it if either is true.
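For instance, a health-check script can read the endpoint and inspect the new string fields described
above (a sketch; `jq` is assumed to be available on the checking host):

```shell-session
$ curl -s "$VAULT_ADDR/v1/sys/health" | jq '.replication_dr_mode, .replication_perf_mode'
```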
### PKI secret backend roles parameter types
For `ou` and `organization` in role definitions in the PKI secret backend,
input can now be a comma-separated string or an array of strings. Reading a
role will now return arrays for these parameters.
### Plugin API changes
The plugin API has been updated to utilize golang's context.Context package.
Many function signatures now accept a context object as the first parameter.
Existing plugins will need to pull in the latest Vault code and update their
function signatures to begin using context and the new gRPC transport.
### AppRole case sensitivity
In prior versions of Vault, `list` operations against AppRole roles would
require preserving case in the role name, even though most other operations
within AppRole are case-insensitive with respect to the role name. This has
been fixed; existing roles will behave as they have in the past, but new roles
will act case-insensitively in these cases.
### Token auth backend roles parameter types
For `allowed_policies` and `disallowed_policies` in role definitions in the
token auth backend, input can now be a comma-separated string or an array of
strings. Reading a role will now return arrays for these parameters.
### Transit key exporting
You can now mark a key in the `transit` backend as `exportable` at any time,
rather than just at creation time; however, once this value is set, it still
cannot be unset.
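For example, to mark an existing key as exportable after the fact (the key name is illustrative):

```shell-session
$ vault write transit/keys/my-key/config exportable=true
```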
### PKI secret backend roles parameter types
For `allowed_domains` and `key_usage` in role definitions in the PKI secret
backend, input can now be a comma-separated string or an array of strings.
Reading a role will now return arrays for these parameters.
### SSH dynamic keys method defaults to 2048-bit keys
When using the dynamic key method in the SSH backend, the default is now to use
2048-bit keys if no specific key bit size is specified.
### Consul secret backend lease handling
The `consul` secret backend can now accept both strings and integer numbers of
seconds for its lease value. The value returned on a role read will be an
integer number of seconds instead of a human-friendly string.
### Unprintable characters not allowed in API paths
Unprintable characters are no longer allowed in names in the API (paths and
path parameters), with an extra restriction on whitespace characters. Allowed
characters are those that are considered printable by Unicode plus spaces.
---
layout: docs
page_title: Upgrading to Vault 1.2.0 - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 1.2.0. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 1.2.0 compared to 1.1.0. Please read it carefully.
## Known issues
### AppRole upgrade issue
Due to a bug, on upgrade AppRole roles cannot be read properly. If using AppRole, do not upgrade until this issue is fixed in 1.2.1.
## Changes/Deprecations
### Path character handling
Due to underlying changes in Go's runtime past version 1.11.5, Vault is now
stricter about what characters it will accept in path names. Whereas before it
would filter out unprintable characters (and this could be turned off), control
characters and other invalid characters are now rejected within Go's HTTP
library before the request is passed to Vault, and this cannot be disabled. To
continue using these (e.g. for already-written paths), they must be properly
percent-encoded (e.g. `\r` becomes `%0D`, `\x00` becomes `%00`, and so on).
### AWSKMS seal region
The user-configured regions on the AWSKMS seal stanza will now be preferred
over regions set in the enclosing environment.
### Audit logging of empty values
All values in audit logs are now omitted if they are empty. This helps reduce
the size of audit log entries by not reproducing keys in each entry that
commonly don't contain any value, which can help in cases where audit log
entries are above the maximum UDP packet size and others.
### Rollback logging
Rollback will no longer display log messages when it runs; it will only display
messages if an error occurs.
### Database plugins
Database plugins now default to 4 max open connections rather than 2. This
should be safe in nearly all cases and fixes some issues where a single
operation could fail with the default configuration because it needed three
connections just for that operation. However, this could result in an increase
in held open file descriptors for each database configuration, so ensure that
there is sufficient overhead.
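If a particular database configuration should keep the previous cap, the limit can still be set
explicitly when writing the connection (the connection details shown are illustrative):

```shell-session
$ vault write database/config/my-postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="readonly" \
    connection_url="postgresql://vault:example-password@db.example.com:5432/postgres" \
    max_open_connections=2
```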
### AppRole various changes
- AppRole uses new, common token fields for values that overlap with other auth
methods. `period` and `policies` will continue to work, with priority being
given to the `token_` prefixed versions of these fields, but the values for
those will only be returned on read if they were set initially.
- `default` is no longer automatically added to policies after submission. It
was a no-op anyways since Vault's core would always add it, and changing this
behavior allows AppRole to support the new `token_no_default_policy`
parameter
- The long-deprecated `bound_cidr_list` is no longer returned when reading a
role.
### Token store roles changes
Token store roles use new, common token fields for the values that overlap with
other auth backends. `period`, `explicit_max_ttl`, and `bound_cidrs` will
continue to work, with priority being given to the `token_` prefixed versions
of those parameters. They will also be returned when doing a read on the role
if they were used to provide values initially; however, in Vault 1.4 if
`period` or `explicit_max_ttl` is zero they will no longer be returned.
(`explicit_max_ttl` was already not returned if empty.)
### Go API/SDK changes
Vault now uses Go's official dependency management system, Go Modules, to
manage dependencies. As a result to both reduce transitive dependencies for API
library users and plugin authors, and to work around various conflicts, we have
moved various helpers around, mostly under an `sdk/` submodule. A couple of
functions have also moved from plugin helper code to the `api/` submodule. If
you are a plugin author, take a look at some of our official plugins and the
paths they are importing for guidance.
### Change in LDAP group CN handling
A bug fix put in place in Vault 1.1.1 to allow group CNs to be found from an
LDAP server in lowercase `cn` as well as uppercase `CN` had an unintended
consequence. If prior to that a group used `cn`, as in `cn=foo,ou=bar` then the
group that would need to be put into place in the LDAP plugin to match against
policies is `cn=foo,ou=bar` since the CN would not be correctly found. After
the change, the CN was correctly found, but this would result in the group name
being parsed as `foo` and would not match groups using the full DN. In 1.1.5+,
there is a boolean config setting `use_pre111_group_cn_behavior` to allow
reverting to the old matching behavior; we also attempt to upgrade existing
configs to have that defaulted to true.
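For example, to revert to the pre-1.1.1 matching behavior explicitly when re-writing the mount
configuration (the other parameters are illustrative placeholders for your existing settings):

```shell-session
$ vault write auth/ldap/config \
    url="ldap://ldap.example.com" \
    use_pre111_group_cn_behavior=true
```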
### JWT/OIDC plugin
Logins of role_type "oidc" via the /login path are no longer allowed.
### ACL wildcards
New ordering put into place in Vault 1.1.1 defines which policy wins when there
are multiple inexact matches and at least one path contains `+`. `+*` is now
illegal in policy paths. The previous behavior simply selected any matching
segment-wildcard path that matched.
### Replication
Due to technical limitations, mounting and unmounting was not previously
possible from a performance secondary. These have been resolved, and these
operations may now be run from a performance secondary.
---
layout: docs
page_title: Upgrading to Vault 0.6.1 - Guides
description: |-
This page contains the list of breaking changes for Vault 0.6.1. Please read
it carefully.
---
# Overview
This page contains the list of breaking changes for Vault 0.6.1. Please read it
carefully.
## Standby nodes must be 0.6.1 as well
Once an active node is running 0.6.1, only standby nodes running 0.6.1+ will be
able to form an HA cluster. If following our [general upgrade
instructions](/vault/docs/upgrading) this will
not be an issue.
## Health endpoint status code changes
Prior to 0.6.1, the health endpoint would return a `500` (Internal Server
Error) for both a sealed and uninitialized state. In both states this was
confusing, since it was hard to tell, based on the status code, an actual
internal error from Vault from a Vault that was simply uninitialized or sealed,
not to mention differentiating between those two states.
In 0.6.1, a sealed Vault will return a `503` (Service Unavailable) status code.
As before, this can be adjusted with the `sealedcode` query parameter. An
uninitialized Vault will return a `501` (Not Implemented) status code. This can
be adjusted with the `uninitcode` query parameter.
This removes ambiguity/confusion and falls more in line with the intention of
each status code (including `500`).
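If your load balancer expects different codes, both values can be overridden per request; a sketch
with illustrative status codes:

```shell-session
$ curl -i "$VAULT_ADDR/v1/sys/health?sealedcode=204&uninitcode=204"
```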
## Root token creation restrictions
Root tokens (tokens with the `root` policy) can no longer be created except by
another root token or the
[`generate-root`](/vault/api-docs/system/generate-root)
endpoint or CLI command.
## PKI backend certificates will contain default key usages
Issued certificates from the `pki` backend against roles created or modified
after upgrading will contain a set of default key usages. This increases
compatibility with some software that requires strict adherence to RFCs, such
as OpenVPN.
This behavior is fully adjustable; see the [PKI backend
documentation](/vault/docs/secrets/pki) for
details.
## DynamoDB does not support HA by default
If using DynamoDB and want to use HA support, you will need to explicitly
enable it in Vault's configuration; see the
[documentation](/vault/docs/configuration#ha_storage)
for details.
If you are already using DynamoDB in an HA fashion and wish to keep doing so,
it is _very important_ that you set this option **before** upgrading your Vault
instances. Without doing so, each Vault instance will believe that it is
standalone and there could be consistency issues.
## LDAP auth method forgets bind password and insecure TLS settings
Due to a bug, these two settings are forgotten if they have been configured in
the LDAP backend prior to 0.6.1. If you are using these settings with LDAP,
please be sure to re-submit your LDAP configuration to Vault after the upgrade.
Make sure you have a valid token to do so before upgrading if you are relying on
LDAP authentication for permissions to modify the backend itself.
## LDAP auth method does not search `memberOf`
The LDAP backend went from a model where all permutations of storing and
filtering groups were tried in all cases to one where specific filters are
defined by the administrator. This vastly increases overall directory
compatibility, especially with Active Directory when using nested groups, but
unfortunately has the side effect that `memberOf` is no longer searched for by
default, which is a breaking change for many existing setups.
`Scenario 2` in the [updated
documentation](/vault/docs/auth/ldap) shows an
example of configuring the backend to query `memberOf`. It is recommended that
a test Vault server be set up and that successful authentication can be
performed using the new configuration before upgrading a primary or production
Vault instance.
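As a rough sketch of the kind of configuration `Scenario 2` describes, a directory that exposes
`memberOf` on user objects might be queried like this (all values are illustrative and must match
your directory layout):

```shell-session
$ vault write auth/ldap/config \
    url="ldap://ldap.example.com" \
    userdn="ou=Users,dc=example,dc=com" \
    userattr="uid" \
    groupdn="ou=Users,dc=example,dc=com" \
    groupfilter="(&(objectClass=person)(uid={{.Username}}))" \
    groupattr="memberOf"
```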
In addition, if LDAP is relied upon for authentication, operators should ensure
that they have valid tokens with policies allowing modification of LDAP
parameters before upgrading, so that once an upgrade is performed, the new
configuration can be specified successfully.
## App-ID is deprecated
With the addition of the new [AppRole
backend](/vault/docs/auth/approle), App-ID is
deprecated. There are no current plans to remove it, but we encourage using
AppRole whenever possible, as it offers enhanced functionality and can
accommodate many more types of authentication paradigms. App-ID will receive
security-related fixes only.
---
layout: docs
page_title: Upgrading to Vault 0.11.0 - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 0.11.0. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 0.11.0 compared to 0.10.0. Please read it carefully.
## Known issues
### Nomad integration
Users that integrate Vault with Nomad should hold off on upgrading. A modification to
Vault's API is causing a runtime issue with the Nomad to Vault integration.
### Minified JSON policies
Users that generate policies in minified JSON may encounter parsing errors due to
a regression in the policy parser when it encounters repeating brackets. Although
HCL is the official language for policies in Vault, HCL is JSON compatible and JSON
should work in place of HCL. To work around this error, pretty print the JSON policies
or add spaces between repeating brackets. This regression will be addressed in
a future release.
### Common mount prefixes
Before running the upgrade, users should run `vault secrets list` and `vault auth list`
to check their mount table to ensure that mounts do not have common prefix "folders".
For example, if there is a mount with path `team1/` and a mount with path `team1/secrets`,
Vault will fail to unseal. Before upgrade, these mounts must be remounted at a path that
does not share a common prefix.
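For example, one way to relocate a conflicting mount before upgrading (the paths are illustrative):

```shell-session
$ vault secrets move team1/secrets team1-secrets
```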
## Changes since 0.10.4
### Request timeouts
A default request timeout of 90s is now enforced. This setting can be
overwritten in the config file. If you anticipate requests taking longer than
90s this setting should be configured before upgrading.
### `sys/` top level injection
For the last two years for backwards compatibility data for various `sys/`
routes has been injected into both the Secret's Data map and into the top level
of the JSON response object. However, this has some subtle issues that pop up
from time to time and is becoming increasingly complicated to maintain, so it's
finally being removed.
### Path fallback for list operations
For a very long time Vault has automatically adjusted `list` operations to
always end in a `/`, as list operations operate on prefixes, so all list
operations by definition end with `/`. This was done server-side, so it affects all
clients. However, this has also led to a lot of confusion for users writing
policies that assume that the path that they use in the CLI is the path used
internally. Starting in 0.11, ACL policies gain a new fallback rule for
listing: they will use a matching path ending in `/` if available, but if not
found, they will look for the same path without a trailing `/`. This allows
putting `list` capabilities in the same path block as most other capabilities
for that path, while not providing any extra access if `list` wasn't actually
provided there.
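For example, with the fallback in place a single path block can grant listing alongside the other
capabilities (the policy name and path are illustrative):

```shell-session
$ vault policy write team1-read - <<EOF
path "secret/team1" {
  capabilities = ["read", "list"]
}
EOF
```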
### Performance standbys on by default
If your flavor/license of Vault Enterprise supports Performance Standbys, they
are on by default. You can disable this behavior per-node with the
`disable_performance_standby` configuration flag.
### AWS secret engine roles
Roles in the AWS Secret Engine were previously ambiguous. For example, if the
`arn` parameter had been specified, that could have been interpreted as the ARN
of an AWS IAM policy to attach to an IAM user or it could have been the ARN of
an AWS role to assume. Now, types are explicit, both in terms of what
credential type is being requested (e.g., an IAM User or an Assumed Role?) as
well as the parameters being sent to Vault (e.g., the IAM policy document
attached to an IAM user or used during a GetFederationToken call). All
credential retrieval remains backwards compatible as does updating role data.
However, the data returned when reading role data is now different and
breaking, so anything which reads role data out of Vault will need to be
updated to handle the new role data format.
While creating/updating roles remains backwards compatible, the old parameters
are now considered deprecated. You should use the new parameters as documented
in the API docs.
As part of this, the `/aws/creds/` and `/aws/sts/` endpoints have been merged,
with the behavior only differing as specified below. The `/aws/sts/` endpoint
is considered deprecated and should only be used when needing backwards
compatibility.
All roles will be automatically updated to the new role format when accessed.
However, due to the way role data was previously being stored in Vault, it's
possible that invalid data was stored that both makes the upgrade impossible and
would have made the role unable to retrieve credentials. In this
situation, the previous role data is returned in an `invalid_data` key so you
can inspect what used to be in the role and correct the role data if desired.
One consequence of the prior AWS role storage format is that a single Vault
role could have led to two different AWS credential types being retrieved when
a `policy` parameter was stored. In this case, these legacy roles will be
allowed to retrieve both IAM User and Federation Token credentials, with the
credential type depending on the path used to access it (IAM User if accessed
via the `/aws/creds/<role_name>` endpoint and Federation Token if accessed via
the `/aws/sts/<role_name>` endpoint).
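For example, a newly created role now states its credential type explicitly (the role name and policy
file are illustrative):

```shell-session
$ vault write aws/roles/deploy \
    credential_type=iam_user \
    policy_document=@deploy-policy.json
```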
## Full list since 0.10.0
### Revocations of dynamic secrets leases now asynchronous
Dynamic secret lease revocations are now queued/asynchronous rather
than synchronous. This allows Vault to take responsibility for revocation
even if the initial attempt fails. The previous synchronous behavior can be
attained via the `-sync` CLI flag or `sync` API parameter. When in
synchronous mode, if the operation results in failure it is up to the user
to retry.
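For example, to keep the old blocking behavior for a one-off revocation (the lease ID is a placeholder):

```shell-session
$ vault lease revoke -sync aws/creds/deploy/4bb1a32cfb9b4fd0
```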
### CLI retries
The CLI will no longer retry commands on 5xx errors. This was a
source of confusion to users as to why Vault would "hang" before returning a
5xx error. The Go API client still defaults to two retries.
### Identity entity alias metadata
You can no longer manually set metadata on
entity aliases. All alias data (except the canonical entity ID it refers to)
is intended to be managed by the plugin providing the alias information, so
allowing it to be set manually didn't make sense.
### Convergent encryption version 3
If you are using `transit`'s convergent encryption feature, which prior to this
release was at version 2, we recommend
[rotating](/vault/api-docs/secret/transit#rotate-key)
your encryption key (the new key will use version 3) and
[rewrapping](/vault/api-docs/secret/transit#rewrap-data)
your data to mitigate the chance of offline plaintext-confirmation attacks.
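A sketch of the recommended rotate-then-rewrap sequence (the key name, ciphertext, and context are
placeholders):

```shell-session
$ vault write -f transit/keys/orders/rotate
$ vault write transit/rewrap/orders \
    ciphertext="vault:v2:<existing-ciphertext>" \
    context="<base64-derivation-context>"
```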
### PKI duration return types
The PKI backend now returns durations (e.g. when reading a role) as an integer
number of seconds instead of a Go-style string. | vault | layout docs page title Upgrading to Vault 0 11 0 Guides description This page contains the list of deprecations and important or breaking changes for Vault 0 11 0 Please read it carefully Overview This page contains the list of deprecations and important or breaking changes for Vault 0 11 0 compared to 0 10 0 Please read it carefully Known issues Nomad integration Users that integrate Vault with Nomad should hold off on upgrading A modification to Vault s API is causing a runtime issue with the Nomad to Vault integration Minified JSON policies Users that generate policies in minfied JSON may cause a parsing errors due to a regression in the policy parser when it encounters repeating brackets Although HCL is the official language for policies in Vault HCL is JSON compatible and JSON should work in place of HCL To work around this error pretty print the JSON policies or add spaces between repeating brackets This regression will be addressed in a future release Common mount prefixes Before running the upgrade users should run vault secrets list and vault auth list to check their mount table to ensure that mounts do not have common prefix folders For example if there is a mount with path team1 and a mount with path team1 secrets Vault will fail to unseal Before upgrade these mounts must be remounted at a path that does not share a common prefix Changes since 0 10 4 Request timeouts A default request timeout of 90s is now enforced This setting can be overwritten in the config file If you anticipate requests taking longer than 90s this setting should be configured before upgrading sys top level injection For the last two years for backwards compatibility data for various sys routes has been injected into both the Secret s Data map and into the top level of the JSON response object However this has some subtle issues that pop up from time to time and is becoming increasingly complicated to maintain so it s finally being removed Path fallback for list operations For a very long time Vault has automatically adjusted list operations to always end in a as list operations operates on prefixes so all list operations by definition end with This was done server side so affects all clients However this has also led to a lot of confusion for users writing policies that assume that the path that they use in the CLI is the path used internally Starting in 0 11 ACL policies gain a new fallback rule for listing they will use a matching path ending in if available but if not found they will look for the same path without a trailing This allows putting list capabilities in the same path block as most other capabilities for that path while not providing any extra access if list wasn t actually provided there Performance standbys on by default If your flavor license of Vault Enterprise supports Performance Standbys they are on by default You can disable this behavior per node with the disable performance standby configuration flag AWS secret engine roles Roles in the AWS Secret Engine were previously ambiguous For example if the arn parameter had been specified that could have been interpreted as the ARN of an AWS IAM policy to attach to an IAM user or it could have been the ARN of an AWS role to assume Now types are explicit both in terms of what credential type is being requested e g an IAM User or an Assumed Role as well as the parameters being sent to vault e g the IAM policy document attached to an IAM user or used during a GetFederationToken call All 
---
layout: docs
page_title: Vault HA upgrades without Autopilot Upgrade Automation (Pre 1.11)
description: |-
  Upgrade instructions for Vault HA clusters prior to 1.11, or for Vault without autopilot upgrade automation enabled. Be sure to read the Upgrading Vault guides as well.
---
# Vault HA upgrades without autopilot upgrade automation (Pre 1.11)
This is our recommended upgrade procedure if **one** of the following applies:
- Running a Vault version earlier than 1.11
- Opting out of the [Autopilot automated upgrade](/vault/docs/concepts/integrated-storage/autopilot#automated-upgrade) features with Vault 1.11 or later
- Running Vault with an external storage backend such as Consul
You should consider how to apply the steps described in this document to your
particular setup, since HA setups differ in whether a load balancer is in use,
what addresses clients are given to connect to Vault (standby + leader,
leader-only, or discovered via service discovery), and so on.
If you are running on Vault 1.11+ with Integrated Storage and wish to enable the
Autopilot upgrade automation features, refer to the [automated
upgrades](/vault/docs/concepts/integrated-storage/autopilot#automated-upgrades)
documentation for details and the [Automate Upgrades with Vault
Enterprise](/vault/tutorials/raft/raft-upgrade-automation) tutorial for
additional guidance.
## HA installations
Regardless of the method you use, do not fail over from a newer version of Vault
to an older version. Our suggested procedure is designed to prevent this.
Please note that Vault does not support true zero-downtime upgrades, but with a
proper upgrade procedure the downtime should be very short (a few hundred
milliseconds to a second, depending on the speed of access to the storage
backend).
<Warning title="Important">
If you are currently running on Vault 1.11+ with Integrated Storage and have
chosen to opt out of the Autopilot automated upgrade features, please disable
Vault's default automated upgrade migrations feature. To do so, follow the
[Automate Upgrades with Vault Enterprise Autopilot
configuration](/vault/tutorials/raft/raft-upgrade-automation#autopilot-configuration)
tutorial. Without disabling this feature, you may run into a lost-quorum issue
as described in the [Quorum lost while upgrading the vault from 1.11.0 to later
version of
it](https://support.hashicorp.com/hc/en-us/articles/7122445204755-Quorum-lost-while-upgrading-the-vault-from-1-11-0-to-later-version-of-it)
article.
</Warning>
Perform these steps on each standby:
1. Properly shut down Vault on the standby node via `SIGINT` or `SIGTERM`
2. Replace the Vault binary with the new version; ensure that `mlock()`
capability is added to the new binary with
[setcap](/vault/docs/configuration#disable_mlock)
3. Start the standby node
4. Unseal the standby node
5. Verify that `vault status` shows the correct version and that the HA Mode is `standby`
6. Review the node's logs to ensure successful startup and unseal
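For example, on a systemd-managed Linux node, the standby steps above might look like the following sketch. The service name, binary path, and manual unseal shown here are assumptions; adjust them for your environment and unseal method.
```shell-session
$ sudo systemctl stop vault                          # systemd sends SIGTERM for a clean shutdown
$ sudo cp ./vault /usr/local/bin/vault               # replace the binary with the new version
$ sudo setcap cap_ipc_lock=+ep /usr/local/bin/vault  # restore the mlock() capability
$ sudo systemctl start vault
$ vault operator unseal                              # repeat until the unseal threshold is reached
$ vault status                                       # confirm the version and "HA Mode: standby"
```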
At this point all standby nodes are upgraded and ready to take over. The
upgrade will not complete until one of the upgraded standby nodes takes over
active duty.
To complete the cluster upgrade:
1. Properly shut down the remaining (active) node via `SIGINT` or `SIGTERM`
<Warning title="Important">
DO NOT attempt to issue a [step-down](/vault/docs/commands/operator/step-down)
operation at any time during the upgrade process.
</Warning>
<Note>
It is important that you shut the node down properly.
This will release the current leadership and the HA lock, allowing a standby
node to take over with a very short delay.
If you kill Vault without letting it release the lock, a standby node will
not be able to take over until the lock's timeout period has expired. This
is backend-specific but could be ten seconds or more.
</Note>
2. Replace the Vault binary with the new version; ensure that `mlock()`
capability is added to the new binary with
[setcap](/vault/docs/configuration#disable_mlock)
3. Start the node
4. Unseal the node
5. Verify that `vault status` shows the correct version and that the HA Mode is `standby`
6. Review the node's logs to ensure successful startup and unseal
Internal upgrade tasks will happen after one of the upgraded standby nodes
takes over active duty.
Be sure to also read and follow any instructions in the version-specific
upgrade notes.
## Enterprise replication installations
See the main [upgrading](/vault/docs/upgrading#enterprise-replication-installations) page.
---
layout: docs
page_title: Upgrading to Vault 1.9.x - Guides
description: |-
This page contains the list of deprecations and important or breaking changes
for Vault 1.9.x. Please read it carefully.
---
# Overview
This page contains the list of deprecations and important or breaking changes
for Vault 1.9.x compared to 1.8. Please read it carefully.
## OIDC provider
Vault 1.9.0 introduced the ability for Vault to be an OpenID Connect (OIDC) identity
provider. To support the feature, Vault's [default policy](/vault/docs/concepts/policies#default-policy)
was modified to include an ACL rule for its Authorization Endpoint. Due to the handling
of Vault's default policy during upgrades, existing deployments of Vault that are upgraded
to 1.9.0 will not have this required ACL rule.
If you're upgrading to 1.9.0 and want to use the new OIDC provider feature, the following
ACL rule must be added to the default policy **or** a policy associated with the Vault
[Auth Method](/vault/docs/auth) used to authenticate end-users during
the OIDC flow.
```hcl
# Allow a token to make requests to the authorization endpoint for OIDC providers.
path "identity/oidc/provider/+/authorize" {
capabilities = ["read", "update"]
}
```
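If you choose to add the rule to the default policy, one possible workflow is to read the current policy, append the rule, and write the result back. This is only a sketch; the default policy can be modified but not deleted, and the exact workflow is up to you.
```shell-session
$ vault policy read default > default.hcl
$ cat >> default.hcl <<'EOF'

# Allow a token to make requests to the authorization endpoint for OIDC providers.
path "identity/oidc/provider/+/authorize" {
  capabilities = ["read", "update"]
}
EOF
$ vault policy write default default.hcl
```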
## Identity tokens
The Identity secrets engine has changed the procedure for creating Identity
token roles. When creating a role, the key parameter is required and the key
must exist. Previously, it was possible to create a role and assign it a named
key that did not yet exist despite the documentation stating otherwise.
All calls to [create or update a role](/vault/api-docs/secret/identity/tokens#create-or-update-a-role)
must be checked to ensure that roles are not being created or updated with
non-existent keys.
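For example, the named key must now exist before any role that references it is created. The key and role names below are placeholders:
```shell-session
$ vault write identity/oidc/key/my-key allowed_client_ids="*"
$ vault write identity/oidc/role/my-role key="my-key" ttl="1h"
```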
## SSH role parameter `allowed_extensions` behavior change
Prior versions of Vault allowed clients to specify any extension when requesting
SSH certificate [signing requests](/vault/api-docs/secret/ssh#sign-ssh-key)
if their role had `allowed_extensions` set to `""` or the parameter was missing.
Now, Vault will reject a client request that specifies extensions if the role
parameter `allowed_extensions` is empty or missing from the role they are
associated with.
To re-enable the old behavior, update the affected roles and set the
`allowed_extensions` parameter to `"*"`, which allows clients to specify any
and all extensions.
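As a minimal sketch, assuming a certificate-signing role named `my-signing-role` on an SSH secrets engine mounted at `ssh/`; include the rest of the role's configuration when you update it, since unspecified parameters may revert to their defaults:
```shell-session
$ vault write ssh/roles/my-signing-role \
    key_type="ca" \
    allow_user_certificates=true \
    allowed_extensions="*"
```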
@include 'entity-alias-mapping.mdx'
## Deprecations
### HTTP request counter deprecation
In Vault 1.9, the internal HTTP Request count
[API](/vault/api-docs/v1.8.x/system/internal-counters#http-requests)
will be removed from the product. Calls to the endpoint will result in a 404
error with a message stating that `functionality on this path has been removed`.
Vault does not make backwards compatible guarantees on internal APIs (those
prefaced with `sys/internal`). They are subject to change and may disappear
without notice.
### Etcd v2
Support for Etcd v2 will be removed in Vault 1.10 (not this Vault
release, but the next one). The Etcd v2 API
was deprecated with the release of [Etcd
v3.5](https://etcd.io/blog/2021/announcing-etcd-3.5/), and will be
decommissioned in the Etcd v3.6 release.
Users upgrading to Vault 1.9 and planning to eventually upgrade to Vault 1.10
should prepare to [migrate](/vault/docs/commands/operator/migrate) Vault storage to
an Etcd v3 cluster prior to upgrading to Vault 1.10. All storage migrations
should have [backups](/vault/docs/concepts/storage#backing-up-vault-s-persisted-data)
taken prior to migration.
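As a rough sketch of what such a migration configuration could look like (the addresses are placeholders, and the migration must run while Vault is offline; consult the `operator migrate` and Etcd storage documentation for the full set of parameters):
```shell-session
$ cat > migrate.hcl <<'EOF'
storage_source "etcd" {
  address  = "https://etcd-v2.example.com:2379"
  etcd_api = "v2"
}

storage_destination "etcd" {
  address  = "https://etcd-v3.example.com:2379"
  etcd_api = "v3"
}
EOF
$ vault operator migrate -config=migrate.hcl
```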
## TLS cipher suites changes
In Vault 1.9, due to changes in Go 1.17, the `tls_prefer_server_cipher_suites`
TCP configuration parameter has been deprecated and its value will be ignored.
Additionally, Go has begun doing automated cipher suite ordering and no longer
respects the order of suites given in `tls_cipher_suites`.
See [this blog post](https://go.dev/blog/tls-cipher-suites) for more information.
@include 'pki-forwarding-bug.mdx'
## Known issues
@include 'raft-panic-old-tls-key.mdx'
### Identity token backend key rotations
Existing Vault installations that use the [Identity Token
backend](/vault/api-docs/secret/identity/tokens) and have [named
keys](/vault/api-docs/secret/identity/tokens#create-a-named-key) generated will
encounter a panic when any of those existing keys pass their
`rotation_period`. This issue affects Vault 1.9.0, and is fixed in Vault 1.9.1.
Users should upgrade directly to 1.9.1 or above in order to avoid this panic.
If a panic is encountered after an upgrade to Vault 1.9.0, the named key will be
corrupted on storage and become unusable. In this case, the key will need to be
deleted and re-created. A fix to fully mitigate this panic will be included in
Vault 1.9.3.
### Activity log Non-Entity tokens
When upgrading Vault from 1.8 (or earlier) to 1.9 (or later), client counts of [non-entity tokens](/vault/docs/concepts/client-count#non-entity-tokens) will only include the tokens used after the upgrade.
Starting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and token's policies to generate a unique identifier. Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether this token has been seen pre-1.9. To prevent inaccurate and inflated counts, the activity log will ignore any counts of non-entity tokens that were created before the upgrade and only the non-entity tokens from versions 1.9 and later will be counted.
Before upgrading, you should [query Vault usage metrics](/vault/tutorials/monitoring/usage-metrics#querying-usage-metrics) and report the usage data for billing purposes.
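For example, you can capture the current counts through the activity endpoint before upgrading; the timestamps below are placeholders for your billing period:
```shell-session
$ curl --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/internal/counters/activity?start_time=2021-01-01T00:00:00Z&end_time=2021-10-31T23:59:59Z"
```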
See the client count [overview](/vault/docs/concepts/client-count) and [FAQ](/vault/docs/concepts/client-count/faq) for more information.
---
layout: docs
page_title: Upgrading to Vault 0.5.0 - Guides
description: |-
This page contains the full list of breaking changes for Vault 0.5, including
actions you must take to facilitate a smooth upgrade path.
---
# Overview
This page contains the list of breaking changes for Vault 0.5. Please read it
carefully.
Please note that these are changes to Vault itself. Client libraries maintained
by HashiCorp have been updated with support for these changes, but if you are
using community-supported libraries, you should ensure that they are ready for
Vault 0.5 before upgrading.
## Rekey requires nonce
Vault now generates a nonce when a rekey operation is started in order to
ensure that the operation cannot be hijacked. The nonce is output when the
rekey operation is started and when rekey status is requested.
The nonce must be provided as part of the request parameters when providing an
unseal key. The nonce can be communicated from the request initiator to unseal
key holders via side channels; the unseal key holders can then verify the nonce
(by providing it) when they submit their unseal key.
As a convenience, if using the CLI interactively to provide the unseal key, the
nonce will be displayed for verification but the user will not be required to
manually re-type it.
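For example, an unseal key holder submitting a share through the API now includes the nonce that was reported when the rekey was initiated (a sketch with placeholder values):
```shell-session
$ curl --request PUT \
    --data '{"key": "<unseal-key-share>", "nonce": "<nonce-from-rekey-init>"}' \
    "$VAULT_ADDR/v1/sys/rekey/update"
```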
## `TTL` field in token lookup
Previously, the `ttl` field returned when calling `lookup` or `lookup-self` on
the token auth method displayed the TTL set at token creation. It
now displays the time remaining (in seconds) for the token's validity period.
The original behavior has been moved to a field named `creation_ttl`.
## Grace periods removed
Vault no longer uses grace periods internally for leases or token TTLs.
Previously these were set by backends and could differ greatly from one backend
to another, causing confusion. TTLs (the `lease_duration` field for a lease,
or, for a token lookup, the `ttl`) are now exact.
## `token-renew` CLI command
If the token given for renewal is the same as the token in use by the client,
the `renew-self` endpoint will be used in the API rather than the `renew`
endpoint. Since the `default` policy contains `auth/token/renew-self` this
makes it much more likely that the request will succeed rather than somewhat
confusingly failing due to a lack of permissions on `auth/token/renew`.
## `status` CLI command
The `status` CLI command now returns an exit code of `0` for an unsealed Vault
(as before), `2` for a sealed Vault, and `1` for an error. This keeps error
return codes consistent across commands.
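A quick way to check the code from a script:
```shell-session
$ vault status
$ echo $?   # 0 = unsealed, 2 = sealed, 1 = error
```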
## Transit upsertion behavior uses capabilities
Previously, attempting to encrypt with a key that did not exist would create a
key with default values. This was convenient but ultimately allowed a client to
potentially escape an ACL policy restriction, albeit without any dangerous
access. Now that Vault supports more granular capabilities in policies,
upsertion behavior is controlled by whether the client has the `create`
capability for the request (upsertion is allowed) or only the `update`
capability (upsertion is denied).
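For example, the following policy sketch grants encryption with existing transit keys only; without the `create` capability, encrypting with a key name that does not exist is rejected instead of upserting a new key:
```hcl
# Allow encryption with existing keys, but deny upsertion of new keys
# (no "create" capability on the encrypt path).
path "transit/encrypt/*" {
  capabilities = ["update"]
}
```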
## etcd physical backend uses `sync`
The `etcd` physical backend now supports `sync` functionality and it is turned
on by default, which maps to the upstream library's default. It can be
disabled; see the configuration page for information.
## S3 physical backend prefers environment variables
The `s3` physical backend now prefers environment variables over configuration
file variables. This matches the behavior of the rest of the backends and of
Vault generally.
## Lease default and renewal handling
All backends now honor system and mount-specific default and maximum lease
times, except when specifically overridden by backend configuration or role
parameters, or when doing so would not make sense (e.g. AWS STS tokens cannot
have a lifetime of greater than 1 hour).
This allows for a _much_ more uniform approach to managing leases on both the
operational side and the user side, and removes much ambiguity and uncertainty
resulting from backend-hardcoded limits.
However, this also means that the leases generated by the backends may return
significantly different TTLs in 0.5 than in previous versions, unless they have
been preconfigured. You can use the `mount-tune` CLI command or the
`/sys/mounts/<mount point>/tune` endpoint to adjust default and max TTL
behavior for any mount. This is supported in 0.4, so you can perform this
tuning before upgrading.
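For example, using the legacy CLI syntax referenced above (newer Vault versions use `vault secrets tune`; the mount path and TTLs are placeholders):
```shell-session
$ vault mount-tune -default-lease-ttl=768h -max-lease-ttl=768h secret/
```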
The following list details the ways in which lease handling has changed
per-backend. In all cases the "mount TTL" means the mount-specific value for
default or max TTL; however, if no value is set on a given mount, the system
default/max values are used. This lists only the changes; any lease-issuing
or renew function not listed here behaves the same as in 0.4.
(As a refresher: the default TTL is the amount of time that the initial
lease/token is valid for before it must be renewed; the maximum TTL is the
amount of time a lease or token is valid for before it can no longer be renewed
and must be reissued. A mount's maximum TTL can be more restrictive than the
system maximum TTL, but cannot be less restrictive.)
#### Credential (Auth) backends
- `github` – The renewal function now uses the backend's configured maximum
TTL, if set; otherwise, the mount maximum TTL is used.
- `ldap` – The renewal function now uses the mount default TTL instead of always
using one hour.
- `token` – Tokens can no longer be renewed forever; instead, they now honor the
mount default/max TTL.
- `userpass` – The renew function now uses the backend's configured maximum TTL,
if set; otherwise the mount maximum TTL is used.
#### Secrets engines
- `aws` – New IAM roles no longer always have a default TTL of one hour, instead
honoring the configured default if available and the mount default TTL if not
(renewal always used the configured values if available). STS tokens return a
TTL corresponding to the lifetime of the token in AWS and cannot be renewed.
- `cassandra` – `lease_grace_period` has been removed since Vault no longer uses
grace periods.
- `consul` – The mount default TTL is now used as the default TTL if there is no
backend configuration parameter. Renewal now uses the mount default and
maximum TTLs.
- `mysql` – The mount default TTL is now used as the default TTL if there is no
backend configuration parameter.
- `postgresql` – The mount default TTL is now used as the default TTL if there
is no backend configuration parameter. In addition, there is no longer any
grace period with the time configured for password expiration within Postgres
itself.
---
layout: docs
page_title: Sync secrets from Vault to Vercel Project
description: >-
Automatically sync and unsync the secrets from Vault to a Vercel project to centralize visibility and control of secrets lifecycle management.
---
# Sync secrets from Vault to Vercel Project
The Vercel Project sync destination allows Vault to safely synchronize secrets as Vercel environment variables.
This is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them
to connect directly with Vault. This guide walks you through the configuration process.
Prerequisites:
* Ability to read or create KVv2 secrets
* Ability to create Vercel tokens with access to modify project environment variables
* Ability to create sync destinations and associations on your Vault server
## Setup
1. If you do not already have a Vercel token, navigate to [your account settings](https://vercel.com/account/tokens) to
generate credentials with the necessary permissions to manage your project's environment variables.
1. Next, locate your project ID. You can find it under the `Settings` tab on your project's overview page.
1. Configure a sync destination with the access token and project ID obtained in the previous steps.
```shell-session
$ vault write sys/sync/destinations/vercel-project/my-dest \
access_token="$TOKEN" \
project_id="$PROJECT_ID" \
deployment_environments=development \
deployment_environments=preview \
deployment_environments=production
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
connection_details map[access_token:***** deployment_environments:[development preview production] project_id:<project-id>]
name my-dest
type vercel-project
```
</CodeBlockConfig>
## Usage
1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.
```shell-session
$ vault secrets enable -path='my-kv' kv-v2
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Success! Enabled the kv-v2 secrets engine at: my-kv/
```
</CodeBlockConfig>
1. Create secrets you wish to sync with a target Vercel project.
```shell-session
$ vault kv put -mount='my-kv' my-secret key1='val1'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
==== Secret Path ====
my-kv/data/my-secret
======= Metadata =======
Key Value
--- -----
created_time <timestamp>
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
1. Create an association between the destination and a secret to synchronize.
```shell-session
$ vault write sys/sync/destinations/vercel-project/my-dest/associations/set \
mount='my-kv' \
secret_name='my-secret'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
associated_secrets map[kv_1234/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]
store_name my-dest
store_type vercel-project
```
</CodeBlockConfig>
1. Navigate to your project's settings under the `Environment Variables` section to confirm your secret was successfully
created in your Vercel project.
Moving forward, any modification on the Vault secret will be propagated in near real time to its Vercel environment variable
counterpart. Creating a new secret version in Vault will overwrite the value in your Vercel Project. Deleting the secret
or the association in Vault will delete the secret on Vercel as well.
<Note>
Vault syncs secrets differently depending on whether you have configured
`secret-key` or `secret-path` [granularity](/vault/docs/sync#granularity):
- `secret-key` granularity splits KVv2 secrets from Vault into key-value pairs
and stores the pairs as distinct entries in Vercel. For example,
`secrets.key1="val1"` and `secrets.key2="val2"`.
- `secret-path` granularity stores secrets as a single JSON string that contains
all the associated key-value pairs. For example, `{"key1":"val1", "key2":"val2"}`.
Since Vercel projects limit environment variables to single-value secrets, the
sync granularity defaults to `secret-key`.
</Note>
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
---
layout: docs
page_title: Sync secrets from Vault to GCP Secret Manager
description: >-
Automatically sync and unsync the secrets from Vault to GCP Secret Manager to centralize visibility and control of secrets lifecycle management.
---
# Sync secrets from Vault to GCP Secret Manager
The Google Cloud Platform (GCP) Secret Manager sync destination allows Vault to safely synchronize secrets to your GCP projects.
This is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them
to connect directly with Vault. This guide walks you through the configuration process.
Prerequisites:
* Ability to read or create KVv2 secrets
* Ability to create GCP Service Account credentials with access to the Secret Manager
* Ability to create sync destinations and associations on your Vault server
## Setup
1. If you do not already have a Service Account, navigate to the IAM & Admin page in the Google Cloud console to
[create a new Service Account](https://cloud.google.com/iam/docs/service-accounts-create) with the
[necessary permissions](/vault/docs/sync/gcpsm#permissions). [Instructions](/vault/docs/sync/gcpsm#provision-service-account)
to provision this Service Account via Terraform can be found below.
1. Configure a sync destination with the Service Account JSON credentials created in the previous step. See docs for
[alternative ways](/vault/docs/secrets/gcp#authentication) to pass in the `credentials` parameter.
```shell-session
$ vault write sys/sync/destinations/gcp-sm/my-dest \
     credentials='@path/to/credentials.json' \
     replication_locations='us-east1,us-west1'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
connection_details map[credentials:***** replication_locations:us-east1,us-west1]
name my-dest
type gcp-sm
```
</CodeBlockConfig>
## Usage
1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.
```shell-session
$ vault secrets enable -path=my-kv kv-v2
```
**Output**:
<CodeBlockConfig hideClipboard>
```plaintext
Success! Enabled the kv-v2 secrets engine at: my-kv/
```
</CodeBlockConfig>
1. Create secrets you wish to sync with a target GCP Secret Manager.
```shell-session
$ vault kv put -mount=my-kv my-secret foo='bar'
```
**Output**:
<CodeBlockConfig hideClipboard>
```plaintext
==== Secret Path ====
my-kv/data/my-secret
======= Metadata =======
Key Value
--- -----
created_time <timestamp>
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
1. Create an association between the destination and a secret to synchronize.
```shell-session
$ vault write sys/sync/destinations/gcp-sm/my-dest/associations/set \
mount='my-kv' \
secret_name='my-secret'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
associated_secrets map[kv_1234/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]
store_name my-dest
store_type gcp-sm
```
</CodeBlockConfig>
1. Navigate to the [Secret Manager](https://console.cloud.google.com/security/secret-manager) in the Google Cloud console
to confirm your secret was successfully created in your GCP project.
Moving forward, any modification on the Vault secret will be propagated in near real time to its GCP Secret Manager
counterpart. Creating a new secret version in Vault will create a new version in GCP Secret Manager. Deleting the secret
or the association in Vault will delete the secret in your GCP project as well.
### Replication policy
GCP can target specific geographic regions to provide strict control over where
your applications store data and sync secrets. You can target specific GCP
regions for each sync destination during creation, which limits where Vault
writes secrets.
Regardless of the region limits on writes, synced secrets are always readable
globally when the client has the required permissions.
## Permissions
The credentials given to Vault must have the following permissions to synchronize secrets:
```plaintext
secretmanager.secrets.create
secretmanager.secrets.delete
secretmanager.secrets.update
secretmanager.versions.add
secretmanager.versions.destroy
```
## Provision service account
Vault needs to be configured with credentials to establish a trust relationship with your GCP project so it can manage
Secret Manager secrets on your behalf. The IAM & Admin page in the Google Cloud console can be used to
[create a new Service Account](https://cloud.google.com/iam/docs/service-accounts-create) with access to the Secret Manager.
You can equally use the [Terraform Google provider](https://registry.terraform.io/providers/hashicorp/google/latest/docs#authentication-and-configuration)
to provision a GCP Service Account with the appropriate policies.
1. Copy-paste this HCL snippet into a `secrets-sync-setup.tf` file.
```hcl
provider "google" {
// See https://registry.terraform.io/providers/hashicorp/google/latest/docs#authentication-and-configuration to setup the Google Provider
// for options on how to configure this provider. The following parameters or environment
// variables are typically used.
// Parameters
// region = "" (Optional)
// project = ""
// credentials = ""
// Environment Variables
// GOOGLE_REGION (optional)
// GOOGLE_PROJECT
// GOOGLE_CREDENTIALS (The path to a service account key file with the
// "Service Account Admin", "Service Account Key Admin",
// "Secret Manager Admin", and "Project IAM Admin" roles
// attached)
}
data "google_client_config" "config" {}
resource "google_service_account" "vault_secrets_sync_account" {
account_id = "gcp-sm-vault-secrets-sync"
description = "service account for Vault Secrets Sync feature"
}
// Production environments should use a more restricted role.
// The built-in secret manager admin role is used as an example for simplicity.
data "google_iam_policy" "vault_secrets_sync_iam_policy" {
binding {
role = "roles/secretmanager.admin"
members = [
      "serviceAccount:${google_service_account.vault_secrets_sync_account.email}",
]
}
}
resource "google_project_iam_member" "vault_secrets_sync_iam_member" {
project = data.google_client_config.config.project
role = "roles/secretmanager.admin"
member = google_service_account.vault_secrets_sync_account.member
}
resource "google_service_account_key" "vault_secrets_sync_account_key" {
service_account_id = google_service_account.vault_secrets_sync_account.name
public_key_type = "TYPE_X509_PEM_FILE"
}
resource "local_file" "vault_secrets_sync_credentials_file" {
content = base64decode(google_service_account_key.vault_secrets_sync_account_key.private_key)
filename = "gcp-sm-sync-service-account-credentials.json"
}
output "vault_secrets_sync_credentials_file_path" {
  value = abspath("${path.module}/${local_file.vault_secrets_sync_credentials_file.filename}")
}
```
1. Execute a plan to validate the Terraform Google provider is properly configured.
```shell-session
$ terraform init && terraform plan
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
(...)
Plan: 4 to add, 0 to change, 0 to destroy.
```
</CodeBlockConfig>
1. Execute an apply to provision the Service Account.
```shell-session
$ terraform apply
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
(...)
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
vault_secrets_sync_credentials_file_path = "/path/to/credentials/file/gcp-sm-sync-service-account-credentials.json"
```
</CodeBlockConfig>
The generated Service Account credentials file can then be used to configure the Vault GCP Secret Manager destination
following the [setup](/vault/docs/sync/gcpsm#setup) steps.
## Targeting specific GCP projects
By default, the target GCP project to sync secrets with is derived from the service
account JSON [credentials](/vault/api-docs/system/secrets-sync#credentials) or application
default credentials for a particular GCP sync destination. This means secrets will be synced
within the parent project of the configured service account.
In some cases, it's desirable to use a single service account or [workload identity](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances)
to sync secrets with any number of GCP projects within an organization. To achieve this,
you can set the `project_id` parameter to the target project to sync secrets with:
```shell-session
$ vault write sys/sync/destinations/gcp-sm/my-dest \
project_id='target-project-id'
```
This overrides the project ID derived from the service account JSON credentials or application
default credentials. The service account must be [authorized](https://cloud.google.com/iam/docs/service-account-overview#locations)
to perform Secret Manager actions in the target project.
## Access management
You can allow or restrict access to secrets based on
[IAM conditions](https://cloud.google.com/iam/docs/conditions-resource-attributes#resource-name)
against the fully-qualified resource name. For secrets in Secret Manager, a fully-qualified resource name must have the following
format:
`projects/<project_number>/secrets/<secret_name>`
<Tip title="Use the project number, not the project ID">
The project **number** is not the same as the project **ID**. Project numbers
are **numeric** while project IDs are **alphanumeric**. They can be found on
the Project info panel in the web dashboard or on the Welcome screen.
</Tip>
For example, the default secret name template prepends the word `vault` to the
beginning of secret names. To prevent Vault from modifying secrets that were not
created by a sync operation, you can use a role binding against the resource
name with the `startsWith` condition:
<CodeBlockConfig hideClipboard>
resource.name.startsWith("projects/<project_number>/secrets/vault")
</CodeBlockConfig>
To prevent out-of-band overwrites, simply add a negative condition with `!` on any
write-access role bindings not being used by Vault that contain Secret Manager permissions:
<CodeBlockConfig hideClipboard>
!(resource.name.startsWith("projects/<project_number>/secrets/vault"))
</CodeBlockConfig>
To add conditions to IAM principals in GCP, click "+ADD IAM CONDITION" on the **Assign Roles** screen.
![Assign Roles screen in GCP with the "+ADD IAM CONDITION" link circled in red](/img/gcp-add-iam-conditions_light.png#light-theme-only)
![Assign Roles screen in GCP with the "+ADD IAM CONDITION" link circled in red](/img/gcp-add-iam-conditions_dark.png#dark-theme-only)
<Tip title="Refer to Google's Overview of IAM Conditions documentation">
[Google's documentation](https://cloud.google.com/iam/docs/conditions-overview) on IAM Conditions provides
further information on how they work and how they should be used, as well as their limits.
</Tip>
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
which will limit where Vault writes secrets Regardless of the region limits on writes synced secrets are always readable globally when the client has the required permissions Permissions The credentials given to Vault must have the following permissions to synchronize secrets shell session secretmanager secrets create secretmanager secrets delete secretmanager secrets update secretmanager versions add secretmanager versions destroy Provision service account Vault needs to be configured with credentials to establish a trust relationship with your GCP project so it can manage Secret Manager secrets on your behalf The IAM Admin page in the Google Cloud console can be used to create a new Service Account https cloud google com iam docs service accounts create with access to the Secret Manager You can equally use the Terraform Google provider https registry terraform io providers hashicorp google latest docs authentication and configuration to provision a GCP Service Account with the appropriate policies 1 Copy paste this HCL snippet into a secrets sync setup tf file hcl provider google See https registry terraform io providers hashicorp google latest docs authentication and configuration to setup the Google Provider for options on how to configure this provider The following parameters or environment variables are typically used Parameters region Optional project credentials Environment Variables GOOGLE REGION optional GOOGLE PROJECT GOOGLE CREDENTIALS The path to a service account key file with the Service Account Admin Service Account Key Admin Secret Manager Admin and Project IAM Admin roles attached data google client config config resource google service account vault secrets sync account account id gcp sm vault secrets sync description service account for Vault Secrets Sync feature Production environments should use a more restricted role The built in secret manager admin role is used as an example for simplicity data google iam policy vault secrets sync iam policy binding role roles secretmanager admin members google service account vault secrets sync account email resource google project iam member vault secrets sync iam member project data google client config config project role roles secretmanager admin member google service account vault secrets sync account member resource google service account key vault secrets sync account key service account id google service account vault secrets sync account name public key type TYPE X509 PEM FILE resource local file vault secrets sync credentials file content base64decode google service account key vault secrets sync account key private key filename gcp sm sync service account credentials json output vault secrets sync credentials file path value abspath path module local file sync service account credentials file filename 1 Execute a plan to validate the Terraform Google provider is properly configured shell session terraform init terraform plan Output CodeBlockConfig hideClipboard plaintext Plan 4 to add 0 to change 0 to destroy CodeBlockConfig 1 Execute an apply to provision the Service Account shell session terraform apply Output CodeBlockConfig hideClipboard plaintext Apply complete Resources 4 added 0 changed 0 destroyed Outputs sync service account credentials file path to credentials file gcp sm sync service account credentials json CodeBlockConfig The generated Service Account credentials file can then be used to configure the Vault GCP Secret Manager destination following the setup vault docs sync gcpsm setup steps Targeting 
specific GCP projects By default the target GCP project to sync secrets with is derived from the service account JSON credentials vault api docs system secrets sync credentials or application default credentials for a particular GCP sync destination This means secrets will be synced within the parent project of the configured service account In some cases it s desirable to use a single service account or workload identity https cloud google com compute docs access create enable service accounts for instances to sync secrets with any number of GCP projects within an organization To achieve this you can set the project id parameter to the target project to sync secrets with shell session vault write sys sync destinations gcp sm my dest project id target project id This overrides the project ID derived from the service account JSON credentials or application default credentials The service account must be authorized https cloud google com iam docs service account overview locations to perform Secret Manager actions in the target project Access management You can allow or restrict access to secrets based on IAM conditions https cloud google com iam docs conditions resource attributes resource name against the fully qualified resource name For secrets in Secret Manager a fully qualified resource name must have the following format projects project number secrets secret name Tip title Use the project number not the project ID The project number is not the same as the project ID Project numbers are numeric while project IDs are alphanumeric They can be found on the Project info panel in the web dashboard or on the Welcome screen Tip For example the default secret name template prepends the word vault to the beginning of secret names To prevent Vault from modifying secrets that were not created by a sync operation you can use a role binding against the resource name with the startsWith condition CodeBlockConfig hideClipboard resource name startsWith projects project number secrets vault CodeBlockConfig To prevent out of band overwrites simply add a negative condition with on any write access role bindings not being used by Vault that contain Secret Manager permissions CodeBlockConfig hideClipboard resource name startsWith projects project number secrets vault CodeBlockConfig To add conditions to IAM principles in GCP click ADD IAM CONDITION on the Assign Roles screen Assign Roles screen in GCP with the ADD IAM CONDITION link circled in red img gcp add iam conditions light png light theme only Assign Roles screen in GCP with the ADD IAM CONDITION link circled in red img gcp add iam conditions dark png dark theme only Tip title Refer to Google s Overview of IAM Conditions documentation Google s documentation https cloud google com iam docs conditions overview on IAM Conditions provides further information on how they work and how they should be used as well as their limits Tip API Please see the secrets sync API vault api docs system secrets sync for more details |
vault page title Sync secrets from Vault to GitHub Sync secrets from Vault to GitHub Automatically sync and unsync the secrets from Vault to GitHub to centralize visibility and control of secrets lifecycle management layout docs The GitHub actions sync destination allows Vault to safely synchronize secrets as GitHub organization repository or environment secrets | ---
layout: docs
page_title: Sync secrets from Vault to GitHub
description: >-
Automatically sync and unsync the secrets from Vault to GitHub to centralize visibility and control of secrets lifecycle management.
---
# Sync secrets from Vault to GitHub
The GitHub actions sync destination allows Vault to safely synchronize secrets as GitHub organization, repository, or environment secrets.
This is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them
to connect directly with Vault. This guide walks you through the configuration process.
Prerequisites:
* Ability to read or create KVv2 secrets
* Ability to create GitHub fine-grained or personal tokens (or a GitHub application) with access to modify organization and/or repository secrets
* Ability to create sync destinations and associations on your Vault server
## Setup
1. To get started with syncing Vault secrets to GitHub, you will need a configured [GitHub application](#github-application) or an
[access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)
with write permission for "Secrets" on the target sync location in GitHub. The "Secrets" permission in GitHub automatically includes read-only "Metadata" access.
<Warning title="Pitfalls of using an access token">
Access tokens are tied to a user account and can be revoked at any time, causing disruptions to the sync process.
GitHub applications are long-lived and do not expire. Using a GitHub application for authentication is preferred over using a personal access token.
</Warning>
### Repositories
Use `vault write` to configure a repository sync destination with an access token:
```shell-session
$ vault write sys/sync/destinations/gh/DESTINATION_NAME \
access_token="GITHUB_ACCESS_TOKEN" \
secrets_location="GITHUB_SECRETS_LOCATION" \
repository_owner="GITHUB_OWNER_NAME" \
repository_name="GITHUB_REPO_NAME"
```
For example:
<CodeBlockConfig hideClipboard>
```
$ vault write sys/sync/destinations/gh/hcrepo-sandbox \
access_token="github_pat_11ABC000000000000000000000DEF" \
secrets_location="repository" \
repository_owner="hashicorp" \
repository_name="hcrepo"
Key Value
--- -----
connection_details map[access_token:***** secrets_location:repository repository_owner:hashicorp repository_name:hcrepo]
name hcrepo-sandbox
type gh
```
</CodeBlockConfig>
### Environments
Use `vault write` to configure an environment sync destination:
```shell-session
$ vault write sys/sync/destinations/gh/DESTINATION_NAME \
access_token="GITHUB_ACCESS_TOKEN" \
secrets_location="GITHUB_SECRETS_LOCATION" \
repository_owner="GITHUB_OWNER_NAME" \
repository_name="GITHUB_REPO_NAME" \
environment_name="GITHUB_ENVIRONMENT_NAME"
```
For example:
<CodeBlockConfig hideClipboard>
```
$ vault write sys/sync/destinations/gh/hcrepo-sandbox \
access_token="github_pat_11ABC000000000000000000000DEF" \
secrets_location="repository" \
repository_owner="hashicorp" \
repository_name="hcrepo" \
environment_name="sandbox"
Key Value
--- -----
connection_details map[access_token:***** secrets_location:repository environment_name:sandbox repository_owner:hashicorp repository_name:hcrepo]
name hcrepo-sandbox
type gh
```
</CodeBlockConfig>
### Organizations
@include 'alerts/beta.mdx'
Beta limitations:
- You cannot update visibility (`organization_visibility`) after creating a
secrets sync destination.
- You cannot update the list of repositories with access to synced secrets
(`selected_repository_names`) after creating a secrets sync destination.
Sync secrets to a GitHub organization to share those secrets across repositories
in the organization. You can choose to make secrets global to the organization,
limited to private/internal repositories, or limited to specifically named repositories.
Refer to the [Secrets sync API docs](/vault/docs/sync/github#api) for detailed
configuration information.
<Warning>
Organization secrets are
[not visible to private repositories for GitHub Free accounts](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-an-organization).
</Warning>
Use `vault write` to configure an organization sync destination:
```shell-session
$ vault write sys/sync/destinations/gh/DESTINATION_NAME \
access_token="GITHUB_ACCESS_TOKEN" \
secrets_location="GITHUB_SECRETS_LOCATION" \
organization_name="ORGANIZATION_NAME" \
organization_visibility="ORGANIZATION_VISIBILITY"
```
For example:
<CodeBlockConfig hideClipboard>
```
$ vault write sys/sync/destinations/gh/hcrepo-sandbox \
access_token="github_pat_11ABC000000000000000000000DEF" \
secrets_location="organization" \
organization_name="hashicorp" \
organization_visibility="selected" \
selected_repository_names="hcrepo-1,hcrepo-2"
Key Value
--- -----
connection_details map[access_token:***** secrets_location:organization organization_name:hashicorp organization_visibility:selected selected_repository_names:[hcrepo-1 hcrepo-2]]
name hcrepo-sandbox
type gh
```
</CodeBlockConfig>
## Usage
1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.
```shell-session
$ vault secrets enable -path=my-kv kv-v2
```
**Output:**
<CodeBlockConfig hideClipboard>
```
Success! Enabled the kv-v2 secrets engine at: my-kv/
```
</CodeBlockConfig>
1. Create secrets you wish to sync with a target GitHub repository for Actions.
```shell-session
$ vault kv put -mount='my-kv' my-secret key1='val1' key2='val2'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
==== Secret Path ====
my-kv/data/my-secret
======= Metadata =======
Key Value
--- -----
created_time <timestamp>
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
1. Create an association between the destination and a secret to synchronize.
```shell-session
$ vault write sys/sync/destinations/gh/my-dest/associations/set \
mount='my-kv' \
secret_name='my-secret'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
associated_secrets map[kv_1234/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]
store_name my-dest
store_type gh
```
</CodeBlockConfig>
1. Navigate to your GitHub repository settings to confirm your secret was successfully created.
Moving forward, any modification on the Vault secret will be propagated in near-real time to its GitHub secrets
counterpart. Creating a new secret version in Vault will create a new version in GitHub. Deleting the secret
or the association in Vault will delete the secret in GitHub as well.
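As a quick check, you can write a new version of the secret in Vault and watch the change appear in GitHub shortly afterward. For example, reusing the `my-kv` mount and `my-secret` secret from the steps above:

```shell-session
$ vault kv put -mount='my-kv' my-secret key1='new-val1' key2='val2'
```

Once the sync operation completes, the updated values replace the previous ones in the GitHub secrets managed by the destination.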
## Security
<Note>
Vault syncs secrets differently depending on whether you have configured
`secret-key` or `secret-path` [granularity](/vault/docs/sync#granularity):
- `secret-key` granularity splits KVv2 secrets from Vault into key-value pairs
and stores the pairs as distinct entries in GitHub. For example,
`secrets.key1="val1"` and `secrets.key2="val2"`.
- `secret-path` granularity stores secrets as a single JSON string that contains
all the associated key-value pairs. For example, `{"key1":"val1", "key2":"val2"}`.
Since GitHub limits secrets to single-value secrets, the sync granularity defaults to `secret-key`.
</Note>
If using the secret-path granularity, it is strongly advised to mask individual values for each sub-key to prevent the
unintended disclosure of secrets in any GitHub Action outputs. The following snippet illustrates how to mask each secret value, where `SYNCED_SECRET_NAME` is a placeholder for the name of the secret synced to GitHub:
```yaml
name: Mask synced secret values
on:
  workflow_dispatch
jobs:
  synced-secret-examples:
    runs-on: ubuntu-latest
    steps:
      - name: ✓ Mask synced secret values
        run: |
          # SYNCED_SECRET_NAME is a placeholder for the synced secret's name in GitHub.
          for v in $(echo '${{ secrets.SYNCED_SECRET_NAME }}' | jq -r '.[]'); do
            echo "::add-mask::$v"
          done
```
If the GitHub destination uses the default `secret-key` granularity, the values are masked by GitHub automatically.
## GitHub application
Instead of authenticating with a personal access token, you can choose to
authenticate with a
[custom GitHub application](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app).
Start by following the GitHub instructions for
[installing a GitHub app](https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app)
to install your GitHub application on a specified repository and note the
assigned installation ID.
<Tip title="Your installation ID is in the app URL">
You can find your assigned installation ID in the URL path parameter:
`https://github.com/settings/installations/<INSTALLATION_ID>`
</Tip>
Then add your GitHub application to your Vault instance.
To use your GitHub application with Vault:
- The application must have permission to read and write secrets.
- You must generate a private key for the application on GitHub.
- The application must be installed on the repository you want to sync secrets with.
- You must know the application ID assigned by GitHub.
- You must know the installation ID assigned by GitHub.
Callback, redirect URLs, and webhooks are not required at this time.
To configure the application in Vault, use `vault write` with the
`sys/sync/github-apps` endpoint to assign a unique name and set the relevant
information:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write sys/sync/github-apps/<APP_NAME> \
app_id=<APP_ID> \
private_key=@/path/to/private/key
Key Value
--- -----
app_id <app-id>
fingerprint <fingerprint>
name <app-name>
private_key *****
```
</CodeBlockConfig>
<Tip title="Fingerprint verification">
Vault returns the fingerprint of the private_key provided to ensure that the
correct private key was configured and that it was not tampered with along the way.
You can compare the fingerprint to the one provided by GitHub.
For more information, see [Verifying private keys](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#verifying-private-keys).
</Tip>
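If you want to verify the key locally before configuring it, GitHub's guide describes computing the fingerprint with OpenSSL. A minimal sketch, assuming the key is stored at `/path/to/private/key`:

```shell-session
$ openssl rsa -in /path/to/private/key -pubout -outform DER | openssl sha256 -binary | openssl base64
```

The resulting digest should match both the fingerprint returned by Vault and the one shown on the GitHub application settings page.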
Next, use `vault write` with the `sys/sync/destinations/gh` endpoint to
configure a GitHub destination that references your new GitHub application:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write sys/sync/destinations/gh/<DESTINATION_NAME> \
installation_id=<INSTALLATION_ID> \
repository_owner=<GITHUB_USER> \
repository_name=<MY_REPO_NAME> \
app_name=<APP_NAME>
Key Value
--- -----
connection_details map[app_config:map[app_name:<app-name>] installation_id:<installation-id> repository_name:<repo-name> repository_owner:<repo-owner>]
name my-dest
options map[custom_tags:map[] granularity_level:secret-key secret_name_template:VAULT___]
type gh
```
</CodeBlockConfig>
You can now [use your GitHub application to sync secrets with your GitHub repository](#usage).
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
---
layout: docs
page_title: Secrets sync
description: >-
Use secrets sync feature to automatically sync Vault-managed secrets with external destinations to centralize secrets lifecycle management.
---
# Secrets sync
<EnterpriseAlert product="vault" />
In certain circumstances, fetching secrets directly from Vault is impossible or impractical. To help with this challenge,
Vault can maintain a one-way sync for KVv2 secrets into various destinations that are easier to access for some clients.
With this, Vault remains the system of record but can cache a subset of secrets on various external systems acting as
trusted last-mile delivery systems.
A secret from a Vault KVv2 secrets engine that is associated with an external destination is actively managed by a continuous
process. If the secret value is updated in Vault, the secret is updated in the destination as well. If the secret is deleted
from Vault, it is deleted on the external system as well. This process is asynchronous and event-based. Vault propagates
modifications into the proper destinations automatically within a few seconds.
<Note title="Not related to HCP Vault Secrets">
Secrets sync is a Vault Enterprise feature. For information on secrets sync
with [HCP Vault Secrets](/hcp/docs/vault-secrets), refer to the HashiCorp Cloud
Platform documentation for
[Vault Secrets integrations](/hcp/docs/vault-secrets/integrations).
</Note>
## Activating the feature
The secrets sync feature requires manual activation through a one-time trigger. If a sync-related endpoint is called prior to
activation, an error response will be received indicating that the feature has not been activated yet. Be sure to understand the
potential [client count impacts](#client-counts) of using secrets sync before proceeding.
Activating the feature can be done through one of several methods:
1. Activation directly through the UI.
1. Activation through the CLI:
```shell-session
$ vault write -f sys/activation-flags/secrets-sync/activate
```
1. Activation through a POST or PUT request:
```shell-session
$ curl \
--request PUT \
--header "X-Vault-Token: ..." \
http://127.0.0.1:8200/v1/sys/activation-flags/secrets-sync/activate
```
## Destinations
Secrets can be synced into various external systems, called destinations. The supported destinations are:
* [AWS Secrets Manager](/vault/docs/sync/awssm)
* [Azure Key Vault](/vault/docs/sync/azurekv)
* [GCP Secret Manager](/vault/docs/sync/gcpsm)
* [GitHub Repository Actions](/vault/docs/sync/github)
* [Vercel Projects](/vault/docs/sync/vercelproject)
## Associations
Syncing a secret into one of the external systems is done by creating a connection between it and a destination, which is
called an association. These associations are created via Vault's API by adding a KVv2 secret target to one of the configured
destinations. Each association keeps track of that secret's current sync status, the timestamp of its last status change, and
the error code of the last sync or unsync operation if it failed. Each destination can have any number of secret associations.
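For example, assuming a destination of type `$TYPE` named `$NAME` and a KVv2 secret named `my-secret` on the mount `my-kv`, the association might be created as follows:

```shell-session
$ vault write sys/sync/destinations/$TYPE/$NAME/associations/set \
    mount='my-kv' \
    secret_name='my-secret'
```

The response includes the association's sync status and the timestamp of its last status change.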
## Sync statuses
There are several sync statuses which relay information about the outcome of the latest sync
operation to have occurred on that secret. The status information is stored inside each
association object returned by the endpoint and, upon failure, includes an error code describing the cause of the failure.
| Status | Description |
|:-------------------------|:------------------------------------------------------------------------------------------------|
| `UNKNOWN` | Vault is unable to determine the current state of the secret in regard to the external service. |
| `PENDING` | An operation is queued for that secret and has not been processed yet. |
| `SYNCED` | The sync operation was successful and sent the secret to the external destination. |
| `UNSYNCED` | The unsync operation was successful and removed the secret from the external destination. |
| `INTERNAL_VAULT_ERROR` | The operation failed due to an issue internal to Vault. |
| `CLIENT_SIDE_ERROR` | The operation failed due to a configuration error such as invalid privileges. |
| `EXTERNAL_SERVICE_ERROR` | The operation failed due to an issue with the external service such as a temporary downtime. |
## Name template
By default, the name of synced secrets follows this format: `vault/<accessor>/<secret-path>`. The casing and delimiters
may change as they are normalized according to the valid character set of each destination type. This pattern was chosen to
prevent accidental name collisions and to clearly identify where the secret is coming from.
Every destination allows you to customize this name pattern by configuring a `secret_name_template` field to best suit
individual use cases. The templates use a subset of the go-template syntax for extra flexibility.
The following placeholders are available:
| Placeholder | Description |
|:--------------------|:------------------------------------------------------------------------------------------------------------|
| `DestinationType` | The type of the destination, e.g. "aws-sm" |
| `DestinationName` | The name of the destination |
| `NamespacePath` | The full namespace path where the secret being synced is located |
| `NamespaceBaseName` | The segment following the last `/` character from the full path |
| `NamespaceID` | The internal unique ID identifying the namespace, e.g. `RQegM` |
| `MountPath` | The full mount path where the secret being synced is located |
| `MountBaseName` | The segment following the last `/` character from the full path |
| `MountAccessor` | The internal unique ID identifying the mount, e.g. `kv_1234` |
| `SecretPath` | The full secret path |
| `SecretBaseName` | The segment following the last `/` character from the full path |
| `SecretKey` | The individual secret key being synced, only available if the destination uses the `secret-key` granularity |
Let's assume we want to sync the following secret:
<CodeBlockConfig hideClipboard>

```shell-session
$ VAULT_NAMESPACE=ns1/ns2 vault kv get -mount=path/to/kv1 path/to/secret1

========== Secret Path ==========
path/to/kv1/data/path/to/secret1

======= Metadata =======
(...)

=== Data ===
Key    Value
---    -----
foo    bar
```

</CodeBlockConfig>
Let's look at some representative name templates and the resulting secret name at the sync destination.

| Name template                            | Result                 |
|:-----------------------------------------|:-----------------------|
| `prefix-{{ .SecretPath }}`               | prefix-path/to/secret1 |
| `{{ .SecretBaseName \| uppercase }}`     | SECRET1                |
| `{{ .MountAccessor }}_{{ .SecretKey }}`  | kv_1234_foo            |
| `{{ .SecretPath \| replace "/" "_" }}`   | path_to_secret1        |
Name templates can be updated. The new template is only effective for new secrets associated with the destination and does
not affect the secrets synced with the previous template. It is possible to update an association to force a recreate operation.
The secret synced with the old template will be deleted and a new secret using the new template version will be synced.
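As an illustrative sketch, a custom template can be supplied through the `secret_name_template` field when writing the destination configuration (the template below is hypothetical, and the other connection parameters for the destination are omitted):

```shell-session
$ vault write sys/sync/destinations/$TYPE/$NAME \
    secret_name_template='vault-{{ .MountAccessor }}-{{ .SecretBaseName }}'
```

With this template, a secret named `my-secret` on a mount with accessor `kv_1234` would be synced as `vault-kv_1234-my-secret`, subject to the destination's character normalization rules.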
## Custom tags
A destination can also have custom tags so that every secret associated with it that is synced will share that same set of tags.
Additionally, a default tag value of `hashicorp:vault` is used to denote any secret that is synced via Vault Enterprise. Similar
to secret names, tag keys and values are normalized according to the valid character set of each destination type.
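As a sketch, tags are passed as key-value pairs through the `custom_tags` field when configuring a destination (the tag names below are illustrative):

```shell-session
$ vault write sys/sync/destinations/$TYPE/$NAME \
    custom_tags='team=backend' \
    custom_tags='env=prod'
```

Every secret subsequently synced to this destination carries these tags in addition to the default `hashicorp:vault` tag.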
## Granularity
Vault KV-v2 secrets are multi-value and their data is represented in JSON. Multi-value secrets are useful to bundle closely
related information together like a username & password pair. However, most secret management systems only support single-value
entries. Secrets sync allows you to choose the granularity that best suits your use case for each destination by specifying a `granularity`
field.
The `secret-path` granularity syncs the entire JSON content of the Vault secret as a single entry at the destination. If
the destination does not support multi-value secrets, the JSON is encoded as a single-value JSON string.
The `secret-key` granularity syncs each Vault key-value pair as a distinct entry at the destination. If the value itself is a list or map
it is encoded as a JSON blob.
Granularity can be updated. The new granularity only affects secrets newly associated with the destination and does
not modify the previously synced secrets. It is possible to update an association to force a recreate operation.
The secret synced with the old granularity will be deleted and new secrets will be synced according to the new granularity.
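As a sketch, the granularity is set with the `granularity` parameter when writing the destination configuration (the read output reports the value as `granularity_level`); the other connection parameters are omitted here, and the [secrets sync API](/vault/api-docs/system/secrets-sync) documents the full parameter list:

```shell-session
$ vault write sys/sync/destinations/$TYPE/$NAME \
    granularity='secret-path'
```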
## Security
~> Note: Vault does not control the permissions at the destination. It is the responsibility
of the operator to configure and maintain proper access controls on the external system so synced
secrets are not accessed unintentionally.
### Vault access requirements
Vault verifies the client has read access on the secret before syncing it with any destination. This additional check is
there to prevent users from maliciously or unintentionally leveraging elevated permissions on an external system to access
secrets they normally wouldn't be able to.
Let's assume we have a secret located at `path/to/data/my-secret-1` and a user with write access to the sync feature,
but no read access to that secret. This scenario is equivalent to this ACL policy:
<CodeBlockConfig hideClipboard>

```hcl
# Allow full access to the sync feature
path "sys/sync/*" {
  capabilities = ["read", "list", "create", "update", "delete"]
}

# Allow read access to the secret mount path/to
path "path/to/*" {
  capabilities = ["read"]
}

# Deny access to a specific secret
path "path/to/data/my-secret-1" {
  capabilities = ["deny"]
}
```

</CodeBlockConfig>
If a client with this policy tries to read this secret they will receive an unauthorized error:
<CodeBlockConfig hideClipboard>

```shell-session
$ vault kv get -mount=path/to my-secret-1
Error reading path/to/data/my-secret-1: Error making API request.

URL: GET http://127.0.0.1:8200/v1/path/to/data/my-secret-1
Code: 403. Errors:

* 1 error occurred:
	* permission denied
```

</CodeBlockConfig>
Likewise, if the client tries to sync this secret to any destination they will receive a similar unauthorized error:
<CodeBlockConfig hideClipboard>

```shell-session
$ vault write sys/sync/destinations/$TYPE/$NAME/associations/set \
    mount="path/to" \
    secret_name="my-secret-1"
Error writing data to sys/sync/destinations/$TYPE/$NAME/associations/set: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/sync/destinations/$TYPE/$NAME/associations/set
Code: 403. Errors:

* permission denied to read the content of the secret my-secret-1 in mount path/to
```

</CodeBlockConfig>
This read access verification is only done when creating or updating an association. Once the association is created, revoking
read access to the policy that was used to sync the secret has no effect.
### Collisions and overwrites
Secrets Sync operates with a last-write-wins strategy. If a secret with the same name already exists at the destination,
Vault overwrites it when syncing a secret. There are also no automatic mechanisms to prevent a principal with sufficient
privileges at the destination from overwriting a secret synced by Vault.
To prevent Vault from accidentally overwriting existing secrets, it is recommended to use either a name pattern or
built-in tags as an extra policy condition on the role used to configure a Vault sync destination. A negative condition on other
policies may be used to prevent out-of-band overwrites to Vault secrets from non-Vault roles.
To see examples of policies that provide this type of restriction, refer to the access management section of the documentation
for each destination type below:
* [AWS Access Management](/vault/docs/sync/awssm#access-management)
* [GCP Access Management](/vault/docs/sync/gcpsm#access-management)
## Reconciliation
Vault Secrets Sync is designed to automatically recover from transient failures
in two ways: operation retries and reconciliation scans.
Operation retries happen when a sync operation fails. Vault automatically
retries the operation with exponential backoff. Operation retries help in
situations where your network becomes unreliable or overwhelmed.
Reconciliation scans happen periodically in a background thread. Vault scans all
secrets currently managed by the sync system to identify and update out-of-date
secrets, and to ensure that any configured destinations are up-to-date.
Reconciliation scans help in situations where there are external service downtimes
that are outside of your control and provide a way to automatically recover and self-heal.
Operation retries and reconciliation scans are both enabled by default.
Note that the reconciliation process does not protect against out-of-band updates
that occur directly in the external service. The secrets sync system is designed to be
one-way and does not support bidirectional sync at this time.
## Client counts
Each secret that is synced with one or more destinations is counted as a
distinct client in Vault's client counting. See [entity assignments with secret
sync](/vault/docs/concepts/client-count#secret-sync-clients)
for more information.
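To see how synced secrets contribute to your usage data, you can query Vault's activity counters after syncing; a minimal sketch:

```shell-session
$ vault read sys/internal/counters/activity
```

The response summarizes client counts for the billing period. Refer to the client count documentation linked above for how synced secrets are attributed.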
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
---
layout: docs
page_title: Sync secrets from Vault to Azure Key Vault
description: >-
Automatically sync and unsync the secrets from Vault to Azure Key Vault to centralize visibility and control of secrets lifecycle management.
---
# Sync secrets from Vault to Azure Key Vault
The Azure Key Vault destination enables Vault to sync and unsync secrets of your choosing into
an external Azure account. When configured, Vault will actively maintain the state of each externally-synced
secret in real time. This includes sending new secrets, updating existing secret values, and removing
secrets when they either get dissociated from the destination or deleted from Vault.
Prerequisites:
* Ability to read or create KVv2 secrets
* Ability to create Azure AD user credentials with access to an Azure Key Vault
* Ability to create sync destinations and associations on your Vault server
## Setup
1. If you do not already have an Azure Key Vault instance, navigate to the Azure Portal to create a new
[Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal).
1. A service principal with a client id and client secret will be needed to configure Azure Key Vault as a
sync destination. This [guide](https://learn.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal)
will walk you through creating the service principal.
1. Once the service principal is created, the next step is to
[grant the service principal](https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli)
access to Azure Key Vault. To quickly get started, we recommend using the "Key Vault Secrets Officer" built-in role,
which gives sufficient access to manage secrets. For more information, see the [Permissions](#permissions) section.
1. Configure a sync destination with the service principal credentials and Key Vault URI created in the previous steps.
```shell-session
$ vault write sys/sync/destinations/azure-kv/my-azure-1 \
key_vault_uri="$KEY_VAULT_URI" \
client_id="$CLIENT_ID" \
client_secret="$CLIENT_SECRET" \
tenant_id="$TENANT_ID"
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
connection_details map[client_id:123 client_secret:***** key_vault_uri:***** tenant_id:123]
name my-azure-1
type azure-kv
```
</CodeBlockConfig>
## Usage
1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.
```shell-session
$ vault secrets enable -path='my-kv' kv-v2
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Success! Enabled the kv-v2 secrets engine at: my-kv/
```
</CodeBlockConfig>
1. Create secrets you wish to sync with a target Azure Key Vault.
```shell-session
$ vault kv put -mount='my-kv' my-secret foo='bar'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
==== Secret Path ====
my-kv/data/my-secret
======= Metadata =======
Key Value
--- -----
created_time 2023-09-19T13:17:23.395109Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
1. Create an association between the destination and a secret to synchronize.
```shell-session
$ vault write sys/sync/destinations/azure-kv/my-azure-1/associations/set \
mount='my-kv' \
secret_name='my-secret'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
associated_secrets map[kv_7532a8b4/my-secret:map[accessor:kv_7532a8b4 secret_name:my-secret sync_status:SYNCED updated_at:2023-09-21T13:53:24.839885-07:00]]
store_name my-azure-1
store_type azure-kv
```
</CodeBlockConfig>
1. Navigate to [Azure Key Vault](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.KeyVault%2Fvaults)
in the Azure portal to confirm your secret was successfully created.
Moving forward, any modification to the Vault secret will be propagated in near real time to its Azure Key Vault
counterpart. Creating a new secret version in Vault will create a new version in Azure Key Vault. Deleting the secret
or the association in Vault will delete the secret in your Azure Key Vault as well.
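For example, simply writing a new version of the source secret is enough to push an update to the synced copy; no separate sync command is required. An illustrative check, reusing the mount and secret from the steps above:
```shell-session
$ vault kv put -mount='my-kv' my-secret foo='new-value'
```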
## Permissions
For a narrower set of permissions, you can create a
[custom role](https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles#steps-to-create-a-custom-role)
using the following JSON role definition. Be sure to replace the subscription ID placeholder.
```json
{
"properties": {
"roleName": "Key Vault Secrets Reader Writer",
"description": "Custom role for reading and updating Azure Key Vault secrets.",
"permissions": [
{
"actions": [
"Microsoft.KeyVault/vaults/secrets/read",
"Microsoft.KeyVault/vaults/secrets/write"
],
"notActions": [],
"dataActions": [
"Microsoft.KeyVault/vaults/secrets/delete",
"Microsoft.KeyVault/vaults/secrets/backup/action",
"Microsoft.KeyVault/vaults/secrets/purge/action",
"Microsoft.KeyVault/vaults/secrets/recover/action",
"Microsoft.KeyVault/vaults/secrets/restore/action",
"Microsoft.KeyVault/vaults/secrets/readMetadata/action",
"Microsoft.KeyVault/vaults/secrets/getSecret/action",
"Microsoft.KeyVault/vaults/secrets/setSecret/action"
],
"notDataActions": []
}
],
"assignableScopes": [
"/subscriptions/{subscriptionId}/"
]
}
}
```
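As a sketch, you could register this custom role with the Azure CLI and then assign it to the service principal in place of the built-in role used during setup (the file name below is a placeholder):
```shell-session
$ az role definition create --role-definition @vault-sync-role.json
```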
## Access management
You can allow or restrict access to secrets by using a separate Azure Key Vault instance for Vault sync destinations.
This corresponds with Microsoft's currently-recommended
[best practices](https://learn.microsoft.com/en-us/azure/key-vault/general/best-practices)
for managing secrets in Key Vault. Maintaining a boundary between Vault-managed secrets and other secrets through
separate Key Vaults provides increased security and access control.
Azure roles grant the service principal the permissions it needs to access the Key Vault through
[role-based access control](https://learn.microsoft.com/en-us/azure/role-based-access-control/overview).
A role assignment scopes those permissions to the Key Vault instance, its resource group, or its subscription. Additionally,
[Azure policies](https://learn.microsoft.com/en-us/azure/key-vault/general/azure-policy) can further restrict access,
for example by denying the Vault service principal access to Key Vaults unrelated to Vault. The inverse, denying other
users write access to the Vault-managed Key Vault, is another option.
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
forward any modification on the Vault secret will be propagated in near real time to its Azure Key Vault counterpart Creating a new secret version in Vault will create a new version in Azure Key Vault Deleting the secret or the association in Vault will delete the secret in your Azure Key Vault as well Permissions For a more minimal set of permissions you can create a custom role https learn microsoft com en us azure role based access control custom roles steps to create a custom role using the following JSON role definition Be sure to replace the subscription id placeholder json properties roleName Key Vault Secrets Reader Writer description Custom role for reading and updating Azure Key Vault secrets permissions actions Microsoft KeyVault vaults secrets read Microsoft KeyVault vaults secrets write notActions dataActions Microsoft KeyVault vaults secrets delete Microsoft KeyVault vaults secrets backup action Microsoft KeyVault vaults secrets purge action Microsoft KeyVault vaults secrets recover action Microsoft KeyVault vaults secrets restore action Microsoft KeyVault vaults secrets readMetadata action Microsoft KeyVault vaults secrets getSecret action Microsoft KeyVault vaults secrets setSecret action notDataActions assignableScopes subscriptions subscriptionId Access management You can allow or restrict access to secrets by using a separate Azure Key Vault instance for Vault sync destinations This corresponds with Microsoft s currently recommended best practices https learn microsoft com en us azure key vault general best practices for managing secrets in Key Vault Maintaining a boundary between Vault managed secrets and other secrets through separate Key Vaults provides increased security and access control Azure roles can be created to grant the necessary permissions for the service principal to access the Key Vault with role based access control https learn microsoft com en us azure role based access control overview A role assignment can be set for the Vault user principal to provide it the role s permissions within the Key Vault instance its resource group or subscription Additionally Azure policies https learn microsoft com en us azure key vault general azure policy may further refine access control limitations such as denying the Vault user principal access to non Vault related Key Vaults The inverse denying other users any write access to the Vault related Key Vault may be another choice API Please see the secrets sync API vault api docs system secrets sync for more details |
---
layout: docs
page_title: Sync secrets from Vault to AWS Secrets Manager
description: >-
Automatically sync and unsync the secrets from Vault to AWS Secrets Manager to centralize visibility and control of secrets lifecycle management.
---
# Sync secrets from Vault to AWS Secrets Manager
The AWS Secrets Manager destination enables Vault to sync and unsync secrets of your choosing into
an external AWS account. When configured, Vault actively maintains the state of each externally-synced
secret in near real time. This includes sending new secrets, updating existing secret values, and removing
secrets when they are either dissociated from the destination or deleted from Vault. This lets you keep
centralized control of your secrets while still leveraging the benefits of AWS Secrets Manager.
Prerequisites:
* Ability to read or create KVv2 secrets
* Ability to create an AWS IAM user and access keys with access to Secrets Manager
* Ability to create sync destinations and associations on your Vault server
## Setup
1. Navigate to the [AWS Identity and Access Management (IAM) console](https://us-east-1.console.aws.amazon.com/iamv2/home#/home)
to configure an IAM user with access to Secrets Manager. The following is an example policy outlining the required
permissions to use secrets syncing. A scripted sketch of this IAM setup appears after the setup steps.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:Create*",
"secretsmanager:Update*",
"secretsmanager:Delete*",
"secretsmanager:TagResource"
],
"Resource": "arn:aws:secretsmanager:*:*:secret:vault*"
}
]
}
```
1. Configure a sync destination with the IAM user credentials created in the previous step.
```shell-session
$ vault write sys/sync/destinations/aws-sm/my-awssm-1 \
access_key_id="$ACCESS_KEY_ID" \
secret_access_key="$SECRET_ACCESS_KEY" \
region='us-east-1'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
connection_details map[access_key_id:***** region:us-east-1 secret_access_key:*****]
name my-awssm-1
type aws-sm
```
</CodeBlockConfig>
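As mentioned in step 1, the IAM setup can also be scripted. A rough sketch with the AWS CLI, assuming the example policy above is saved as `vault-sync-policy.json` and that all names are placeholders:
```shell-session
# Create the sync policy and a dedicated IAM user.
$ aws iam create-policy --policy-name vault-secrets-sync --policy-document file://vault-sync-policy.json
$ aws iam create-user --user-name vault-secrets-sync
# Attach the policy, then generate the access keys used to configure the destination.
$ aws iam attach-user-policy --user-name vault-secrets-sync --policy-arn "$POLICY_ARN"
$ aws iam create-access-key --user-name vault-secrets-sync
```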
## Usage
1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.
```shell-session
$ vault secrets enable -path=my-kv kv-v2
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Success! Enabled the kv-v2 secrets engine at: my-kv/
```
</CodeBlockConfig>
1. Create secrets you wish to sync with a target AWS Secrets Manager.
```shell-session
$ vault kv put -mount=my-kv my-secret foo='bar'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
==== Secret Path ====
my-kv/data/my-secret
======= Metadata =======
Key Value
--- -----
created_time 2023-09-19T13:17:23.395109Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
1. Create an association between the destination and a secret to synchronize.
```shell-session
$ vault write sys/sync/destinations/aws-sm/my-awssm-1/associations/set \
mount='my-kv' \
secret_name='my-secret'
```
**Output:**
<CodeBlockConfig hideClipboard>
```plaintext
Key Value
--- -----
associated_secrets map[kv_37993f8a/my-secret:map[accessor:kv_37993f8a secret_name:my-secret sync_status:SYNCED updated_at:2023-09-19T13:17:35.085581-05:00]]
store_name my-awssm-1
store_type aws-sm
```
</CodeBlockConfig>
1. Navigate to the [Secrets Manager](https://console.aws.amazon.com/secretsmanager/) in the AWS console
to confirm your secret was successfully synced.
Moving forward, any modification to the Vault secret will be propagated to its AWS Secrets Manager
counterpart. Creating a new secret version in Vault will update the one in AWS to the new version. Deleting either
the secret or the association in Vault will delete the secret in your AWS account as well.
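If you have the AWS CLI configured, you can also confirm the synced secret from the AWS side. The name Vault assigns in Secrets Manager depends on the destination's naming template, so treat the identifier below as a placeholder:
```shell-session
$ aws secretsmanager list-secrets --query 'SecretList[].Name'
$ aws secretsmanager get-secret-value --secret-id "$SYNCED_SECRET_NAME" --query 'SecretString'
```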
## Access management
You can allow or restrict access to secrets by attaching AWS resource tags
to secrets. For example, the following AWS IAM policy only allows Vault to
modify secrets that were created by a sync operation:
<CodeBlockConfig hideClipboard>
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "secretsmanager:ResourceTag/hashicorp:vault": "" # This tag is automatically added by Vault to every synced secret
        }
      }
    }
  ]
}
```
</CodeBlockConfig>
To prevent out-of-band overwrites, we recommend adding a negative condition on
all write-access policies not used by Vault:
<CodeBlockConfig hideClipboard>
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "secretsmanager:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "secretsmanager:ResourceTag/hashicorp:vault": "" # This tag is automatically added by Vault to every synced secret
        }
      }
    }
  ]
}
```
</CodeBlockConfig>
<Warning title="Use wildcards with extreme caution">
The previous examples use wildcards for the sake of brevity. We strongly
recommend you use the principle of least privilege to restrict actions and
resources for each use case to the minimum necessary requirements.
</Warning>
## Tutorial
Refer to the [Vault Enterprise Secrets Sync tutorial](/vault/tutorials/enterprise/secrets-sync)
to learn how to configure the secrets sync between Vault and AWS Secrets Manager.
## API
Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
AWS Resource Tags to secrets For example the following AWS IAM policy prevents Vault from modifying secrets that were not created by a sync operation CodeBlockConfig hideClipboard Version 2012 10 17 Statement Effect Allow Action secretsmanager Resource Condition StringEquals secretsmanager ResourceTag hashicorp vault This tag is automatically added by Vault on every synced secrets CodeBlockConfig To prevent out of band overwrites we recommend adding a negative condition on all write access policies not used by Vault CodeBlockConfig hideClipboard Version 2012 10 17 Statement Effect Deny Action secretsmanager Resource Condition StringNotEquals secretsmanager ResourceTag hashicorp vault This tag is automatically added by Vault on every synced secrets CodeBlockConfig Warning title Use wildcards with extreme caution The previous examples use wildcards for the sake of brevity We strongly recommend you use the principle of least privilege to restrict actions and resources for each use case to the minimum necessary requirements Warning Tutorial Refer to the Vault Enterprise Secrets Sync tutorial vault tutorials enterprise secrets sync to learn how to configure the secrets sync between Vault and AWS Secrets Manager API Please see the secrets sync API vault api docs system secrets sync for more details |
---
layout: docs
page_title: Secrets import
description: Secrets import allows you to safely onboard secrets from external sources into Vault KV for management.
---
# Secrets import
@include 'alerts/enterprise-only.mdx'
@include 'alerts/alpha.mdx'
Distributing sensitive information across multiple external systems creates
several challenges, including:
- Increased operational overhead.
- Increased exposure risk from data sprawl.
- Increased risk of outdated and out-of-sync information.
Using Vault as a single source of truth (SSOT) for sensitive data increases
security and reduces management overhead, but migrating preexisting data from multiple
and/or varied sources can be complex and costly.
The secrets import process helps you automate and streamline your sensitive data
migration with import plans codified as HCL files. Import plans tell Vault which KVv2 secrets
engine instance to store the imported secret data in, which source system to read the data from,
and how to filter that data. Three HCL blocks make this possible:
- The `destination` block defines target KVv2 mounts.
- The `source` block provides credentials for connecting to the external system.
- The `mapping` block defines how Vault should decide which data gets imported before
writing the information to KVv2.
## Destinations
Vault stores imported secrets in a Vault KVv2 secrets engine mount. Destination
blocks start with `destination_vault` and define the desired KVv2 mount path and
an optional namespace. Together, these fields identify the exact location in your
Vault instance where you want the information stored.
### HCL syntax
```hcl
destination_vault {
name = "my-dest-1"
namespace = "ns-1"
mount = "mount-1"
}
```
- `name` `(string: <required>)` - A unique name for the destination block that can
be referenced in subsequent mapping blocks.
- `mount` `(string: <required>)` - The mount path for the target KVv2 instance.
- `address` `(string)` - Optional network address of the Vault server with the
KVv2 secrets engine enabled. By default, the Vault client's address will be used.
- `token` `(string)` - Optional authentication token for the Vault server at the
specified address. By default, the Vault client's token will be used.
- `namespace` `(string)` - Optional namespace path containing the specified KVv2
mount. By default, Vault looks for the KVv2 mount under the root namespace.
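For reference, a destination block that sets every optional field might look like the following; the address and token values are placeholders, and both default to the Vault client's own settings when omitted:
```hcl
destination_vault {
  name      = "my-dest-2"
  mount     = "mount-1"
  namespace = "ns-1"
  address   = "https://vault.example.com:8200"
  token     = "hvs.example-token"
}
```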
## Sources
Vault can import secrets from the following sources:
- [GCP Secret Manager](/vault/docs/import/gcpsm)
To pull data from a source during import, Vault needs read credentials for the
external system. You can provide credentials directly as part of the import
plan, or use Vault to automatically generate dynamic credentials if you already
have the corresponding secrets engine configured.
### HCL syntax
Source blocks start with `source_<external_system>` and include any connection
information required by the external system, or the secrets engine to leverage for dynamic credentials. For example:
```hcl
source_gcp {
name = "my-gcp-source-1"
credentials = "@/path/to/service-account-key.json"
}
```
- `name` `(string: <required>)` - A unique name for the source block that can be
referenced in subsequent mapping blocks.
- `credentials` `(string: <required>)` - Path to a credential file or token with
read permissions for the target system.
Depending on the source system, additional information may be required. Refer to
the connection documentation for your source system to determine the full set of
required fields for that system type.
## Mappings
Mappings glue the source and destination together and filter the migrated data,
to determine what is imported and what is ignored. Vault currently supports the
following mapping methods:
- [mapping_passthrough](/vault/docs/import/mappings#passthrough)
- [mapping_metadata](/vault/docs/import/mappings#metadata)
- [mapping_regex](/vault/docs/import/mappings#regex)
### HCL syntax
Mapping blocks start with `mapping_<filter_type>` and require a source name,
destination name, an execution priority, and any corresponding transformations
or filters that apply for each mapping type. For example:
```hcl
mapping_regex {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
expression = "^database/.*$"
}
```
- `name` `(string: <required>)` - A unique name for the mapping block.
- `source` `(string: <required>)` - The name of a previously-defined source block
**from** which the data should be read.
- `destination` `(string: <required>)` - The name of a previously defined
destination block **to** which the data should be written.
- `priority` `(integer: <required>)` - The order in which Vault should apply the
mapping block during the import process. The lower the number, the higher the
priority. For example, a mapping with priority 1 executes before a mapping
with priority 2.
Depending on the filter type, additional fields may be required or possible. Refer
to the [import mappings documentation](/vault/docs/import/mappings) for the available
supported options and for a list of each mapping's specific fields.
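Putting the pieces together, an import plan is a single HCL file containing destination, source, and mapping blocks. The following sketch simply combines the examples from this page; all names and paths are illustrative:
```hcl
destination_vault {
  name  = "my-dest-1"
  mount = "mount-1"
}

source_gcp {
  name        = "my-gcp-source-1"
  credentials = "@/path/to/service-account-key.json"
}

mapping_regex {
  name        = "my-map-1"
  source      = "my-gcp-source-1"
  destination = "my-dest-1"
  priority    = 1
  expression  = "^database/.*$"
}
```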
<Tip title="Priority matters">
Vault applies mapping definitions in priority order and a given secret only
matches to the first mapping that applies. Once Vault imports a secret with a
particular mapping, subsequent reads from the same source will ignore that
secret. See the [priority section](/vault/docs/import/mappings#priority) for an example.
</Tip> | vault | layout docs page title Secrets import description Secrets import allows you to safely onboard secrets from external sources into Vault KV for management Secrets import include alerts enterprise only mdx include alerts alpha mdx Distributing sensitive information across multiple external systems creates several challenges including Increased operational overhead Increased exposure risk from data sprawl Increased risk of outdated and out of sync information Using Vault as a single source of truth SSOT for sensitive data increases security and reduces management overhead but migrating preexisting data from multiple and or varied sources can be complex and costly The secrets import process helps you automate and streamline your sensitive data migration with codified import plans as HCL files Import plans tell Vault which KVv2 secrets engine instance to store the expected secret data in the source system for which data will be read from and how to filter this data Three HCL blocks make this possible The destination block defines target KVv2 mounts The source block provides credentials for connecting to the external system The mapping block defines how Vault should decide which data gets imported before writing the information to KVv2 Destinations Vault stores imported secrets in a Vault KVv2 secrets engine mount Destination blocks start with destination vault and define the desired KVv2 mount path and an optional namespace The combination of these represent the exact location in your Vault instance you want the information stored HCL syntax hcl destination vault name my dest 1 namespace ns 1 mount mount 1 name string required A unique name for the destination block that can be referenced in subsequent mapping blocks mount string required The mount path for the target KVv2 instance address string Optional network address of the Vault server with the KVv2 secrets engine enabled By default the Vault client s address will be used token string Optional authentication token for the Vault server at the specified address By default the Vault client s token will be used namespace string Optional namespace path containing the specified KVv2 mount By default Vault looks for the KVv2 mount under the root namespace Sources Vault can import secrets from the following sources GCP Secret Manager vault docs import gcpsm To pull data from a source during import Vault needs read credentials for the external system You can provide credentials directly as part of the import plan or use Vault to automatically generate dynamic credentials if you already have the corresponding secrets engine configured HCL syntax Source blocks start with source external system and include any connection information required by the target system or the secrets engine to leverage For example hcl source gcp name my gcp source 1 credentials path to service account key json name string required A unique name for the source block that can be referenced in subsequent mapping blocks credentials string required Path to a credential file or token with read permissions for the target system Depending on the source system additional information may be required Refer to the connection documentation for your source system to determine the full set of required fields for that system type Mappings Mappings glue the source and destination together and filter the migrated data to determine what is imported and what is ignored Vault currently supports the following mapping methods mapping passthrough vault docs import mappings 
passthrough mapping metadata vault docs import mappings metadata mapping regex vault docs import mappings regex HCL syntax Mapping blocks start with mapping filter type and require a source name destination name an execution priority and any corresponding transformations or filters that apply for each mapping type For example hcl mapping regex name my map 1 source my gcp source 1 destination my dest 1 priority 1 expression database name string required A unique name for the mapping block source string required The name of a previously defined source block from which the data should be read destination string required The name of a previously defined destination block to which the data should be written priority integer required The order in which Vault should apply the mapping block during the import process The lower the number the higher the priority For example a mapping with priority 1 executes before a mapping with priority 2 Depending on the filter type additional fields may be required or possible Refer to the import mappings documentation vault docs import mappings for the available supported options and for a list of each mapping s specific fields Tip title Priority matters Vault applies mapping definitions in priority order and a given secret only matches to the first mapping that applies Once Vault imports a secret with a particular mapping subsequent reads from the same source will ignore that secret See the priority section vault docs import mappings priority for an example Tip |
---
layout: docs
page_title: Secrets import mappings
description: Mappings let users apply various filtering methods to secrets being imported into Vault.
---
# Import mappings
Vault supports multiple filter types for mapping blocks. Each type provides a different mechanism
for filtering the scanned secrets and determining which ones are imported into Vault.
## Argument reference
Refer to the [HCL syntax](/vault/docs/import#hcl-syntax-2) for arguments common to all mapping types.
## Passthrough mapping filters
The passthrough mapping block `mapping_passthrough` allows all secrets through from the specified source to the
specified destination. A common use case is to treat it as a catch-all for imported secrets: by assigning
it the lowest priority in the import plan, all other mapping blocks are applied first. Secrets that fail
to match any of the previous mappings will fall through to the passthrough block and be collected in a single
KVv2 location.
### Additional arguments
There are no extra arguments to specify in a `mapping_passthrough` block.
### Example
In this example, every single secret that `my-gcp-source-1` scans from GCP Secret Manager will be imported
to the KVv2 secrets engine mount defined in `my-dest-1`.
```hcl
mapping_passthrough {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
}
```
## Metadata
The metadata mapping block `mapping_metadata` allows secrets through from the specified source to the specified
destination if they contain matching metadata key-value pairs. Metadata is not supported in all external secret
management systems, and ones that do may use different terminology for metadata. For example, AWS allows tags
on secrets while [GCP](/vault/docs/import/gcpsm) allows labels.
### Additional arguments
* `tags` `(string: <required>)` - A set of key-value pairs to match on secrets from the external system. All of the specified
keys must be found on a secret and all of the values must be exact matches. Specifying a key in this mapping with
an empty value (`""`) acts as a wildcard and matches any value the external system has for that key.
### Example
In this example, `my-map-1` will only import the secrets into the destination `my-dest-1` that contain a tag with
a key named `importable` and its value set to `true`.
```hcl
mapping_metadata {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
tags = {
"importable" = "true"
}
}
```
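As noted above, an empty tag value acts as a wildcard. A minimal variation of the example imports every secret that carries the `importable` key, regardless of its value:
```hcl
mapping_metadata {
  name = "my-map-wildcard"
  source = "my-gcp-source-1"
  destination = "my-dest-1"
  priority = 2
  tags = {
    "importable" = ""
  }
}
```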
## Regex
The regex mapping block `mapping_regex` allows secrets through from the specified source to the specified
destination if their secret name passes a regular expression check.
### Additional arguments
* `expression` `(string: <required>)` - The regular expression used to match secrets' names from the external system.
### Example
In this example, any secret in the GCP source whose name begins with `database/` will be imported into Vault.
```hcl
mapping_regex {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
expression = "^database/.*$"
}
```
## Priority
Priority works in a "first match" fashion where lower values mean higher priority. To explain in more detail,
consider the earlier metadata example with an additional mapping.
Below are two metadata mappings. The first, `my-map-1`, has a priority of 1. It only imports secrets
into the destination `my-dest-1` that contain both tag keys `database` and `importable`, with values
matching `users` and `true` respectively. The second, `my-map-2`, has a priority of 2. Even though every
secret matched by the first mapping would also satisfy the second mapping's filtering rule, those secrets are only
imported into `my-dest-1` because `my-map-1` matches first. All remaining secrets that have the tag
`importable` with a value of `true` are imported into `my-dest-2`.
```hcl
mapping_metadata {
name = "my-map-1"
source = "my-gcp-source-1"
destination = "my-dest-1"
priority = 1
tags = {
"database" = "users"
"importable" = "true"
}
}
mapping_metadata {
name = "my-map-2"
source = "my-gcp-source-1"
destination = "my-dest-2"
priority = 2
tags = {
"importable" = "true"
}
}
``` | vault | layout docs page title Secrets import mappings description Mappings lets users apply various filtering methods to secrets being imported in to Vault Import mappings Vault supports multiple filter types for mapping blocks Each of the types provides a different mechanism used to filter the scanned secrets and determine which will be imported in to Vault Argument reference Refer to the HCL syntax vault docs import hcl syntax 2 for arguments common to all mapping types Passthrough mapping filters The passthrough mapping block mapping passthrough allows all secrets through from the specified source to the specified destination For example one use case is setting it as a base case for imported secrets By assigning it the lowest priority in the import plan all other mapping blocks will be applied first Secrets that fail to match any of the previous mappings will fall through to the passthrough block and be collected in a single KVv2 location Additional arguments There are no extra arguments to specify in a mapping passthrough block Example In this example every single secret that my gcp source 1 scans from GCP Secret Manager will be imported to the KVv2 secrets engine mount defined in my dest 1 hcl mapping passthrough name my map 1 source my gcp source 1 destination my dest 1 priority 1 Metadata The metadata mapping block mapping metadata allows secrets through from the specified source to the specified destination if they contain matching metadata key value pairs Metadata is not supported in all external secret management systems and ones that do may use different terminology for metadata For example AWS allows tags on secrets while GCP vault docs import gcpsm allows labels Additional arguments tags string required A set of key value pairs to match on secrets from the external system All of the specified keys must be found on a secret and all of the values must be exact matches Specifying a key in this mapping with an empty value i e acts as a wildcard match to the external system s key s value Example In this example my map 1 will only import the secrets into the destination my dest 1 that contain a tag with a key named importable and its value set to true hcl mapping metadata name my map 1 source my gcp source 1 destination my dest 1 priority 1 tags importable true Regex The regex mapping block mapping regex allows secrets through from the specified source to the specified destination if their secret name passes a regular expression check Additional arguments expression string required The regular expression used to match secrets names from the external system Example In this example any secret in the GCP source whose name begins with database will be imported into Vault hcl mapping regex name my map 1 source my gcp source 1 destination my dest 1 priority 1 expression database Priority Priority works in a first match fashion where lower values are higher priority To explain in more detail consider the above metadata example with a second additional mapping Below are two metadata mappings The first my map 1 has a priority of 1 This will only import the secrets into the destination my dest 1 that contain both tag keys database and importable Each of these keys values must also match to users and true respectively The second my map 2 has a priority of 2 Even though all the secrets in the first mapping would also qualify for the second mapping s filtering rule those secrets will only be imported into my dest 1 because of my map 2 s lower priority All remaining secrets that have the 
tag importable with a value of true will be imported into my dest 2 hcl mapping metadata name my map 1 source my gcp source 1 destination my dest 1 priority 1 tags database users importable true mapping metadata name my map 2 source my gcp source 1 destination my dest 2 priority 2 tags importable true |
---
layout: docs
page_title: Developer Quick Start
description: Learn how to store and retrieve your first secret.
---
# Developer quick start
This quick start will explore how to use Vault client libraries inside your application code to store and retrieve your first secret value. Vault takes the security burden away from developers by providing a secure, centralized secret store for an application’s sensitive data: credentials, certificates, encryption keys, and more.
The complete code samples for the steps below are available here:
- [Go](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/go/example.go)
- [Ruby](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/ruby/example.rb)
- [C#](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/dotnet/Example.cs)
- [Python](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/python/example.py)
- [Java (Spring)](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/java/Example.java)
- [OpenAPI-based Go](https://github.com/hashicorp/vault-client-go/#getting-started)
- [OpenAPI-based .NET](https://github.com/hashicorp/vault-client-dotnet/#getting-started)
For an out-of-the-box runnable demo application showcasing these concepts and more, see the hello-vault repositories ([Go](https://github.com/hashicorp/hello-vault-go), [C#](https://github.com/hashicorp/hello-vault-dotnet) and [Java/Spring Boot](https://github.com/hashicorp/hello-vault-spring)).
## Prerequisites
- [Docker](https://docs.docker.com/get-docker/) or a [local installation](/vault/tutorials/getting-started/getting-started-install) of the Vault binary
- A development environment applicable to one of the languages in this quick start (currently **Go**, **Ruby**, **C#**, **Python**, **Java (Spring)**, and **Bash (curl)**)
-> **Note**: Make sure you are using the [latest version](https://docs.docker.com/engine/release-notes/) of Docker. Older versions may not work. As of Vault 1.12.0, the recommended version of Docker is 20.10.17 or higher.
## Step 1: start Vault
!> **Warning**: This in-memory “dev” server is useful for practicing with Vault locally for the first time, but is insecure and **should never be used in production**. For developers who need to manage their own production Vault installations, this [page](/vault/tutorials/operations/production-hardening) provides some guidance on how to make your setup more production-friendly.
Run the Vault server in a non-production "dev" mode in one of the following ways:
**For Docker users, run this command**:
```shell-session
$ docker run -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=dev-only-token' hashicorp/vault
```
**For non-Docker users, run this command**:
```shell-session
$ vault server -dev -dev-root-token-id="dev-only-token"
```
The `-dev-root-token-id` flag for dev servers tells the Vault server to allow full root access to anyone who presents a token with the specified value (in this case "dev-only-token").
!> **Warning**: The [root token](/vault/docs/concepts/tokens#root-tokens) is useful for development, but allows full access to all data and functionality of Vault, so it must be carefully guarded in production. Ideally, even an administrator of Vault would use their own token with limited privileges instead of the root token.
Vault is now listening over HTTP on port **8200**. With all the setup out of the way, it's time to get coding!
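Before moving on, you can optionally confirm the dev server is reachable by querying its health endpoint (this check is not part of the original steps; adjust the address if yours differs):
```shell-session
$ curl http://127.0.0.1:8200/v1/sys/health
```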
## Step 2: install a client library
To read and write secrets in your application, you need to first configure a client to connect to Vault.
Let's install the Vault client library for your language of choice.
-> **Note**: Some of these libraries are currently community-maintained.
<Tabs>
<Tab heading="Go" group="go">
[Go](https://pkg.go.dev/github.com/hashicorp/vault/api) (official) client library:
```shell-session
$ go get github.com/hashicorp/vault/api
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```go
import vault "github.com/hashicorp/vault/api"
```
</CodeBlockConfig>
</Tab>
<Tab heading="Ruby" group="ruby">
[Ruby](https://github.com/hashicorp/vault-ruby) (official) client library:
```shell-session
$ gem install vault
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```ruby
require "vault"
```
</CodeBlockConfig>
</Tab>
<Tab heading="C#" group="cs">
[C#](https://github.com/rajanadar/VaultSharp) client library:
```shell-session
$ dotnet add package VaultSharp
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```cs
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.Token;
using VaultSharp.V1.Commons;
```
</CodeBlockConfig>
</Tab>
<Tab heading="Python" group="python">
[Python](https://github.com/hvac/hvac) client library:
```shell-session
$ pip install hvac
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```Python
import hvac
```
</CodeBlockConfig>
</Tab>
<Tab heading="Java" group="java">
[Java (Spring)](https://spring.io/projects/spring-vault) client library:
Add the following to pom.xml:
```xml
<dependency>
<groupId>org.springframework.vault</groupId>
<artifactId>spring-vault-core</artifactId>
<version>2.3.1</version>
</dependency>
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```Java
import org.springframework.vault.authentication.TokenAuthentication;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.support.Versioned;
import org.springframework.vault.core.VaultTemplate;
```
</CodeBlockConfig>
</Tab>
<Tab heading="OpenAPI Go (Beta)" group="openAPI-go">
[OpenAPI Go](https://github.com/hashicorp/vault-client-go) (Beta) client library:
```shell-session
$ go get github.com/hashicorp/vault-client-go
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```go
import (
"github.com/hashicorp/vault-client-go"
"github.com/hashicorp/vault-client-go/schema"
)
```
</CodeBlockConfig>
</Tab>
<Tab heading="OpenAPI .NET (Beta)" group="openAPI-dotnet">
[OpenAPI .NET](https://github.com/hashicorp/vault-client-dotnet) (Beta) client library:
Vault is a package available at [Hashicorp Nuget](https://www.nuget.org/profiles/hashicorp).
```shell-session
$ nuget install HashiCorp.Vault -Version "0.1.0-beta"
```
**Or:**
```shell-session
$ dotnet add package Hashicorp.Vault -version "0.1.0-beta"
```
Now, let's add the import statements for the client library to the top of the file.
<CodeBlockConfig heading="import statements for client library" lineNumbers>
```cs
using Vault;
using Vault.Client;
```
</CodeBlockConfig>
</Tab>
</Tabs>
## Step 3: authenticate to Vault
A variety of [authentication methods](/vault/docs/auth) can be used to prove your application's identity to the Vault server. To explore more secure authentication methods, such as via Kubernetes or your cloud provider, see the auth code snippets in the [vault-examples](https://github.com/hashicorp/vault-examples) repository.
To keep things simple for our example, we'll just use the root token created in **Step 1**.
Paste the following code to initialize a new Vault client that will use token-based authentication for all its requests:
<Tabs>
<Tab heading="Go">
```go
config := vault.DefaultConfig()
config.Address = "http://127.0.0.1:8200"
client, err := vault.NewClient(config)
if err != nil {
log.Fatalf("unable to initialize Vault client: %v", err)
}
client.SetToken("dev-only-token")
```
</Tab>
<Tab heading="Ruby" group="ruby">
```ruby
Vault.configure do |config|
config.address = "http://127.0.0.1:8200"
config.token = "dev-only-token"
end
```
</Tab>
<Tab heading="C#" group="cs">
```cs
IAuthMethodInfo authMethod = new TokenAuthMethodInfo(vaultToken: "dev-only-token");
VaultClientSettings vaultClientSettings = new
VaultClientSettings("http://127.0.0.1:8200", authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
```
</Tab>
<Tab heading="Python" group="python">
```Python
client = hvac.Client(
url='http://127.0.0.1:8200',
token='dev-only-token',
)
```
</Tab>
<Tab heading="Java" group="java">
```Java
VaultEndpoint vaultEndpoint = new VaultEndpoint();
vaultEndpoint.setHost("127.0.0.1");
vaultEndpoint.setPort(8200);
vaultEndpoint.setScheme("http");
VaultTemplate vaultTemplate = new VaultTemplate(
vaultEndpoint,
new TokenAuthentication("dev-only-token")
);
```
</Tab>
<Tab heading="Bash" group="bash">
```shell-session
$ export VAULT_TOKEN="dev-only-token"
```
</Tab>
<Tab heading="OpenAPI Go (Beta)" group="openAPI-go">
```go
client, err := vault.New(
vault.WithAddress("http://127.0.0.1:8200"),
vault.WithRequestTimeout(30*time.Second),
)
if err != nil {
log.Fatal(err)
}
if err := client.SetToken("dev-only-token"); err != nil {
log.Fatal(err)
}
```
</Tab>
<Tab heading="OpenAPI .NET (Beta)" group="openAPI-dotnet">
```cs
string address = "http://127.0.0.1:8200";
VaultConfiguration config = new VaultConfiguration(address);
VaultClient vaultClient = new VaultClient(config);
vaultClient.SetToken("dev-only-token");
```
</Tab>
</Tabs>
## Step 4: store a secret
Secrets are sensitive data like API keys and passwords that we shouldn’t be storing in our code or configuration files. Instead, we want to store values like this in Vault.
We'll use the Vault client we just initialized to write a secret to Vault, like so:
<Tabs>
<Tab heading="Go">
```go
secretData := map[string]interface{}{
"password": "Hashi123",
}
_, err = client.KVv2("secret").Put(context.Background(), "my-secret-password", secretData)
if err != nil {
log.Fatalf("unable to write secret: %v", err)
}
fmt.Println("Secret written successfully.")
```
</Tab>
<Tab heading="Ruby" group="ruby">
```ruby
secret_data = {data: {password: "Hashi123"}}
Vault.logical.write("secret/data/my-secret-password", secret_data)
puts "Secret written successfully."
```
</Tab>
<Tab heading="C#" group="cs">
```cs
var secretData = new Dictionary<string, object> { { "password", "Hashi123" } };
vaultClient.V1.Secrets.KeyValue.V2.WriteSecretAsync(
path: "/my-secret-password",
data: secretData,
mountPoint: "secret"
).Wait();
Console.WriteLine("Secret written successfully.");
```
</Tab>
<Tab heading="Python" group="python">
```Python
create_response = client.secrets.kv.v2.create_or_update_secret(
path='my-secret-password',
secret=dict(password='Hashi123'),
)
print('Secret written successfully.')
```
</Tab>
<Tab heading="Java" group="java">
```Java
Map<String, String> data = new HashMap<>();
data.put("password", "Hashi123");
Versioned.Metadata createResponse = vaultTemplate
.opsForVersionedKeyValue("secret")
.put("my-secret-password", data);
System.out.println("Secret written successfully.");
```
</Tab>
<Tab heading="Bash" group="bash">
```shell-session
$ curl \
--header "X-Vault-Token: $VAULT_TOKEN" \
--header "Content-Type: application/json" \
--request POST \
--data '{"data": {"password": "Hashi123"}}' \
http://127.0.0.1:8200/v1/secret/data/my-secret-password
```
</Tab>
<Tab heading="OpenAPI Go (Beta)" group="openAPI-go">
```go
_, err = client.Secrets.KVv2Write(context.Background(), "my-secret-password", schema.KVv2WriteRequest{
Data: map[string]any{
"password": "Hashi123",
},
})
if err != nil {
log.Fatal(err)
}
log.Println("Secret written successfully.")
```
</Tab>
<Tab heading="OpenAPI .NET (Beta)" group="openAPI-dotnet">
```cs
var secretData = new Dictionary<string, string> { { "password", "Hashi123" } };
// Write a secret
var kvRequestData = new KVv2WriteRequest(secretData);
vaultClient.Secrets.KVv2Write("my-secret-password", kvRequestData);
```
</Tab>
</Tabs>
A common way of storing secrets is as key-value pairs using the [KV secrets engine (v2)](/vault/docs/secrets/kv/kv-v2). In the code we've just added, `password` is the key in the key-value pair, and `Hashi123` is the value.
We also provided the path to our secret in Vault. We will reference this path in a moment when we learn how to retrieve our secret.
Run the code now, and you should see `Secret written successfully`. If not, check that you've used the correct value for the root token and Vault server address.
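If you also have the Vault CLI installed, you can confirm the write out-of-band. This optional check assumes `VAULT_ADDR` points at the dev server and `VAULT_TOKEN` holds the root token from Step 1:
```shell-session
$ vault kv get -mount=secret my-secret-password
```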
## Step 5: retrieve a secret
Now that we know how to write a secret, let's practice reading one.
Underneath the line where you wrote a secret to Vault, add a few more lines to retrieve the secret and unpack the value:
<Tabs>
<Tab heading="Go">
```go
secret, err := client.KVv2("secret").Get(context.Background(), "my-secret-password")
if err != nil {
log.Fatalf("unable to read secret: %v", err)
}
value, ok := secret.Data["password"].(string)
if !ok {
log.Fatalf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
```
</Tab>
<Tab heading="Ruby" group="ruby">
```ruby
secret = Vault.logical.read("secret/data/my-secret-password")
password = secret.data[:data][:password]
```
</Tab>
<Tab heading="C#" group="cs">
```cs
Secret<SecretData> secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(
path: "/my-secret-password",
mountPoint: "secret"
).Result;
var password = secret.Data.Data["password"];
```
</Tab>
<Tab heading="Python" group="python">
```Python
read_response = client.secrets.kv.read_secret_version(path='my-secret-password')
password = read_response['data']['data']['password']
```
</Tab>
<Tab heading="Java" group="java">
```Java
Versioned<Map<String, Object>> readResponse = vaultTemplate
.opsForVersionedKeyValue("secret")
.get("my-secret-password");
String password = "";
if (readResponse != null && readResponse.hasData()) {
password = (String) readResponse.getData().get("password");
}
```
</Tab>
<Tab heading="Bash" group="bash">
```shell-session
$ curl \
--header "X-Vault-Token: $VAULT_TOKEN" \
http://127.0.0.1:8200/v1/secret/data/my-secret-password > secrets.json
```
</Tab>
<Tab heading="OpenAPI Go (Beta)" group="openAPI-go">
```go
s, err := client.Secrets.KVv2Read(context.Background(), "my-secret-password")
if err != nil {
log.Fatal(err)
}
log.Println("Secret retrieved:", s.Data)
```
</Tab>
<Tab heading="OpenAPI .NET (Beta)" group="openAPI-dotnet">
```cs
VaultResponse<Object> resp = vaultClient.Secrets.KVv2Read("my-secret-password");
Console.WriteLine(resp.Data);
```
</Tab>
</Tabs>
Finally, confirm that the value we unpacked from the read response is correct:
<Tabs>
<Tab heading="Go">
```go
if value != "Hashi123" {
log.Fatalf("unexpected password value %q retrieved from vault", value)
}
fmt.Println("Access granted!")
```
</Tab>
<Tab heading="Ruby" group="ruby">
```ruby
abort "Unexpected password" if password != "Hashi123"
puts "Access granted!"
```
</Tab>
<Tab heading="C#" group="cs">
```cs
if (password.ToString() != "Hashi123")
{
throw new System.Exception("Unexpected password");
}
Console.WriteLine("Access granted!");
```
</Tab>
<Tab heading="Python" group="python">
```Python
if password != 'Hashi123':
sys.exit('unexpected password')
print('Access granted!')
```
</Tab>
<Tab heading="Java" group="java">
```Java
if (!password.equals("Hashi123")) {
throw new Exception("Unexpected password");
}
System.out.println("Access granted!");
```
</Tab>
<Tab heading="Bash" group="bash">
```shell-session
$ cat secrets.json | jq '.data.data'
```
</Tab>
</Tabs>
If the secret was fetched successfully, you should see the `Access granted!` message after you run the code. If not, check to see if you provided the correct path to your secret.
**That's it! You've just written and retrieved your first Vault secret!**
## Additional examples
For more secure examples of client authentication, see the auth snippets in the [vault-examples](https://github.com/hashicorp/vault-examples) repo.
For a runnable demo app that demonstrates more features, such as keeping your connection to Vault alive and connecting to a database using Vault's dynamic database credentials, see the sample application hello-vault ([Go](https://github.com/hashicorp/hello-vault-go), [C#](https://github.com/hashicorp/hello-vault-dotnet)).
To learn how to integrate applications with Vault without needing to always change your application code, see the [Vault Agent](/vault/docs/agent-and-proxy/agent) documentation. | vault | layout docs page title Developer Quick Start description Learn how to store and retrieve your first secret Developer quick start This quick start will explore how to use Vault client libraries inside your application code to store and retrieve your first secret value Vault takes the security burden away from developers by providing a secure centralized secret store for an application s sensitive data credentials certificates encryption keys and more The complete code samples for the steps below are available here Go https github com hashicorp vault examples blob main examples quick start go example go Ruby https github com hashicorp vault examples blob main examples quick start ruby example rb C https github com hashicorp vault examples blob main examples quick start dotnet Example cs Python https github com hashicorp vault examples blob main examples quick start python example py Java Spring https github com hashicorp vault examples blob main examples quick start java Example java OpenAPI based Go https github com hashicorp vault client go getting started OpenAPI based NET https github com hashicorp vault client dotnet getting started For an out of the box runnable demo application showcasing these concepts and more see the hello vault repositories Go https github com hashicorp hello vault go C https github com hashicorp hello vault dotnet and Java Spring Boot https github com hashicorp hello vault spring Prerequisites Docker https docs docker com get docker or a local installation vault tutorials getting started getting started install of the Vault binary A development environment applicable to one of the languages in this quick start currently Go Ruby C Python Java Spring and Bash curl Note Make sure you are using the latest version https docs docker com engine release notes of Docker Older versions may not work As of 1 12 0 the recommended version of Docker is 20 10 17 or higher Step 1 start Vault Warning This in memory dev server is useful for practicing with Vault locally for the first time but is insecure and should never be used in production For developers who need to manage their own production Vault installations this page vault tutorials operations production hardening provides some guidance on how to make your setup more production friendly Run the Vault server in a non production dev mode in one of the following ways For Docker users run this command shell session docker run p 8200 8200 e VAULT DEV ROOT TOKEN ID dev only token hashicorp vault For non Docker users run this command shell session vault server dev dev root token id dev only token The dev root token id flag for dev servers tells the Vault server to allow full root access to anyone who presents a token with the specified value in this case dev only token Warning The root token vault docs concepts tokens root tokens is useful for development but allows full access to all data and functionality of Vault so it must be carefully guarded in production Ideally even an administrator of Vault would use their own token with limited privileges instead of the root token Vault is now listening over HTTP on port 8200 With all the setup out of the way it s time to get coding Step 2 install a client library To read and write secrets in your application you need to first configure a client to connect to Vault Let s install the Vault client 
<!--
---
linkTitle: "Migrating from Tekton v1alpha1"
weight: 102
---
-->
# Migrating From Tekton `v1alpha1` to Tekton `v1beta1`
- [Changes to fields](#changes-to-fields)
- [Changes to input parameters](#changes-to-input-parameters)
- [Replacing `PipelineResources` with `Tasks`](#replacing-pipelineresources-with-tasks)
- [Changes to `PipelineResources`](#changes-to-pipelineresources)
This document describes the differences between `v1alpha1` Tekton entities and their
`v1beta1` counterparts. It also describes how to replace the supported types of
`PipelineResources` with equivalent `Tasks` from the Tekton Catalog.
## Changes to fields
In Tekton `v1beta1`, the following fields have been changed:
| Old field | New field |
| --------- | ----------|
| `spec.inputs.params` | [`spec.params`](#changes-to-input-parameters) |
| `spec.inputs` | Removed from `Tasks` |
| `spec.outputs` | Removed from `Tasks` |
| `spec.inputs.resources` | [`spec.resources.inputs`](#changes-to-pipelineresources) |
| `spec.outputs.resources` | [`spec.resources.outputs`](#changes-to-pipelineresources) |
## Changes to input parameters
In Tekton `v1beta1`, input parameters have been moved from `spec.inputs.params` to `spec.params`.
For example, consider the following `v1alpha1` parameters:
```yaml
# Task.yaml (v1alpha1)
spec:
inputs:
params:
- name: ADDR
description: Address to curl.
type: string
# TaskRun.yaml (v1alpha1)
spec:
inputs:
params:
- name: ADDR
value: https://example.com/foo.json
```
The above parameters are now represented as follows in `v1beta1`:
```yaml
# Task.yaml (v1beta1)
spec:
params:
- name: ADDR
description: Address to curl.
type: string
# TaskRun.yaml (v1beta1)
spec:
params:
- name: ADDR
value: https://example.com/foo.json
```
## Replacing `PipelineResources` with `Tasks`
See ["Replacing PipelineResources with Tasks"](https://github.com/tektoncd/pipeline/blob/main/docs/pipelineresources.md#replacing-pipelineresources-with-tasks) for information and examples on how to replace PipelineResources when migrating from v1alpha1 to v1beta1.
## Changes to PipelineResources
In Tekton `v1beta1`, `PipelineResources` have been moved from `spec.inputs.resources`
and `spec.outputs.resources` to `spec.resources.inputs` and `spec.resources.outputs`,
respectively.
For example, consider the following `v1alpha1` definition:
```yaml
# Task.yaml (v1alpha1)
spec:
inputs:
resources:
- name: skaffold
type: git
outputs:
resources:
- name: baked-image
type: image
# TaskRun.yaml (v1alpha1)
spec:
  inputs:
    resources:
      - name: skaffold
        resourceSpec:
          type: git
          params:
            - name: revision
              value: v0.32.0
            - name: url
              value: https://github.com/GoogleContainerTools/skaffold
  outputs:
    resources:
      - name: baked-image
        resourceSpec:
          type: image
          params:
            - name: url
              value: gcr.io/foo/bar
```
The above definition becomes the following in `v1beta1`:
```yaml
# Task.yaml (v1beta1)
spec:
  resources:
    inputs:
      - name: skaffold
        type: git
    outputs:
      - name: baked-image
        type: image
# TaskRun.yaml (v1beta1)
spec:
  resources:
    inputs:
      - name: skaffold
        resourceSpec:
          type: git
          params:
            - name: revision
              value: v0.32.0
            - name: url
              value: https://github.com/GoogleContainerTools/skaffold
    outputs:
      - name: baked-image
        resourceSpec:
          type: image
          params:
            - name: url
              value: gcr.io/foo/bar
``` | tekton | linkTitle Migrating from Tekton v1alpha1 weight 102 Migrating From Tekton v1alpha1 to Tekton v1beta1 Changes to fields changes to fields Changes to input parameters changes to input parameters Replacing PipelineResources with Tasks replacing pipelineresources with tasks Replacing a git resource replacing a git resource Replacing a pullrequest resource replacing a pullrequest resource Replacing a gcs resource replacing a gcs resource Replacing an image resource replacing an image resource Replacing a cluster resource replacing a cluster resource Changes to PipelineResources changes to pipelineresources This document describes the differences between v1alpha1 Tekton entities and their v1beta1 counterparts It also describes how to replace the supported types of PipelineResources with Tasks from the Tekton Catalog of equivalent functionality Changes to fields In Tekton v1beta1 the following fields have been changed Old field New field spec inputs params spec params changes to input parameters spec inputs Removed from Tasks spec outputs Removed from Tasks spec inputs resources spec resources inputs changes to pipelineresources spec outputs resources spec resources outputs changes to pipelineresources Changes to input parameters In Tekton v1beta1 input parameters have been moved from spec inputs params to spec params For example consider the following v1alpha1 parameters yaml Task yaml v1alpha1 spec inputs params name ADDR description Address to curl type string TaskRun yaml v1alpha1 spec inputs params name ADDR value https example com foo json The above parameters are now represented as follows in v1beta1 yaml Task yaml v1beta1 spec params name ADDR description Address to curl type string TaskRun yaml v1beta1 spec params name ADDR value https example com foo json Replacing PipelineResources with Tasks See Replacing PipelineResources with Tasks https github com tektoncd pipeline blob main docs pipelineresources md replacing pipelineresources with tasks for information and examples on how to replace PipelineResources when migrating from v1alpha1 to v1beta1 Changes to PipelineResources In Tekton v1beta1 PipelineResources have been moved from spec input resources and spec output resources to spec resources inputs and spec resources outputs respectively For example consider the following v1alpha1 definition yaml Task yaml v1alpha1 spec inputs resources name skaffold type git outputs resources name baked image type image TaskRun yaml v1alpha1 spec inputs resources name skaffold resourceSpec type git params name revision value v0 32 0 name url value https github com GoogleContainerTools skaffold outputs resources name baked image resourceSpec type image params name url value gcr io foo bar The above definition becomes the following in v1beta1 yaml Task yaml v1beta1 spec resources inputs name src repo type git outputs name baked image type image TaskRun yaml v1beta1 spec resources inputs name src repo resourceSpec type git params name revision value main name url value https github com tektoncd pipeline outputs name baked image resourceSpec type image params name url value gcr io foo bar |
<!--
---
linkTitle: "PipelineRuns"
weight: 204
---
-->
# PipelineRuns
<!-- toc -->
- [PipelineRuns](#pipelineruns)
- [Overview](#overview)
- [Configuring a <code>PipelineRun</code>](#configuring-a-pipelinerun)
- [Specifying the target <code>Pipeline</code>](#specifying-the-target-pipeline)
- [Tekton Bundles](#tekton-bundles)
- [Remote Pipelines](#remote-pipelines)
- [Specifying Task-level `ComputeResources`](#specifying-task-level-computeresources)
- [Specifying <code>Parameters</code>](#specifying-parameters)
- [Propagated Parameters](#propagated-parameters)
- [Scope and Precedence](#scope-and-precedence)
- [Default Values](#default-values)
- [Object Parameters](#object-parameters)
- [Specifying custom <code>ServiceAccount</code> credentials](#specifying-custom-serviceaccount-credentials)
- [Mapping <code>ServiceAccount</code> credentials to <code>Tasks</code>](#mapping-serviceaccount-credentials-to-tasks)
- [Specifying a <code>Pod</code> template](#specifying-a-pod-template)
- [Specifying taskRunSpecs](#specifying-taskrunspecs)
- [Specifying <code>Workspaces</code>](#specifying-workspaces)
- [Propagated Workspaces](#propagated-workspaces)
- [Referenced TaskRuns within Embedded PipelineRuns](#referenced-taskruns-within-embedded-pipelineruns)
- [Specifying <code>LimitRange</code> values](#specifying-limitrange-values)
- [Configuring a failure timeout](#configuring-a-failure-timeout)
- [<code>PipelineRun</code> status](#pipelinerun-status)
- [The <code>status</code> field](#the-status-field)
- [Monitoring execution status](#monitoring-execution-status)
- [Marking off user errors](#marking-off-user-errors)
- [Cancelling a <code>PipelineRun</code>](#cancelling-a-pipelinerun)
- [Gracefully cancelling a <code>PipelineRun</code>](#gracefully-cancelling-a-pipelinerun)
- [Gracefully stopping a <code>PipelineRun</code>](#gracefully-stopping-a-pipelinerun)
- [Pending <code>PipelineRuns</code>](#pending-pipelineruns)
<!-- /toc -->
## Overview
A `PipelineRun` allows you to instantiate and execute a [`Pipeline`](pipelines.md) on-cluster.
A `Pipeline` specifies one or more `Tasks` in the desired order of execution. A `PipelineRun`
executes the `Tasks` in the `Pipeline` in the order they are specified until all `Tasks` have
executed successfully or a failure occurs.
**Note:** A `PipelineRun` automatically creates corresponding `TaskRuns` for every
`Task` in your `Pipeline`.
The `Status` field tracks the current state of a `PipelineRun`, and can be used to monitor
progress.
This field contains the status of every `TaskRun`, as well as the full `PipelineSpec` used
to instantiate this `PipelineRun`, for full auditability.
## Configuring a `PipelineRun`
A `PipelineRun` definition supports the following fields:
- Required:
- [`apiVersion`][kubernetes-overview] - Specifies the API version. For example
`tekton.dev/v1beta1`.
- [`kind`][kubernetes-overview] - Indicates that this resource object is a `PipelineRun` object.
- [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the
`PipelineRun` object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for
this `PipelineRun` object.
- [`pipelineRef` or `pipelineSpec`](#specifying-the-target-pipeline) - Specifies the target [`Pipeline`](pipelines.md).
- Optional:
- [`params`](#specifying-parameters) - Specifies the desired execution parameters for the `Pipeline`.
- [`serviceAccountName`](#specifying-custom-serviceaccount-credentials) - Specifies a `ServiceAccount`
object that supplies specific execution credentials for the `Pipeline`.
- [`status`](#cancelling-a-pipelinerun) - Specifies options for cancelling a `PipelineRun`.
- [`taskRunSpecs`](#specifying-taskrunspecs) - Specifies a list of `PipelineRunTaskSpec` which allows for setting `ServiceAccountName`, [`Pod` template](./podtemplates.md), and `Metadata` for each task. This overrides the `Pod` template set for the entire `Pipeline`.
- [`timeout`](#configuring-a-failure-timeout) - Specifies the timeout before the `PipelineRun` fails. `timeout` is deprecated and will eventually be removed, so consider using `timeouts` instead.
- [`timeouts`](#configuring-a-failure-timeout) - Specifies the timeout before the `PipelineRun` fails. `timeouts` allows more granular timeout configuration, at the pipeline, tasks, and finally levels
- [`podTemplate`](#specifying-a-pod-template) - Specifies a [`Pod` template](./podtemplates.md) to use as the basis for the configuration of the `Pod` that executes each `Task`.
- [`workspaces`](#specifying-workspaces) - Specifies a set of workspace bindings which must match the names of workspaces declared in the pipeline being used.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
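Putting these fields together, a minimal `PipelineRun` might look like the following sketch. The `Pipeline` name and parameter are illustrative and assume a matching `Pipeline` already exists in the namespace:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline # assumes this Pipeline already exists
  params:
    - name: pl-param-x
      value: "100"
  timeouts:
    pipeline: 1h
```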
### Specifying the target `Pipeline`
You must specify the target `Pipeline` that you want the `PipelineRun` to execute, either by referencing
an existing `Pipeline` definition, or embedding a `Pipeline` definition directly in the `PipelineRun`.
To specify the target `Pipeline` by reference, use the `pipelineRef` field:
```yaml
spec:
pipelineRef:
name: mypipeline
```
To embed a `Pipeline` definition in the `PipelineRun`, use the `pipelineSpec` field:
```yaml
spec:
pipelineSpec:
tasks:
- name: task1
taskRef:
name: mytask
```
The `Pipeline` in the [`pipelineSpec` example](../examples/v1/pipelineruns/pipelinerun-with-pipelinespec.yaml)
displays morning and evening greetings. Once you create and execute it, you can check the logs for its `Pods`:
```bash
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-morning)
Good Morning, Bob!
kubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-night)
Good Night, Bob!
```
You can also embed a `Task` definition in the embedded `Pipeline` definition:
```yaml
spec:
pipelineSpec:
tasks:
- name: task1
taskSpec:
steps: ...
```
In the [`taskSpec` in `pipelineSpec` example](../examples/v1/pipelineruns/pipelinerun-with-pipelinespec-and-taskspec.yaml)
it's `Tasks` all the way down!
You can also specify labels and annotations with `taskSpec`, which are propagated to each `taskRun` and then to the
respective pods. These labels can be used to identify and filter pods for further actions (such as collecting pod metrics
or cleaning up completed pods with certain labels), even though they are all part of a single `Pipeline`.
```yaml
spec:
pipelineSpec:
tasks:
- name: task1
taskSpec:
metadata:
labels:
pipeline-sdk-type: kfp
# ...
- name: task2
taskSpec:
metadata:
labels:
pipeline-sdk-type: tfx
# ...
```
#### Tekton Bundles
A `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.
You can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](./bundle-resolver.md#pipeline-resolution). The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](./additional-configs.md#beta-features).
```yaml
spec:
pipelineRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog:v1.0
- name: name
value: mypipeline
- name: kind
value: Pipeline
```
The syntax and caveats are similar to using `Tekton Bundles` for `Task` references
in [Pipelines](pipelines.md#tekton-bundles) or [TaskRuns](taskruns.md#tekton-bundles).
`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).
#### Remote Pipelines
**([beta feature](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#beta-features))**
A `pipelineRef` field may specify a Pipeline in a remote location such as git.
Support for specific types of remote location will depend on the `Resolvers` your
cluster's operator has installed. For more information, including a tutorial, please check the [resolution docs](resolution.md). The example below demonstrates
referencing a Pipeline in git:
```yaml
spec:
pipelineRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
value: abc123
- name: pathInRepo
value: /pipeline/buildpacks/0.1/buildpacks.yaml
```
### Specifying Task-level `ComputeResources`
**([alpha only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#alpha-features))**
Task-level compute resources can be configured in `PipelineRun.TaskRunSpecs.ComputeResources` or `TaskRun.ComputeResources`. For example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline
spec:
tasks:
- name: task
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun
spec:
pipelineRef:
name: pipeline
taskRunSpecs:
- pipelineTaskName: task
computeResources:
requests:
cpu: 2
```
Further details and examples can be found in [Compute Resources in Tekton](https://github.com/tektoncd/pipeline/blob/main/docs/compute-resources.md).
### Specifying `Parameters`
(See also [Specifying Parameters in Tasks](tasks.md#specifying-parameters))
You can specify `Parameters` that you want to pass to the `Pipeline` during execution,
including different values of the same parameter for different `Tasks` in the `Pipeline`.
**Note:** You must specify all the `Parameters` that the `Pipeline` expects. Parameters
that have default values specified in the `Pipeline` do not need to be provided by the `PipelineRun`.
For example:
```yaml
spec:
params:
- name: pl-param-x
value: "100"
- name: pl-param-y
value: "500"
```
You can pass in extra `Parameters` if needed depending on your use cases. An example use
case is when your CI system autogenerates `PipelineRuns` and it has `Parameters` it wants to
provide to all `PipelineRuns`. Because you can pass in extra `Parameters`, you don't have to
go through the complexity of checking each `Pipeline` and providing only the required params.
#### Parameter Enums
> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.
If a `Parameter` is guarded by `Enum` in the `Pipeline`, you can only provide `Parameter` values in the `PipelineRun` that are predefined in the `Param.Enum` in the `Pipeline`. The `PipelineRun` will fail with reason `InvalidParamValue` otherwise.
Tekton will also validate the `param` values passed to any referenced `Tasks` (via `taskRef`) if `Enum` is specified for the `Task`. The `PipelineRun` will fail with reason `InvalidParamValue` if `Enum` validation fails for any of the `PipelineTasks`.
You can also specify `Enum` in an embedded `Pipeline` in a `PipelineRun`. The same `Param` validation will be executed in this scenario.
See more details in [Param.Enum](./pipelines.md#param-enum).
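For illustration, the sketch below embeds a `Pipeline` that guards a parameter with `enum`. The parameter name and values are made up, and the alpha `enable-param-enum` feature flag must be enabled for this to be accepted:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: enum-demo-
spec:
  params:
    - name: environment
      value: staging # must be one of the enum values declared below
  pipelineSpec:
    params:
      - name: environment
        type: string
        enum: ["dev", "staging", "prod"]
    tasks:
      - name: echo-env
        taskSpec:
          steps:
            - name: echo
              image: ubuntu
              script: |
                echo "$(params.environment)"
```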
#### Propagated Parameters
When using an inlined spec, parameters from the parent `PipelineRun` will be
propagated to any inlined specs without needing to be explicitly defined. This
allows authors to simplify specs by automatically propagating top-level
parameters down to other inlined resources.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
```
On executing the `PipelineRun`, the parameters are interpolated during resolution.
The specification is not mutated before storage, so it remains the same; only the status is updated.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
params:
- name: HELLO
value: Hello World!
- name: BYE
value: Bye World!
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
steps:
- image: ubuntu
name: echo
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- image: ubuntu
name: echo
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
childReferences:
- name: pr-echo-szzs9-echo-hello
pipelineTaskName: echo-hello
kind: TaskRun
- name: pr-echo-szzs9-echo-bye
pipelineTaskName: echo-bye
kind: TaskRun
```
##### Scope and Precedence
When parameter names conflict, the inner scope takes precedence, as shown in this example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
params:
- name: HELLO
value: "Sasa World!"
taskSpec:
params:
- name: HELLO
type: string
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
...
```
resolves to
```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
...
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
childReferences:
- name: pr-echo-szzs9-echo-hello
pipelineTaskName: echo-hello
kind: TaskRun
...
```
##### Default Values
When `Parameter` specifications have default values, the `Parameter` value provided at runtime takes precedence, giving users control, as shown in this example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskSpec:
params:
- name: HELLO
type: string
default: "Sasa World!"
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
...
```
resolves to
```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-szzs9
...
spec:
...
status:
conditions:
- lastTransitionTime: "2022-04-07T12:34:58Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
childReferences:
- name: pr-echo-szzs9-echo-hello
pipelineTaskName: echo-hello
kind: TaskRun
...
```
##### Referenced Resources
When a PipelineRun definition has referenced specifications but does not explicitly pass Parameters, the PipelineRun will be created but the execution will fail because of missing Parameters.
```yaml
# Invalid PipelineRun attempting to propagate Parameters to referenced Tasks
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pr-echo-
spec:
params:
- name: HELLO
value: "Hello World!"
- name: BYE
value: "Bye World!"
pipelineSpec:
tasks:
- name: echo-hello
taskRef:
name: echo-hello
- name: echo-bye
taskRef:
name: echo-bye
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: echo-hello
spec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: echo-bye
spec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.BYE)"
```
Fails as follows:
```yaml
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pr-echo-24lmf
...
spec:
params:
- name: HELLO
value: Hello World!
- name: BYE
value: Bye World!
pipelineSpec:
tasks:
- name: echo-hello
taskRef:
kind: Task
name: echo-hello
- name: echo-bye
taskRef:
kind: Task
name: echo-bye
status:
conditions:
- lastTransitionTime: "2022-04-07T20:24:51Z"
message: 'invalid input params for task echo-hello: missing values for
these params which have no default values: [HELLO]'
reason: PipelineValidationFailed
status: "False"
type: Succeeded
...
```
##### Object Parameters
When using an inlined spec, object parameters from the parent `PipelineRun` will also be
propagated to any inlined specs without needing to be explicitly defined. This
allows authors to simplify specs by automatically propagating top-level
parameters down to other inlined resources.
When propagating object parameters, the same scope and precedence rules also hold, as shown below.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pipelinerun-object-param-result
spec:
params:
- name: gitrepo
value:
url: abc.com
commit: sha123
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
steps:
- name: write-result
image: bash
args: [
"echo",
"--url=$(params.gitrepo.url)",
"--commit=$(params.gitrepo.commit)",
"--branch=$(params.gitrepo.branch)",
]
```
resolves to
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-object-param-resultpxp59
...
spec:
params:
- name: gitrepo
value:
commit: sha123
url: abc.com
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
metadata: {}
spec: null
steps:
- args:
- echo
- --url=$(params.gitrepo.url)
- --commit=$(params.gitrepo.commit)
- --branch=$(params.gitrepo.branch)
image: bash
name: write-result
status:
completionTime: "2022-09-08T17:22:01Z"
conditions:
- lastTransitionTime: "2022-09-08T17:22:01Z"
message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
tasks:
- name: task1
params:
- name: gitrepo
value:
branch: main
url: xyz.com
taskSpec:
metadata: {}
spec: null
steps:
- args:
- echo
- --url=xyz.com
- --commit=sha123
- --branch=main
image: bash
name: write-result
startTime: "2022-09-08T17:21:57Z"
childReferences:
- name: pipelinerun-object-param-resultpxp59-task1
pipelineTaskName: task1
kind: TaskRun
...
taskSpec:
steps:
- args:
- echo
- --url=xyz.com
- --commit=sha123
- --branch=main
image: bash
name: write-result
```
### Specifying custom `ServiceAccount` credentials
You can execute the `Pipeline` in your `PipelineRun` with a specific set of credentials by
specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `PipelineRun`
definition. If you do not explicitly specify this, the `TaskRuns` created by your `PipelineRun`
will execute with the credentials specified in the `config-defaults` `ConfigMap`. If this
default is not specified, the `TaskRuns` will execute with the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
set for the target [`namespace`](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
For more information, see [`ServiceAccount`](auth.md).
[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use a service account name.
Consult the documentation of the custom task that you are using to determine whether it supports a service account name.
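For instance, a sketch of a `PipelineRun` that runs `mypipeline` with a dedicated `ServiceAccount` named `build-bot` (both names are illustrative) might look like this:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: mypipelinerun-with-sa
spec:
  pipelineRef:
    name: mypipeline
  taskRunTemplate:
    serviceAccountName: build-bot # in v1beta1, set spec.serviceAccountName instead
```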
### Mapping `ServiceAccount` credentials to `Tasks`
If you require more granularity in specifying execution credentials, use the `taskRunSpecs[].taskServiceAccountName` field to
map a specific `serviceAccountName` value to a specific `Task` in the `Pipeline`. This overrides the global
`serviceAccountName` you may have set for the `Pipeline` as described in the previous section.
For example, if you specify these mappings:
```yaml
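# Field names as in tekton.dev/v1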
spec:
taskRunTemplate:
serviceAccountName: sa-1
taskRunSpecs:
- pipelineTaskName: build-task
serviceAccountName: sa-for-build
```
```yaml
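# Field names as in tekton.dev/v1beta1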
spec:
serviceAccountName: sa-1
taskRunSpecs:
- pipelineTaskName: build-task
taskServiceAccountName: sa-for-build
```
for this `Pipeline`:
```yaml
kind: Pipeline
spec:
tasks:
- name: build-task
taskRef:
name: build-push
- name: test-task
taskRef:
name: test
```
then `test-task` will execute using the `sa-1` account while `build-task` will execute with `sa-for-build`.
#### Propagated Results
When using an embedded spec, `Results` from the parent `PipelineRun` will be
propagated to any inlined specs without needing to be explicitly defined. This
allows authors to simplify specs by automatically propagating top-level
results down to other inlined resources.
**`Result` substitutions will only be made for the `name`, `commands`, `args`, `env` and `script` fields of `steps` and `sidecars`.**
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
name: uid-pipeline-run
spec:
pipelineSpec:
tasks:
- name: add-uid
taskSpec:
results:
- name: uid
type: string
steps:
- name: add-uid
image: busybox
command: ["/bin/sh", "-c"]
args:
- echo "1001" | tee $(results.uid.path)
- name: show-uid
# params:
# - name: uid
# value: $(tasks.add-uid.results.uid)
taskSpec:
steps:
- name: show-uid
image: busybox
command: ["/bin/sh", "-c"]
args:
- echo $(tasks.add-uid.results.uid)
# - echo $(params.uid)
```
On executing the `PipelineRun`, the `Results` will be interpolated during resolution.
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: uid-pipeline-run-show-uid
  ...
spec:
  taskSpec:
    steps:
      - args:
          - echo 1001
        command:
          - /bin/sh
          - -c
        image: busybox
        name: show-uid
status:
  completionTime: "2023-09-11T07:34:28Z"
  conditions:
    - lastTransitionTime: "2023-09-11T07:34:28Z"
      message: All Steps have completed executing
      reason: Succeeded
      status: "True"
      type: Succeeded
  podName: uid-pipeline-run-show-uid-pod
  steps:
    - container: step-show-uid
      name: show-uid
  taskSpec:
    steps:
      - args:
          - echo 1001
        command:
          - /bin/sh
          - -c
        computeResources: {}
        image: busybox
        name: show-uid
```
### Specifying a `Pod` template
You can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting
point for the `Pod` in which the container images specified in your `Tasks` will execute. This allows you to
customize the `Pod` configuration specifically for each `TaskRun`.
In the following example, the `Task` defines a `volumeMount` object named `my-cache`. The `PipelineRun`
provisions this object for the `Task` using a `persistentVolumeClaim` and executes it as user 1001.
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: mytask
spec:
steps:
- name: writesomething
image: ubuntu
command: ["bash", "-c"]
args: ["echo 'foo' > /my-cache/bar"]
volumeMounts:
- name: my-cache
mountPath: /my-cache
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: mypipeline
spec:
tasks:
- name: task1
taskRef:
name: mytask
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
name: mypipelinerun
spec:
pipelineRef:
name: mypipeline
taskRunTemplate:
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumes:
- name: my-cache
persistentVolumeClaim:
claimName: my-volume-claim
```
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: mytask
spec:
steps:
- name: writesomething
image: ubuntu
command: ["bash", "-c"]
args: ["echo 'foo' > /my-cache/bar"]
volumeMounts:
- name: my-cache
mountPath: /my-cache
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: mypipeline
spec:
tasks:
- name: task1
taskRef:
name: mytask
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: mypipelinerun
spec:
pipelineRef:
name: mypipeline
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumes:
- name: my-cache
persistentVolumeClaim:
claimName: my-volume-claim
```
[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use a pod template.
Consult the documentation of the custom task that you are using to determine whether it supports a pod template.
### Specifying taskRunSpecs
Specifies a list of `PipelineTaskRunSpec` entries, each of which contains a `PipelineTaskName` along with a
`TaskServiceAccountName` and a `TaskPodTemplate`. Each spec is mapped to the corresponding `PipelineTask` by name, and that
task then runs with the configured `TaskServiceAccountName` and `TaskPodTemplate`, overriding the pipeline-wide
`ServiceAccountName` and [`podTemplate`](./podtemplates.md) configuration,
for example:
```yaml
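# taskRunSpecs field names as in tekton.dev/v1 (serviceAccountName, podTemplate)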
spec:
podTemplate:
securityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
taskRunSpecs:
- pipelineTaskName: build-task
serviceAccountName: sa-for-build
podTemplate:
nodeSelector:
disktype: ssd
```
```yaml
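# taskRunSpecs field names as in tekton.dev/v1beta1 (taskServiceAccountName, taskPodTemplate)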
spec:
podTemplate:
securityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
taskRunSpecs:
- pipelineTaskName: build-task
taskServiceAccountName: sa-for-build
taskPodTemplate:
nodeSelector:
disktype: ssd
```
If used with this `Pipeline`, `build-task` will use the task-specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`)
along with `securityContext` from the `pipelineRun.spec.podTemplate`.
`PipelineTaskRunSpec` may also contain `StepSpecs` and `SidecarSpecs`; see
[Overriding `Task` `Steps` and `Sidecars`](./taskruns.md#overriding-task-steps-and-sidecars) for more information.
Optional annotations and labels can be added under a `metadata` field for a specific runtime context. For example, rendering secrets needed from Vault:
```yaml
spec:
pipelineRef:
name: pipeline-name
taskRunSpecs:
- pipelineTaskName: task-name
metadata:
annotations:
vault.hashicorp.com/agent-inject-secret-foo: "/path/to/foo"
vault.hashicorp.com/role: role-name
```
Updating labels applied in a runtime context:
```yaml
spec:
pipelineRef:
name: pipeline-name
taskRunSpecs:
- pipelineTaskName: task-name
metadata:
labels:
app: cloudevent
```
If a metadata key is present in different levels, the value that will be used in the `PipelineRun` is determined using this precedence order: `PipelineRun.spec.taskRunSpec.metadata` > `PipelineRun.metadata` > `Pipeline.spec.tasks.taskSpec.metadata`.
### Specifying `Workspaces`
If your `Pipeline` specifies one or more `Workspaces`, you must map those `Workspaces` to
the corresponding physical volumes in your `PipelineRun` definition. For example, you
can map a `PersistentVolumeClaim` volume to a `Workspace` as follows:
```yaml
workspaces:
- name: myworkspace # must match workspace name in Task
persistentVolumeClaim:
claimName: mypvc # this PVC must already exist
subPath: my-subdir
```
`workspaces[].subPath` can be an absolute value or can reference `pipelineRun` context variables, such as
`$(context.pipelineRun.name)` or `$(context.pipelineRun.uid)`.
You can pass in extra `Workspaces` if needed depending on your use cases. An example use
case is when your CI system autogenerates `PipelineRuns` and it has `Workspaces` it wants to
provide to all `PipelineRuns`. Because you can pass in extra `Workspaces`, you don't have to
go through the complexity of checking each `Pipeline` and providing only the required `Workspaces`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline
spec:
tasks:
- name: task
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun
spec:
pipelineRef:
name: pipeline
workspaces:
- name: unusedworkspace
persistentVolumeClaim:
claimName: mypvc
```
For more information, see the following topics:
- For information on mapping `Workspaces` to `Volumes`, see [Specifying `Workspaces` in `PipelineRuns`](workspaces.md#specifying-workspaces-in-pipelineruns).
- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).
- For an end-to-end example, see [`Workspaces` in a `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml).
[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use workspaces.
Consult the documentation of the custom task that you are using to determine whether it supports workspaces.
#### Propagated Workspaces
When using an embedded spec, workspaces from the parent `PipelineRun` will be
propagated to any inlined specs without needing to be explicitly defined. This
allows authors to simplify specs by automatically propagating top-level
workspaces down to other inlined resources.
**Workspace substitutions will only be made for the `commands`, `args` and `script` fields of `steps`, `stepTemplates`, and `sidecars`.**
```yaml
# Inline specifications of a PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
workspaces:
- name: shared-data
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Mi
volumeMode: Filesystem
pipelineSpec:
#workspaces:
# - name: shared-data
tasks:
- name: fetch-secure-data
# workspaces:
# - name: shared-data
taskSpec:
# workspaces:
# - name: shared-data
steps:
- name: fetch-and-write-secure
image: ubuntu
script: |
echo hi >> $(workspaces.shared-data.path)/recipe.txt
- name: print-the-recipe
# workspaces:
# - name: shared-data
runAfter:
- fetch-secure-data
taskSpec:
# workspaces:
# - name: shared-data
steps:
- name: print-secrets
image: ubuntu
script: cat $(workspaces.shared-data.path)/recipe.txt
```
On executing the pipeline run, the workspaces will be interpolated during resolution.
```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineSpec:
...
status:
completionTime: "2022-06-02T18:17:02Z"
conditions:
- lastTransitionTime: "2022-06-02T18:17:02Z"
message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
childReferences:
- name: recipe-time-lslt9-fetch-secure-data
pipelineTaskName: fetch-secure-data
kind: TaskRun
- name: recipe-time-lslt9-print-the-recipe
pipelineTaskName: print-the-recipe
kind: TaskRun
```
##### Workspace Referenced Resources
`Workspaces` cannot be propagated to referenced specifications. For example, the following `PipelineRun` will fail when executed because the workspaces it defines cannot be propagated to the referenced `Pipeline`.
```yaml
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-task-storage
spec:
resources:
requests:
storage: 16Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: fetch-and-print-recipe
spec:
tasks:
- name: fetch-the-recipe
taskRef:
name: fetch-secure-data
- name: print-the-recipe
taskRef:
name: print-data
runAfter:
- fetch-the-recipe
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
pipelineRef:
name: fetch-and-print-recipe
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
```
Upon execution, this will cause failures:
```yaml
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineRef:
name: fetch-and-print-recipe
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
status:
completionTime: "2022-06-02T19:02:58Z"
conditions:
- lastTransitionTime: "2022-06-02T19:02:58Z"
message: 'Tasks Completed: 1 (Failed: 1, Canceled 0), Skipped: 1'
reason: Failed
status: "False"
type: Succeeded
pipelineSpec:
...
childReferences:
- name: recipe-time-v5scg-fetch-the-recipe
pipelineTaskName: fetch-the-recipe
kind: TaskRun
```
#### Referenced TaskRuns within Embedded PipelineRuns
As mentioned in the [Workspace Referenced Resources](#workspace-referenced-resources), workspaces can only be propagated from PipelineRuns to embedded Pipeline specs, not Pipeline references. Similarly, workspaces can only be propagated from a Pipeline to embedded Task specs, not referenced Tasks. For example:
```yaml
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: fetch-secure-data
spec:
workspaces: # If Referenced, Workspaces need to be explicitly declared
- name: shared-data
steps:
- name: fetch-and-write
image: ubuntu
script: |
echo $(workspaces.shared-data.path)
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
pipelineSpec:
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
tasks:
- name: fetch-the-recipe
workspaces: # If referencing resources, Workspaces need to be explicitly declared
- name: shared-data
taskRef: # Referencing a resource
name: fetch-secure-data
- name: print-the-recipe
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
taskSpec:
# workspaces: # Since this is embedded specs, Workspaces don’t need to be declared
# ...
steps:
- name: print-secrets
image: ubuntu
script: cat $(workspaces.shared-data.path)/recipe.txt
runAfter:
- fetch-the-recipe
```
The above `PipelineRun` successfully resolves to:
```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: recipe-time-
...
spec:
pipelineSpec:
...
workspaces:
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
status:
completionTime: "2022-06-09T18:42:14Z"
conditions:
- lastTransitionTime: "2022-06-09T18:42:14Z"
message: 'Tasks Completed: 2 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
pipelineSpec:
...
childReferences:
- name: recipe-time-pj6l7-fetch-the-recipe
pipelineTaskName: fetch-the-recipe
kind: TaskRun
- name: recipe-time-pj6l7-print-the-recipe
pipelineTaskName: print-the-recipe
kind: TaskRun
```
### Specifying `LimitRange` values
In order to only consume the bare minimum amount of resources needed to execute one `Step` at a
time from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral
storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)
object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.
For more information, see the [`LimitRange` support in Pipeline](./compute-resources.md#limitrange-support).
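For illustration, a namespace-level `LimitRange` like the sketch below (the values are arbitrary) is the kind of object Tekton consults when computing per-`Step` requests:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
    - type: Container
      default:          # default limits for containers that set none
        cpu: 500m
        memory: 256Mi
      defaultRequest:   # default requests for containers that set none
        cpu: 250m
        memory: 128Mi
```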
### Configuring a failure timeout
You can use the `timeouts` field to set the `PipelineRun's` desired timeout value in minutes.
There are three sub-fields:
- `pipeline`: specifies the timeout for the entire PipelineRun. Defaults to the global configurable default timeout of 60 minutes.
When `timeouts.pipeline` has elapsed, any running child TaskRuns will be canceled, regardless of whether they are normal Tasks
or `finally` Tasks, and the PipelineRun will fail.
- `tasks`: specifies the timeout for the cumulative time taken by non-`finally` Tasks specified in `pipeline.spec.tasks`.
To specify a timeout for an individual Task, use `pipeline.spec.tasks[].timeout`.
When `timeouts.tasks` has elapsed, any running child TaskRuns will be canceled, finally Tasks will run if `timeouts.finally` is specified,
and the PipelineRun will fail.
- `finally`: the timeout for the cumulative time taken by `finally` Tasks specified in `pipeline.spec.finally`.
(Since all `finally` Tasks run in parallel, this is functionally equivalent to the timeout for any `finally` Task.)
When `timeouts.finally` has elapsed, any running `finally` TaskRuns will be canceled,
and the PipelineRun will fail.
For example:
```yaml
timeouts:
pipeline: "0h0m60s"
tasks: "0h0m40s"
finally: "0h0m20s"
```
All three sub-fields are optional, and will be automatically processed according to the following constraint:
* `timeouts.pipeline >= timeouts.tasks + timeouts.finally`
Each `timeout` field is a `duration` conforming to Go's
[`ParseDuration`](https://golang.org/pkg/time/#ParseDuration) format. For example, valid
values are `1h30m`, `1h`, `1m`, and `60s`.
If any of the sub-fields are set to "0", there is no timeout for that section of the PipelineRun,
meaning that it will run until it completes successfully or encounters an error.
To set `timeouts.tasks` or `timeouts.finally` to "0", you must also set `timeouts.pipeline` to "0".
The global default timeout is set to 60 minutes when you first install Tekton. You can set
a different global default timeout value using the `default-timeout-minutes` field in
[`config/config-defaults.yaml`](./../config/config-defaults.yaml).
Example timeouts usages are as follows:
Combination 1: Set the timeout for the entire `pipeline` and reserve a portion of it for `tasks`.
```yaml
kind: PipelineRun
spec:
timeouts:
pipeline: "0h4m0s"
tasks: "0h1m0s"
```
Combination 2: Set the timeout for the entire `pipeline` and reserve a portion of it for `finally`.
```yaml
kind: PipelineRun
spec:
timeouts:
pipeline: "0h4m0s"
finally: "0h3m0s"
```
Combination 3: Set only a `tasks` timeout, with no timeout for the entire `pipeline`.
```yaml
kind: PipelineRun
spec:
timeouts:
pipeline: "0" # No timeout
tasks: "0h3m0s"
```
Combination 4: Set only a `finally` timeout, with no timeout for the entire `pipeline`.
```yaml
kind: PipelineRun
spec:
timeouts:
pipeline: "0" # No timeout
finally: "0h3m0s"
```
You can also use the *Deprecated* `timeout` field to set the `PipelineRun's` desired timeout value in minutes.
If you do not specify this value in the `PipelineRun`, the global default timeout value applies.
If you set the timeout to 0, the `PipelineRun` fails immediately upon encountering an error.
> :warning: `timeout` is deprecated and will be removed in future versions. Consider using `timeouts` instead.
> :note: As an internal detail, the `PipelineRun` and `TaskRun` reconcilers in the Tekton controller will, under certain conditions, requeue a `PipelineRun` or `TaskRun` for re-evaluation rather than wait for the next update. The wait time for that requeueing is the timeout minus the elapsed time; if the timeout is set to '0', that calculation produces a negative number and the new reconciliation event fires immediately, which can impact overall performance and defeats the purpose of the wait-time calculation. To avoid this, the reconcilers use the configured global timeout as the wait time whenever the associated timeout has been set to '0'.
## `PipelineRun` status
### The `status` field
Your `PipelineRun`'s `status` field can contain the following fields:
- Required:
<!-- wokeignore:rule=master -->
- `status` - Most relevant, `status.conditions`, which contains the latest observations of the `PipelineRun`'s state. [See here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for information on typical status properties.
- `startTime` - The time at which the `PipelineRun` began executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- `completionTime` - The time at which the `PipelineRun` finished executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- [`pipelineSpec`](pipelines.md#configuring-a-pipeline) - The exact `PipelineSpec` used when starting the `PipelineRun`.
- Optional:
- [`pipelineResults`](pipelines.md#emitting-results-from-a-pipeline) - Results emitted by this `PipelineRun`.
- `skippedTasks` - A list of `Task`s which were skipped when running this `PipelineRun` due to [when expressions](pipelines.md#guard-task-execution-using-when-expressions), including the when expressions applying to the skipped task.
- `childReferences` - A list of references to each `TaskRun` or `Run` in this `PipelineRun`, which can be used to look up the status of the underlying `TaskRun` or `Run`. Each entry contains the following:
- [`kind`][kubernetes-overview] - Generally either `TaskRun` or `Run`.
- [`apiVersion`][kubernetes-overview] - The API version for the underlying `TaskRun` or `Run`.
- [`whenExpressions`](pipelines.md#guard-task-execution-using-when-expressions) - The list of when expressions guarding the execution of this task.
  - `provenance` - Metadata about the runtime configuration and the resources used in the PipelineRun. The data in the `provenance` field will be recorded into the build provenance by the provenance generator (i.e. Tekton Chains). Currently, there are two subfields (see the sketch after this list):
- `refSource`: the source from where a remote pipeline definition was fetched.
- `featureFlags`: the configuration data of the `feature-flags` configmap.
  - `finallyStartTime` - The time at which the PipelineRun's `finally` Tasks, if any, began
executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.
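As an illustration only, a `provenance` block in a `PipelineRun` status might look roughly like the following; the values are hypothetical, and the exact contents of `featureFlags` mirror your cluster's `feature-flags` configmap:
```yaml
status:
  provenance:
    refSource:
      uri: "https://github.com/example/catalog.git"  # hypothetical: where the remote pipeline definition was fetched from
      digest:
        sha1: "abc1234"                              # hypothetical revision digest
      entryPoint: "pipeline/my-pipeline.yaml"        # hypothetical path within the repo
    featureFlags: {}                                 # snapshot of the feature-flags configmap data (omitted here)
```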
### Monitoring execution status
As your `PipelineRun` executes, its `status` field accumulates information on the execution of each `TaskRun`
as well as the `PipelineRun` as a whole. This information includes the name of the pipeline `Task` associated
to a `TaskRun`, the complete [status of the `TaskRun`](taskruns.md#monitoring-execution-status) and details
about `whenExpressions` that may be associated to a `TaskRun`.
The following example shows an extract from the `status` field of a `PipelineRun` that has executed successfully:
```yaml
completionTime: "2020-05-04T02:19:14Z"
conditions:
- lastTransitionTime: "2020-05-04T02:19:14Z"
message: "Tasks Completed: 4, Skipped: 0"
reason: Succeeded
status: "True"
type: Succeeded
startTime: "2020-05-04T02:00:11Z"
childReferences:
- name: triggers-release-nightly-frwmw-build
pipelineTaskName: build
kind: TaskRun
```
The following table shows how to read the overall status of a `PipelineRun`.
Completion time is set once a `PipelineRun` reaches status `True` or `False`:
`status` | `reason` | `completionTime` is set | Description
:--------|:-------------------|:-----------------------:|-------------------------------------------------------------------------------------:
Unknown | Started | No | The `PipelineRun` has just been picked up by the controller.
Unknown | Running | No | The `PipelineRun` has been validated and has started to perform its work.
Unknown | Cancelled | No | The user requested that the `PipelineRun` be cancelled. Cancellation has not completed yet.
True | Succeeded | Yes | The `PipelineRun` completed successfully.
True | Completed | Yes | The `PipelineRun` completed successfully, and one or more `Tasks` were skipped.
False | Failed | Yes | The `PipelineRun` failed because one of the `TaskRuns` failed.
False | \[Error message\] | Yes | The `PipelineRun` failed with a permanent error (usually validation).
False | Cancelled | Yes | The `PipelineRun` was cancelled successfully.
False | PipelineRunTimeout | Yes | The `PipelineRun` timed out.
False | CreateRunFailed | Yes | The `PipelineRun` failed to create its run resources.
When a `PipelineRun` changes status, [events](events.md#pipelineruns) are triggered accordingly.
When a `PipelineRun` has `Tasks` that were `skipped`, the `reason` for skipping the task will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.
When a `PipelineRun` has `Tasks` with [`when` expressions](pipelines.md#guard-task-execution-using-when-expressions):
- If the `when` expressions evaluate to `true`, the `Task` is executed, and the `TaskRun` and its resolved `when` expressions are listed in the `Task Runs` section of the `status` of the `PipelineRun`.
- If the `when` expressions evaluate to `false`, the `Task` is skipped, and its name and its resolved `when` expressions are listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.
```yaml
Conditions:
Last Transition Time: 2020-08-27T15:07:34Z
Message: Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 1
Reason: Completed
Status: True
Type: Succeeded
Skipped Tasks:
Name: skip-this-task
Reason: When Expressions evaluated to false
When Expressions:
Input: foo
Operator: in
Values:
bar
Input: foo
Operator: notin
Values:
foo
ChildReferences:
- Name: pipelinerun-to-skip-task-run-this-task
Pipeline Task Name: run-this-task
Kind: TaskRun
```
The names of the `TaskRuns` and `Runs` owned by a `PipelineRun` are uniquely associated with the owning resource.
If a `PipelineRun` resource is deleted and created with the same name, the child `TaskRuns` will be created with the
same name as before. The base format of the name is `<pipelinerun-name>-<pipelinetask-name>`. If the `PipelineTask`
has a `Matrix`, the name will have an int suffix with format `<pipelinerun-name>-<pipelinetask-name>-<combination-id>`.
The name may vary according to the logic of [`kmeta.ChildName`](https://pkg.go.dev/github.com/knative/pkg/kmeta#ChildName).
Some examples:
| `PipelineRun` Name | `PipelineTask` Name | `TaskRun` Names |
|----------------------------------------------------------|--------------------------------------------------------------|----------------------------------------------------------------------------------------|
| pipeline-run | task1 | pipeline-run-task1 |
| pipeline-run | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-runee4a397d6eab67777d4e6f9991cd19e6-task2-0123456789-0 |
| pipeline-run-0123456789-0123456789-0123456789-0123456789 | task3 | pipeline-run-0123456789-0123456789-0123456789-0123456789-task3 |
| pipeline-run-0123456789-0123456789-0123456789-0123456789 | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-run-0123456789-012345607ad8c7aac5873cdfabe472a68996b5c |
| pipeline-run                                              | task4 (with 2x2 `Matrix`)                                     | pipeline-run-task4-0, pipeline-run-task4-1, pipeline-run-task4-2, pipeline-run-task4-3  |
### Marking off user errors
A user error in Tekton is any mistake made by the user, such as a syntax error when specifying pipelines or tasks. User errors can occur at various stages of the Tekton pipeline, from authoring the pipeline configuration to executing the pipelines. They are currently explicitly labeled in the Run's conditions message, for example:
```yaml
# Failed PipelineRun with message labeled "[User error]"
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
...
spec:
...
status:
...
conditions:
- lastTransitionTime: "2022-06-02T19:02:58Z"
message: '[User error] PipelineRun default parameters is missing some parameters required by
Pipeline pipelinerun-with-params''s parameters: pipelineRun missing parameters:
[pl-param-x]'
reason: 'ParameterMissing'
status: "False"
type: Succeeded
```
```console
~/pipeline$ tkn pr list
NAME STARTED DURATION STATUS
pipelinerun-with-params 5 seconds ago 0s Failed(ParameterMissing)
```
## Cancelling a `PipelineRun`
To cancel a `PipelineRun` that's currently executing, update its definition
to mark it as "Cancelled". When you do so, the spawned `TaskRuns` are also marked
as cancelled, all associated `Pods` are deleted, and their `Retries` are not executed.
Pending `finally` tasks are not scheduled.
For example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "Cancelled"
```
## Gracefully cancelling a `PipelineRun`
To gracefully cancel a `PipelineRun` that's currently executing, update its definition
to mark it as "CancelledRunFinally". When you do so, the spawned `TaskRuns` are also marked
as cancelled, all associated `Pods` are deleted, and their `Retries` are not executed.
`finally` tasks are scheduled normally.
For example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "CancelledRunFinally"
```
## Gracefully stopping a `PipelineRun`
To gracefully stop a `PipelineRun` that's currently executing, update its definition
to mark it as "StoppedRunFinally". When you do so, the spawned `TaskRuns` are completed normally,
including executing their `retries`, but no new non-`finally` task is scheduled. `finally` tasks are executed afterwards.
For example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "StoppedRunFinally"
```
## Pending `PipelineRuns`
A `PipelineRun` can be created as a "pending" `PipelineRun`, meaning that it will not actually be started until the pending status is cleared.
Note that a `PipelineRun` can only be marked "pending" before it has started; this setting is invalid after the `PipelineRun` has started.
To mark a `PipelineRun` as pending, set `.spec.status` to `PipelineRunPending` when creating it:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: go-example-git
spec:
# […]
status: "PipelineRunPending"
```
To start the PipelineRun, clear the `.spec.status` field. Alternatively, update the value to `Cancelled` to cancel it.
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Debug"
weight: 108
---
-->
# Debug
- [Overview](#overview)
- [Debugging TaskRuns](#debugging-taskruns)
- [Adding Breakpoints](#adding-breakpoints)
- [Breakpoint on Failure](#breakpoint-on-failure)
- [Failure of a Step](#failure-of-a-step)
- [Halting a Step on failure](#halting-a-step-on-failure)
- [Exiting onfailure breakpoint](#exiting-onfailure-breakpoint)
- [Breakpoint before step](#breakpoint-before-step)
- [Debug Environment](#debug-environment)
- [Mounts](#mounts)
- [Debug Scripts](#debug-scripts)
## Overview
The `Debug` spec is used for troubleshooting and breakpointing runtime resources. This doc helps you understand the inner
workings of debug in Tekton. Currently, only the `TaskRun` resource is supported.
This is an alpha feature. The `enable-api-fields` feature flag [must be set to `"alpha"`](./install.md)
to specify `debug` in a `taskRun`.
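A minimal sketch of that setting, assuming a default installation into the `tekton-pipelines` namespace:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines  # assumes a default installation
data:
  enable-api-fields: "alpha"   # required to specify `debug` in a TaskRun
```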
## Debugging TaskRuns
The following provides explanation on how Debugging TaskRuns is possible through Tekton. To understand how to use
the debug spec for TaskRuns follow the [TaskRun Debugging Documentation](taskruns.md#debugging-a-taskrun).
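As a rough sketch only (the exact field layout of the debug spec varies between Tekton releases, so treat the field names below as illustrative and confirm them against the linked TaskRun documentation for your version), enabling breakpoints on a `TaskRun` looks roughly like this:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: debug-example          # hypothetical name
spec:
  taskRef:
    name: mytask               # hypothetical Task
  debug:
    breakpoints:
      onFailure: "enabled"     # illustrative: pause a failing step instead of exiting
      beforeSteps: ["build"]   # illustrative: pause before the named step runs
```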
### Breakpoint on Failure
Halting a TaskRun execution on Failure of a step.
#### Failure of a Step
The entrypoint binary is used to manage the lifecycle of a step. Steps are ordered beforehand by the TaskRun controller,
allowing each step to run in a particular order. This is done using the `-wait_file` and `-post_file` flags. The former
lets the entrypoint binary know that it has to wait on creation of a particular file before starting execution of the step,
and the latter provides the step number and signals the next step on completion of the step.
On success of a step, the `-post_file` is written as is, signalling the next step (which is given the same argument
for `-wait_file`) to resume the entrypoint process and move ahead with the step.
On failure of a step, the `-post_file` is written with `.err` appended, denoting that the previous step has failed with
an error. The subsequent steps are skipped in this case as well, marking the TaskRun as a failure.
#### Halting a Step on failure
The failed step writes `<step-no>.err` to `/tekton/run` and stops running completely. To be able to debug a step, we need
it to continue running (not exit), not skip the next steps, and signal the health of the step. By disabling step skipping,
suppressing the write of the `<step-no>.err` file, and waiting on a signal from the user to disable the halt, we simulate a
"breakpoint".
In this breakpoint, which is essentially a limbo state the TaskRun finds itself in, the user can interact with the step
environment using a CLI or an IDE.
#### Exiting onfailure breakpoint
To exit a step that has been paused upon failure, the step waits on a file named `<step-no>.breakpointexit`, which
unpauses and exits the step container. For example, if Step 0 fails and is paused, writing `0.breakpointexit` to `/tekton/run`
unpauses and exits the step container.
### Breakpoint before step
The TaskRun pauses, waiting for the user to debug, before the step executes.
When the beforeStep breakpoint takes effect, the user can see the following information
in the corresponding step container log:
```
debug before step breakpoint has taken effect, waiting for user's decision:
1) continue, use cmd: /tekton/debug/scripts/debug-beforestep-continue
2) fail-continue, use cmd: /tekton/debug/scripts/debug-beforestep-fail-continue
```
1. Executing `/tekton/debug/scripts/debug-beforestep-continue` will continue executing the step program.
2. Executing `/tekton/debug/scripts/debug-beforestep-fail-continue` will not continue executing the task, and will mark the step as failed.
## Debug Environment
Additional environment augmentations made available to the TaskRun Pod to aid in troubleshooting and managing step lifecycle.
### Mounts
`/tekton/debug/scripts` : Contains scripts which the user can run to mark the step as a success, failure or exit the breakpoint.
Shared between all the containers.
`/tekton/debug/info/<n>` : Contains information about the step. Single EmptyDir shared between all step containers, but renamed
to reflect step number. eg: Step 0 will have `/tekton/debug/info/0`, Step 1 will have `/tekton/debug/info/1` etc.
### Debug Scripts
`/tekton/debug/scripts/debug-continue` : Mark the step as completed with success by writing to `/tekton/run`. eg: User wants to exit
onfailure breakpoint for failed step 0. Running this script would create `/tekton/run/0` and `/tekton/run/0/out.breakpointexit`.
`/tekton/debug/scripts/debug-fail-continue` : Mark the step as completed with failure by writing to `/tekton/run`. eg: User wants to exit
onfailure breakpoint for failed step 0. Running this script would create `/tekton/run/0` and `/tekton/run/0/out.breakpointexit.err`.
`/tekton/debug/scripts/debug-beforestep-continue` : Mark the step to continue executing by writing to `/tekton/run`. eg: User wants to exit
before step breakpoint for before step 0. Running this script would create `/tekton/run/0` and `/tekton/run/0/out.beforestepexit`.
`/tekton/debug/scripts/debug-beforestep-fail-continue` : Mark the step to not continue executing by writing to `/tekton/run`. eg: User wants to exit
before step breakpoint for before step 0. Running this script would create `/tekton/run/0` and `/tekton/run/0/out.beforestepexit.err`.
# Tekton Pipelines API Specification
<!-- toc -->
- [Tekton Pipelines API Specification](#tekton-pipelines-api-specification)
- [Abstract](#abstract)
- [Background](#background)
- [Modifying This Specification](#modifying-this-specification)
- [Resource Overview - v1](#resource-overview---v1)
- [`Task`](#task)
- [`Pipeline`](#pipeline)
- [`TaskRun`](#taskrun)
- [`PipelineRun`](#pipelinerun)
- [Detailed Resource Types - `v1`](#detailed-resource-types---v1)
- [TypeMeta](#typemeta)
- [ObjectMeta](#objectmeta)
- [TaskSpec](#taskspec)
- [ParamSpec](#paramspec)
- [ParamType](#paramtype)
- [Step](#step)
- [Sidecar](#sidecar)
- [SecurityContext](#securitycontext)
- [TaskResult](#taskresult)
- [ResultsType](#resultstype)
- [PipelineSpec](#pipelinespec)
- [PipelineTask](#pipelinetask)
- [TaskRef](#taskref)
- [ResolverRef](#resolverref)
- [Param](#param)
- [ParamValue](#paramvalue)
- [PipelineResult](#pipelineresult)
- [TaskRunSpec](#taskrunspec)
- [TaskRunStatus](#taskrunstatus)
- [Condition](#condition)
- [StepState](#stepstate)
- [ContainerState](#containerstate)
- [`ContainerStateRunning`](#containerstaterunning)
- [`ContainerStateWaiting`](#containerstatewaiting)
- [`ContainerStateTerminated`](#containerstateterminated)
- [TaskRunResult](#taskrunresult)
- [SidecarState](#sidecarstate)
- [PipelineRunSpec](#pipelinerunspec)
- [PipelineRef](#pipelineref)
- [PipelineRunStatus](#pipelinerunstatus)
- [PipelineRunResult](#pipelinerunresult)
- [ChildStatusReference](#childstatusreference)
- [TimeoutFields](#timeoutfields)
- [WorkspaceDeclaration](#workspacedeclaration)
- [WorkspacePipelineTaskBinding](#workspacepipelinetaskbinding)
- [PipelineWorkspaceDeclaration](#pipelineworkspacedeclaration)
- [WorkspaceBinding](#workspacebinding)
- [EnvVar](#envvar)
- [Status Signalling](#status-signalling)
<!-- /toc -->
## Abstract
The Tekton Pipelines platform provides common abstractions for describing and executing container-based, run-to-completion workflows, typically in service of CI/CD scenarios. The Tekton Conformance Policy defines the requirements that Tekton implementations must meet to claim conformance with the Tekton API. [TEP-0131](https://github.com/tektoncd/community/blob/main/teps/0131-tekton-conformance-policy.md) lays out the details of the policy itself.
According to the policy, Tekton implementations can claim Conformance on GA Primitives; thus, all API specifications in this doc are for Tekton V1 APIs. Implementations are only required to provide resource management (i.e. CRUD APIs) for Runtime Primitives (TaskRun and PipelineRun). For Authoring-time Primitives (Task and Pipeline), supporting CRUD APIs is not a requirement, but we recommend supporting references to them from runtime types (e.g. from git, a catalog, within the cluster, etc.)
This document describes the structure, and lifecycle of Tekton resources. This document does not define the [runtime contract](https://tekton.dev/docs/pipelines/container-contract/) nor prescribe specific implementations of supporting services such as access control, observability, or resource management.
This document makes reference in a few places to different profiles for Tekton installations. A profile in this context is a set of operations, resources, and fields that are accessible to a developer interacting with a Tekton installation. Currently, only a single (minimal) profile for Tekton Pipelines is defined, but additional profiles may be defined in the future to standardize advanced functionality. A minimal profile is one that implements all of the “MUST”, “MUST NOT”, and “REQUIRED” conditions of this document.
## Background
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
There is no formal specification of the Kubernetes API and Resource Model. This document assumes Kubernetes 1.25 behavior; this behavior will typically be supported by many future Kubernetes versions. Additionally, this document may reference specific core Kubernetes resources; these references may be illustrative (i.e. an implementation on Kubernetes) or descriptive (i.e. this Kubernetes resource MUST be exposed). References to these core Kubernetes resources will be annotated as either illustrative or descriptive.
## Modifying This Specification
This spec is a living document, meaning new resources and fields may be added, and may transition from being OPTIONAL to RECOMMENDED to REQUIRED over time. In general a resource or field should not be added as REQUIRED directly, as this may cause unsuspecting previously-conformant implementations to suddenly no longer be conformant. These should be first OPTIONAL or RECOMMENDED, then change to be REQUIRED once a survey of conformant implementations indicates that doing so will not cause undue burden on any implementation.
## Resource Overview - v1
The following schema defines a set of REQUIRED or RECOMMENDED resource fields on the Tekton resource types. Whether a field is REQUIRED or RECOMMENDED is denoted in the "Requirement" column.
Additional fields MAY be provided by particular implementations, however it is expected that most extensions will be accomplished via the `metadata.labels` and `metadata.annotations` fields, as Tekton implementations MAY validate supplied resources against these fields and refuse resources which specify unknown fields.
Tekton implementations MUST NOT require `spec` fields outside this specification; to do so would break interoperability between such implementations and implementations which implement validation of field names.
**NB:** All fields and resources not listed below are assumed to be **OPTIONAL**, not RECOMMENDED or REQUIRED.
### `Task`
A Task is a collection of Steps that is defined and arranged in a sequential order of execution.
| Field | Type | Requirement | Notes |
|--------------|-----------------------------|-------------|------------------------------------------------|
| `kind` | string | RECOMMENDED | Describes the type of the resource i.e. `Task` |
| `apiVersion` | string | RECOMMENDED | Schema version i.e. `v1` |
| `metadata` | [`ObjectMeta`](#objectmeta) | REQUIRED | Common metadata about a resource |
| `spec` | [`TaskSpec`](#taskspec) | REQUIRED | Defines the desired state of Task. |
**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.
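For illustration, a minimal `Task` might look like the following sketch (the Task name, parameter, and image are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: echo-task
spec:
  params:
    - name: message
      type: string
      default: "hello"
  steps:
    - name: echo
      image: docker.io/library/alpine
      script: |
        echo "$(params.message)"
```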
### `Pipeline`
A Pipeline is a collection of Tasks that is defined and arranged in a specific order of execution
| Field | Type | Requirement | Notes |
|--------------|---------------------------------|-------------|----------------------------------------------------|
| `kind` | string | RECOMMENDED | Describes the type of the resource i.e. `Pipeline` |
| `apiVersion` | string | RECOMMENDED | Schema version i.e. `v1` |
| `metadata` | [`ObjectMeta`](#objectmeta) | REQUIRED | Common metadata about a resource |
| `spec` | [`PipelineSpec`](#pipelinespec) | REQUIRED | Defines the desired state of Pipeline. |
**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.
### `TaskRun`
A `TaskRun` represents an instantiation of a single execution of a `Task`. It can describe the steps of the Task directly.
| Field | Type | Requirement | Notes |
|--------------|-----------------------------------|-------------|--------------------------------------------------|
| `kind` | string | RECOMMENDED | Describes the type of the resource i.e.`TaskRun` |
| `apiVersion` | string | RECOMMENDED | Schema version i.e. `v1` |
| `metadata` | [`ObjectMeta`](#objectmeta) | REQUIRED | Common metadata about a resource |
| `spec` | [`TaskRunSpec`](#taskrunspec) | REQUIRED | Defines the desired state of TaskRun |
| `status` | [`TaskRunStatus`](#taskrunstatus) | REQUIRED | Defines the current status of TaskRun |
**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.
### `PipelineRun`
A `PipelineRun` represents an instantiation of a single execution of a `Pipeline`. It can describe the spec of the Pipeline directly.
| Field | Type | Requirement | Notes |
|--------------|-------------------------------------------|-------------|------------------------------------------------------|
| `kind` | string | RECOMMENDED | Describes the type of the resource i.e.`PipelineRun` |
| `apiVersion` | string | RECOMMENDED | Schema version i.e. `v1` |
| `metadata` | [`ObjectMeta`](#objectmeta) | REQUIRED | Common metadata about a resource |
| `spec` | [`PipelineRunSpec`](#pipelinerunspec) | REQUIRED | Defines the desired state of PipelineRun |
| `status` | [`PipelineRunStatus`](#pipelinerunstatus) | REQUIRED | Defines the current status of PipelineRun |
**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.
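For illustration, a minimal `PipelineRun` referencing an existing `Pipeline` might look like this (the pipeline name, parameter, and workspace names are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-run-
spec:
  pipelineRef:
    name: build-pipeline      # hypothetical Pipeline name
  params:
    - name: revision
      value: main
  workspaces:
    - name: source
      emptyDir: {}
```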
## Detailed Resource Types - `v1`
### TypeMeta
Derived from [Kubernetes TypeMeta](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#TypeMeta)
| Field | Type | Notes |
|--------------|--------|-------------------------------------------------------------------|
| `kind` | string | A string value representing the resource this object represents. |
| `apiVersion` | string | Defines the versioned schema of this representation of an object. |
### ObjectMeta
Derived from standard Kubernetes [meta.v1/ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#objectmeta-v1-meta) resource.
| Field | Type | Requirement | Notes |
| ------------------- | ------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | string | REQUIRED | Mutually exclusive with the `generateName` field. |
| `labels` | map<string,string> | RECOMMENDED | |
| `annotations` | map<string,string> | RECOMMENDED | `annotations` are necessary in order to support integration with Tekton ecosystem tooling such as Results and Chains |
| `creationTimestamp` | string             | REQUIRED (see note) | `creationTimestamp` MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). <br>The field is required for any runtime types such as `TaskRun` and `PipelineRun` and RECOMMENDED for other types. |
| `uid` | string | RECOMMENDED | If `uid` is not supported, the implementation must support another way of uniquely identifying a runtime object such as using a combination of `namespace` and `name` |
| `resourceVersion` | string | OPTIONAL | |
| `generation` | int64 | OPTIONAL | |
| `generateName` | string | RECOMMENDED | If supported by the implementation, when `generateName` is specified at creation, it MUST be prepended to a random string and set as the `name`, and not set on the subsequent response. |
### TaskSpec
Defines the desired state of Task
| Field | Type | Requirement | Notes |
|---------------|---------------------------------------------------|-------------|-------|
| `description` | string | REQUIRED | |
| `params` | [][`ParamSpec`](#paramspec) | REQUIRED | |
| `steps` | [][`Step`](#step) | REQUIRED | |
| `sidecars` | [][`Sidecar`](#sidecar) | REQUIRED | |
| `results` | [][`TaskResult`](#taskresult) | REQUIRED | |
| `workspaces` | [][`WorkspaceDeclaration`](#workspacedeclaration) | REQUIRED | |
### ParamSpec
Declares a parameter whose value has to be provided at runtime
| Field Name | Field Type | Requirement | Notes |
|---------------|-----------------------------|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | string | REQUIRED | |
| `description` | string | REQUIRED | |
| `type` | [`ParamType`](#paramtype) | REQUIRED (see note) | The values `string` and `array` for this field are REQUIRED, and the value `object` is RECOMMENDED. |
| `properties` | map<string,PropertySpec> | RECOMMENDED | `PropertySpec` is a type that defines the spec of an individual key. See how to define the `properties` section in the [example](../examples/v1/taskruns/beta/object-param-result.yaml). |
| `default` | [`ParamValue`](#paramvalue) | REQUIRED | |
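As a sketch, a `TaskSpec` (or `PipelineSpec`) could declare a parameter of each type as follows; the names and defaults are hypothetical, and note that `object` support is RECOMMENDED rather than REQUIRED:

```yaml
params:
  - name: revision
    type: string
    description: Git revision to build
    default: main
  - name: build-flags
    type: array
    default: []
  - name: image-config
    type: object
    properties:
      registry: {type: string}
      tag: {type: string}
    default:
      registry: docker.io
      tag: latest
```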
### ParamType
Defines the type of a parameter
string enum, allowed values are `string`, `array`, and `object`. Supporting `string` and `array` are required while the other types are optional for conformance.
### Step
A Step is a reference to a container image that executes a specific tool on a specific input and produces a specific output.
**NB:** All other fields inherited from the [core.v1/Container](https://godoc.org/k8s.io/api/core/v1#Container) type supported by the Kubernetes implementation are **OPTIONAL** for the purposes of this spec.
| Field Name | Field Type | Requirement | Notes |
| ----------------- | ------------------------------------- | ----------- | ----- |
| `name` | string | REQUIRED | |
| `image` | string | REQUIRED | |
| `args` | []string | REQUIRED | |
| `command` | []string | REQUIRED | |
| `workingDir`      | string                                | REQUIRED    |       |
| `env` | [][`EnvVar`](#envvar) | REQUIRED | |
| `script` | string | REQUIRED | |
| `securityContext` | [`SecurityContext`](#securitycontext) | REQUIRED | |
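For example, a step may either run a `script` or use `command`/`args`. A hedged sketch with hypothetical names and images:

```yaml
steps:
  - name: build
    image: docker.io/library/golang:1.22
    workingDir: /workspace/source
    env:
      - name: CGO_ENABLED
        value: "0"
    script: |
      go build ./...
  - name: list-output
    image: docker.io/library/alpine
    command: ["ls"]
    args: ["-la", "/workspace/source"]
    securityContext:
      privileged: false
```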
### Sidecar
Specifies a list of containers to run alongside the Steps in a Task. If sidecars are supported, the following fields are required:
| Field | Type | Requirement | Notes |
|-------------------|---------------------------------------|-------------|-----------------------------------------------------------------------------------------------------------------------------|
| `name`            | string                                | REQUIRED    | Name of the Sidecar specified as a DNS_LABEL. Each Sidecar in a Task must have a unique name (DNS_LABEL). Cannot be updated. |
| `image` | string | REQUIRED | [Container image name](https://kubernetes.io/docs/concepts/containers/images/#image-names) |
| `command` | []string | REQUIRED | Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. |
| `args` | []string | REQUIRED | Arguments to the entrypoint. The image's CMD is used if this is not provided. |
| `script` | string | REQUIRED | Script is the contents of an executable file to execute. If Script is not empty, the Sidecar cannot have a Command or Args. |
| `securityContext` | [`SecurityContext`](#securitycontext) | REQUIRED | Defines the security options the Sidecar should be run with. |
### SecurityContext
All other fields derived from [core.v1/SecurityContext](https://pkg.go.dev/k8s.io/api/core/v1#SecurityContext) are OPTIONAL for the purposes of this spec.
| Field | Type | Requirement | Notes |
| ------------ | ---- | ----------- | ------------------------------------------------------ |
| `privileged` | bool | REQUIRED | Run the container in privileged mode. Default to false |
### TaskResult
Defines a result produced by a Task
| Field | Type | Requirement | Notes |
|---------------|-------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | string | REQUIRED | Declares the name by which a parameter is referenced. |
| `type`        | [`ResultsType`](#resultstype) | REQUIRED    | Type is the user-specified type of the result. The value `string` for this field is REQUIRED, and the values `array` and `object` are RECOMMENDED.                                          |
| `description` | string | RECOMMENDED | Description of the result |
| `properties` | map<string,PropertySpec> | RECOMMENDED | `PropertySpec` is a type that defines the spec of an individual key. See how to define the `properties` section in the [example](../examples/v1/taskruns/beta/object-param-result.yaml). |
### ResultsType
ResultsType indicates the type of a result.
string enum, allowed values are `string`, `array`, and `object`. Supporting `string` is required while the other types are optional for conformance.
### PipelineSpec
Defines a pipeline
| Field | Type | Requirement | Notes |
|--------------|-------------------------------------------------------------------|-------------|--------------------------------------------------------------------------------------------------|
| `params` | [][`ParamSpec`](#paramspec) | REQUIRED | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `tasks` | [][`PipelineTask`](#pipelinetask) | REQUIRED | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `results` | [][`PipelineResult`](#pipelineresult) | REQUIRED | Values that this pipeline can output once run. |
| `finally` | [][`PipelineTask`](#pipelinetask) | REQUIRED | The list of Tasks that execute just before leaving the Pipeline |
| `workspaces` | [][`PipelineWorkspaceDeclaration`](#pipelineworkspacedeclaration) | REQUIRED | Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
### PipelineTask
PipelineTask defines a task in a Pipeline, passing inputs from both `Params` and from the output of previous tasks.
| Field | Type | Requirement | Notes |
|--------------|-------------------------------------------------------------------|-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | string | REQUIRED | The name of this task within the context of a Pipeline. Used as a coordinate with the from and runAfter fields to establish the execution order of tasks relative to one another. |
| `taskRef` | [`TaskRef`](#taskref) | RECOMMENDED | TaskRef is a reference to a task definition. Mutually exclusive with TaskSpec |
| `taskSpec` | [`TaskSpec`](#taskspec) | REQUIRED | TaskSpec is a specification of a task. Mutually exclusive with TaskRef |
| `runAfter` | []string | REQUIRED | RunAfter is the list of PipelineTask names that should be executed before this Task executes. (Used to force a specific ordering in graph execution.) |
| `params` | [][`Param`](#param) | REQUIRED | Declares parameters passed to this task. |
| `workspaces` | [][`WorkspacePipelineTaskBinding`](#workspacepipelinetaskbinding) | REQUIRED | Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task. |
| `timeout` | int64 | REQUIRED | Time after which the TaskRun times out. Setting the timeout to 0 implies no timeout. There isn't a default max timeout set. |
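As an illustrative sketch, here are two pipeline tasks where the second runs after the first and both bind the same pipeline workspace (all names are hypothetical):

```yaml
tasks:
  - name: build
    taskRef:
      name: build-task
    params:
      - name: revision
        value: "$(params.revision)"
    workspaces:
      - name: source
        workspace: shared-source
  - name: test
    runAfter: ["build"]
    taskRef:
      name: test-task
    workspaces:
      - name: source
        workspace: shared-source
```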
### TaskRef
Refers to a Task. Tasks should be referenced either by name or by using the Remote Resolution framework.
| Field | Type | Requirement | Notes |
|------------|---------------------|-------------|-------------------------|
| `name` | string | RECOMMENDED | Name of the referent. |
| `resolver` | string | RECOMMENDED | A field of ResolverRef. |
| `params` | [][`Param`](#param) | RECOMMENDED | A field of ResolverRef. |
### ResolverRef
| Field | Type | Requirement | Notes |
|------------|---------------------|-------------|-----------------------------------------------------------------------------------------------------------------------|
| `resolver` | string | RECOMMENDED | Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource, such as "git". |
| `params` | [][`Param`](#param) | RECOMMENDED | Contains the parameters used to identify the referenced Tekton resource. |
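For example, a `taskRef` that uses remote resolution instead of an in-cluster name might look like the sketch below. This assumes a `git` resolver is available; the parameter names shown (`url`, `revision`, `pathInRepo`) are those used by the upstream git resolver and are illustrative here.

```yaml
taskRef:
  resolver: git
  params:
    - name: url
      value: https://github.com/tektoncd/catalog.git
    - name: revision
      value: main
    - name: pathInRepo
      value: task/git-clone/0.9/git-clone.yaml
```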
### Param
Provides a value for the named parameter.
| Field | Type | Requirement | Notes |
|---------|-----------------------------|-------------|-------|
| `name` | string | REQUIRED | |
| `value` | [`ParamValue`](#paramvalue) | REQUIRED | |
### ParamValue
A `ParamValue` may be a string, a list of strings, or a map of string to string.
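For example, the three accepted forms (parameter names are hypothetical):

```yaml
params:
  - name: revision          # string value
    value: main
  - name: build-flags       # list of strings
    value: ["--verbose", "--race"]
  - name: image-config      # map of string to string
    value:
      registry: docker.io
      tag: latest
```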
### PipelineResult
| Field | Type | Requirement | Notes |
|---------|-------------------------------|-------------|-------|
| `name` | string | REQUIRED | |
| `type` | [`ResultsType`](#resultstype) | REQUIRED | |
| `value` | [`ParamValue`](#paramvalue) | REQUIRED | |
### TaskRunSpec
| Field | Type | Requirement | Notes |
|-----------------------|-----------------------------------------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `params` | [][`Param`](#param) | REQUIRED | |
| `taskRef` | [`TaskRef`](#taskref) | REQUIRED | |
| `taskSpec` | [`TaskSpec`](#taskspec) | REQUIRED | |
| `workspaces` | [][`WorkspaceBinding`](#workspacebinding) | REQUIRED | |
| `timeout` | string (duration) | REQUIRED | Time after which one retry attempt times out. Defaults to 1 hour. |
| `status` | Enum:<br>- `""` (default)<br>- `"TaskRunCancelled"` | RECOMMENDED | |
| `serviceAccountName`^ | string | RECOMMENDED | In the Kubernetes implementation, `serviceAccountName` refers to a Kubernetes `ServiceAccount` resource that is assumed to exist in the same namespace. Other implementations MAY interpret this string differently, and impose other requirements on specified values. |
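A hedged sketch of a `TaskRun` spec exercising these fields; the referenced Task, parameter, and workspace names are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: echo-run-
spec:
  taskRef:
    name: echo-task
  params:
    - name: message
      value: "hello from a TaskRun"
  workspaces:
    - name: scratch
      emptyDir: {}
  timeout: "30m"
  serviceAccountName: default
```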
### TaskRunStatus
| Field | Type | Requirement | Notes |
|----------------------|-------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------|
| `conditions` | [][`Condition`](#condition) | REQUIRED | Condition type `Succeeded` MUST be populated. See [Status Signalling](#status-signalling) for details. Other types are OPTIONAL |
| `startTime` | string | REQUIRED | MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). |
| `completionTime` | string | REQUIRED | MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). |
| `taskSpec` | [`TaskSpec`](#taskspec) | REQUIRED | |
| `steps` | [][`StepState`](#stepstate) | REQUIRED | |
| `results` | [][`TaskRunResult`](#taskrunresult) | REQUIRED | |
| `sidecars` | [][`SidecarState`](#sidecarstate) | RECOMMENDED | |
| `observedGeneration` | int64 | RECOMMENDED | |
### Condition
| Field | Type | Requirement | Notes |
|-----------|--------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `type` | string | REQUIRED | Required values: <br> `Succeeded`: specifies that the resource has finished.<br> Other OPTIONAL values: <br> `TaskRunResultsVerified` <br> `TrustedResourcesVerified` |
| `status`  | string | REQUIRED    | Valid values: <br> "Unknown" <br> "True" <br> "False" <br> (Also see [Status Signalling](#status-signalling))                                                                |
| `reason` | string | REQUIRED | The reason for the condition's last transition. |
| `message` | string | RECOMMENDED | Message describing the status and reason. |
### StepState
| Field | Type | Requirement | Notes |
|------------------|-------------------------------------|-------------|---------------------------|
| `name` | string | REQUIRED | Name of the StepState. |
| `imageID` | string | REQUIRED | Image ID of the StepState |
| `containerState` | [`ContainerState`](#containerstate) | REQUIRED | State of the container |
### ContainerState
| Field | Type | Requirement | Notes |
|--------------|------------------------------|-------------|--------------------------------------|
| `waiting`    | [`ContainerStateWaiting`](#containerstatewaiting)       | REQUIRED    | Details about a waiting container    |
| `running`    | [`ContainerStateRunning`](#containerstaterunning)       | REQUIRED    | Details about a running container    |
| `terminated` | [`ContainerStateTerminated`](#containerstateterminated) | REQUIRED    | Details about a terminated container |
\* Only one of `waiting`, `running` or `terminated` can be returned at a time.
### `ContainerStateRunning`
| Field Name | Field Type | Requirement | Notes |
|--------------|------------|-------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| `startedAt`* | string | REQUIRED | Time at which the container was last (re-)started.`startedAt` MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). |
### `ContainerStateWaiting`
| Field Name | Field Type | Requirement | Notes |
|------------|------------|-------------|---------------------------------------------------------|
| `reason` | string | REQUIRED | Reason the container is not yet running. |
| `message` | string | RECOMMENDED | Message regarding why the container is not yet running. |
### `ContainerStateTerminated`
| Field Name | Field Type | Requirement | Notes |
|---------------|------------|-------------|---------------------------------------------------------|
| `exitCode` | int32 | REQUIRED | Exit status from the last termination of the container. |
| `reason` | string | REQUIRED | Reason from the last termination of the container. |
| `message` | string | RECOMMENDED | Message regarding the last termination of the container |
| `startedAt`* | string | REQUIRED | Time at which the container was last (re-)started. |
| `finishedAt`* | string | REQUIRED | Time at which the container last terminated. |
\* `startedAt` and `finishedAt` MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339).
### TaskRunResult
| Field | Type | Requirement | Notes |
|---------|-------------------------------|-------------|-------|
| `name` | string | REQUIRED | |
| `type` | [`ResultsType`](#resultstype) | REQUIRED | |
| `value` | [`ParamValue`](#paramvalue) | REQUIRED | |
### SidecarState
| Field | Type | Requirement | Notes |
|------------------|-------------------------------------|-------------|-------------------------------|
| `name` | string | RECOMMENDED | Name of the SidecarState. |
| `imageID` | string | RECOMMENDED | Image ID of the SidecarState. |
| `containerState` | [`ContainerState`](#containerstate) | RECOMMENDED | State of the container. |
### PipelineRunSpec
| Field | Type | Requirement | Notes |
|----------------|-----------------------------------------|-------------|------------------------------------------|
| `params` | [][`Param`](#param) | REQUIRED | |
| `pipelineRef` | [`PipelineRef`](#pipelineref) | RECOMMENDED | |
| `pipelineSpec` | [`PipelineSpec`](#pipelinespec) | REQUIRED | |
| `timeouts` | [`TimeoutFields`](#timeoutfields) | REQUIRED | Time after which the Pipeline times out. |
| `workspaces` | [`WorkspaceBinding`](#workspacebinding) | REQUIRED | |
### PipelineRef
| Field | Type | Requirement | Note |
|------------|-------------------|-------------|-------------------------|
| `name` | string | RECOMMENDED | Name of the referent. |
| `resolver` | string | RECOMMENDED | A field of ResolverRef. |
| `params` | [][Param](#param) | RECOMMENDED | A field of ResolverRef. |
### PipelineRunStatus
| Field | Type | Requirement | Notes |
|-------------------|-------------------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------|
| `conditions` | [][`Condition`](#condition) | REQUIRED | Condition type `Succeeded` MUST be populated. See [Status Signalling](#status-signalling) for details. Other types are OPTIONAL |
| `startTime` | string | REQUIRED | MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). |
| `completionTime` | string | REQUIRED | MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). |
| `pipelineSpec`    | [`PipelineSpec`](#pipelinespec)                 | RECOMMENDED | Resolved spec of the pipeline that was executed                                                                                   |
| `results` | [][`PipelineRunResult`](#pipelinerunresult) | RECOMMENDED | Results produced from the pipeline |
| `childReferences` | [][ChildStatusReference](#childstatusreference) | REQUIRED | References to any child Runs created as part of executing the pipelinerun |
### PipelineRunResult
| Field | Type | Requirement | Notes |
|---------|-----------------------------|-------------|---------------------------------------------------------------------|
| `name` | string | RECOMMENDED | Name is the result's name as declared by the Pipeline |
| `value` | [`ParamValue`](#paramvalue) | RECOMMENDED | Value is the result returned from the execution of this PipelineRun |
### ChildStatusReference
| Field | Type | Requirement | Notes |
|--------------------|--------|-------------|-----------------------------------------------------------------------|
| `Name` | string | REQUIRED | Name is the name of the TaskRun this is referencing. |
| `PipelineTaskName` | string | REQUIRED | PipelineTaskName is the name of the PipelineTask this is referencing. |
### TimeoutFields
| Field | Type | Requirement | Notes |
|------------|-------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Pipeline` | string (duration) | REQUIRED | Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value. |
| `Tasks` | string (duration) | REQUIRED | Tasks sets the maximum allowed duration of this pipeline's tasks |
| `Finally` | string (duration) | REQUIRED | Finally sets the maximum allowed duration of this pipeline's finally |
**string (duration)** : A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h"
**Note:** Currently three keys are accepted in the map: `pipeline`, `tasks` and `finally`, with the constraint that `Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally`.
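For example, a `PipelineRun` could set the three timeouts as follows, keeping `pipeline` at least as large as `tasks` plus `finally`:

```yaml
spec:
  timeouts:
    pipeline: "1h"
    tasks: "45m"
    finally: "15m"
```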
### WorkspaceDeclaration
| Field | Type | Requirement | Notes |
|---------------|---------|-------------|---------------------------------------------------------------|
| `name` | string | REQUIRED | Name is the name by which you can bind the volume at runtime. |
| `description` | string | RECOMMENDED | |
| `mountPath` | string | RECOMMENDED | |
| `readOnly` | boolean | RECOMMENDED | Defaults to false. |
### WorkspacePipelineTaskBinding
| Field | Type | Requirement | Notes |
|-------------|--------|-------------|-----------------------------------------------------------------|
| `name` | string | REQUIRED | Name is the name of the workspace as declared by the task |
| `workspace` | string | REQUIRED | Workspace is the name of the workspace declared by the pipeline |
### PipelineWorkspaceDeclaration
| Field | Type | Requirement | Notes |
|--------|--------|-------------|------------------------------------------------------------------|
| `name` | string | REQUIRED | Name is the name of a workspace to be provided by a PipelineRun. |
### WorkspaceBinding
| Field Name | Field Type | Requirement | Notes |
|------------|--------------|-------------|-------|
| `name` | string | REQUIRED | |
| `emptyDir` | empty struct | REQUIRED | |
**NB:** All other Workspace types supported by the Kubernetes implementation are **OPTIONAL** for the purposes of this spec.
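To illustrate how declarations and bindings relate, a Task could declare a workspace and a TaskRun could bind it with an `emptyDir` volume; the workspace name and mount path below are hypothetical:

```yaml
# Declared in the Task (or Pipeline) spec
workspaces:
  - name: source
    description: Where sources are checked out
    mountPath: /workspace/source
    readOnly: false
---
# Bound in the TaskRun (or PipelineRun) spec
workspaces:
  - name: source
    emptyDir: {}
```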
### EnvVar
| Field Name | Field Type | Requirement | Notes |
|------------|------------|-------------|-------|
| `name` | string | REQUIRED | |
| `value` | string | REQUIRED | |
**NB:** All other [EnvVar](https://godoc.org/k8s.io/api/core/v1#EnvVar) types inherited from [core.v1/EnvVar](https://godoc.org/k8s.io/api/core/v1#EnvVar) and supported by the Kubernetes implementation (e.g., `valueFrom`) are **OPTIONAL** for the purposes of this spec.
## Status Signalling
<!-- wokeignore:rule=master -->
The Tekton Pipelines API uses the [Kubernetes Conditions convention](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) to communicate status and errors to the user.
`TaskRun`'s `status` field MUST have a `conditions` field, which must be a list of `Condition` objects of the following form:
| Field | Type | Requirement |
|-----------------------|---------------------------------------------------------------|-------------|
| `type` | string | REQUIRED |
| `status` | Enum:<br>- `"True"`<br>- `"False"`<br>- `"Unknown"` (default) | REQUIRED |
| `reason` | string | REQUIRED |
| `message` | string | REQUIRED |
| `severity` | Enum:<br>- `""` (default)<br>- `"Warning"`<br>- `"Info"` | REQUIRED |
| `lastTransitionTime`* | string | OPTIONAL |
\* If `lastTransitionTime` is populated by the implementation, it must be in [RFC3339](https://tools.ietf.org/html/rfc3339).
Additionally, the resource's `status.conditions` field MUST be managed as follows to enable clients to present useful diagnostic and error information to the user.
If a resource describes that it must report a Condition of the `type` `Succeeded`, then it must report it in the following manner:
- If the `status` field is `"True"`, that means the execution finished successfully.
- If the `status` field is `"False"`, that means the execution finished unsuccessfully -- the Condition's `reason` and `message` MUST include further diagnostic information.
- If the `status` field is `"Unknown"`, that means the execution is still ongoing, and clients can check again later until the Condition's `status` reports either `"True"` or `"False"`.
Resources MAY report Conditions with other `type`s, but none are REQUIRED or RECOMMENDED. | tekton | Tekton Pipelines API Specification toc Tekton Pipelines API Specification tekton pipelines api specification Abstract abstract Background background Modifying This Specification modifying this specification Resource Overview v1 resource overview v1 Task task Pipeline pipeline TaskRun taskrun PipelineRun pipelinerun Detailed Resource Types v1 detailed resource types v1 TypeMeta typemeta ObjectMeta objectmeta TaskSpec taskspec ParamSpec paramspec ParamType paramtype Step step Sidecar sidecar SecurityContext securitycontext TaskResult taskresult ResultsType resultstype PipelineSpec pipelinespec PipelineTask pipelinetask TaskRef taskref ResolverRef resolverref Param param ParamValue paramvalue PipelineResult pipelineresult TaskRunSpec taskrunspec TaskRunStatus taskrunstatus Condition condition StepState stepstate ContainerState containerstate ContainerStateRunning containerstaterunning ContainerStateWaiting containerstatewaiting ContainerStateTerminated containerstateterminated TaskRunResult taskrunresult SidecarState sidecarstate PipelineRunSpec pipelinerunspec PipelineRef pipelineref PipelineRunStatus pipelinerunstatus PipelineRunResult pipelinerunresult ChildStatusReference childstatusreference TimeoutFields timeoutfields WorkspaceDeclaration workspacedeclaration WorkspacePipelineTaskBinding workspacepipelinetaskbinding PipelineWorkspaceDeclaration pipelineworkspacedeclaration WorkspaceBinding workspacebinding EnvVar envvar Status Signalling status signalling toc Abstract The Tekton Pipelines platform provides common abstractions for describing and executing container based run to completion workflows typically in service of CI CD scenarios The Tekton Conformance Policy defines the requirements that Tekton implementations must meet to claim conformance with the Tekton API TEP 0131 https github com tektoncd community blob main teps 0131 tekton conformance policy md lay out details of the policy itself According to the policy Tekton implementations can claim Conformance on GA Primitives thus all API Spec in this doc is for Tekton V1 APIs Implementations are only required to provide resource management i e CRUD APIs for Runtime Primitives TaskRun and PipelineRun For Authoring time Primitives Task and Pipeline supporting CRUD APIs is not a requirement but we recommend referencing them in runtime types e g from git catalog within the cluster etc This document describes the structure and lifecycle of Tekton resources This document does not define the runtime contract https tekton dev docs pipelines container contract nor prescribe specific implementations of supporting services such as access control observability or resource management This document makes reference in a few places to different profiles for Tekton installations A profile in this context is a set of operations resources and fields that are accessible to a developer interacting with a Tekton installation Currently only a single minimal profile for Tekton Pipelines is defined but additional profiles may be defined in the future to standardize advanced functionality A minimal profile is one that implements all of the MUST MUST NOT and REQUIRED conditions of this document Background The key words MUST MUST NOT REQUIRED SHALL SHALL NOT SHOULD SHOULD NOT RECOMMENDED NOT RECOMMENDED MAY and OPTIONAL are to be interpreted as described in RFC 2119 https tools ietf org html rfc2119 There is no formal specification of the Kubernetes API and 
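For illustration, the `status` of a successfully completed run reporting the REQUIRED `Succeeded` condition might look like the following sketch; the `reason`, `message`, and timestamp values are illustrative:

```yaml
status:
  conditions:
    - type: Succeeded
      status: "True"
      reason: Succeeded
      message: All Steps have completed executing
      lastTransitionTime: "2024-01-01T12:00:00Z"
  startTime: "2024-01-01T11:58:00Z"
  completionTime: "2024-01-01T12:00:00Z"
```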
Resource Model This document assumes Kubernetes 1 25 behavior this behavior will typically be supported by many future Kubernetes versions Additionally this document may reference specific core Kubernetes resources these references may be illustrative i e an implementation on Kubernetes or descriptive i e this Kubernetes resource MUST be exposed References to these core Kubernetes resources will be annotated as either illustrative or descriptive Modifying This Specification This spec is a living document meaning new resources and fields may be added and may transition from being OPTIONAL to RECOMMENDED to REQUIRED over time In general a resource or field should not be added as REQUIRED directly as this may cause unsuspecting previously conformant implementations to suddenly no longer be conformant These should be first OPTIONAL or RECOMMENDED then change to be REQUIRED once a survey of conformant implementations indicates that doing so will not cause undue burden on any implementation Resource Overview v1 The following schema defines a set of REQUIRED or RECOMMENDED resource fields on the Tekton resource types Whether a field is REQUIRED or RECOMMENDED is denoted in the Requirement column Additional fields MAY be provided by particular implementations however it is expected that most extension will be accomplished via the metadata labels and metadata annotations fields as Tekton implementations MAY validate supplied resources against these fields and refuse resources which specify unknown fields Tekton implementations MUST NOT require spec fields outside this implementation to do so would break interoperability between such implementations and implementations which implement validation of field names NB All fields and resources not listed below are assumed to be OPTIONAL not RECOMMENDED or REQUIRED Task A Task is a collection of Steps that is defined and arranged in a sequential order of execution Field Type Requirement Notes kind string RECOMMENDED Describes the type of the resource i e Task apiVersion string RECOMMENDED Schema version i e v1 metadata ObjectMeta objectmeta REQUIRED Common metadata about a resource spec TaskSpec taskspec REQUIRED Defines the desired state of Task NB If kind and apiVersion are not supported an alternative method of identifying the type of resource must be supported Pipeline A Pipeline is a collection of Tasks that is defined and arranged in a specific order of execution Field Type Requirement Notes kind string RECOMMENDED Describes the type of the resource i e Pipeline apiVersion string RECOMMENDED Schema version i e v1 metadata ObjectMeta objectmeta REQUIRED Common metadata about a resource spec PipelineSpec pipelinespec REQUIRED Defines the desired state of Pipeline NB If kind and apiVersion are not supported an alternative method of identifying the type of resource must be supported TaskRun A TaskRun represents an instantiation of a single execution of a Task It can describe the steps of the Task directly Field Type Requirement Notes kind string RECOMMENDED Describes the type of the resource i e TaskRun apiVersion string RECOMMENDED Schema version i e v1 metadata ObjectMeta objectmeta REQUIRED Common metadata about a resource spec TaskRunSpec taskrunspec REQUIRED Defines the desired state of TaskRun status TaskRunStatus taskrunstatus REQUIRED Defines the current status of TaskRun NB If kind and apiVersion are not supported an alternative method of identifying the type of resource must be supported PipelineRun A PipelineRun represents an instantiation of 
a single execution of a Pipeline It can describe the spec of the Pipeline directly Field Type Requirement Notes kind string RECOMMENDED Describes the type of the resource i e PipelineRun apiVersion string RECOMMENDED Schema version i e v1 metadata ObjectMeta objectmeta REQUIRED Common metadata about a resource spec PipelineRunSpec pipelinerunspec REQUIRED Defines the desired state of PipelineRun status PipelineRunStatus pipelinerunstatus REQUIRED Defines the current status of PipelineRun NB If kind and apiVersion are not supported an alternative method of identifying the type of resource must be supported Detailed Resource Types v1 TypeMeta Derived from Kuberentes Type Meta https pkg go dev k8s io apimachinery pkg apis meta v1 TypeMeta Field Type Notes kind string A string value representing the resource this object represents apiVersion string Defines the versioned schema of this representation of an object ObjectMeta Derived from standard Kubernetes meta v1 ObjectMeta https kubernetes io docs reference generated kubernetes api v1 18 objectmeta v1 meta resource Field Type Requirement Notes name string REQUIRED Mutually exclusive with the generateName field labels map string string RECOMMENDED annotations map string string RECOMMENDED annotations are necessary in order to support integration with Tekton ecosystem tooling such as Results and Chains creationTimestamp string REQUIRED see note creationTimestamp MUST be populated by the implementation in RFC3339 https tools ietf org html rfc3339 br The field is required for any runtimeTypes such as TaskRun and PipelineRun and RECOMMENDED for othet types uid string RECOMMENDED If uid is not supported the implementation must support another way of uniquely identifying a runtime object such as using a combination of namespace and name resourceVersion string OPTIONAL generation int64 OPTIONAL generateName string RECOMMENDED If supported by the implementation when generateName is specified at creation it MUST be prepended to a random string and set as the name and not set on the subsequent response TaskSpec Defines the desired state of Task Field Type Requirement Notes description string REQUIRED params ParamSpec paramspec REQUIRED steps Step step REQUIRED sidecars Sidecar sidecar REQUIRED results TaskResult taskresult REQUIRED workspaces WorkspaceDeclaration workspacedeclaration REQUIRED ParamSpec Declares a parameter whose value has to be provided at runtime Field Name Field Type Requirement Notes name string REQUIRED description string REQUIRED type ParamType paramtype REQUIRED see note The values string and array for this field are REQUIRED and the value object is RECOMMENDED properties map string PropertySpec RECOMMENDED PropertySpec is a type that defines the spec of an individual key See how to define the properties section in the example examples v1 taskruns beta object param result yaml default ParamValue paramvalue REQUIRED ParamType Defines the type of a parameter string enum allowed values are string array and object Supporting string and array are required while the other types are optional for conformance Step A Step is a reference to a container image that executes a specific tool on a specific input and produces a specific output NB All other fields inherited from the core v1 Container https godoc org k8s io api core v1 Container type supported by the Kubernetes implementation are OPTIONAL for the purposes of this spec Field Name Field Type Requirement Notes name string REQUIRED image string REQUIRED args string REQUIRED command 
string REQUIRED workingDir string REQUIRED env EnvVar envvar REQUIRED script string REQUIRED securityContext SecurityContext securitycontext REQUIRED Sidecar Specifies a list of containers to run alongside the Steps in a Task If sidecars are supported the following fields are required Field Type Requirement Notes name string REQUIRED Name of the Sidecar specified as a DNS LABEL Each Sidecar in a Task must have a unique name DNS LABEL Cannot be updated image string REQUIRED Container image name https kubernetes io docs concepts containers images image names command string REQUIRED Entrypoint array Not executed within a shell The image s ENTRYPOINT is used if this is not provided args string REQUIRED Arguments to the entrypoint The image s CMD is used if this is not provided script string REQUIRED Script is the contents of an executable file to execute If Script is not empty the Sidecar cannot have a Command or Args securityContext SecurityContext securitycontext REQUIRED Defines the security options the Sidecar should be run with SecurityContext All other fields derived from core v1 SecurityContext https pkg go dev k8s io api core v1 SecurityContext are OPTIONAL for the purposes of this spec Field Type Requirement Notes privileged bool REQUIRED Run the container in privileged mode Default to false TaskResult Defines a result produced by a Task Field Type Requirement Notes name string REQUIRED Declares the name by which a parameter is referenced type ResultsType resultstype REQUIRED Type is the user specified type of the result The values string this field is REQUIRED and the values array and object are RECOMMENDED description string RECOMMENDED Description of the result properties map string PropertySpec RECOMMENDED PropertySpec is a type that defines the spec of an individual key See how to define the properties section in the example examples v1 taskruns beta object param result yaml ResultsType ResultsType indicates the type of a result string enum Allowed values are string array and object Supporting string is required while the other types are optional for conformance PipelineSpec Defines a pipeline Field Type Requirement Notes params ParamSpec paramspec REQUIRED Params declares a list of input parameters that must be supplied when this Pipeline is run tasks PipelineTask pipelinetask REQUIRED Tasks declares the graph of Tasks that execute when this Pipeline is run results PipelineResult pipelineresult REQUIRED Values that this pipeline can output once run finally PipelineTask pipelinetask REQUIRED The list of Tasks that execute just before leaving the Pipeline workspaces PipelineWorkspaceDeclaration pipelineworkspacedeclaration REQUIRED Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun PipelineTask PiplineTask defines a task in a Pipeline passing inputs from both Params and from the output of previous tasks Field Type Requirement Notes name string REQUIRED The name of this task within the context of a Pipeline Used as a coordinate with the from and runAfter fields to establish the execution order of tasks relative to one another taskRef TaskRef taskref RECOMMENDED TaskRef is a reference to a task definition Mutually exclusive with TaskSpec taskSpec TaskSpec taskspec REQUIRED TaskSpec is a specification of a task Mutually exclusive with TaskRef runAfter string REQUIRED RunAfter is the list of PipelineTask names that should be executed before this Task executes Used to force a specific ordering in graph execution params Param param REQUIRED 
<!--
---
linkTitle: "Replacing PipelineResources with Tasks"
weight: 207
---
-->
## Replacing PipelineResources with Tasks
`PipelineResources` remained in alpha while the other resource kinds were promoted to beta.
Since then, **`PipelineResources` have been removed**.
Read more about the deprecation in [TEP-0074](https://github.com/tektoncd/community/blob/main/teps/0074-deprecate-pipelineresources.md).
_More on the reasoning and what's left to do in
[Why aren't PipelineResources in Beta?](resources.md#why-aren-t-pipelineresources-in-beta)._
To ease migration away from `PipelineResources`
[some types have an equivalent `Task` in the Catalog](#replacing-pipelineresources-with-tasks).
To use these replacement `Tasks` you will need to combine them with your existing `Tasks` via a `Pipeline`.
For example, if you were using this `Task` which was fetching from `git` and building with
`Kaniko`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: build-push-kaniko
spec:
inputs:
resources:
- name: workspace
type: git
params:
- name: pathToDockerFile
description: The path to the dockerfile to build
default: /workspace/workspace/Dockerfile
- name: pathToContext
description: The build context used by Kaniko
default: /workspace/workspace
outputs:
resources:
- name: builtImage
type: image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v0.17.1
env:
- name: "DOCKER_CONFIG"
value: "/tekton/home/.docker/"
args:
- --dockerfile=$(inputs.params.pathToDockerFile)
- --destination=$(outputs.resources.builtImage.url)
- --context=$(inputs.params.pathToContext)
- --oci-layout-path=$(inputs.resources.builtImage.path)
securityContext:
runAsUser: 0
```
To do the same thing with the `git` catalog `Task` and the kaniko `Task`, you will need to combine them in a
`Pipeline`.
For example, this Pipeline uses the Kaniko and `git` catalog Tasks:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: kaniko-pipeline
spec:
params:
- name: git-url
- name: git-revision
- name: image-name
- name: path-to-image-context
- name: path-to-dockerfile
workspaces:
- name: git-source
tasks:
- name: fetch-from-git
taskRef:
name: git-clone
params:
- name: url
value: $(params.git-url)
- name: revision
value: $(params.git-revision)
workspaces:
- name: output
workspace: git-source
- name: build-image
taskRef:
name: kaniko
params:
- name: IMAGE
value: $(params.image-name)
- name: CONTEXT
value: $(params.path-to-image-context)
- name: DOCKERFILE
value: $(params.path-to-dockerfile)
workspaces:
- name: source
workspace: git-source
# If you want you can add a Task that uses the IMAGE_DIGEST from the kaniko task
# via $(tasks.build-image.results.IMAGE_DIGEST) - this was a feature we hadn't been
# able to fully deliver with the Image PipelineResource!
```
_Note that [the `image` `PipelineResource` is gone in this example](#replacing-an-image-resource) (replaced with
a [`result`](tasks.md#emitting-results)), and also that the `Task` no longer needs to know anything
about where the files it builds from come from._
### Replacing a `git` resource
You can replace a `git` resource with the [`git-clone` Catalog `Task`](https://github.com/tektoncd/catalog/tree/main/task/git-clone).
### Replacing a `pullrequest` resource
You can replace a `pullrequest` resource with the [`pullrequest` Catalog `Task`](https://github.com/tektoncd/catalog/tree/main/task/pull-request).
### Replacing a `gcs` resource
You can replace a `gcs` resource with the [`gcs` Catalog `Task`](https://github.com/tektoncd/catalog/tree/main/task/gcs-generic).
### Replacing an `image` resource
Since the `image` resource is simply a way to share the digest of a built image with subsequent
`Tasks` in your `Pipeline`, you can use [`Task` results](tasks.md#emitting-results) to
achieve equivalent functionality.
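As a minimal sketch (the task and result names here are illustrative, not taken from the Catalog), a `Task` can declare a result for the digest and write to it from a step:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-digest
spec:
  results:
    - name: IMAGE_DIGEST
      description: Digest of the image that was built and pushed
  steps:
    - name: build
      image: alpine
      script: |
        # A real builder (Kaniko, Buildah, ...) would push the image and
        # capture the resulting digest; a placeholder value is written here.
        echo -n "sha256:0000000000000000000000000000000000000000000000000000000000000000" > $(results.IMAGE_DIGEST.path)
```
A subsequent `PipelineTask` can then receive that value as a parameter, for example `value: $(tasks.build-and-digest.results.IMAGE_DIGEST)`.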
For examples of replacing an `image` resource, see the following Catalog `Tasks`:
- The [Kaniko Catalog `Task`](https://github.com/tektoncd/catalog/blob/v1beta1/kaniko/)
illustrates how to write the digest of an image to a result.
- The [Buildah Catalog `Task`](https://github.com/tektoncd/catalog/blob/v1beta1/buildah/)
illustrates how to accept an image digest as a parameter.
### Replacing a `cluster` resource
You can replace a `cluster` resource with the [`kubeconfig-creator` Catalog `Task`](https://github.com/tektoncd/catalog/tree/main/task/kubeconfig-creator).
### Replacing a `cloudEvent` resource
You can replace a `cloudEvent` resource with the [`CloudEvent` Catalog `Task`](https://github.com/tektoncd/catalog/tree/main/task/cloudevent).
<!--
---
linkTitle: "Get started with Resolvers"
weight: 103
---
-->
# Getting Started with Resolvers
## Introduction
This guide will take you from an empty Kubernetes cluster to a
functioning Tekton Pipelines installation and a PipelineRun executing
with a Pipeline stored in a git repo.
## Prerequisites
- A computer with
[`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl).
- A Kubernetes cluster running at least Kubernetes 1.28. A [`kind`
cluster](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
should work fine for following the guide on your local machine.
- An image registry that you can push images to. If you're using `kind`
make sure your `KO_DOCKER_REPO` environment variable is set to
`kind.local`.
- A publicly available git repository where you can put a pipeline yaml
file.
## Step 1: Install Tekton Pipelines and the Resolvers
See [the installation instructions for Tekton Pipeline](./install.md#installing-tekton-pipelines-on-kubernetes), and
[the installation instructions for the built-in resolvers](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
## Step 2: Ensure Pipelines is configured to enable resolvers
Starting with v0.41.0, remote resolvers for Tekton Pipelines are enabled by default,
but can be disabled via feature flags in the `resolvers-feature-flags` configmap in
the `tekton-pipelines-resolvers` namespace. Check that configmap to verify that the
resolvers you wish to have enabled are set to `"true"`.
The feature flags for the built-in resolvers are:
* The `bundles` resolver: `enable-bundles-resolver`
* The `git` resolver: `enable-git-resolver`
* The `hub` resolver: `enable-hub-resolver`
* The `cluster` resolver: `enable-cluster-resolver`
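For illustration only, a `resolvers-feature-flags` configmap with every built-in resolver explicitly enabled would look roughly like the following sketch (your cluster's copy may carry additional keys):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  enable-bundles-resolver: "true"
  enable-git-resolver: "true"
  enable-hub-resolver: "true"
  enable-cluster-resolver: "true"
```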
## Step 3: Try it out!
In order to test out your install you'll need a Pipeline stored in a
public git repository. First cd into a clone of your repo and then
create a new branch:
```sh
# checkout a new branch in the public repo you're using
git checkout -b add-a-simple-pipeline
```
Then create a basic pipeline:
```sh
cat <<"EOF" > pipeline.yaml
kind: Pipeline
apiVersion: tekton.dev/v1beta1
metadata:
name: a-simple-pipeline
spec:
params:
- name: username
tasks:
- name: task-1
params:
- name: username
value: $(params.username)
taskSpec:
params:
- name: username
steps:
- image: alpine:3.15
script: |
echo "hello $(params.username)"
EOF
```
Commit the pipeline and push it to your git repo:
```sh
git add ./pipeline.yaml
git commit -m "Add a basic pipeline to test Tekton Pipeline remote resolution"
# push to your publicly accessible repository, replacing origin with
# your git remote's name
git push origin add-a-simple-pipeline
```
And finally create a `PipelineRun` that uses your pipeline:
```sh
# first assign your public repo's url to an environment variable
REPO_URL=# insert your repo's url here
# create a pipelinerun yaml file
cat <<EOF > pipelinerun.yaml
kind: PipelineRun
apiVersion: tekton.dev/v1beta1
metadata:
name: run-basic-pipeline-from-git
spec:
pipelineRef:
resolver: git
params:
- name: url
value: ${REPO_URL}
- name: revision
value: add-a-simple-pipeline
- name: pathInRepo
value: pipeline.yaml
params:
- name: username
value: liza
EOF
# execute the pipelinerun
kubectl apply -f ./pipelinerun.yaml
```
## Step 4: Monitor the PipelineRun
First let's watch the PipelineRun to see if it succeeds:
```sh
kubectl get pipelineruns -w
```
Shortly the PipelineRun should move into a Succeeded state.
Now we can check the logs of the PipelineRun's only task:
```sh
kubectl logs run-basic-pipeline-from-git-task-1-pod
# This should print "hello liza"
```
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Tekton Bundles Contract"
weight: 402
---
-->
# Tekton Bundle Contract v0.1
When using a Tekton Bundle in a task or pipeline reference, the OCI artifact backing the
bundle must adhere to the following contract.
## Contract
Only Tekton CRDs (eg, `Task` or `Pipeline`) may reside in a Tekton Bundle used as a Tekton
bundle reference.
Each layer of the image must map 1:1 with a single Tekton resource (eg, `Task`).
*No more than 20* individual layers (Pipelines and/or Tasks) may be placed in a single image.
Each layer must contain all of the following annotations:
- `dev.tekton.image.name` => `ObjectMeta.Name` of the resource
- `dev.tekton.image.kind` => `TypeMeta.Kind` of the resource, all lower-cased and singular (eg, `task`)
- `dev.tekton.image.apiVersion` => `TypeMeta.APIVersion` of the resource (eg
"tekton.dev/v1beta1")
The union of the { `dev.tekton.image.apiVersion`, `dev.tekton.image.kind`, `dev.tekton.image.name` }
annotations on a given layer must be unique among all layers of that image. In practical terms, this means no two
"tasks" can have the same name for example.
Each layer must be compressed and stored with a supported OCI MIME type *except* for `+zstd` types. For a list of the
supported types, see
<!-- wokeignore:rule=master -->
[the official spec](https://github.com/opencontainers/image-spec/blob/master/layer.md#zstd-media-types).
Furthermore, each layer must contain a YAML or JSON representation of the underlying resource. If the resource is
missing any identifying fields (missing an `apiVersion` for instance) then it will be considered invalid.
Any tool creating a Tekton bundle must enforce this format and ensure that the annotations and contents all match and
conform to this spec. Additionally, the Tekton controller will reject non-conforming Tekton Bundles.
## Examples
Say you wanted to create a Tekton Bundle out of the following resources:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: foo
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: bar
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: foobar
```
The contents of the resulting bundle would look something like this (YAML is used here just for
illustrative purposes):
```
# my-bundle
layers:
- annotations:
- name: "dev.tekton.image.name"
value: "foo"
- name: "dev.tekton.image.kind"
value: "Task"
- name: "dev.tekton.image.apiVersion"
value: "tekton.dev/v1beta1"
contents: <compressed bytes of Task object>
- annotations:
- name: "dev.tekton.image.name"
value: "bar"
- name: "dev.tekton.image.kind"
value: "Task"
- name: "dev.tekton.image.apiVersion"
value: "tekton.dev/v1beta1"
contents: <compressed bytes of Task object>
- annotations:
- name: "dev.tekton.image.name"
value: "foobar"
- name: "dev.tekton.image.kind"
value: "Pipeline"
- name: "dev.tekton.image.apiVersion"
value: "tekton.dev/v1beta1"
contents: <compressed bytes of Pipeline object>
```
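Once such a bundle is pushed to a registry, it can be consumed from a `taskRef` (or `pipelineRef`) through the `bundles` resolver. The sketch below assumes a placeholder registry reference:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: run-foo-from-bundle
spec:
  taskRef:
    resolver: bundles
    params:
      - name: bundle
        value: registry.example.com/my-bundle:latest   # placeholder image reference
      - name: name
        value: foo    # dev.tekton.image.name of the layer to use
      - name: kind
        value: task   # dev.tekton.image.kind of the layer to use
```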
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
title: Pipeline API
linkTitle: Pipeline API
weight: 404
---
-->
<p>Packages:</p>
<ul>
<li>
<a href="#resolution.tekton.dev%2fv1alpha1">resolution.tekton.dev/v1alpha1</a>
</li>
<li>
<a href="#resolution.tekton.dev%2fv1beta1">resolution.tekton.dev/v1beta1</a>
</li>
<li>
<a href="#tekton.dev%2fv1">tekton.dev/v1</a>
</li>
<li>
<a href="#tekton.dev%2fv1alpha1">tekton.dev/v1alpha1</a>
</li>
<li>
<a href="#tekton.dev%2fv1beta1">tekton.dev/v1beta1</a>
</li>
</ul>
<h2 id="resolution.tekton.dev/v1alpha1">resolution.tekton.dev/v1alpha1</h2>
<div>
</div>
Resource Types:
<ul></ul>
<h3 id="resolution.tekton.dev/v1alpha1.ResolutionRequest">ResolutionRequest
</h3>
<div>
<p>ResolutionRequest is an object for requesting the content of
a Tekton resource like a pipeline.yaml.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#resolution.tekton.dev/v1alpha1.ResolutionRequestSpec">
ResolutionRequestSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the information for the request part of the resource request.</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>params</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters are the runtime attributes passed to
the resolver to help it figure out how to resolve the
resource being requested. For example: repo URL, commit SHA,
path to file, the kind of authentication to leverage, etc.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#resolution.tekton.dev/v1alpha1.ResolutionRequestStatus">
ResolutionRequestStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status communicates the state of the request and, ultimately,
the content of the resolved resource.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1alpha1.ResolutionRequestSpec">ResolutionRequestSpec
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1alpha1.ResolutionRequest">ResolutionRequest</a>)
</p>
<div>
<p>ResolutionRequestSpec are all the fields in the spec of the
ResolutionRequest CRD.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters are the runtime attributes passed to
the resolver to help it figure out how to resolve the
resource being requested. For example: repo URL, commit SHA,
path to file, the kind of authentication to leverage, etc.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1alpha1.ResolutionRequestStatus">ResolutionRequestStatus
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1alpha1.ResolutionRequest">ResolutionRequest</a>)
</p>
<div>
<p>ResolutionRequestStatus are all the fields in a ResolutionRequest’s
status subresource.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>ResolutionRequestStatusFields</code><br/>
<em>
<a href="#resolution.tekton.dev/v1alpha1.ResolutionRequestStatusFields">
ResolutionRequestStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>ResolutionRequestStatusFields</code> are embedded into this type.)
</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1alpha1.ResolutionRequestStatusFields">ResolutionRequestStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1alpha1.ResolutionRequestStatus">ResolutionRequestStatus</a>)
</p>
<div>
<p>ResolutionRequestStatusFields are the ResolutionRequest-specific fields
for the status subresource.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>data</code><br/>
<em>
string
</em>
</td>
<td>
<p>Data is a string representation of the resolved content
of the requested resource in-lined into the ResolutionRequest
object.</p>
</td>
</tr>
<tr>
<td>
<code>refSource</code><br/>
<em>
<a href="#tekton.dev/v1.RefSource">
RefSource
</a>
</em>
</td>
<td>
<p>RefSource is the source reference of the remote data that records where the remote
file came from including the url, digest and the entrypoint.</p>
</td>
</tr>
</tbody>
</table>
<hr/>
<h2 id="resolution.tekton.dev/v1beta1">resolution.tekton.dev/v1beta1</h2>
<div>
</div>
Resource Types:
<ul></ul>
<h3 id="resolution.tekton.dev/v1beta1.ResolutionRequest">ResolutionRequest
</h3>
<div>
<p>ResolutionRequest is an object for requesting the content of
a Tekton resource like a pipeline.yaml.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#resolution.tekton.dev/v1beta1.ResolutionRequestSpec">
ResolutionRequestSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the information for the request part of the resource request.</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Param">
[]Param
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters are the runtime attributes passed to
the resolver to help it figure out how to resolve the
resource being requested. For example: repo URL, commit SHA,
path to file, the kind of authentication to leverage, etc.</p>
</td>
</tr>
<tr>
<td>
<code>url</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>URL is the runtime url passed to the resolver
to help it figure out how to resolve the resource being
requested.
This is currently at an ALPHA stability level and subject to
alpha API compatibility policies.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#resolution.tekton.dev/v1beta1.ResolutionRequestStatus">
ResolutionRequestStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status communicates the state of the request and, ultimately,
the content of the resolved resource.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1beta1.ResolutionRequestSpec">ResolutionRequestSpec
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1beta1.ResolutionRequest">ResolutionRequest</a>)
</p>
<div>
<p>ResolutionRequestSpec are all the fields in the spec of the
ResolutionRequest CRD.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Param">
[]Param
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters are the runtime attributes passed to
the resolver to help it figure out how to resolve the
resource being requested. For example: repo URL, commit SHA,
path to file, the kind of authentication to leverage, etc.</p>
</td>
</tr>
<tr>
<td>
<code>url</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>URL is the runtime url passed to the resolver
to help it figure out how to resolve the resource being
requested.
This is currently at an ALPHA stability level and subject to
alpha API compatibility policies.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1beta1.ResolutionRequestStatus">ResolutionRequestStatus
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1beta1.ResolutionRequest">ResolutionRequest</a>)
</p>
<div>
<p>ResolutionRequestStatus are all the fields in a ResolutionRequest’s
status subresource.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>ResolutionRequestStatusFields</code><br/>
<em>
<a href="#resolution.tekton.dev/v1beta1.ResolutionRequestStatusFields">
ResolutionRequestStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>ResolutionRequestStatusFields</code> are embedded into this type.)
</p>
</td>
</tr>
</tbody>
</table>
<h3 id="resolution.tekton.dev/v1beta1.ResolutionRequestStatusFields">ResolutionRequestStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1beta1.ResolutionRequestStatus">ResolutionRequestStatus</a>)
</p>
<div>
<p>ResolutionRequestStatusFields are the ResolutionRequest-specific fields
for the status subresource.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>data</code><br/>
<em>
string
</em>
</td>
<td>
<p>Data is a string representation of the resolved content
of the requested resource in-lined into the ResolutionRequest
object.</p>
</td>
</tr>
<tr>
<td>
<code>source</code><br/>
<em>
<a href="#tekton.dev/v1.RefSource">
RefSource
</a>
</em>
</td>
<td>
<p>Deprecated: Use RefSource instead</p>
</td>
</tr>
<tr>
<td>
<code>refSource</code><br/>
<em>
<a href="#tekton.dev/v1.RefSource">
RefSource
</a>
</em>
</td>
<td>
<p>RefSource is the source reference of the remote data that records the url, digest
and the entrypoint.</p>
</td>
</tr>
</tbody>
</table>
<hr/>
<h2 id="tekton.dev/v1">tekton.dev/v1</h2>
<div>
<p>Package v1 contains API Schema definitions for the pipeline v1 API group</p>
</div>
Resource Types:
<ul><li>
<a href="#tekton.dev/v1.Pipeline">Pipeline</a>
</li><li>
<a href="#tekton.dev/v1.PipelineRun">PipelineRun</a>
</li><li>
<a href="#tekton.dev/v1.Task">Task</a>
</li><li>
<a href="#tekton.dev/v1.TaskRun">TaskRun</a>
</li></ul>
<h3 id="tekton.dev/v1.Pipeline">Pipeline
</h3>
<div>
<p>Pipeline describes a list of Tasks to execute. It expresses how outputs
of tasks feed into inputs of subsequent tasks.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>Pipeline</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Pipeline from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<p>Params declares a list of input parameters that must be supplied when
this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineWorkspaceDeclaration">
[]PipelineWorkspaceDeclaration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces declares a set of named workspaces that are expected to be
provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineResult">
[]PipelineResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this pipeline can output once run</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Finally declares the list of Tasks that execute just before leaving the Pipeline
i.e. either after all Tasks are finished executing successfully
or after a failure which would result in ending the Pipeline</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
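
As an illustrative aid (not part of the generated reference), a minimal `Pipeline` using a handful of the fields above might look like the following sketch; all names are placeholders.

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  params:
    - name: message
      type: string
  tasks:
    - name: say-hello
      params:
        - name: message
          value: $(params.message)
      taskSpec:
        params:
          - name: message
        steps:
          - name: echo
            image: alpine
            script: |
              echo "$(params.message)"
```
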
<h3 id="tekton.dev/v1.PipelineRun">PipelineRun
</h3>
<div>
<p>PipelineRun represents a single execution of a Pipeline. PipelineRuns are how
the graph of Tasks declared in a Pipeline are executed; they specify inputs
to Pipelines such as parameter values and capture operational aspects of the
Tasks execution such as service account and tolerations. Creating a
PipelineRun creates TaskRuns for Tasks in the referenced Pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>PipelineRun</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunSpec">
PipelineRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameter names and values.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunSpecStatus">
PipelineRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a pipelinerun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>timeouts</code><br/>
<em>
<a href="#tekton.dev/v1.TimeoutFields">
TimeoutFields
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the Pipeline times out.
Currently three keys are accepted in the map
pipeline, tasks and finally
with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally</p>
</td>
</tr>
<tr>
<td>
<code>taskRunTemplate</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskRunTemplate">
PipelineTaskRunTemplate
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunTemplate represent template of taskrun</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces holds a set of workspace bindings that must match names
with those declared in the pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>taskRunSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskRunSpec">
[]PipelineTaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunSpecs holds a set of runtime specs</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunStatus">
PipelineRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
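
Again purely as an illustration (not generated content), a minimal `PipelineRun` referencing the pipeline sketched above could look like this:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-pipeline-run
spec:
  pipelineRef:
    name: example-pipeline
  params:
    - name: message
      value: "hello from a PipelineRun"
  timeouts:
    pipeline: 1h0m0s
```
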
<h3 id="tekton.dev/v1.Task">Task
</h3>
<div>
<p>Task represents a collection of sequential steps that are run as part of a
Pipeline using a set of inputs and producing a set of outputs. Tasks execute
when TaskRuns are created that provide the input parameters and resources and
output resources the Task requires.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>Task</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Task from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the task. Params
must be supplied as inputs in TaskRuns unless they declare a default
value.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1.Step">
[]Step
</a>
</em>
</td>
<td>
<p>Steps are the steps of the build; each step is run sequentially with the
source mounted into /workspace.</p>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
<p>Volumes is a collection of volumes that are available to mount into the
steps of the build.</p>
</td>
</tr>
<tr>
<td>
<code>stepTemplate</code><br/>
<em>
<a href="#tekton.dev/v1.StepTemplate">
StepTemplate
</a>
</em>
</td>
<td>
<p>StepTemplate can be used as the basis for all step containers within the
Task, so that the steps inherit settings on the base container.</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1.Sidecar">
[]Sidecar
</a>
</em>
</td>
<td>
<p>Sidecars are run alongside the Task’s step containers. They begin before
the steps start and end after the steps complete.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceDeclaration">
[]WorkspaceDeclaration
</a>
</em>
</td>
<td>
<p>Workspaces are the volumes that this Task requires.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.TaskResult">
[]TaskResult
</a>
</em>
</td>
<td>
<p>Results are values that this Task can output</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
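
For illustration, a minimal `Task` that declares a parameter, a result, and a single step might look like this sketch (names are placeholders):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: example-task
spec:
  params:
    - name: target
      type: string
  results:
    - name: greeting
      description: The greeting that was produced
  steps:
    - name: greet
      image: alpine
      script: |
        echo -n "hello $(params.target)" | tee $(results.greeting.path)
```
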
<h3 id="tekton.dev/v1.TaskRun">TaskRun
</h3>
<div>
<p>TaskRun represents a single execution of a Task. TaskRuns are how the steps
specified in a Task are executed; they specify the parameters and resources
used to run the steps in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>TaskRun</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSpec">
TaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>debug</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunDebug">
TaskRunDebug
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>no more than one of the TaskRef and TaskSpec may be specified.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying TaskSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSpecStatus">
TaskRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a TaskRun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSpecStatusMessage">
TaskRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this TaskRun should be retried in the event of task failure.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which one retry attempt times out. Defaults to 1 hour.
Refer Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
<tr>
<td>
<code>stepSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStepSpec">
[]TaskRunStepSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specs to apply to Steps in this TaskRun.
If a field is specified in both a Step and a StepSpec,
the value from the StepSpec will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>sidecarSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSidecarSpec">
[]TaskRunSidecarSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specs to apply to Sidecars in this TaskRun.
If a field is specified in both a Sidecar and a SidecarSpec,
the value from the SidecarSpec will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStatus">
TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
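
And a minimal `TaskRun` for the task sketched above, again only as an illustration:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: example-task-run
spec:
  taskRef:
    name: example-task
  params:
    - name: target
      value: "world"
  timeout: 30m
```
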
<h3 id="tekton.dev/v1.Algorithm">Algorithm
(<code>string</code> alias)</h3>
<div>
<p>Algorithm Standard cryptographic hash algorithm</p>
</div>
<h3 id="tekton.dev/v1.Artifact">Artifact
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Artifacts">Artifacts</a>, <a href="#tekton.dev/v1.StepState">StepState</a>)
</p>
<div>
<p>TaskRunStepArtifact represents an artifact produced or used by a step within a task run.
It directly uses the Artifact type for its structure.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The artifact’s identifying category name</p>
</td>
</tr>
<tr>
<td>
<code>values</code><br/>
<em>
<a href="#tekton.dev/v1.ArtifactValue">
[]ArtifactValue
</a>
</em>
</td>
<td>
<p>A collection of values related to the artifact</p>
</td>
</tr>
<tr>
<td>
<code>buildOutput</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Indicate if the artifact is a build output or a by-product</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.ArtifactValue">ArtifactValue
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Artifact">Artifact</a>)
</p>
<div>
<p>ArtifactValue represents a specific value or data element within an Artifact.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>digest</code><br/>
<em>
map[github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.Algorithm]string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>uri</code><br/>
<em>
string
</em>
</td>
<td>
<p>Algorithm-specific digests for verifying the content (e.g., SHA256)</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.Artifacts">Artifacts
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>Artifacts represents the collection of input and output artifacts associated with
a task run or a similar process. Artifacts in this context are units of data or resources
that the process either consumes as input or produces as output.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.ChildStatusReference">ChildStatusReference
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the TaskRun or Run this is referencing.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<p>DisplayName is a user-facing name of the pipelineTask that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask this is referencing.</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.Combination">Combination
(<code>map[string]string</code> alias)</h3>
<div>
<p>Combination is a map, mainly defined to hold a single combination from a Matrix with key as param.Name and value as param.Value</p>
</div>
<h3 id="tekton.dev/v1.Combinations">Combinations
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.Combination</code> alias)</h3>
<div>
<p>Combinations is a Combination list</p>
</div>
<h3 id="tekton.dev/v1.EmbeddedTask">EmbeddedTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>EmbeddedTask is used to define a Task inline within a Pipeline’s PipelineTasks.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>spec</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>-</code><br/>
<em>
[]byte
</em>
</td>
<td>
<p>Raw is the underlying serialization of this object.</p>
<p>TODO: Determine how to detect ContentType and ContentEncoding of ‘Raw’ data.</p>
</td>
</tr>
<tr>
<td>
<code>-</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.Object
</em>
</td>
<td>
<p>Object can hold a representation of this extension - useful for working with versioned
structs.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>TaskSpec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<p>
(Members of <code>TaskSpec</code> are embedded into this type.)
</p>
<em>(Optional)</em>
<p>TaskSpec is a specification of a task</p>
</td>
</tr>
</tbody>
</table>
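<p>A minimal Go sketch of an <code>EmbeddedTask</code> that inlines a <code>TaskSpec</code> (the embedded
members documented above), assuming the <code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code>
package; the step name, image and script are placeholders.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	// An inline task definition: members of TaskSpec are embedded directly
	// into EmbeddedTask, so the steps are set on the embedded TaskSpec field.
	embedded := pipelinev1.EmbeddedTask{
		TaskSpec: pipelinev1.TaskSpec{
			Steps: []pipelinev1.Step{{
				Name:   "echo",
				Image:  "docker.io/library/alpine", // placeholder image
				Script: "echo hello",
			}},
		},
	}

	out, _ := json.MarshalIndent(embedded, "", "  ")
	fmt.Println(string(out))
}
</code></pre>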
<h3 id="tekton.dev/v1.IncludeParams">IncludeParams
</h3>
<div>
<p>IncludeParams allows passing in specific combinations of Parameters into the Matrix.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the specified combination</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params takes only <code>Parameters</code> of type <code>“string”</code>.
The names of the <code>params</code> must match the names of the <code>params</code> in the underlying <code>Task</code>.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.Matrix">Matrix
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>Matrix is used to fan out Tasks in a Pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameters used to fan out the pipelineTask.
Params takes only <code>Parameters</code> of type <code>“array”</code>.
Each array element is supplied to the <code>PipelineTask</code> by substituting <code>params</code> of type <code>“string”</code> in the underlying <code>Task</code>.
The names of the <code>params</code> in the <code>Matrix</code> must match the names of the <code>params</code> in the underlying <code>Task</code> that they will be substituting (see the sketch below).</p>
</td>
</tr>
<tr>
<td>
<code>include</code><br/>
<em>
<a href="#tekton.dev/v1.IncludeParamsList">
IncludeParamsList
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix.</p>
</td>
</tr>
</tbody>
</table>
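<p>A minimal Go sketch of a <code>Matrix</code> that fans a PipelineTask out over two array parameters and
adds one explicit combination via <code>include</code>, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package; the parameter names and values are
placeholders and must match the params declared by the underlying Task.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	matrix := pipelinev1.Matrix{
		// Array params: one TaskRun is created per combination of array elements.
		Params: pipelinev1.Params{
			{Name: "GOOS", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeArray, ArrayVal: []string{"linux", "darwin"}}},
			{Name: "GOARCH", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeArray, ArrayVal: []string{"amd64", "arm64"}}},
		},
		// Include passes one specific combination of string params into the Matrix.
		Include: pipelinev1.IncludeParamsList{{
			Name: "s390x-linux",
			Params: pipelinev1.Params{
				{Name: "GOOS", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "linux"}},
				{Name: "GOARCH", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "s390x"}},
			},
		}},
	}

	out, _ := json.MarshalIndent(matrix, "", "  ")
	fmt.Println(string(out))
}
</code></pre>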
<h3 id="tekton.dev/v1.OnErrorType">OnErrorType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Step">Step</a>)
</p>
<div>
<p>OnErrorType defines the supported exiting behaviors of a container on error</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"continue"</p></td>
<td><p>Continue indicates that execution continues with the rest of the steps irrespective of the container exit code</p>
</td>
</tr><tr><td><p>"stopAndFail"</p></td>
<td><p>StopAndFail indicates that the taskRun exits if the container exits with a non-zero exit code</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.Param">Param
</h3>
<p>
(<em>Appears on:</em><a href="#resolution.tekton.dev/v1beta1.ResolutionRequestSpec">ResolutionRequestSpec</a>)
</p>
<div>
<p>Param declares a ParamValue to use for the parameter called name.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.ParamSpec">ParamSpec
</h3>
<div>
<p>ParamSpec defines arbitrary parameters needed beyond typed inputs (such as
resources). Parameter values are provided by users as inputs on a TaskRun
or PipelineRun.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name declares the name by which a parameter is referenced.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ParamType">
ParamType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the parameter. The possible types
are currently “string”, “array” and “object”, and “string” is the default.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the parameter that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>properties</code><br/>
<em>
<a href="#tekton.dev/v1.PropertySpec">
map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.PropertySpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Properties is the JSON Schema properties to support key-value pairs parameter.</p>
</td>
</tr>
<tr>
<td>
<code>default</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Default is the value a parameter takes if no input value is supplied. If
default is set, a Task may be executed without a supplied value for the
parameter.</p>
</td>
</tr>
<tr>
<td>
<code>enum</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Enum declares a set of allowed param input values for tasks/pipelines that can be validated.
If Enum is not set, no input validation is performed for the param.</p>
</td>
</tr>
</tbody>
</table>
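<p>A minimal Go sketch of a <code>ParamSpec</code> that declares a defaulted, enum-restricted string parameter,
assuming the <code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package; the parameter name and
allowed values are placeholders.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	spec := pipelinev1.ParamSpec{
		Name:        "environment",
		Type:        pipelinev1.ParamTypeString, // "string" is also the default when Type is omitted
		Description: "Target environment for the deployment.",
		// Because a default is set, a Task or Pipeline may run without a supplied value.
		Default: &pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "staging"},
		// Enum restricts the allowed input values; leaving it unset skips input validation.
		Enum: []string{"staging", "production"},
	}

	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
</code></pre>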
<h3 id="tekton.dev/v1.ParamSpecs">ParamSpecs
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.ParamSpec</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineSpec">PipelineSpec</a>, <a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>, <a href="#tekton.dev/v1alpha1.StepActionSpec">StepActionSpec</a>, <a href="#tekton.dev/v1beta1.StepActionSpec">StepActionSpec</a>)
</p>
<div>
<p>ParamSpecs is a list of ParamSpec</p>
</div>
<h3 id="tekton.dev/v1.ParamType">ParamType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1.ParamValue">ParamValue</a>, <a href="#tekton.dev/v1.PropertySpec">PropertySpec</a>)
</p>
<div>
<p>ParamType indicates the type of an input parameter;
Used to distinguish between a single string and an array of strings.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"array"</p></td>
<td></td>
</tr><tr><td><p>"object"</p></td>
<td></td>
</tr><tr><td><p>"string"</p></td>
<td></td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.ParamValue">ParamValue
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Param">Param</a>, <a href="#tekton.dev/v1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1.PipelineResult">PipelineResult</a>, <a href="#tekton.dev/v1.PipelineRunResult">PipelineRunResult</a>, <a href="#tekton.dev/v1.TaskResult">TaskResult</a>, <a href="#tekton.dev/v1.TaskRunResult">TaskRunResult</a>)
</p>
<div>
<p>ResultValue is a type alias of ParamValue</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Type</code><br/>
<em>
<a href="#tekton.dev/v1.ParamType">
ParamType
</a>
</em>
</td>
<td>
<p>Represents the stored type of ParamValues.</p>
</td>
</tr>
<tr>
<td>
<code>StringVal</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>ArrayVal</code><br/>
<em>
[]string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>ObjectVal</code><br/>
<em>
map[string]string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
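<p>A minimal Go sketch showing the three shapes a <code>ParamValue</code> can hold; the stored <code>Type</code>
selects which of <code>StringVal</code>, <code>ArrayVal</code> or <code>ObjectVal</code> carries the data. It assumes
the <code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package, and the values are placeholders.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	values := []pipelinev1.ParamValue{
		{Type: pipelinev1.ParamTypeString, StringVal: "main"},
		{Type: pipelinev1.ParamTypeArray, ArrayVal: []string{"unit", "integration"}},
		{Type: pipelinev1.ParamTypeObject, ObjectVal: map[string]string{"url": "https://example.com", "revision": "main"}},
	}

	for _, v := range values {
		// A ParamValue serializes as a plain string, array or object depending on its Type.
		out, _ := json.Marshal(v)
		fmt.Printf("%-6s -> %s\n", v.Type, out)
	}
}
</code></pre>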
<h3 id="tekton.dev/v1.Params">Params
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.Param</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.IncludeParams">IncludeParams</a>, <a href="#tekton.dev/v1.Matrix">Matrix</a>, <a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1.ResolverRef">ResolverRef</a>, <a href="#tekton.dev/v1.Step">Step</a>, <a href="#tekton.dev/v1.TaskRunInputs">TaskRunInputs</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>Params is a list of Param</p>
</div>
<h3 id="tekton.dev/v1.PipelineRef">PipelineRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineRef can be used to refer to a specific instance of a Pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referent; More info: <a href="http://kubernetes.io/docs/user-guide/identifiers#names">http://kubernetes.io/docs/user-guide/identifiers#names</a></p>
</td>
</tr>
<tr>
<td>
<code>apiVersion</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>API version of the referent</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a Pipeline in a remote location
like a git repo. This field is only supported when the alpha
feature gate is enabled.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineResult">PipelineResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>PipelineResult used to describe the results of a pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name of the result</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<p>Type is the user-specified type of the result.
The possible types are ‘string’, ‘array’, and ‘object’, with ‘string’ as the default.
‘array’ and ‘object’ types are alpha features.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human-readable description of the result</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value the expression used to retrieve the value</p>
</td>
</tr>
</tbody>
</table>
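<p>A minimal Go sketch of a <code>PipelineResult</code> whose value is retrieved from a task result, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package and a hypothetical pipelineTask named
<code>build</code> that declares a result named <code>digest</code>.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	result := pipelinev1.PipelineResult{
		Name:        "image-digest",
		Type:        pipelinev1.ResultsTypeString, // "string" is the default result type
		Description: "Digest of the image produced by the build task.",
		// Value is the expression used to retrieve the value, here a task result reference.
		Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "$(tasks.build.results.digest)"},
	}

	out, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(out))
}
</code></pre>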
<h3 id="tekton.dev/v1.PipelineRunReason">PipelineRunReason
(<code>string</code> alias)</h3>
<div>
<p>PipelineRunReason represents a reason for the pipeline run “Succeeded” condition</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"CELEvaluationFailed"</p></td>
<td><p>ReasonCELEvaluationFailed indicates the pipeline fails the CEL evaluation</p>
</td>
</tr><tr><td><p>"Cancelled"</p></td>
<td><p>PipelineRunReasonCancelled is the reason set when the PipelineRun is cancelled by the user.
This reason may be found with a corev1.ConditionFalse status, if the cancellation was processed successfully.
This reason may be found with a corev1.ConditionUnknown status, if the cancellation is being processed or failed.</p>
</td>
</tr><tr><td><p>"CancelledRunningFinally"</p></td>
<td><p>PipelineRunReasonCancelledRunningFinally indicates that the pipeline has been gracefully cancelled
and no new Tasks will be scheduled by the controller, but final tasks are now running</p>
</td>
</tr><tr><td><p>"Completed"</p></td>
<td><p>PipelineRunReasonCompleted is the reason set when the PipelineRun completed successfully with one or more skipped Tasks</p>
</td>
</tr><tr><td><p>"PipelineRunCouldntCancel"</p></td>
<td><p>ReasonCouldntCancel indicates that a PipelineRun was cancelled but attempting to update
all of the running TaskRuns as cancelled failed.</p>
</td>
</tr><tr><td><p>"CouldntGetPipeline"</p></td>
<td><p>ReasonCouldntGetPipeline indicates that the reason for the failure status is that the
associated Pipeline couldn’t be retrieved</p>
</td>
</tr><tr><td><p>"CouldntGetPipelineResult"</p></td>
<td><p>PipelineRunReasonCouldntGetPipelineResult indicates that the pipeline fails to retrieve the
referenced result. This could be due to failed TaskRuns or Runs that were supposed to produce
the results</p>
</td>
</tr><tr><td><p>"CouldntGetTask"</p></td>
<td><p>ReasonCouldntGetTask indicates that the reason for the failure status is that the
associated Pipeline’s Tasks couldn’t all be retrieved</p>
</td>
</tr><tr><td><p>"PipelineRunCouldntTimeOut"</p></td>
<td><p>ReasonCouldntTimeOut indicates that a PipelineRun was timed out but attempting to update
all of the running TaskRuns as timed out failed.</p>
</td>
</tr><tr><td><p>"CreateRunFailed"</p></td>
<td><p>ReasonCreateRunFailed indicates that the pipeline fails to create the taskrun or other run resources</p>
</td>
</tr><tr><td><p>"Failed"</p></td>
<td><p>PipelineRunReasonFailed is the reason set when the PipelineRun completed with a failure</p>
</td>
</tr><tr><td><p>"PipelineValidationFailed"</p></td>
<td><p>ReasonFailedValidation indicates that the reason for failure status is
that pipelinerun failed runtime validation</p>
</td>
</tr><tr><td><p>"InvalidPipelineResourceBindings"</p></td>
<td><p>ReasonInvalidBindings indicates that the reason for the failure status is that the
PipelineResources bound in the PipelineRun didn’t match those declared in the Pipeline</p>
</td>
</tr><tr><td><p>"PipelineInvalidGraph"</p></td>
<td><p>ReasonInvalidGraph indicates that the reason for the failure status is that the
associated Pipeline is an invalid graph (e.g. wrong order, cycle, …)</p>
</td>
</tr><tr><td><p>"InvalidMatrixParameterTypes"</p></td>
<td><p>ReasonInvalidMatrixParameterTypes indicates a matrix contains invalid parameter types</p>
</td>
</tr><tr><td><p>"InvalidParamValue"</p></td>
<td><p>PipelineRunReasonInvalidParamValue indicates that the PipelineRun Param input value is not allowed.</p>
</td>
</tr><tr><td><p>"InvalidPipelineResultReference"</p></td>
<td><p>PipelineRunReasonInvalidPipelineResultReference indicates a pipeline result was declared
by the pipeline but not initialized in the pipelineTask</p>
</td>
</tr><tr><td><p>"InvalidTaskResultReference"</p></td>
<td><p>ReasonInvalidTaskResultReference indicates a task result was declared
but was not initialized by that task</p>
</td>
</tr><tr><td><p>"InvalidTaskRunSpecs"</p></td>
<td><p>ReasonInvalidTaskRunSpec indicates that PipelineRun.Spec.TaskRunSpecs[].PipelineTaskName is defined with
a taskName that does not exist in the pipelineSpec.</p>
</td>
</tr><tr><td><p>"InvalidWorkspaceBindings"</p></td>
<td><p>ReasonInvalidWorkspaceBinding indicates that a Pipeline expects a workspace but a
PipelineRun has provided an invalid binding.</p>
</td>
</tr><tr><td><p>"ObjectParameterMissKeys"</p></td>
<td><p>ReasonObjectParameterMissKeys indicates that the object param value provided from PipelineRun spec
misses some keys required for the object param declared in Pipeline spec.</p>
</td>
</tr><tr><td><p>"ParamArrayIndexingInvalid"</p></td>
<td><p>ReasonParamArrayIndexingInvalid indicates that the use of param array indexing is out of bound.</p>
</td>
</tr><tr><td><p>"ParameterMissing"</p></td>
<td><p>ReasonParameterMissing indicates that the reason for the failure status is that the
associated PipelineRun didn’t provide all the required parameters</p>
</td>
</tr><tr><td><p>"ParameterTypeMismatch"</p></td>
<td><p>ReasonParameterTypeMismatch indicates that the reason for the failure status is that
parameter(s) declared in the PipelineRun do not have the same declared type as the
parameter(s) declared in the Pipeline that they are supposed to override.</p>
</td>
</tr><tr><td><p>"PipelineRunPending"</p></td>
<td><p>PipelineRunReasonPending is the reason set when the PipelineRun is in the pending state</p>
</td>
</tr><tr><td><p>"RequiredWorkspaceMarkedOptional"</p></td>
<td><p>ReasonRequiredWorkspaceMarkedOptional indicates an optional workspace
has been passed to a Task that is expecting a non-optional workspace</p>
</td>
</tr><tr><td><p>"ResolvingPipelineRef"</p></td>
<td><p>ReasonResolvingPipelineRef indicates that the PipelineRun is waiting for
its pipelineRef to be asynchronously resolved.</p>
</td>
</tr><tr><td><p>"ResourceVerificationFailed"</p></td>
<td><p>ReasonResourceVerificationFailed indicates that the pipeline fails the trusted resource verification,
it could be the content has changed, signature is invalid or public key is invalid</p>
</td>
</tr><tr><td><p>"Running"</p></td>
<td><p>PipelineRunReasonRunning is the reason set when the PipelineRun is running</p>
</td>
</tr><tr><td><p>"Started"</p></td>
<td><p>PipelineRunReasonStarted is the reason set when the PipelineRun has just started</p>
</td>
</tr><tr><td><p>"StoppedRunningFinally"</p></td>
<td><p>PipelineRunReasonStoppedRunningFinally indicates that the pipeline has been gracefully stopped
and no new Tasks will be scheduled by the controller, but final tasks are now running</p>
</td>
</tr><tr><td><p>"PipelineRunStopping"</p></td>
<td><p>PipelineRunReasonStopping indicates that no new Tasks will be scheduled by the controller, and the
pipeline will stop once all running tasks complete their work</p>
</td>
</tr><tr><td><p>"Succeeded"</p></td>
<td><p>PipelineRunReasonSuccessful is the reason set when the PipelineRun completed successfully</p>
</td>
</tr><tr><td><p>"PipelineRunTimeout"</p></td>
<td><p>PipelineRunReasonTimedOut is the reason set when the PipelineRun has timed out</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.PipelineRunResult">PipelineRunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>PipelineRunResult used to describe the results of a pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the result’s name as declared by the Pipeline</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value is the result returned from the execution of this PipelineRun</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineRunRunStatus">PipelineRunRunStatus
</h3>
<div>
<p>PipelineRunRunStatus contains the name of the PipelineTask for this Run and the Run’s Status</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunStatus">
CustomRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status is the RunStatus for the corresponding Run</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineRunSpec">PipelineRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRun">PipelineRun</a>)
</p>
<div>
<p>PipelineRunSpec defines the desired state of PipelineRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameter names and values.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunSpecStatus">
PipelineRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a pipelinerun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>timeouts</code><br/>
<em>
<a href="#tekton.dev/v1.TimeoutFields">
TimeoutFields
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the Pipeline times out.
Currently three keys are accepted in the map: pipeline, tasks and finally,
with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally (see the sketch below).</p>
</td>
</tr>
<tr>
<td>
<code>taskRunTemplate</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskRunTemplate">
PipelineTaskRunTemplate
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunTemplate represents the template of a TaskRun</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces holds a set of workspace bindings that must match names
with those declared in the pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>taskRunSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskRunSpec">
[]PipelineTaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunSpecs holds a set of runtime specs</p>
</td>
</tr>
</tbody>
</table>
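<p>A minimal Go sketch of a <code>PipelineRunSpec</code> that references a pipeline by name, supplies a
parameter and sets the three timeout keys so that pipeline >= tasks + finally, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package; the names, values and durations are
placeholders.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	pipelineTimeout := metav1.Duration{Duration: 60 * time.Minute}
	tasksTimeout := metav1.Duration{Duration: 50 * time.Minute}
	finallyTimeout := metav1.Duration{Duration: 10 * time.Minute}

	spec := pipelinev1.PipelineRunSpec{
		PipelineRef: &pipelinev1.PipelineRef{Name: "build-and-deploy"}, // placeholder pipeline name
		Params: pipelinev1.Params{
			{Name: "environment", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "staging"}},
		},
		Timeouts: &pipelinev1.TimeoutFields{
			Pipeline: &pipelineTimeout, // 60m >= 50m (tasks) + 10m (finally)
			Tasks:    &tasksTimeout,
			Finally:  &finallyTimeout,
		},
	}

	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
</code></pre>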
<h3 id="tekton.dev/v1.PipelineRunSpecStatus">PipelineRunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>PipelineRunSpecStatus defines the pipelinerun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1.PipelineRunStatus">PipelineRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRun">PipelineRun</a>)
</p>
<div>
<p>PipelineRunStatus defines the observed state of PipelineRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>PipelineRunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunStatusFields">
PipelineRunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>PipelineRunStatusFields</code> are embedded into this type.)
</p>
<p>PipelineRunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunStatus">PipelineRunStatus</a>)
</p>
<div>
<p>PipelineRunStatusFields holds the fields of PipelineRunStatus’ status.
This is defined separately and inlined so that other types can readily
consume these fields via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>StartTime is the time the PipelineRun is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>CompletionTime is the time the PipelineRun completed.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRunResult">
[]PipelineRunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are the list of results written out by the pipeline task’s containers</p>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<p>PipelineSpec contains the exact spec used to instantiate the run</p>
</td>
</tr>
<tr>
<td>
<code>skippedTasks</code><br/>
<em>
<a href="#tekton.dev/v1.SkippedTask">
[]SkippedTask
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>list of tasks that were skipped due to when expressions evaluating to false</p>
</td>
</tr>
<tr>
<td>
<code>childReferences</code><br/>
<em>
<a href="#tekton.dev/v1.ChildStatusReference">
[]ChildStatusReference
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>list of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>finallyStartTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed.</p>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1.Provenance">
Provenance
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.).</p>
</td>
</tr>
<tr>
<td>
<code>spanContext</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>SpanContext contains tracing span context fields</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus
</h3>
<div>
<p>PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun’s Status</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStatus">
TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status is the TaskRunStatus for the corresponding TaskRun</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineSpec">PipelineSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Pipeline">Pipeline</a>, <a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields</a>, <a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineSpec defines the desired state of Pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<p>Params declares a list of input parameters that must be supplied when
this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineWorkspaceDeclaration">
[]PipelineWorkspaceDeclaration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces declares a set of named workspaces that are expected to be
provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineResult">
[]PipelineResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this pipeline can output once run</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Finally declares the list of Tasks that execute just before leaving the Pipeline
i.e. either after all Tasks are finished executing successfully
or after a failure which would result in ending the Pipeline</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTask">PipelineTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>PipelineTask defines a task in a Pipeline, passing inputs from both
Params and from the output of previous tasks.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of this task within the context of a Pipeline. Name is
used as a coordinate with the <code>from</code> and <code>runAfter</code> fields to establish
the execution order of tasks relative to one another.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is the display name of this task within the context of a Pipeline.
This display name may be used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is the description of this task within the context of a Pipeline.
This description may be used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRef is a reference to a task definition.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1.EmbeddedTask">
EmbeddedTask
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskSpec is a specification of a task.
Specifying TaskSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>when</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpressions">
WhenExpressions
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>When is a list of when expressions that need to be true for the task to run</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this task should be retried in case of task failure: ConditionSucceeded set to False</p>
</td>
</tr>
<tr>
<td>
<code>runAfter</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>RunAfter is the list of PipelineTask names that should be executed before
this Task executes. (Used to force a specific ordering in graph execution.)</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters declares parameters passed to this task.</p>
</td>
</tr>
<tr>
<td>
<code>matrix</code><br/>
<em>
<a href="#tekton.dev/v1.Matrix">
Matrix
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Matrix declares parameters used to fan out this task.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspacePipelineTaskBinding">
[]WorkspacePipelineTaskBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces maps workspaces from the pipeline spec to the workspaces
declared in the Task.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the TaskRun times out. Defaults to 1 hour.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PipelineRef is a reference to a pipeline definition.
Note: PipelineRef is in preview mode and not yet supported.</p>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PipelineSpec is a specification of a pipeline.
Note: PipelineSpec is in preview mode and not yet supported.
Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>onError</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskOnErrorType">
PipelineTaskOnErrorType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>OnError defines the exiting behavior of a PipelineRun on error.
It can be set to [ continue | stopAndFail ].</p>
</td>
</tr>
</tbody>
</table>
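<p>A minimal Go sketch of a <code>PipelineTask</code> that references a Task by name, orders itself after other
tasks with <code>runAfter</code>, passes a parameter and continues the DAG on error, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package; the task names and the result reference
are placeholders.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	task := pipelinev1.PipelineTask{
		Name:     "deploy",                                // coordinate used by runAfter in other tasks
		TaskRef:  &pipelinev1.TaskRef{Name: "deploy-app"}, // placeholder Task name
		RunAfter: []string{"build", "test"},               // forces ordering relative to these PipelineTasks
		Retries:  2,                                       // retry on failure (ConditionSucceeded set to False)
		Params: pipelinev1.Params{
			// Hypothetical reference to a result declared by the "build" task.
			{Name: "image", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "$(tasks.build.results.image)"}},
		},
		OnError: "continue", // keep executing the rest of the DAG if this task fails
	}

	out, _ := json.MarshalIndent(task, "", "  ")
	fmt.Println(string(out))
}
</code></pre>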
<h3 id="tekton.dev/v1.PipelineTaskMetadata">PipelineTaskMetadata
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.EmbeddedTask">EmbeddedTask</a>, <a href="#tekton.dev/v1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>)
</p>
<div>
<p>PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>labels</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>annotations</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTaskOnErrorType">PipelineTaskOnErrorType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"continue"</p></td>
<td><p>PipelineTaskContinue indicates to continue executing the rest of the DAG when the PipelineTask fails</p>
</td>
</tr><tr><td><p>"stopAndFail"</p></td>
<td><p>PipelineTaskStopAndFail indicates to stop and fail the PipelineRun if the PipelineTask fails</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTaskParam">PipelineTaskParam
</h3>
<div>
<p>PipelineTaskParam is used to provide arbitrary string parameters to a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTaskRun">PipelineTaskRun
</h3>
<div>
<p>PipelineTaskRun reports the results of running a step in the Task. Each
task has the potential to succeed or fail (based on the exit code)
and produces logs.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTaskRunSpec">PipelineTaskRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>PipelineTaskRunSpec can be used to configure specific
specs for a concrete Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>stepSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStepSpec">
[]TaskRunStepSpec
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>sidecarSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSidecarSpec">
[]TaskRunSidecarSpec
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineTaskRunTemplate">PipelineTaskRunTemplate
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>PipelineTaskRunTemplate is used to specify run specifications for all Tasks in a PipelineRun.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.PipelineWorkspaceDeclaration">PipelineWorkspaceDeclaration
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun
is expected to populate with a workspace binding.</p>
<p>Deprecated: use PipelineWorkspaceDeclaration type instead</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of a workspace to be provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human readable string describing how the workspace will be
used in the Pipeline. It can be useful to include a bit of detail about which
tasks are intended to have access to the data on the workspace.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional marks a Workspace as not being required in PipelineRuns. By default
this field is false and so declared workspaces are required.</p>
</td>
</tr>
</tbody>
</table>
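<p>A minimal Go sketch pairing a <code>PipelineWorkspaceDeclaration</code> (the named slot declared by the
Pipeline) with the <code>WorkspaceBinding</code> a PipelineRun would supply for it, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> and <code>k8s.io/api/core/v1</code> packages; the
workspace name and the emptyDir volume are placeholders.</p>
<pre><code>package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	// Declared on the Pipeline: a required slot named "shared-data".
	decl := pipelinev1.PipelineWorkspaceDeclaration{
		Name:        "shared-data",
		Description: "Holds data shared between the pipeline's tasks.",
		Optional:    false, // declared workspaces are required unless marked optional
	}

	// Supplied by the PipelineRun: the binding name must match the declaration.
	binding := pipelinev1.WorkspaceBinding{
		Name:     decl.Name,
		EmptyDir: &corev1.EmptyDirVolumeSource{}, // placeholder volume source
	}

	fmt.Printf("declared %q (optional=%v), bound via %q\n", decl.Name, decl.Optional, binding.Name)
}
</code></pre>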
<h3 id="tekton.dev/v1.PropertySpec">PropertySpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1.StepResult">StepResult</a>, <a href="#tekton.dev/v1.TaskResult">TaskResult</a>)
</p>
<div>
<p>PropertySpec defines the struct for object keys</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ParamType">
ParamType
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.Provenance">Provenance
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields</a>, <a href="#tekton.dev/v1.StepState">StepState</a>, <a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>Provenance contains metadata about resources used in the TaskRun/PipelineRun
such as the source from where a remote build definition was fetched.
This field aims to carry the minimum amount of metadata in *Run status so that
Tekton Chains can capture them in the provenance.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>refSource</code><br/>
<em>
<a href="#tekton.dev/v1.RefSource">
RefSource
</a>
</em>
</td>
<td>
<p>RefSource identifies the source where a remote task/pipeline came from.</p>
</td>
</tr>
<tr>
<td>
<code>featureFlags</code><br/>
<em>
github.com/tektoncd/pipeline/pkg/apis/config.FeatureFlags
</em>
</td>
<td>
<p>FeatureFlags identifies the feature flags that were used during the task/pipeline run</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.Ref">Ref
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Step">Step</a>)
</p>
<div>
<p>Ref can be used to refer to a specific instance of a StepAction.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referenced step</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a StepAction in a remote location
like a git repo.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.RefSource">RefSource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Provenance">Provenance</a>, <a href="#resolution.tekton.dev/v1alpha1.ResolutionRequestStatusFields">ResolutionRequestStatusFields</a>, <a href="#resolution.tekton.dev/v1beta1.ResolutionRequestStatusFields">ResolutionRequestStatusFields</a>)
</p>
<div>
<p>RefSource contains the information that can uniquely identify where a remote
build definition came from, e.g. Git repositories, Tekton Bundles in an OCI registry
and the hub.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>uri</code><br/>
<em>
string
</em>
</td>
<td>
<p>URI indicates the identity of the source of the build definition.
Example: “<a href="https://github.com/tektoncd/catalog">https://github.com/tektoncd/catalog</a>”</p>
</td>
</tr>
<tr>
<td>
<code>digest</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.
Example: {“sha1”: “f99d13e554ffcb696dee719fa85b695cb5b0f428”}</p>
</td>
</tr>
<tr>
<td>
<code>entryPoint</code><br/>
<em>
string
</em>
</td>
<td>
<p>EntryPoint identifies the entry point into the build. This is often a path to a
build definition file and/or a target label within that file.
Example: “task/git-clone/0.8/git-clone.yaml”</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.ResolverName">ResolverName
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.ResolverRef">ResolverRef</a>)
</p>
<div>
<p>ResolverName is the name of a resolver from which a resource can be
requested.</p>
</div>
<h3 id="tekton.dev/v1.ResolverRef">ResolverRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRef">PipelineRef</a>, <a href="#tekton.dev/v1.Ref">Ref</a>, <a href="#tekton.dev/v1.TaskRef">TaskRef</a>)
</p>
<div>
<p>ResolverRef can be used to refer to a Pipeline or Task in a remote
location like a git repo. This feature is in beta and these fields
are only available when the beta feature gate is enabled.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resolver</code><br/>
<em>
<a href="#tekton.dev/v1.ResolverName">
ResolverName
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Resolver is the name of the resolver that should perform
resolution of the referenced Tekton resource, such as “git”.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params contains the parameters used to identify the
referenced Tekton resource. Example entries might include
“repo” or “path” but the set of params ultimately depends on
the chosen resolver.</p>
</td>
</tr>
</tbody>
</table>
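<p>A minimal Go sketch of a <code>PipelineRef</code> that uses the embedded <code>ResolverRef</code> to fetch a
pipeline from a remote location, assuming the <code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code>
package. The resolver name “git” follows the example in the field description above, and the parameter names
and values are hypothetical; the actual set of params depends on the chosen resolver.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	ref := pipelinev1.PipelineRef{
		ResolverRef: pipelinev1.ResolverRef{
			Resolver: "git", // name of the resolver that performs the resolution
			Params: pipelinev1.Params{
				// Hypothetical parameter names, as suggested in the field description above.
				{Name: "repo", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "https://github.com/tektoncd/catalog"}},
				{Name: "path", Value: pipelinev1.ParamValue{Type: pipelinev1.ParamTypeString, StringVal: "task/git-clone/0.8/git-clone.yaml"}},
			},
		},
	}

	out, _ := json.MarshalIndent(ref, "", "  ")
	fmt.Println(string(out))
}
</code></pre>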
<h3 id="tekton.dev/v1.ResultRef">ResultRef
</h3>
<div>
<p>ResultRef is a type that represents a reference to a task run result</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTask</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>result</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>resultsIndex</code><br/>
<em>
int
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>property</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
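<p>A minimal Go sketch of a <code>ResultRef</code>, assuming the
<code>github.com/tektoncd/pipeline/pkg/apis/pipeline/v1</code> package; the pipelineTask and result names are
hypothetical, and <code>resultsIndex</code> and <code>property</code> are only meaningful when the referenced
result is an array or an object.</p>
<pre><code>package main

import (
	"fmt"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

func main() {
	ref := pipelinev1.ResultRef{
		PipelineTask: "build",   // the task whose result is referenced
		Result:       "digests", // the name of that task's result
		ResultsIndex: 0,         // index into an array result
		Property:     "",        // key into an object result, when applicable
	}

	fmt.Printf("reference to result %q of pipelineTask %q\n", ref.Result, ref.PipelineTask)
}
</code></pre>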
<h3 id="tekton.dev/v1.ResultsType">ResultsType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineResult">PipelineResult</a>, <a href="#tekton.dev/v1.StepResult">StepResult</a>, <a href="#tekton.dev/v1.TaskResult">TaskResult</a>, <a href="#tekton.dev/v1.TaskRunResult">TaskRunResult</a>)
</p>
<div>
<p>ResultsType indicates the type of a result;
Used to distinguish between a single string and an array of strings.
Note that there is ResultType used to find out whether a
RunResult is from a task result or not, which is different from
this ResultsType.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"array"</p></td>
<td></td>
</tr><tr><td><p>"object"</p></td>
<td></td>
</tr><tr><td><p>"string"</p></td>
<td></td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.Sidecar">Sidecar
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>Sidecar has nearly the same data structure as Step but does not have the ability to timeout.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the Sidecar specified as a DNS_LABEL.
Each Sidecar in a Task must have a unique name (DNS_LABEL).
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Sidecar’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Sidecar’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Sidecar’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>ports</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core">
[]Kubernetes core/v1.ContainerPort
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of ports to expose from the Sidecar. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default “0.0.0.0” address inside a container will be
accessible from the network.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the Sidecar.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the Sidecar.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ComputeResources required by this Sidecar.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Sidecar’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Sidecar.</p>
</td>
</tr>
<tr>
<td>
<code>livenessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of Sidecar liveness.
Container will be restarted if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>readinessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of Sidecar service readiness.
Container will be removed from service endpoints if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>startupProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
This cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>lifecycle</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core">
Kubernetes core/v1.Lifecycle
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Actions that the management system should take in response to Sidecar lifecycle events.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Optional: Path at which the file to which the Sidecar’s termination message
will be written is mounted into the Sidecar’s filesystem.
Message written is intended to be brief final status, such as an assertion failure message.
Will be truncated by the node if greater than 4096 bytes. The total message length across
all containers will be limited to 12kb.
Defaults to /dev/termination-log.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core">
Kubernetes core/v1.TerminationMessagePolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Indicate how the termination message should be populated. File will use the contents of
terminationMessagePath to populate the Sidecar status message on both success and failure.
FallbackToLogsOnError will use the last chunk of Sidecar log output if the termination
message file is empty and the Sidecar exited with an error.
The log output is limited to 2048 bytes or 80 lines, whichever is smaller.
Defaults to File.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Sidecar should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
<tr>
<td>
<code>stdin</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the Sidecar will always result in EOF.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>stdinOnce</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the Sidecar is restarted. If this
flag is false, a container process that reads from stdin will never receive an EOF.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>tty</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Sidecar should allocate a TTY for itself, also requires ‘stdin’ to be true.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Sidecar cannot have a Command or Args.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceUsage">
[]WorkspaceUsage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>This is an alpha field. You must set the “enable-api-fields” feature flag to “alpha”
for this field to be supported.</p>
<p>Workspaces is a list of workspaces from the Task that this Sidecar wants
exclusive access to. Adding a workspace to this list means that any
other Step or Sidecar that does not also request this Workspace will
not have access to it.</p>
</td>
</tr>
<tr>
<td>
<code>restartPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerrestartpolicy-v1-core">
Kubernetes core/v1.ContainerRestartPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RestartPolicy refers to kubernetes RestartPolicy. It can only be set for an
initContainer and must have its policy set to “Always”. It is currently
left optional to help support Kubernetes versions prior to 1.29 when this feature
was introduced.</p>
</td>
</tr>
</tbody>
</table>
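<p>Example (a minimal, hedged sketch): the Sidecar fields above are used inside a Task manifest. The resource names and images (<code>mock-server</code>, <code>nginx</code>, <code>curlimages/curl</code>) are illustrative assumptions, not part of the API.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: integration-test           # illustrative name
spec:
  sidecars:
    - name: mock-server            # illustrative sidecar
      image: nginx:1.25            # illustrative image
      # script replaces command/args; a Sidecar cannot set both
      script: |
        nginx -g 'daemon off;'
  steps:
    - name: run-tests
      image: curlimages/curl       # illustrative image
      script: |
        curl -sf http://localhost:80/
</code></pre>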
<h3 id="tekton.dev/v1.SidecarState">SidecarState
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>SidecarState reports the results of running a sidecar in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ContainerState</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core">
Kubernetes core/v1.ContainerState
</a>
</em>
</td>
<td>
<p>
(Members of <code>ContainerState</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>container</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>imageID</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.SkippedTask">SkippedTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>SkippedTask is used to describe the Tasks that were skipped due to their When Expressions
evaluating to False. This is a struct because we are looking into including more details
about the When Expressions that caused this Task to be skipped.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the Pipeline Task name</p>
</td>
</tr>
<tr>
<td>
<code>reason</code><br/>
<em>
<a href="#tekton.dev/v1.SkippingReason">
SkippingReason
</a>
</em>
</td>
<td>
<p>Reason is the cause of the PipelineTask being skipped.</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.SkippingReason">SkippingReason
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.SkippedTask">SkippedTask</a>)
</p>
<div>
<p>SkippingReason explains why a PipelineTask was skipped.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"Matrix Parameters have an empty array"</p></td>
<td><p>EmptyArrayInMatrixParams means the task was skipped because Matrix parameters contain empty array.</p>
</td>
</tr><tr><td><p>"PipelineRun Finally timeout has been reached"</p></td>
<td><p>FinallyTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts.Finally.</p>
</td>
</tr><tr><td><p>"PipelineRun was gracefully cancelled"</p></td>
<td><p>GracefullyCancelledSkip means the task was skipped because the pipeline run has been gracefully cancelled</p>
</td>
</tr><tr><td><p>"PipelineRun was gracefully stopped"</p></td>
<td><p>GracefullyStoppedSkip means the task was skipped because the pipeline run has been gracefully stopped</p>
</td>
</tr><tr><td><p>"Results were missing"</p></td>
<td><p>MissingResultsSkip means the task was skipped because it’s missing necessary results</p>
</td>
</tr><tr><td><p>"None"</p></td>
<td><p>None means the task was not skipped</p>
</td>
</tr><tr><td><p>"Parent Tasks were skipped"</p></td>
<td><p>ParentTasksSkip means the task was skipped because its parent was skipped</p>
</td>
</tr><tr><td><p>"PipelineRun timeout has been reached"</p></td>
<td><p>PipelineTimedOutSkip means the task was skipped because the PipelineRun has passed its overall timeout.</p>
</td>
</tr><tr><td><p>"PipelineRun was stopping"</p></td>
<td><p>StoppingSkip means the task was skipped because the pipeline run is stopping</p>
</td>
</tr><tr><td><p>"PipelineRun Tasks timeout has been reached"</p></td>
<td><p>TasksTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts.Tasks.</p>
</td>
</tr><tr><td><p>"When Expressions evaluated to false"</p></td>
<td><p>WhenExpressionsSkip means the task was skipped due to at least one of its when expressions evaluating to false</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.Step">Step
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>Step runs a subcomponent of a Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the Step specified as a DNS_LABEL.
Each Step in a Task must have a unique name.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Docker image name.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the Step.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the Step is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the Step.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ComputeResources required by this Step.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Step.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Timeout is the time after which the step times out. Defaults to never.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceUsage">
[]WorkspaceUsage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>This is an alpha field. You must set the “enable-api-fields” feature flag to “alpha”
for this field to be supported.</p>
<p>Workspaces is a list of workspaces from the Task that this Step wants
exclusive access to. Adding a workspace to this list means that any
other Step or Sidecar that does not also request this Workspace will
not have access to it.</p>
</td>
</tr>
<tr>
<td>
<code>onError</code><br/>
<em>
<a href="#tekton.dev/v1.OnErrorType">
OnErrorType
</a>
</em>
</td>
<td>
<p>OnError defines the exiting behavior of a container on error
can be set to [ continue | stopAndFail ]</p>
</td>
</tr>
<tr>
<td>
<code>stdoutConfig</code><br/>
<em>
<a href="#tekton.dev/v1.StepOutputConfig">
StepOutputConfig
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Stores configuration for the stdout stream of the step.</p>
</td>
</tr>
<tr>
<td>
<code>stderrConfig</code><br/>
<em>
<a href="#tekton.dev/v1.StepOutputConfig">
StepOutputConfig
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Stores configuration for the stderr stream of the step.</p>
</td>
</tr>
<tr>
<td>
<code>ref</code><br/>
<em>
<a href="#tekton.dev/v1.Ref">
Ref
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Contains the reference to an existing StepAction.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params declares parameters passed to this step action.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results declares StepResults produced by the Step.</p>
<p>This field is at an ALPHA stability level and gated by the “enable-step-actions” feature flag.</p>
<p>It can be used in an inlined Step to store Results to $(step.results.resultName.path).
It cannot be used when referencing StepActions using [v1.Step.Ref].
The Results declared by the StepActions will be stored here instead.</p>
</td>
</tr>
<tr>
<td>
<code>when</code><br/>
<em>
<a href="#tekton.dev/v1.WhenExpressions">
WhenExpressions
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>When is a list of when expressions that need to be true for the task to run</p>
</td>
</tr>
</tbody>
</table>
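<p>Example (a minimal, hedged sketch): how the Step fields above combine inside a Task. The image, paths, and names are illustrative; <code>onError</code>, <code>timeout</code>, and <code>stdoutConfig</code> are used as documented in the table.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: step-fields-demo           # illustrative name
spec:
  steps:
    - name: build
      image: golang:1.22           # illustrative image
      workingDir: /workspace/source
      env:
        - name: CGO_ENABLED
          value: "0"
      # script excludes command; args would be passed to the script
      script: |
        go build ./...
      timeout: 10m
      onError: continue            # continue | stopAndFail
      stdoutConfig:
        path: /workspace/build.log
</code></pre>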
<h3 id="tekton.dev/v1.StepOutputConfig">StepOutputConfig
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Step">Step</a>)
</p>
<div>
<p>StepOutputConfig stores configuration for a step output stream.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>path</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Path to duplicate stdout stream to on container’s local filesystem.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.StepResult">StepResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Step">Step</a>, <a href="#tekton.dev/v1alpha1.StepActionSpec">StepActionSpec</a>, <a href="#tekton.dev/v1beta1.Step">Step</a>, <a href="#tekton.dev/v1beta1.StepActionSpec">StepActionSpec</a>)
</p>
<div>
<p>StepResult used to describe the Results of a Step.</p>
<p>This field is at a BETA stability level and gated by the “enable-step-actions” feature flag.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name of the result.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>The possible types are ‘string’, ‘array’, and ‘object’, with ‘string’ as the default.</p>
</td>
</tr>
<tr>
<td>
<code>properties</code><br/>
<em>
<a href="#tekton.dev/v1.PropertySpec">
map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.PropertySpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Properties is the JSON Schema properties to support key-value pairs results.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human-readable description of the result</p>
</td>
</tr>
</tbody>
</table>
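<p>Example (a hedged sketch): a StepResult declared on an inlined Step, assuming the “enable-step-actions” feature flag is enabled. The step name, image, and result value are illustrative.</p>
<pre><code>steps:
  - name: collect                  # illustrative step
    image: bash:5                  # illustrative image
    results:
      - name: digest
        type: string
        description: Digest of the built image
    script: |
      # write the result to the documented step result path
      echo -n "sha256:abc123" > $(step.results.digest.path)
</code></pre>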
<h3 id="tekton.dev/v1.StepState">StepState
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>StepState reports the results of running a step in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ContainerState</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core">
Kubernetes core/v1.ContainerState
</a>
</em>
</td>
<td>
<p>
(Members of <code>ContainerState</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>container</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>imageID</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunResult">
[]TaskRunResult
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1.Provenance">
Provenance
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>terminationReason</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.StepTemplate">StepTemplate
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>StepTemplate is a template for a Step</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Step’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Step’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the Step.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the Step is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the Step.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ComputeResources required by this Step.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Step.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
</tbody>
</table>
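<p>Example (a hedged sketch): a StepTemplate in use. Values set on the template are inherited by every step container unless the step overrides them; names, images, and the environment variable are illustrative.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: step-template-demo         # illustrative name
spec:
  stepTemplate:
    env:
      - name: LOG_LEVEL            # illustrative variable
        value: info
    securityContext:
      runAsNonRoot: true
  steps:
    - name: first
      image: alpine:3.19           # illustrative image
      script: echo "LOG_LEVEL=$LOG_LEVEL"
    - name: second
      image: alpine:3.19
      env:
        - name: LOG_LEVEL          # step-level value overrides the template
          value: debug
      script: echo "LOG_LEVEL=$LOG_LEVEL"
</code></pre>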
<h3 id="tekton.dev/v1.TaskBreakpoints">TaskBreakpoints
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunDebug">TaskRunDebug</a>)
</p>
<div>
<p>TaskBreakpoints defines the breakpoint config for a particular Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>onFailure</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>If enabled, pause the TaskRun on failure of a step;
the failed step will not exit.</p>
</td>
</tr>
<tr>
<td>
<code>beforeSteps</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskKind">TaskKind
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRef">TaskRef</a>)
</p>
<div>
<p>TaskKind defines the type of Task used by the pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"ClusterTask"</p></td>
<td><p>ClusterTaskRefKind is the task type for a reference to a task with cluster scope.
ClusterTasks are not supported in v1, but v1 types may reference ClusterTasks.</p>
</td>
</tr><tr><td><p>"Task"</p></td>
<td><p>NamespacedTaskKind indicates that the task type has a namespaced scope.</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.TaskRef">TaskRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRef can be used to refer to a specific instance of a task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referent; More info: <a href="http://kubernetes.io/docs/user-guide/identifiers#names">http://kubernetes.io/docs/user-guide/identifiers#names</a></p>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
<em>
<a href="#tekton.dev/v1.TaskKind">
TaskKind
</a>
</em>
</td>
<td>
<p>TaskKind indicates the Kind of the Task:
1. Namespaced Task when Kind is set to “Task”. If Kind is “”, it defaults to “Task”.
2. Custom Task when Kind is non-empty and APIVersion is non-empty</p>
</td>
</tr>
<tr>
<td>
<code>apiVersion</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>API version of the referent
Note: A Task with non-empty APIVersion and Kind is considered a Custom Task</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a Task in a remote location
like a git repo. This field is only supported when the alpha
feature gate is enabled.</p>
</td>
</tr>
</tbody>
</table>
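<p>Example (hedged sketches): a <code>taskRef</code> as used in a TaskRun or PipelineTask, first as an in-cluster reference by name, then as a remote reference through ResolverRef. The resolver name and its parameters (<code>git</code>, <code>url</code>, <code>revision</code>, <code>pathInRepo</code>) are assumptions about a commonly available resolver, not fields defined by TaskRef itself.</p>
<pre><code># In-cluster reference; an empty kind defaults to "Task"
taskRef:
  name: build-image                # illustrative task name
  kind: Task

# Remote reference via a resolver (assumed git resolver and params)
taskRef:
  resolver: git
  params:
    - name: url
      value: https://github.com/example/catalog.git
    - name: revision
      value: main
    - name: pathInRepo
      value: task/build-image.yaml
</code></pre>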
<h3 id="tekton.dev/v1.TaskResult">TaskResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>TaskResult used to describe the results of a task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name of the result.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the result. The only possible type
is currently “string”; “array” will be supported in future work.</p>
</td>
</tr>
<tr>
<td>
<code>properties</code><br/>
<em>
<a href="#tekton.dev/v1.PropertySpec">
map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.PropertySpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Properties is the JSON Schema properties to support key-value pairs results.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human-readable description of the result</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Value is the expression used to retrieve the value of the result from an underlying Step.</p>
</td>
</tr>
</tbody>
</table>
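<p>Example (a hedged sketch): a Task declaring a result and a step writing to it. The task name, image, and the <code>$(results.commit.path)</code> substitution target are illustrative assumptions.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: result-demo                # illustrative name
spec:
  results:
    - name: commit
      type: string
      description: The resolved commit SHA
  steps:
    - name: resolve
      image: alpine/git            # illustrative image
      script: |
        git rev-parse HEAD | tr -d '\n' > $(results.commit.path)
</code></pre>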
<h3 id="tekton.dev/v1.TaskRunDebug">TaskRunDebug
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunDebug defines the breakpoint config for a particular TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>breakpoints</code><br/>
<em>
<a href="#tekton.dev/v1.TaskBreakpoints">
TaskBreakpoints
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
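<p>Example (a hedged sketch): the debug/breakpoint configuration on a TaskRun, assuming the alpha debug feature is enabled. The value <code>enabled</code> for <code>onFailure</code> and the referenced task name are assumptions.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: debug-run                  # illustrative name
spec:
  taskRef:
    name: build-image              # illustrative task
  debug:
    breakpoints:
      onFailure: enabled           # assumed value; pauses the TaskRun when a step fails
</code></pre>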
<h3 id="tekton.dev/v1.TaskRunInputs">TaskRunInputs
</h3>
<div>
<p>TaskRunInputs holds the input values that this task was invoked with.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunReason">TaskRunReason
(<code>string</code> alias)</h3>
<div>
<p>TaskRunReason is an enum used to store all TaskRun reasons for
the Succeeded condition that are controlled by the TaskRun itself. Failure
reasons that emerge from underlying resources are not included here.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"TaskRunCancelled"</p></td>
<td><p>TaskRunReasonCancelled is the reason set when the TaskRun is cancelled by the user</p>
</td>
</tr><tr><td><p>"Failed"</p></td>
<td><p>TaskRunReasonFailed is the reason set when the TaskRun completed with a failure</p>
</td>
</tr><tr><td><p>"TaskRunResolutionFailed"</p></td>
<td><p>TaskRunReasonFailedResolution indicates that the reason for failure status is
that references within the TaskRun could not be resolved</p>
</td>
</tr><tr><td><p>"TaskRunValidationFailed"</p></td>
<td><p>TaskRunReasonFailedValidation indicates that the reason for failure status is
that the TaskRun failed runtime validation</p>
</td>
</tr><tr><td><p>"FailureIgnored"</p></td>
<td><p>TaskRunReasonFailureIgnored is the reason set when the TaskRun has failed due to a pod execution error and the failure is ignored for the owning PipelineRun.
TaskRuns failed due to reconciler/validation error should not use this reason.</p>
</td>
</tr><tr><td><p>"TaskRunImagePullFailed"</p></td>
<td><p>TaskRunReasonImagePullFailed is the reason set when the step of a task fails due to image not being pulled</p>
</td>
</tr><tr><td><p>"InvalidParamValue"</p></td>
<td><p>TaskRunReasonInvalidParamValue indicates that the TaskRun Param input value is not allowed.</p>
</td>
</tr><tr><td><p>"ResourceVerificationFailed"</p></td>
<td><p>TaskRunReasonResourceVerificationFailed indicates that the task fails the trusted resource verification,
it could be the content has changed, signature is invalid or public key is invalid</p>
</td>
</tr><tr><td><p>"TaskRunResultLargerThanAllowedLimit"</p></td>
<td><p>TaskRunReasonResultLargerThanAllowedLimit is the reason set when one of the results exceeds its maximum allowed limit of 1 KB</p>
</td>
</tr><tr><td><p>"Running"</p></td>
<td><p>TaskRunReasonRunning is the reason set when the TaskRun is running</p>
</td>
</tr><tr><td><p>"Started"</p></td>
<td><p>TaskRunReasonStarted is the reason set when the TaskRun has just started</p>
</td>
</tr><tr><td><p>"TaskRunStopSidecarFailed"</p></td>
<td><p>TaskRunReasonStopSidecarFailed indicates that the sidecar is not properly stopped.</p>
</td>
</tr><tr><td><p>"Succeeded"</p></td>
<td><p>TaskRunReasonSuccessful is the reason set when the TaskRun completed successfully</p>
</td>
</tr><tr><td><p>"TaskValidationFailed"</p></td>
<td><p>TaskRunReasonTaskFailedValidation indicates that the reason for failure status is
that the Task failed runtime validation</p>
</td>
</tr><tr><td><p>"TaskRunTimeout"</p></td>
<td><p>TaskRunReasonTimedOut is the reason set when one TaskRun execution has timed out</p>
</td>
</tr><tr><td><p>"ToBeRetried"</p></td>
<td><p>TaskRunReasonToBeRetried is the reason set when the last TaskRun execution failed, and will be retried</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunResult">TaskRunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.StepState">StepState</a>, <a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskRunResult is a result produced by a TaskRun; TaskRunStepResult is a type alias of TaskRunResult.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name of the result.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the result. The only possible type
is currently “string”; “array” will be supported in future work.</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value is the given value of the result.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunSidecarSpec">TaskRunSidecarSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSidecarSpec is used to override the values of a Sidecar in the corresponding Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The name of the Sidecar to override.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>The resource requirements to apply to the Sidecar.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunSpec">TaskRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRun">TaskRun</a>)
</p>
<div>
<p>TaskRunSpec defines the desired state of TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>debug</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunDebug">
TaskRunDebug
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>No more than one of TaskRef and TaskSpec may be specified.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying an inline TaskSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSpecStatus">
TaskRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a TaskRun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSpecStatusMessage">
TaskRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this TaskRun should be retried in the event of task failure.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which one retry attempt times out. Defaults to 1 hour.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
<tr>
<td>
<code>stepSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStepSpec">
[]TaskRunStepSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specs to apply to Steps in this TaskRun.
If a field is specified in both a Step and a StepSpec,
the value from the StepSpec will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>sidecarSpecs</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunSidecarSpec">
[]TaskRunSidecarSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specs to apply to Sidecars in this TaskRun.
If a field is specified in both a Sidecar and a SidecarSpec,
the value from the SidecarSpec will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</tbody>
</table>
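<p>Example (a hedged sketch): a TaskRun combining the spec fields above. Names, images, and the PersistentVolumeClaim are illustrative; <code>stepSpecs</code> and <code>sidecarSpecs</code> additionally require the alpha feature gate, as noted in the table.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run                  # illustrative name
spec:
  taskRef:
    name: build-image              # mutually exclusive with taskSpec
  params:
    - name: revision
      value: main
  serviceAccountName: builder      # illustrative service account
  timeout: 30m
  retries: 2
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: source-pvc      # illustrative claim
  stepSpecs:                       # alpha feature gate required
    - name: build
      computeResources:
        requests:
          memory: 1Gi
  sidecarSpecs:                    # alpha feature gate required
    - name: mock-server
      computeResources:
        limits:
          cpu: 500m
</code></pre>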
<h3 id="tekton.dev/v1.TaskRunSpecStatus">TaskRunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSpecStatus defines the TaskRun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1.TaskRunSpecStatusMessage">TaskRunSpecStatusMessage
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSpecStatusMessage defines human readable status messages for the TaskRun.</p>
</div>
<table>
<thead>
<tr>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody><tr><td><p>"TaskRun cancelled as the PipelineRun it belongs to has been cancelled."</p></td>
<td><p>TaskRunCancelledByPipelineMsg indicates that the PipelineRun of which this
TaskRun was a part of has been cancelled.</p>
</td>
</tr><tr><td><p>"TaskRun cancelled as the PipelineRun it belongs to has timed out."</p></td>
<td><p>TaskRunCancelledByPipelineTimeoutMsg indicates that the TaskRun was cancelled because the PipelineRun running it timed out.</p>
</td>
</tr></tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunStatus">TaskRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRun">TaskRun</a>, <a href="#tekton.dev/v1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus</a>, <a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskRunStatus defines the observed state of TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>TaskRunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStatusFields">
TaskRunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>TaskRunStatusFields</code> are embedded into this type.)
</p>
<p>TaskRunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskRunStatus">TaskRunStatus</a>)
</p>
<div>
<p>TaskRunStatusFields holds the fields of TaskRun’s status. This is defined
separately and inlined so that other types can readily consume these fields
via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>podName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PodName is the name of the pod responsible for executing this task’s steps.</p>
</td>
</tr>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>StartTime is the time the build is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>CompletionTime is the time the build completed.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1.StepState">
[]StepState
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Steps describes the state of each build step container.</p>
</td>
</tr>
<tr>
<td>
<code>retriesStatus</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunStatus">
[]TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RetriesStatus contains the history of TaskRunStatus in case of a retry, in order to keep a record of failures.
Each TaskRunStatus stored in RetriesStatus will have no data within its own RetriesStatus, as it would be redundant.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.TaskRunResult">
[]TaskRunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are the list of results written out by the task’s containers</p>
</td>
</tr>
<tr>
<td>
<code>artifacts</code><br/>
<em>
<a href="#tekton.dev/v1.Artifacts">
Artifacts
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Artifacts are the list of artifacts written out by the task’s containers</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1.SidecarState">
[]SidecarState
</a>
</em>
</td>
<td>
<p>The list has one entry per sidecar in the manifest. Each entry
represents the image ID of the corresponding sidecar.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<p>TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun.</p>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1.Provenance">
Provenance
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.).</p>
</td>
</tr>
<tr>
<td>
<code>spanContext</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>SpanContext contains tracing span context fields</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskRunStepSpec">TaskRunStepSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunStepSpec is used to override the values of a Step in the corresponding Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The name of the Step to override.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>The resource requirements to apply to the Step.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.TaskSpec">TaskSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Task">Task</a>, <a href="#tekton.dev/v1.EmbeddedTask">EmbeddedTask</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>, <a href="#tekton.dev/v1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskSpec defines the desired state of Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the task. Params
must be supplied as inputs in TaskRuns unless they declare a default
value.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1.Step">
[]Step
</a>
</em>
</td>
<td>
<p>Steps are the steps of the build; each step is run sequentially with the
source mounted into /workspace.</p>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
<p>Volumes is a collection of volumes that are available to mount into the
steps of the build.</p>
</td>
</tr>
<tr>
<td>
<code>stepTemplate</code><br/>
<em>
<a href="#tekton.dev/v1.StepTemplate">
StepTemplate
</a>
</em>
</td>
<td>
<p>StepTemplate can be used as the basis for all step containers within the
Task, so that the steps inherit settings on the base container.</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1.Sidecar">
[]Sidecar
</a>
</em>
</td>
<td>
<p>Sidecars are run alongside the Task’s step containers. They begin before
the steps start and end after the steps complete.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1.WorkspaceDeclaration">
[]WorkspaceDeclaration
</a>
</em>
</td>
<td>
<p>Workspaces are the volumes that this Task requires.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.TaskResult">
[]TaskResult
</a>
</em>
</td>
<td>
<p>Results are values that this Task can output</p>
</td>
</tr>
</tbody>
</table>
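<p>Example (a hedged sketch): a complete TaskSpec tying the fields above together. The names, image, and the <code>$(params.package)</code> and <code>$(workspaces.source.path)</code> substitutions are illustrative assumptions.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: unit-test                  # illustrative name
spec:
  displayName: Unit tests
  description: Runs the unit test suite.
  params:
    - name: package
      type: string
      default: ./...
  workspaces:
    - name: source
  steps:
    - name: test
      image: golang:1.22           # illustrative image
      workingDir: $(workspaces.source.path)
      script: |
        go test $(params.package)
  results:
    - name: coverage
      description: Coverage percentage
</code></pre>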
<h3 id="tekton.dev/v1.TimeoutFields">TimeoutFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>TimeoutFields allows granular specification of pipeline, task, and finally timeouts</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipeline</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value.</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Tasks sets the maximum allowed duration of this pipeline’s tasks</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Finally sets the maximum allowed duration of this pipeline’s finally</p>
</td>
</tr>
</tbody>
</table>
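<p>Example (a hedged sketch): TimeoutFields set on a PipelineRun, assuming the corresponding PipelineRunSpec field is named <code>timeouts</code>; the pipeline name is illustrative. The tasks and finally timeouts must fit within the overall pipeline timeout.</p>
<pre><code>apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: timeouts-demo              # illustrative name
spec:
  pipelineRef:
    name: build-and-deploy         # illustrative pipeline
  timeouts:
    pipeline: 1h                   # tasks + finally must not exceed this
    tasks: 45m
    finally: 15m
</code></pre>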
<h3 id="tekton.dev/v1.WhenExpression">WhenExpression
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.ChildStatusReference">ChildStatusReference</a>, <a href="#tekton.dev/v1.PipelineRunRunStatus">PipelineRunRunStatus</a>, <a href="#tekton.dev/v1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus</a>, <a href="#tekton.dev/v1.SkippedTask">SkippedTask</a>)
</p>
<div>
<p>WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run
to determine whether the Task should be executed or skipped</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>input</code><br/>
<em>
string
</em>
</td>
<td>
<p>Input is the string for guard checking which can be a static input or an output from a parent Task</p>
</td>
</tr>
<tr>
<td>
<code>operator</code><br/>
<em>
k8s.io/apimachinery/pkg/selection.Operator
</em>
</td>
<td>
<p>Operator that represents an Input’s relationship to the values</p>
</td>
</tr>
<tr>
<td>
<code>values</code><br/>
<em>
[]string
</em>
</td>
<td>
<p>Values is an array of strings, which is compared against the input, for guard checking
It must be non-empty</p>
</td>
</tr>
<tr>
<td>
<code>cel</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>CEL is a string containing a Common Expression Language (CEL) expression, which can be used to conditionally execute
the task based on the result of the expression evaluation.
More info about CEL syntax: <a href="https://github.com/google/cel-spec/blob/master/doc/langdef.md">https://github.com/google/cel-spec/blob/master/doc/langdef.md</a></p>
</td>
</tr>
</tbody>
</table>
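<p>Example (a hedged sketch): when expressions guarding a PipelineTask. Each entry uses either the <code>input</code>/<code>operator</code>/<code>values</code> form or the optional <code>cel</code> form; the parameter and task names are illustrative.</p>
<pre><code>tasks:
  - name: deploy
    when:
      - input: $(params.environment)      # illustrative parameter
        operator: in
        values: ["staging", "production"]
      # alternative CEL form (optional field, one expression per entry):
      # - cel: "'$(params.environment)' != 'dev'"
    taskRef:
      name: deploy-app                    # illustrative task
</code></pre>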
<h3 id="tekton.dev/v1.WhenExpressions">WhenExpressions
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.WhenExpression</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1.Step">Step</a>)
</p>
<div>
<p>WhenExpressions are used to specify whether a Task should be executed or skipped
All of them need to evaluate to True for a guarded Task to be executed.</p>
</div>
<h3 id="tekton.dev/v1.WorkspaceBinding">WorkspaceBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>WorkspaceBinding maps a Task’s declared workspace to a Volume.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace populated by the volume.</p>
</td>
</tr>
<tr>
<td>
<code>subPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>SubPath is optionally a directory on the volume which should be used
for this binding (i.e. the volume will be mounted at this sub directory).</p>
</td>
</tr>
<tr>
<td>
<code>volumeClaimTemplate</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core">
Kubernetes core/v1.PersistentVolumeClaim
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>VolumeClaimTemplate is a template for a claim that will be created in the same namespace.
The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>persistentVolumeClaim</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaimvolumesource-v1-core">
Kubernetes core/v1.PersistentVolumeClaimVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PersistentVolumeClaimVolumeSource represents a reference to a
PersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used.</p>
</td>
</tr>
<tr>
<td>
<code>emptyDir</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#emptydirvolumesource-v1-core">
Kubernetes core/v1.EmptyDirVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>EmptyDir represents a temporary directory that shares a Task’s lifetime.
More info: <a href="https://kubernetes.io/docs/concepts/storage/volumes#emptydir">https://kubernetes.io/docs/concepts/storage/volumes#emptydir</a>
Either this OR PersistentVolumeClaim can be used.</p>
</td>
</tr>
<tr>
<td>
<code>configMap</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapvolumesource-v1-core">
Kubernetes core/v1.ConfigMapVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ConfigMap represents a configMap that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>secret</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretvolumesource-v1-core">
Kubernetes core/v1.SecretVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Secret represents a secret that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>projected</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#projectedvolumesource-v1-core">
Kubernetes core/v1.ProjectedVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Projected represents a projected volume that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>csi</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#csivolumesource-v1-core">
Kubernetes core/v1.CSIVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers.</p>
</td>
</tr>
</tbody>
</table>
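<p>Example (a hedged sketch): workspace bindings in a TaskRun or PipelineRun; exactly one volume source is set per binding. The claim, secret, and workspace names are illustrative.</p>
<pre><code>workspaces:
  - name: source
    persistentVolumeClaim:
      claimName: source-pvc        # illustrative claim
    subPath: repo                  # mount this subdirectory of the volume
  - name: cache
    emptyDir: {}
  - name: credentials
    secret:
      secretName: registry-creds   # illustrative secret
</code></pre>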
<h3 id="tekton.dev/v1.WorkspaceDeclaration">WorkspaceDeclaration
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>WorkspaceDeclaration is a declaration of a volume that a Task requires.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name by which you can bind the volume at runtime.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is an optional human readable description of this volume.</p>
</td>
</tr>
<tr>
<td>
<code>mountPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>MountPath overrides the directory that the volume will be made available at.</p>
</td>
</tr>
<tr>
<td>
<code>readOnly</code><br/>
<em>
bool
</em>
</td>
<td>
<p>ReadOnly dictates whether a mounted volume is writable. By default this
field is false and so mounted volumes are writable.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional marks a Workspace as not being required in TaskRuns. By default
this field is false and so declared workspaces are required.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.WorkspacePipelineTaskBinding">WorkspacePipelineTaskBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be
mapped to a task’s declared workspace.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace as declared by the task</p>
</td>
</tr>
<tr>
<td>
<code>workspace</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspace is the name of the workspace declared by the pipeline</p>
</td>
</tr>
<tr>
<td>
<code>subPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>SubPath is optionally a directory on the volume which should be used
for this binding (i.e. the volume will be mounted at this sub directory).</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1.WorkspaceUsage">WorkspaceUsage
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1.Sidecar">Sidecar</a>, <a href="#tekton.dev/v1.Step">Step</a>)
</p>
<div>
<p>WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access
to a Workspace defined in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace this Step or Sidecar wants access to.</p>
</td>
</tr>
<tr>
<td>
<code>mountPath</code><br/>
<em>
string
</em>
</td>
<td>
<p>MountPath is the path that the workspace should be mounted to inside the Step or Sidecar,
overriding any MountPath specified in the Task’s WorkspaceDeclaration.</p>
</td>
</tr>
</tbody>
</table>
<hr/>
<h2 id="tekton.dev/v1alpha1">tekton.dev/v1alpha1</h2>
<div>
<p>Package v1alpha1 contains API Schema definitions for the pipeline v1alpha1 API group</p>
</div>
Resource Types:
<ul><li>
<a href="#tekton.dev/v1alpha1.Run">Run</a>
</li><li>
<a href="#tekton.dev/v1alpha1.StepAction">StepAction</a>
</li><li>
<a href="#tekton.dev/v1alpha1.VerificationPolicy">VerificationPolicy</a>
</li><li>
<a href="#tekton.dev/v1alpha1.PipelineResource">PipelineResource</a>
</li></ul>
<h3 id="tekton.dev/v1alpha1.Run">Run
</h3>
<div>
<p>Run represents a single execution of a Custom Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1alpha1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>Run</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunSpec">
RunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>ref</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.EmbeddedRunSpec">
EmbeddedRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
</table>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunSpecStatus">
RunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a run (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunSpecStatusMessage">
RunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for propagating retries count to custom tasks</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the custom-task times out.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunStatus">
RunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
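<p>An illustrative, non-normative Run that references a custom task by <code>apiVersion</code> and <code>kind</code> (the referenced API group, kind and names are hypothetical):</p>
<pre><code>apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
  name: example-run
spec:
  ref:
    apiVersion: example.dev/v1alpha1   # hypothetical custom task API group
    kind: Example
    name: my-example
  params:
    - name: message
      value: "hello"
  timeout: 10m
</code></pre>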
<h3 id="tekton.dev/v1alpha1.StepAction">StepAction
</h3>
<div>
<p>StepAction represents the actionable components of Step.
The Step can only reference it from the cluster or using remote resolution.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1alpha1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>StepAction</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.StepActionSpec">
StepActionSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Step from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the stepaction that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name to run for this StepAction.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the stepAction.
Params must be supplied as inputs in Steps unless they declare a default value.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this StepAction can output</p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>
The value set in StepAction will take precedence over the value from Task.</p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
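<p>A minimal, illustrative StepAction that accepts a parameter and runs a short script (the names and script are examples, not prescribed by the API):</p>
<pre><code>apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: example-stepaction
spec:
  image: bash:latest
  params:
    - name: who
      type: string
      default: "world"
  script: |
    echo "hello $(params.who)"
</code></pre>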
<h3 id="tekton.dev/v1alpha1.VerificationPolicy">VerificationPolicy
</h3>
<div>
<p>VerificationPolicy defines the rules to verify Tekton resources.
VerificationPolicy can config the mapping from resources to a list of public
keys, so when verifying the resources we can use the corresponding public keys.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1alpha1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>VerificationPolicy</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.VerificationPolicySpec">
VerificationPolicySpec
</a>
</em>
</td>
<td>
<p>Spec holds the desired state of the VerificationPolicy.</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ResourcePattern">
[]ResourcePattern
</a>
</em>
</td>
<td>
<p>Resources defines the patterns of resource sources that should be subject to this policy.
Each <code>ResourcesPattern</code> must be a valid regex; resources whose source matches the pattern are verified with this policy’s keys.
For example, when using the git resolver, a <code>ResourcesPattern</code> of <code>https://github.com/tektoncd/catalog.git</code> applies the policy and its keys to resources fetched from that repository.</p>
</td>
</tr>
<tr>
<td>
<code>authorities</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.Authority">
[]Authority
</a>
</em>
</td>
<td>
<p>Authorities defines the rules for validating signatures.</p>
</td>
</tr>
<tr>
<td>
<code>mode</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ModeType">
ModeType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Mode controls whether a failing policy will fail the taskrun/pipelinerun or only log warnings:
enforce - fail the taskrun/pipelinerun if verification fails (default);
warn - do not fail the taskrun/pipelinerun if verification fails, but log warnings.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
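<p>An illustrative VerificationPolicy that verifies resources fetched from one repository against a public key stored in a Secret (the Secret name and namespace are hypothetical):</p>
<pre><code>apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: example-policy
spec:
  resources:
    - pattern: "https://github.com/tektoncd/catalog.git"   # regex matched against the resource source
  authorities:
    - name: key1
      key:
        secretRef:
          name: verification-secrets    # hypothetical Secret holding the public key
          namespace: tekton-pipelines
  mode: enforce
</code></pre>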
<h3 id="tekton.dev/v1alpha1.PipelineResource">PipelineResource
</h3>
<div>
<p>PipelineResource describes a resource that is an input to or output from a
Task.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1alpha1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>PipelineResource</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.PipelineResourceSpec">
PipelineResourceSpec
</a>
</em>
</td>
<td>
<p>Spec holds the desired state of the PipelineResource from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the resource that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ResourceParam">
[]ResourceParam
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>secrets</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.SecretParam">
[]SecretParam
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Secrets to fetch to populate some of resource fields</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.PipelineResourceStatus">
PipelineResourceStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status is used to communicate the observed state of the PipelineResource from
the controller, but was unused as there is no controller for PipelineResource.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.Authority">Authority
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.VerificationPolicySpec">VerificationPolicySpec</a>)
</p>
<div>
<p>The Authority block defines the keys for validating signatures.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name for this authority.</p>
</td>
</tr>
<tr>
<td>
<code>key</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.KeyRef">
KeyRef
</a>
</em>
</td>
<td>
<p>Key contains the public key to validate the resource.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.EmbeddedRunSpec">EmbeddedRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>)
</p>
<div>
<p>EmbeddedRunSpec allows custom task definitions to be embedded</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>-</code><br/>
<em>
[]byte
</em>
</td>
<td>
<p>Raw is the underlying serialization of this object.</p>
<p>TODO: Determine how to detect ContentType and ContentEncoding of ‘Raw’ data.</p>
</td>
</tr>
<tr>
<td>
<code>-</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.Object
</em>
</td>
<td>
<p>Object can hold a representation of this extension - useful for working with versioned
structs.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.HashAlgorithm">HashAlgorithm
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.KeyRef">KeyRef</a>)
</p>
<div>
<p>HashAlgorithm defines the hash algorithm used for the public key</p>
</div>
<h3 id="tekton.dev/v1alpha1.KeyRef">KeyRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.Authority">Authority</a>)
</p>
<div>
<p>KeyRef defines the reference to a public key</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>secretRef</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretreference-v1-core">
Kubernetes core/v1.SecretReference
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecretRef sets a reference to a secret with the key.</p>
</td>
</tr>
<tr>
<td>
<code>data</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Data contains the inline public key.</p>
</td>
</tr>
<tr>
<td>
<code>kms</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>KMS contains the KMS URL of the public key.
Supported formats differ based on the KMS system used.
One example of a KMS URL could be:
gcpkms://projects/[PROJECT]/locations/[LOCATION]/keyRings/[KEYRING]/cryptoKeys/[KEY]/cryptoKeyVersions/[KEY_VERSION]
For more examples please refer to <a href="https://docs.sigstore.dev/cosign/kms_support">https://docs.sigstore.dev/cosign/kms_support</a>.
Note that KMS is not supported yet.</p>
</td>
</tr>
<tr>
<td>
<code>hashAlgorithm</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.HashAlgorithm">
HashAlgorithm
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>HashAlgorithm always defaults to sha256 if the algorithm hasn’t been explicitly set</p>
</td>
</tr>
</tbody>
</table>
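<p>The public key may alternatively be supplied inline via <code>data</code> instead of a <code>secretRef</code>; a hedged sketch of the inline form:</p>
<pre><code>key:
  data: |
    -----BEGIN PUBLIC KEY-----
    ...                          # inline PEM-encoded public key (elided)
    -----END PUBLIC KEY-----
  hashAlgorithm: sha256          # optional; sha256 is the default
</code></pre>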
<h3 id="tekton.dev/v1alpha1.ModeType">ModeType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.VerificationPolicySpec">VerificationPolicySpec</a>)
</p>
<div>
<p>ModeType indicates the type of a mode for VerificationPolicy</p>
</div>
<h3 id="tekton.dev/v1alpha1.ResourcePattern">ResourcePattern
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.VerificationPolicySpec">VerificationPolicySpec</a>)
</p>
<div>
<p>ResourcePattern defines the pattern of the resource source</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pattern</code><br/>
<em>
string
</em>
</td>
<td>
<p>Pattern defines a resource pattern. A regex is created to filter resources based on <code>Pattern</code>.
Example patterns:
GitHub resource: <a href="https://github.com/tektoncd/catalog.git">https://github.com/tektoncd/catalog.git</a>, <a href="https://github.com/tektoncd/*">https://github.com/tektoncd/*</a>
Bundle resource: gcr.io/tekton-releases/catalog/upstream/git-clone, gcr.io/tekton-releases/catalog/upstream/*
Hub resource: <a href="https://artifacthub.io/*">https://artifacthub.io/*</a></p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.RunReason">RunReason
(<code>string</code> alias)</h3>
<div>
<p>RunReason is an enum used to store all Run reason for the Succeeded condition that are controlled by the Run itself.</p>
</div>
<h3 id="tekton.dev/v1alpha1.RunSpec">RunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.Run">Run</a>)
</p>
<div>
<p>RunSpec defines the desired state of Run</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ref</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.EmbeddedRunSpec">
EmbeddedRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
</table>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunSpecStatus">
RunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a run (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunSpecStatusMessage">
RunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for propagating retries count to custom tasks</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the custom-task times out.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.RunSpecStatus">RunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>)
</p>
<div>
<p>RunSpecStatus defines the taskrun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1alpha1.RunSpecStatusMessage">RunSpecStatusMessage
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>)
</p>
<div>
<p>RunSpecStatusMessage defines human readable status messages for the TaskRun.</p>
</div>
<h3 id="tekton.dev/v1alpha1.StepActionObject">StepActionObject
</h3>
<div>
<p>StepActionObject is implemented by StepAction</p>
</div>
<h3 id="tekton.dev/v1alpha1.StepActionSpec">StepActionSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.StepAction">StepAction</a>)
</p>
<div>
<p>StepActionSpec contains the actionable components of a step.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the stepaction that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name to run for this StepAction.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the stepAction.
Params must be supplied as inputs in Steps unless they declare a default value.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this StepAction can output</p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>
The value set in StepAction will take precedence over the value from Task.</p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.VerificationPolicySpec">VerificationPolicySpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.VerificationPolicy">VerificationPolicy</a>)
</p>
<div>
<p>VerificationPolicySpec defines the patterns and authorities.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ResourcePattern">
[]ResourcePattern
</a>
</em>
</td>
<td>
<p>Resources defines the patterns of resource sources that should be subject to this policy.
Each <code>ResourcesPattern</code> must be a valid regex; resources whose source matches the pattern are verified with this policy’s keys.
For example, when using the git resolver, a <code>ResourcesPattern</code> of <code>https://github.com/tektoncd/catalog.git</code> applies the policy and its keys to resources fetched from that repository.</p>
</td>
</tr>
<tr>
<td>
<code>authorities</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.Authority">
[]Authority
</a>
</em>
</td>
<td>
<p>Authorities defines the rules for validating signatures.</p>
</td>
</tr>
<tr>
<td>
<code>mode</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ModeType">
ModeType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Mode controls whether a failing policy will fail the taskrun/pipelinerun or only log warnings:
enforce - fail the taskrun/pipelinerun if verification fails (default);
warn - do not fail the taskrun/pipelinerun if verification fails, but log warnings.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.PipelineResourceSpec">PipelineResourceSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.PipelineResource">PipelineResource</a>, <a href="#tekton.dev/v1beta1.PipelineResourceBinding">PipelineResourceBinding</a>)
</p>
<div>
<p>PipelineResourceSpec defines an individual resource used in the pipeline.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the resource that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ResourceParam">
[]ResourceParam
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>secrets</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.SecretParam">
[]SecretParam
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Secrets to fetch to populate some of resource fields</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.PipelineResourceStatus">PipelineResourceStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.PipelineResource">PipelineResource</a>)
</p>
<div>
<p>PipelineResourceStatus does not contain anything because PipelineResources on their own
do not have a status</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<h3 id="tekton.dev/v1alpha1.ResourceDeclaration">ResourceDeclaration
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskResource">TaskResource</a>)
</p>
<div>
<p>ResourceDeclaration defines an input or output PipelineResource declared as a requirement
by another type such as a Task or Condition. The Name field will be used to refer to these
PipelineResources within the type’s definition, and when provided as an Input, the Name will be the
path to the volume mounted containing this PipelineResource as an input (e.g.
an input Resource named <code>workspace</code> will be mounted at <code>/workspace</code>).</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name declares the name by which a resource is referenced in the
definition. Resources may be referenced by name in the definition of a
Task’s steps.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
string
</em>
</td>
<td>
<p>Type is the type of this resource.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the declared resource that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>targetPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>TargetPath is the path in workspace directory where the resource
will be copied.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional declares the resource as optional.
By default, optional is set to false, which makes the resource required.
optional: true - the resource is considered optional.
optional: false - the resource is considered required (equivalent to not specifying it).</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.ResourceParam">ResourceParam
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.PipelineResourceSpec">PipelineResourceSpec</a>)
</p>
<div>
<p>ResourceParam declares a string value to use for the parameter called Name, and is used in
the specific context of PipelineResources.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.SecretParam">SecretParam
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.PipelineResourceSpec">PipelineResourceSpec</a>)
</p>
<div>
<p>SecretParam indicates which secret can be used to populate a field of the resource</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>fieldName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>secretKey</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>secretName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.RunResult">RunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunStatusFields">RunStatusFields</a>)
</p>
<div>
<p>RunResult used to describe the results of a task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name of the result.</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
string
</em>
</td>
<td>
<p>Value is the given value of the result.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.RunStatus">RunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.Run">Run</a>, <a href="#tekton.dev/v1alpha1.RunStatusFields">RunStatusFields</a>)
</p>
<div>
<p>RunStatus defines the observed state of Run</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>RunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunStatusFields">
RunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>RunStatusFields</code> are embedded into this type.)
</p>
<p>RunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1alpha1.RunStatusFields">RunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunStatus">RunStatus</a>)
</p>
<div>
<p>RunStatusFields holds the fields of Run’s status. This is defined
separately and inlined so that other types can readily consume these fields
via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartTime is the time the build is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>CompletionTime is the time the build completed.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunResult">
[]RunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results reports any output result values to be consumed by later
tasks in a pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>retriesStatus</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.RunStatus">
[]RunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RetriesStatus contains the history of RunStatus, in case of a retry.</p>
</td>
</tr>
<tr>
<td>
<code>extraFields</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<p>ExtraFields holds arbitrary fields provided by the custom task
controller.</p>
</td>
</tr>
</tbody>
</table>
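<p>As a non-normative sketch, a custom task controller might report a Run status shaped like the following (all values are illustrative; <code>extraFields</code> holds free-form, controller-specific data):</p>
<pre><code>status:
  conditions:                    # from the embedded knative Status
    - type: Succeeded
      status: "True"
      reason: Completed
  startTime: "2024-01-01T00:00:00Z"
  completionTime: "2024-01-01T00:01:00Z"
  results:
    - name: greeting
      value: hello
  extraFields:
    attempts: 1                  # arbitrary field set by the custom task controller
</code></pre>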
<hr/>
<h2 id="tekton.dev/v1beta1">tekton.dev/v1beta1</h2>
<div>
<p>Package v1beta1 contains API Schema definitions for the pipeline v1beta1 API group</p>
</div>
Resource Types:
<ul><li>
<a href="#tekton.dev/v1beta1.ClusterTask">ClusterTask</a>
</li><li>
<a href="#tekton.dev/v1beta1.CustomRun">CustomRun</a>
</li><li>
<a href="#tekton.dev/v1beta1.Pipeline">Pipeline</a>
</li><li>
<a href="#tekton.dev/v1beta1.PipelineRun">PipelineRun</a>
</li><li>
<a href="#tekton.dev/v1beta1.StepAction">StepAction</a>
</li><li>
<a href="#tekton.dev/v1beta1.Task">Task</a>
</li><li>
<a href="#tekton.dev/v1beta1.TaskRun">TaskRun</a>
</li></ul>
<h3 id="tekton.dev/v1beta1.ClusterTask">ClusterTask
</h3>
<div>
<p>ClusterTask is a Task with a cluster scope. ClusterTasks are used to
represent Tasks that should be publicly addressable from any namespace in the
cluster.</p>
<p>Deprecated: Please use the cluster resolver instead.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>ClusterTask</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Task from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResources">
TaskResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Resources is a list of input and output resources required to run the task.
Resources are represented in TaskRuns as bindings to instances of
PipelineResources.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the task. Params
must be supplied as inputs in TaskRuns unless they declare a default
value.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Step">
[]Step
</a>
</em>
</td>
<td>
<p>Steps are the steps of the build; each step is run sequentially with the
source mounted into /workspace.</p>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
<p>Volumes is a collection of volumes that are available to mount into the
steps of the build.</p>
</td>
</tr>
<tr>
<td>
<code>stepTemplate</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepTemplate">
StepTemplate
</a>
</em>
</td>
<td>
<p>StepTemplate can be used as the basis for all step containers within the
Task, so that the steps inherit settings on the base container.</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Sidecar">
[]Sidecar
</a>
</em>
</td>
<td>
<p>Sidecars are run alongside the Task’s step containers. They begin before
the steps start and end after the steps complete.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceDeclaration">
[]WorkspaceDeclaration
</a>
</em>
</td>
<td>
<p>Workspaces are the volumes that this Task requires.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResult">
[]TaskResult
</a>
</em>
</td>
<td>
<p>Results are values that this Task can output</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CustomRun">CustomRun
</h3>
<div>
<p>CustomRun represents a single execution of a Custom Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>CustomRun</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunSpec">
CustomRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>customRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>customSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.EmbeddedCustomRunSpec">
EmbeddedCustomRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunSpecStatus">
CustomRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a customrun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunSpecStatusMessage">
CustomRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for propagating retries count to custom tasks</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the custom-task times out.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunStatus">
CustomRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
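<p>An illustrative CustomRun referencing a custom task by <code>apiVersion</code> and <code>kind</code> (the API group, kind and names are hypothetical):</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
  name: example-customrun
spec:
  customRef:
    apiVersion: example.dev/v1alpha1   # hypothetical custom task API group
    kind: Example
    name: my-example
  params:
    - name: message
      value: "hello"
  retries: 2
  timeout: 10m
</code></pre>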
<h3 id="tekton.dev/v1beta1.Pipeline">Pipeline
</h3>
<div>
<p>Pipeline describes a list of Tasks to execute. It expresses how outputs
of tasks feed into inputs of subsequent tasks.</p>
<p>Deprecated: Please use v1.Pipeline instead.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>Pipeline</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Pipeline from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineDeclaredResource">
[]PipelineDeclaredResource
</a>
</em>
</td>
<td>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<p>Params declares a list of input parameters that must be supplied when
this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineWorkspaceDeclaration">
[]PipelineWorkspaceDeclaration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces declares a set of named workspaces that are expected to be
provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResult">
[]PipelineResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this pipeline can output once run</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Finally declares the list of Tasks that execute just before leaving the Pipeline,
i.e. either after all Tasks have finished executing successfully
or after a failure that would result in ending the Pipeline.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
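<p>A minimal, illustrative v1beta1 Pipeline showing params, workspaces, a task binding and a finally task (all names are examples; the referenced Tasks and the <code>commit</code> result are assumed to exist):</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared-data
  tasks:
    - name: build
      taskRef:
        name: build-task               # illustrative Task name
      params:
        - name: repo-url
          value: $(params.repo-url)
      workspaces:
        - name: source
          workspace: shared-data
  results:
    - name: commit
      value: $(tasks.build.results.commit)   # assumes build-task declares a "commit" result
  finally:
    - name: cleanup
      taskRef:
        name: cleanup-task             # illustrative Task name
</code></pre>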
<h3 id="tekton.dev/v1beta1.PipelineRun">PipelineRun
</h3>
<div>
<p>PipelineRun represents a single execution of a Pipeline. PipelineRuns are how
the graph of Tasks declared in a Pipeline are executed; they specify inputs
to Pipelines such as parameter values and capture operational aspects of the
Tasks execution such as service account and tolerations. Creating a
PipelineRun creates TaskRuns for Tasks in the referenced Pipeline.</p>
<p>Deprecated: Please use v1.PipelineRun instead.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>PipelineRun</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunSpec">
PipelineRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResourceBinding">
[]PipelineResourceBinding
</a>
</em>
</td>
<td>
<p>Resources is a list of bindings specifying which actual instances of
PipelineResources to use for the resources the Pipeline has declared
it needs.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameter names and values.</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunSpecStatus">
PipelineRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a pipelinerun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>timeouts</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TimeoutFields">
TimeoutFields
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the Pipeline times out.
Currently three keys are accepted in the map: pipeline, tasks and finally,
with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Timeout is the Time after which the Pipeline times out.
Defaults to never.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
<p>Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead</p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces holds a set of workspace bindings that must match names
with those declared in the pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>taskRunSpecs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskRunSpec">
[]PipelineTaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunSpecs holds a set of runtime specs</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunStatus">
PipelineRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
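<p>An illustrative v1beta1 PipelineRun that references a Pipeline, supplies a parameter, binds a workspace, and sets the timeouts map (the ServiceAccount and Pipeline names are examples):</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipelinerun
spec:
  pipelineRef:
    name: example-pipeline
  params:
    - name: repo-url
      value: "https://github.com/tektoncd/pipeline.git"
  serviceAccountName: build-bot        # illustrative ServiceAccount name
  timeouts:
    pipeline: 1h                       # must cover tasks + finally
    tasks: 45m
    finally: 15m
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
</code></pre>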
<h3 id="tekton.dev/v1beta1.StepAction">StepAction
</h3>
<div>
<p>StepAction represents the actionable components of Step.
The Step can only reference it from the cluster or using remote resolution.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>StepAction</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepActionSpec">
StepActionSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Step from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the stepaction that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name to run for this StepAction.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the stepAction.
Params must be supplied as inputs in Steps unless they declare a default value.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this StepAction can output</p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>
The value set in StepAction will take precedence over the value from Task.</p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.Task">Task
</h3>
<div>
<p>Task represents a collection of sequential steps that are run as part of a
Pipeline using a set of inputs and producing a set of outputs. Tasks execute
when TaskRuns are created that provide the input parameters and resources and
output resources the Task requires.</p>
<p>Deprecated: Please use v1.Task instead.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>Task</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec holds the desired state of the Task from the client</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResources">
TaskResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Resources is a list of input and output resources required to run the task.
Resources are represented in TaskRuns as bindings to instances of
PipelineResources.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the task. Params
must be supplied as inputs in TaskRuns unless they declare a default
value.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Step">
[]Step
</a>
</em>
</td>
<td>
<p>Steps are the steps of the build; each step is run sequentially with the
source mounted into /workspace.</p>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
<p>Volumes is a collection of volumes that are available to mount into the
steps of the build.</p>
</td>
</tr>
<tr>
<td>
<code>stepTemplate</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepTemplate">
StepTemplate
</a>
</em>
</td>
<td>
<p>StepTemplate can be used as the basis for all step containers within the
Task, so that the steps inherit settings on the base container.</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Sidecar">
[]Sidecar
</a>
</em>
</td>
<td>
<p>Sidecars are run alongside the Task’s step containers. They begin before
the steps start and end after the steps complete.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceDeclaration">
[]WorkspaceDeclaration
</a>
</em>
</td>
<td>
<p>Workspaces are the volumes that this Task requires.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResult">
[]TaskResult
</a>
</em>
</td>
<td>
<p>Results are values that this Task can output</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
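<p>As an illustrative sketch (not a normative example), a minimal <code>v1beta1</code> Task using the fields above might look as follows; the resource name, image, and parameter names are placeholders.</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task              # placeholder name
spec:
  displayName: Example Task
  description: Prints a greeting and records it as a result.
  params:
    - name: greeting
      type: string
      default: "hello"
  results:
    - name: message
      description: The greeting that was printed
  steps:
    - name: greet
      image: alpine               # placeholder image
      script: |
        echo "$(params.greeting) from example-task"
        echo -n "$(params.greeting)" > $(results.message.path)
</code></pre>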
<h3 id="tekton.dev/v1beta1.TaskRun">TaskRun
</h3>
<div>
<p>TaskRun represents a single execution of a Task. TaskRuns are how the steps
specified in a Task are executed; they specify the parameters and resources
used to run the steps in a Task.</p>
<p>Deprecated: Please use v1.TaskRun instead.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>apiVersion</code><br/>
string</td>
<td>
<code>
tekton.dev/v1beta1
</code>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
string
</td>
<td><code>TaskRun</code></td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta">
Kubernetes meta/v1.ObjectMeta
</a>
</em>
</td>
<td>
<em>(Optional)</em>
Refer to the Kubernetes API documentation for the fields of the
<code>metadata</code> field.
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSpec">
TaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<br/>
<br/>
<table>
<tr>
<td>
<code>debug</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunDebug">
TaskRunDebug
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunResources">
TaskRunResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>No more than one of TaskRef and TaskSpec may be specified.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying TaskSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSpecStatus">
TaskRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a TaskRun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSpecStatusMessage">
TaskRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this TaskRun should be retried in the event of Task failure.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which one retry attempt times out. Defaults to 1 hour.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
<tr>
<td>
<code>stepOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStepOverride">
[]TaskRunStepOverride
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Overrides to apply to Steps in this TaskRun.
If a field is specified in both a Step and a StepOverride,
the value from the StepOverride will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>sidecarOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSidecarOverride">
[]TaskRunSidecarOverride
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Overrides to apply to Sidecars in this TaskRun.
If a field is specified in both a Sidecar and a SidecarOverride,
the value from the SidecarOverride will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStatus">
TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
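<p>A hedged sketch of a TaskRun that exercises the spec fields above; it assumes a Task named <code>example-task</code> exists (as in the previous sketch) and uses placeholder values throughout.</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-task-run          # placeholder name
spec:
  taskRef:
    name: example-task            # assumed to exist in the same namespace
  params:
    - name: greeting
      value: "hi"
  serviceAccountName: default
  timeout: 30m                    # Go duration format, see ParseDuration
  retries: 2
</code></pre>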
<h3 id="tekton.dev/v1beta1.Algorithm">Algorithm
(<code>string</code> alias)</h3>
<div>
<p>Algorithm is a standard cryptographic hash algorithm</p>
</div>
<h3 id="tekton.dev/v1beta1.Artifact">Artifact
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Artifacts">Artifacts</a>, <a href="#tekton.dev/v1beta1.StepState">StepState</a>)
</p>
<div>
<p>TaskRunStepArtifact represents an artifact produced or used by a step within a task run.
It directly uses the Artifact type for its structure.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The artifact’s identifying category name</p>
</td>
</tr>
<tr>
<td>
<code>values</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ArtifactValue">
[]ArtifactValue
</a>
</em>
</td>
<td>
<p>A collection of values related to the artifact</p>
</td>
</tr>
<tr>
<td>
<code>buildOutput</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Indicate if the artifact is a build output or a by-product</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.ArtifactValue">ArtifactValue
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Artifact">Artifact</a>)
</p>
<div>
<p>ArtifactValue represents a specific value or data element within an Artifact.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>digest</code><br/>
<em>
map[github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.Algorithm]string
</em>
</td>
<td>
<p>Algorithm-specific digests for verifying the content (e.g., SHA256)</p>
</td>
</tr>
<tr>
<td>
<code>uri</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.Artifacts">Artifacts
</h3>
<div>
<p>Artifacts represents the collection of input and output artifacts associated with
a task run or a similar process. Artifacts in this context are units of data or resources
that the process either consumes as input or produces as output.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.ChildStatusReference">ChildStatusReference
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the TaskRun or Run this is referencing.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<p>DisplayName is a user-facing name of the pipelineTask that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask this is referencing.</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CloudEventCondition">CloudEventCondition
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CloudEventDeliveryState">CloudEventDeliveryState</a>)
</p>
<div>
<p>CloudEventCondition is a string that represents the condition of the event.</p>
</div>
<h3 id="tekton.dev/v1beta1.CloudEventDelivery">CloudEventDelivery
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>CloudEventDelivery is the target of a cloud event along with the state of
delivery.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>target</code><br/>
<em>
string
</em>
</td>
<td>
<p>Target points to an addressable</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CloudEventDeliveryState">
CloudEventDeliveryState
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CloudEventDeliveryState">CloudEventDeliveryState
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CloudEventDelivery">CloudEventDelivery</a>)
</p>
<div>
<p>CloudEventDeliveryState reports the state of a cloud event to be sent.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>condition</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CloudEventCondition">
CloudEventCondition
</a>
</em>
</td>
<td>
<p>Current status</p>
</td>
</tr>
<tr>
<td>
<code>sentAt</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SentAt is the time at which the last attempt to send the event was made</p>
</td>
</tr>
<tr>
<td>
<code>message</code><br/>
<em>
string
</em>
</td>
<td>
<p>Error is the text of error (if any)</p>
</td>
</tr>
<tr>
<td>
<code>retryCount</code><br/>
<em>
int32
</em>
</td>
<td>
<p>RetryCount is the number of attempts of sending the cloud event</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.Combination">Combination
(<code>map[string]string</code> alias)</h3>
<div>
<p>Combination is a map, mainly defined to hold a single combination from a Matrix with key as param.Name and value as param.Value</p>
</div>
<h3 id="tekton.dev/v1beta1.Combinations">Combinations
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.Combination</code> alias)</h3>
<div>
<p>Combinations is a Combination list</p>
</div>
<h3 id="tekton.dev/v1beta1.ConfigSource">ConfigSource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Provenance">Provenance</a>)
</p>
<div>
<p>ConfigSource contains the information that can uniquely identify where a remote
build definition came from, e.g. Git repositories, Tekton Bundles in an OCI registry,
or the hub.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>uri</code><br/>
<em>
string
</em>
</td>
<td>
<p>URI indicates the identity of the source of the build definition.
Example: <a href="https://github.com/tektoncd/catalog">https://github.com/tektoncd/catalog</a></p>
</td>
</tr>
<tr>
<td>
<code>digest</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.
Example: {“sha1”: “f99d13e554ffcb696dee719fa85b695cb5b0f428”}</p>
</td>
</tr>
<tr>
<td>
<code>entryPoint</code><br/>
<em>
string
</em>
</td>
<td>
<p>EntryPoint identifies the entry point into the build. This is often a path to a
build definition file and/or a target label within that file.
Example: “task/git-clone/0.8/git-clone.yaml”</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CustomRunReason">CustomRunReason
(<code>string</code> alias)</h3>
<div>
<p>CustomRunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the CustomRun itself.</p>
</div>
<h3 id="tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRun">CustomRun</a>)
</p>
<div>
<p>CustomRunSpec defines the desired state of CustomRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>customRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>customSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.EmbeddedCustomRunSpec">
EmbeddedCustomRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunSpecStatus">
CustomRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a customrun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunSpecStatusMessage">
CustomRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for propagating retries count to custom tasks</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the custom-task times out.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
</tbody>
</table>
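<p>The following is an illustrative CustomRun sketch only; the custom task’s <code>apiVersion</code>, <code>kind</code>, and parameter names are placeholders for whatever custom-task controller is installed in the cluster.</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
  name: example-custom-run        # placeholder name
spec:
  retries: 1
  timeout: 10m
  customRef:
    apiVersion: example.dev/v1alpha1   # placeholder custom-task API group/version
    kind: Example                      # placeholder custom-task kind
    name: my-example                   # placeholder resource name
  params:
    - name: message
      value: "hello"
</code></pre>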
<h3 id="tekton.dev/v1beta1.CustomRunSpecStatus">CustomRunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>)
</p>
<div>
<p>CustomRunSpecStatus defines the CustomRun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1beta1.CustomRunSpecStatusMessage">CustomRunSpecStatusMessage
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>)
</p>
<div>
<p>CustomRunSpecStatusMessage defines human-readable status messages for the CustomRun.</p>
</div>
<h3 id="tekton.dev/v1beta1.EmbeddedCustomRunSpec">EmbeddedCustomRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>)
</p>
<div>
<p>EmbeddedCustomRunSpec allows custom task definitions to be embedded</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>spec</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>-</code><br/>
<em>
[]byte
</em>
</td>
<td>
<p>Raw is the underlying serialization of this object.</p>
<p>TODO: Determine how to detect ContentType and ContentEncoding of ‘Raw’ data.</p>
</td>
</tr>
<tr>
<td>
<code>-</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.Object
</em>
</td>
<td>
<p>Object can hold a representation of this extension - useful for working with versioned
structs.</p>
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.EmbeddedTask">EmbeddedTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>EmbeddedTask is used to define a Task inline within a Pipeline’s PipelineTasks.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>spec</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<em>(Optional)</em>
<p>Spec is a specification of a custom task</p>
<br/>
<br/>
<table>
<tr>
<td>
<code>-</code><br/>
<em>
[]byte
</em>
</td>
<td>
<p>Raw is the underlying serialization of this object.</p>
<p>TODO: Determine how to detect ContentType and ContentEncoding of ‘Raw’ data.</p>
</td>
</tr>
<tr>
<td>
<code>-</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.Object
</em>
</td>
<td>
<p>Object can hold a representation of this extension - useful for working with versioned
structs.</p>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>TaskSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<p>
(Members of <code>TaskSpec</code> are embedded into this type.)
</p>
<em>(Optional)</em>
<p>TaskSpec is a specification of a task</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.IncludeParams">IncludeParams
</h3>
<div>
<p>IncludeParams allows passing in specific combinations of Parameters into the Matrix.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the specified combination</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params takes only <code>Parameters</code> of type <code>&ldquo;string&rdquo;</code>.
The names of the <code>params</code> must match the names of the <code>params</code> in the underlying <code>Task</code>.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.InternalTaskModifier">InternalTaskModifier
</h3>
<div>
<p>InternalTaskModifier implements TaskModifier for resources that are built-in to Tekton Pipelines.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>stepsToPrepend</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Step">
[]Step
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>stepsToAppend</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Step">
[]Step
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.Matrix">Matrix
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>Matrix is used to fan out Tasks in a Pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameters used to fan out the pipelineTask
Params takes only <code>Parameters</code> of type <code>"array"</code>
Each array element is supplied to the <code>PipelineTask</code> by substituting <code>params</code> of type <code>"string"</code> in the underlying <code>Task</code>.
The names of the <code>params</code> in the <code>Matrix</code> must match the names of the <code>params</code> in the underlying <code>Task</code> that they will be substituting.</p>
</td>
</tr>
<tr>
<td>
<code>include</code><br/>
<em>
<a href="#tekton.dev/v1beta1.IncludeParamsList">
IncludeParamsList
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix.</p>
</td>
</tr>
</tbody>
</table>
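<p>To show how <code>params</code> and <code>include</code> combine, here is an illustrative fragment of a Pipeline task using a Matrix; the task name, parameter names, and values are placeholders, and the referenced Task is assumed to declare matching string parameters.</p>
<pre><code># Fragment of a Pipeline spec (illustrative only)
tasks:
  - name: build
    taskRef:
      name: build-image              # placeholder Task
    matrix:
      params:
        - name: platform             # array param fanned out into one TaskRun per value
          value:
            - linux/amd64
            - linux/arm64
      include:
        - name: common-flags
          params:
            - name: extra-flag       # string params only
              value: "--verbose"
</code></pre>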
<h3 id="tekton.dev/v1beta1.OnErrorType">OnErrorType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Step">Step</a>)
</p>
<div>
<p>OnErrorType defines a list of supported exiting behavior of a container on error</p>
</div>
<h3 id="tekton.dev/v1beta1.Param">Param
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunInputs">TaskRunInputs</a>)
</p>
<div>
<p>Param declares a ParamValue to use for the parameter called name.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.ParamSpec">ParamSpec
</h3>
<div>
<p>ParamSpec defines arbitrary parameters needed beyond typed inputs (such as
resources). Parameter values are provided by users as inputs on a TaskRun
or PipelineRun.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name declares the name by which a parameter is referenced.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamType">
ParamType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the parameter. The possible types
are currently “string”, “array” and “object”, and “string” is the default.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the parameter that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>properties</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PropertySpec">
map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.PropertySpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Properties is the JSON Schema properties to support key-value pairs parameter.</p>
</td>
</tr>
<tr>
<td>
<code>default</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Default is the value a parameter takes if no input value is supplied. If
default is set, a Task may be executed without a supplied value for the
parameter.</p>
</td>
</tr>
<tr>
<td>
<code>enum</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Enum declares a set of allowed param input values for tasks/pipelines that can be validated.
If Enum is not set, no input validation is performed for the param.</p>
</td>
</tr>
</tbody>
</table>
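<p>A sketch of how the ParamSpec fields above might be declared on a Task or Pipeline spec; names and defaults are placeholders, and <code>enum</code> validation may require a feature flag depending on the Tekton version.</p>
<pre><code># Illustrative param declarations (placeholder names and values)
params:
  - name: image
    type: string
    description: Image reference to use
    default: "alpine"
  - name: environment
    type: string
    enum: ["dev", "staging", "prod"]   # inputs are validated against this set
  - name: flags
    type: array
    default: []
  - name: metadata
    type: object
    properties:
      owner:
        type: string
    default:
      owner: "example-team"
</code></pre>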
<h3 id="tekton.dev/v1beta1.ParamSpecs">ParamSpecs
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.ParamSpec</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineSpec">PipelineSpec</a>, <a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>ParamSpecs is a list of ParamSpec</p>
</div>
<h3 id="tekton.dev/v1beta1.ParamType">ParamType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1beta1.ParamValue">ParamValue</a>, <a href="#tekton.dev/v1beta1.PropertySpec">PropertySpec</a>)
</p>
<div>
<p>ParamType indicates the type of an input parameter;
used to distinguish between a single string and an array of strings.</p>
</div>
<h3 id="tekton.dev/v1beta1.ParamValue">ParamValue
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Param">Param</a>, <a href="#tekton.dev/v1beta1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1beta1.PipelineResult">PipelineResult</a>, <a href="#tekton.dev/v1beta1.PipelineRunResult">PipelineRunResult</a>, <a href="#tekton.dev/v1beta1.TaskResult">TaskResult</a>, <a href="#tekton.dev/v1beta1.TaskRunResult">TaskRunResult</a>)
</p>
<div>
<p>ResultValue is a type alias of ParamValue</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamType">
ParamType
</a>
</em>
</td>
<td>
<p>Represents the stored type of ParamValues.</p>
</td>
</tr>
<tr>
<td>
<code>StringVal</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>ArrayVal</code><br/>
<em>
[]string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>ObjectVal</code><br/>
<em>
map[string]string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
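<p>In manifests, a ParamValue is written as a plain <code>value</code> whose shape (string, array, or object) determines the stored type; the following illustrative fragment shows all three forms with placeholder names.</p>
<pre><code># Illustrative params on a TaskRun or PipelineRun spec
params:
  - name: image          # string value
    value: "alpine"
  - name: flags          # array value
    value:
      - "--verbose"
      - "--dry-run"
  - name: metadata       # object value
    value:
      owner: "example-team"
      tier: "dev"
</code></pre>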
<h3 id="tekton.dev/v1beta1.Params">Params
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.Param</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>, <a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>, <a href="#tekton.dev/v1beta1.IncludeParams">IncludeParams</a>, <a href="#tekton.dev/v1beta1.Matrix">Matrix</a>, <a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1beta1.ResolverRef">ResolverRef</a>, <a href="#tekton.dev/v1beta1.Step">Step</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>Params is a list of Param</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineDeclaredResource">PipelineDeclaredResource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>PipelineDeclaredResource is used by a Pipeline to declare the types of the
PipelineResources that it requires to run and the names which can be used to
refer to these PipelineResources in PipelineTaskResourceBindings.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name that will be used by the Pipeline to refer to this resource.
It does not directly correspond to the name of any PipelineResources Task
inputs or outputs, and it does not correspond to the actual names of the
PipelineResources that will be bound in the PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
string
</em>
</td>
<td>
<p>Type is the type of the PipelineResource.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional declares the resource as optional.
optional: true - the resource is considered optional
optional: false - the resource is considered required (default/equivalent of not specifying it)</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineObject">PipelineObject
</h3>
<div>
<p>PipelineObject is implemented by Pipeline</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineRef">PipelineRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineRef can be used to refer to a specific instance of a Pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referent; More info: <a href="http://kubernetes.io/docs/user-guide/identifiers#names">http://kubernetes.io/docs/user-guide/identifiers#names</a></p>
</td>
</tr>
<tr>
<td>
<code>apiVersion</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>API version of the referent</p>
</td>
</tr>
<tr>
<td>
<code>bundle</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Bundle url reference to a Tekton Bundle.</p>
<p>Deprecated: Please use ResolverRef with the bundles resolver instead.
The field remains for Go client backward compatibility, but is no longer used or allowed.</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a Pipeline in a remote location
like a git repo. This field is only supported when the alpha
feature gate is enabled.</p>
</td>
</tr>
</tbody>
</table>
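<p>Two illustrative ways a PipelineRun might reference a Pipeline: by name, or through a ResolverRef. The resolver name and its parameters are assumptions that depend on which resolvers are enabled in the cluster; the repository URL and path are placeholders.</p>
<pre><code># Referencing a Pipeline by name (fragment of a PipelineRun spec)
pipelineRef:
  name: example-pipeline

# Referencing a remote Pipeline through a resolver
# (fragment; assumes the git resolver is enabled)
pipelineRef:
  resolver: git
  params:
    - name: url
      value: https://github.com/example/repo.git   # placeholder repository
    - name: revision
      value: main
    - name: pathInRepo
      value: pipelines/example-pipeline.yaml       # placeholder path
</code></pre>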
<h3 id="tekton.dev/v1beta1.PipelineResourceBinding">PipelineResourceBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1beta1.TaskResourceBinding">TaskResourceBinding</a>)
</p>
<div>
<p>PipelineResourceBinding connects a reference to an instance of a PipelineResource
with a PipelineResource dependency that the Pipeline has declared</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the PipelineResource in the Pipeline’s declaration</p>
</td>
</tr>
<tr>
<td>
<code>resourceRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResourceRef">
PipelineResourceRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResourceRef is a reference to the instance of the actual PipelineResource
that should be used</p>
</td>
</tr>
<tr>
<td>
<code>resourceSpec</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.PipelineResourceSpec">
PipelineResourceSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResourceSpec is specification of a resource that should be created and
consumed by the task</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineResourceInterface">PipelineResourceInterface
</h3>
<div>
<p>PipelineResourceInterface interface to be implemented by different PipelineResource types</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineResourceRef">PipelineResourceRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineResourceBinding">PipelineResourceBinding</a>)
</p>
<div>
<p>PipelineResourceRef can be used to refer to a specific instance of a Resource</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referent; More info: <a href="http://kubernetes.io/docs/user-guide/identifiers#names">http://kubernetes.io/docs/user-guide/identifiers#names</a></p>
</td>
</tr>
<tr>
<td>
<code>apiVersion</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>API version of the referent</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineResult">PipelineResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>PipelineResult used to describe the results of a pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the given name</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<p>Type is the user-specified type of the result.
The possible types are ‘string’, ‘array’, and ‘object’, with ‘string’ as the default.
‘array’ and ‘object’ types are alpha features.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human-readable description of the result</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value is the expression used to retrieve the value</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineRunReason">PipelineRunReason
(<code>string</code> alias)</h3>
<div>
<p>PipelineRunReason represents a reason for the pipeline run “Succeeded” condition</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineRunResult">PipelineRunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>PipelineRunResult used to describe the results of a pipeline</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the result’s name as declared by the Pipeline</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value is the result returned from the execution of this PipelineRun</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineRunRunStatus">PipelineRunRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>PipelineRunRunStatus contains the name of the PipelineTask for this CustomRun or Run and the CustomRun or Run’s Status</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunStatus">
CustomRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status is the CustomRunStatus for the corresponding CustomRun or Run</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRun">PipelineRun</a>)
</p>
<div>
<p>PipelineRunSpec defines the desired state of PipelineRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResourceBinding">
[]PipelineResourceBinding
</a>
</em>
</td>
<td>
<p>Resources is a list of bindings specifying which actual instances of
PipelineResources to use for the resources the Pipeline has declared
it needs.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<p>Params is a list of parameter names and values.</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunSpecStatus">
PipelineRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a pipelinerun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>timeouts</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TimeoutFields">
TimeoutFields
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the Pipeline times out.
Currently three keys are accepted in the map:
pipeline, tasks and finally,
with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Timeout is the Time after which the Pipeline times out.
Defaults to never.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
<p>Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead</p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces holds a set of workspace bindings that must match names
with those declared in the pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>taskRunSpecs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskRunSpec">
[]PipelineTaskRunSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunSpecs holds a set of runtime specs</p>
</td>
</tr>
</tbody>
</table>
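<p>An illustrative PipelineRun sketch combining several of the spec fields above; the Pipeline, PVC, ServiceAccount, and pipelineTask names are placeholders, and the timeouts follow the constraint that <code>pipeline</code> covers <code>tasks</code> plus <code>finally</code>.</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipeline-run       # placeholder name
spec:
  pipelineRef:
    name: example-pipeline         # placeholder Pipeline
  params:
    - name: environment
      value: "dev"
  serviceAccountName: default
  timeouts:
    pipeline: 1h
    tasks: 50m
    finally: 10m
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: example-pvc     # placeholder PVC
  taskRunSpecs:
    - pipelineTaskName: build      # placeholder pipelineTask name
      taskServiceAccountName: builder-sa
</code></pre>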
<h3 id="tekton.dev/v1beta1.PipelineRunSpecStatus">PipelineRunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>PipelineRunSpecStatus defines the pipelinerun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineRunStatus">PipelineRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRun">PipelineRun</a>)
</p>
<div>
<p>PipelineRunStatus defines the observed state of PipelineRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>PipelineRunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunStatusFields">
PipelineRunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>PipelineRunStatusFields</code> are embedded into this type.)
</p>
<p>PipelineRunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatus">PipelineRunStatus</a>)
</p>
<div>
<p>PipelineRunStatusFields holds the fields of PipelineRunStatus’ status.
This is defined separately and inlined so that other types can readily
consume these fields via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>StartTime is the time the PipelineRun is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>CompletionTime is the time the PipelineRun completed.</p>
</td>
</tr>
<tr>
<td>
<code>taskRuns</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunTaskRunStatus">
map[string]*github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.PipelineRunTaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRuns is a map of PipelineRunTaskRunStatus with the taskRun name as the key.</p>
<p>Deprecated: use ChildReferences instead. As of v0.45.0, this field is no
longer populated and is only included for backwards compatibility with
older server versions.</p>
</td>
</tr>
<tr>
<td>
<code>runs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunRunStatus">
map[string]*github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.PipelineRunRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Runs is a map of PipelineRunRunStatus with the run name as the key</p>
<p>Deprecated: use ChildReferences instead. As of v0.45.0, this field is no
longer populated and is only included for backwards compatibility with
older server versions.</p>
</td>
</tr>
<tr>
<td>
<code>pipelineResults</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRunResult">
[]PipelineRunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PipelineResults are the list of results written out by the pipeline task’s containers</p>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<p>PipelineSpec contains the exact spec used to instantiate the run</p>
</td>
</tr>
<tr>
<td>
<code>skippedTasks</code><br/>
<em>
<a href="#tekton.dev/v1beta1.SkippedTask">
[]SkippedTask
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>list of tasks that were skipped due to when expressions evaluating to false</p>
</td>
</tr>
<tr>
<td>
<code>childReferences</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ChildStatusReference">
[]ChildStatusReference
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>list of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>finallyStartTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed.</p>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Provenance">
Provenance
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.).</p>
</td>
</tr>
<tr>
<td>
<code>spanContext</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>SpanContext contains tracing span context fields</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun’s Status</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PipelineTaskName is the name of the PipelineTask.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStatus">
TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status is the TaskRunStatus for the corresponding TaskRun</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineSpec">PipelineSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Pipeline">Pipeline</a>, <a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>, <a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineSpec defines the desired state of Pipeline.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the pipeline that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineDeclaredResource">
[]PipelineDeclaredResource
</a>
</em>
</td>
<td>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<p>Params declares a list of input parameters that must be supplied when
this Pipeline is run.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineWorkspaceDeclaration">
[]PipelineWorkspaceDeclaration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces declares a set of named workspaces that are expected to be
provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResult">
[]PipelineResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this pipeline can output once run</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTask">
[]PipelineTask
</a>
</em>
</td>
<td>
<p>Finally declares the list of Tasks that execute just before leaving the Pipeline
i.e. either after all Tasks are finished executing successfully
or after a failure which would result in ending the Pipeline</p>
</td>
</tr>
</tbody>
</table>
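<p>A hedged sketch of a Pipeline that uses the spec fields above (params, workspaces, tasks, finally, and results); all resource and parameter names are placeholders, and the result expression assumes the referenced Task declares a result named <code>status</code>.</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline           # placeholder name
spec:
  description: Builds, then always reports the outcome.
  params:
    - name: environment
      type: string
      default: "dev"
  workspaces:
    - name: shared-data
      description: Holds sources shared between tasks
  tasks:
    - name: build
      taskRef:
        name: build-task           # placeholder Task
      params:
        - name: environment
          value: $(params.environment)
      workspaces:
        - name: source             # assumed workspace name declared by the Task
          workspace: shared-data
  finally:
    - name: report
      taskRef:
        name: report-task          # placeholder Task
  results:
    - name: build-status
      value: $(tasks.build.results.status)   # assumes 'build-task' declares a 'status' result
</code></pre>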
<h3 id="tekton.dev/v1beta1.PipelineTask">PipelineTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>PipelineTask defines a task in a Pipeline, passing inputs from both
Params and from the output of previous tasks.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of this task within the context of a Pipeline. Name is
used as a coordinate with the <code>from</code> and <code>runAfter</code> fields to establish
the execution order of tasks relative to one another.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is the display name of this task within the context of a Pipeline.
This display name may be used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is the description of this task within the context of a Pipeline.
This description may be used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRef is a reference to a task definition.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.EmbeddedTask">
EmbeddedTask
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskSpec is a specification of a task.
Specifying TaskSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>when</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpressions">
WhenExpressions
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is a list of when expressions that need to be true for the task to run</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this task should be retried in case of task failure: ConditionSucceeded set to False</p>
</td>
</tr>
<tr>
<td>
<code>runAfter</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>RunAfter is the list of PipelineTask names that should be executed before
this Task executes. (Used to force a specific ordering in graph execution.)</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskResources">
PipelineTaskResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Parameters declares parameters passed to this task.</p>
</td>
</tr>
<tr>
<td>
<code>matrix</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Matrix">
Matrix
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Matrix declares parameters used to fan out this task.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspacePipelineTaskBinding">
[]WorkspacePipelineTaskBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces maps workspaces from the pipeline spec to the workspaces
declared in the Task.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which the TaskRun times out. Defaults to 1 hour.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>pipelineRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineRef">
PipelineRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PipelineRef is a reference to a pipeline definition.
Note: PipelineRef is in preview mode and not yet supported.</p>
</td>
</tr>
<tr>
<td>
<code>pipelineSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineSpec">
PipelineSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PipelineSpec is a specification of a pipeline.
Note: PipelineSpec is in preview mode and not yet supported.
Specifying PipelineSpec can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>onError</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskOnErrorType">
PipelineTaskOnErrorType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>OnError defines the exiting behavior of a PipelineRun on error
can be set to [ continue | stopAndFail ]</p>
</td>
</tr>
</tbody>
</table>
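<p>An illustrative fragment showing a single PipelineTask that combines ordering, guarding, retries, and an inline taskSpec; field values are placeholders, <code>onError</code> availability depends on the Tekton version, and parameter propagation into the embedded spec assumes a release that supports it.</p>
<pre><code># Fragment of a Pipeline's tasks list (illustrative only)
- name: deploy
  runAfter:
    - build                        # placeholder upstream task
  retries: 2
  timeout: 15m
  onError: continue                # or stopAndFail
  when:
    - input: "$(params.environment)"
      operator: in
      values: ["staging", "prod"]
  taskSpec:                        # inline EmbeddedTask definition
    steps:
      - name: deploy
        image: alpine              # placeholder image
        script: |
          echo "deploying to $(params.environment)"
</code></pre>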
<h3 id="tekton.dev/v1beta1.PipelineTaskInputResource">PipelineTaskInputResource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTaskResources">PipelineTaskResources</a>)
</p>
<div>
<p>PipelineTaskInputResource maps the name of a declared PipelineResource input
dependency in a Task to the resource in the Pipeline’s DeclaredPipelineResources
that should be used. This input may come from a previous task.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the PipelineResource as declared by the Task.</p>
</td>
</tr>
<tr>
<td>
<code>resource</code><br/>
<em>
string
</em>
</td>
<td>
<p>Resource is the name of the DeclaredPipelineResource to use.</p>
</td>
</tr>
<tr>
<td>
<code>from</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>From is the list of PipelineTask names that the resource has to come from.
(Implies an ordering in the execution graph.)</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskMetadata">PipelineTaskMetadata
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.EmbeddedRunSpec">EmbeddedRunSpec</a>, <a href="#tekton.dev/v1beta1.EmbeddedCustomRunSpec">EmbeddedCustomRunSpec</a>, <a href="#tekton.dev/v1beta1.EmbeddedTask">EmbeddedTask</a>, <a href="#tekton.dev/v1beta1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>)
</p>
<div>
<p>PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>labels</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>annotations</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskOnErrorType">PipelineTaskOnErrorType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error</p>
</div>
<h3 id="tekton.dev/v1beta1.PipelineTaskOutputResource">PipelineTaskOutputResource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTaskResources">PipelineTaskResources</a>)
</p>
<div>
<p>PipelineTaskOutputResource maps the name of a declared PipelineResource output
dependency in a Task to the resource in the Pipeline’s DeclaredPipelineResources
that should be used.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the PipelineResource as declared by the Task.</p>
</td>
</tr>
<tr>
<td>
<code>resource</code><br/>
<em>
string
</em>
</td>
<td>
<p>Resource is the name of the DeclaredPipelineResource to use.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskParam">PipelineTaskParam
</h3>
<div>
<p>PipelineTaskParam is used to provide arbitrary string parameters to a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskResources">PipelineTaskResources
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>PipelineTaskResources allows a Pipeline to declare how its DeclaredPipelineResources
should be provided to a Task as its inputs and outputs.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskInputResource">
[]PipelineTaskInputResource
</a>
</em>
</td>
<td>
<p>Inputs holds the mapping from the PipelineResources declared in
DeclaredPipelineResources to the input PipelineResources required by the Task.</p>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskOutputResource">
[]PipelineTaskOutputResource
</a>
</em>
</td>
<td>
<p>Outputs holds the mapping from the PipelineResources declared in
DeclaredPipelineResources to the output PipelineResources required by the Task.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskRun">PipelineTaskRun
</h3>
<div>
<p>PipelineTaskRun reports the results of running a step in the Task. Each
task has the potential to succeed or fail (based on the exit code)
and produces logs.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.PipelineTaskRunSpec">PipelineTaskRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>PipelineTaskRunSpec can be used to configure specific
specs for a concrete Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTaskName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>taskServiceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>taskPodTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>stepOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStepOverride">
[]TaskRunStepOverride
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>sidecarOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSidecarOverride">
[]TaskRunSidecarOverride
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>metadata</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineTaskMetadata">
PipelineTaskMetadata
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</tbody>
</table>
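<p>For illustration only, a minimal sketch of how a PipelineRun might override settings for a single PipelineTask (this assumes the PipelineRunSpec field holding these entries is named <code>taskRunSpecs</code>; all resource names and values below are hypothetical):</p>
<pre><code># Hypothetical PipelineRun overriding per-task settings
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run            # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline     # hypothetical Pipeline
  taskRunSpecs:
    - pipelineTaskName: build  # must match a PipelineTask name in the Pipeline
      taskServiceAccountName: build-sa
      computeResources:
        requests:
          cpu: "1"
          memory: 1Gi
</code></pre>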
<h3 id="tekton.dev/v1beta1.PipelineWorkspaceDeclaration">PipelineWorkspaceDeclaration
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineSpec">PipelineSpec</a>)
</p>
<div>
<p>WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun
is expected to populate with a workspace binding.</p>
<p>Deprecated: use PipelineWorkspaceDeclaration type instead</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of a workspace to be provided by a PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human readable string describing how the workspace will be
used in the Pipeline. It can be useful to include a bit of detail about which
tasks are intended to have access to the data on the workspace.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional marks a Workspace as not being required in PipelineRuns. By default
this field is false and so declared workspaces are required.</p>
</td>
</tr>
</tbody>
</table>
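<p>For illustration only, a minimal sketch of a Pipeline declaring a workspace and binding it to a task (the Pipeline, Task, and workspace names are hypothetical):</p>
<pre><code># Hypothetical Pipeline with a declared workspace
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  workspaces:
    - name: shared-data
      description: Holds data shared between the tasks in this Pipeline.
      optional: false
  tasks:
    - name: clone
      taskRef:
        name: git-clone        # hypothetical Task
      workspaces:
        - name: output         # workspace name expected by the Task
          workspace: shared-data
</code></pre>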
<h3 id="tekton.dev/v1beta1.PropertySpec">PropertySpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.ParamSpec">ParamSpec</a>, <a href="#tekton.dev/v1beta1.TaskResult">TaskResult</a>)
</p>
<div>
<p>PropertySpec defines the struct for object keys</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamType">
ParamType
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
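<p>For illustration only, a sketch of a ParamSpec of type <code>object</code> whose keys are described by PropertySpec entries (object parameters may be gated by a feature flag depending on the Tekton release; the parameter and key names are hypothetical):</p>
<pre><code># Hypothetical object parameter in a Task or Pipeline spec
params:
  - name: repo-info
    type: object
    properties:
      url:
        type: string
      revision:
        type: string
</code></pre>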
<h3 id="tekton.dev/v1beta1.Provenance">Provenance
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>, <a href="#tekton.dev/v1beta1.StepState">StepState</a>, <a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>Provenance contains metadata about resources used in the TaskRun/PipelineRun
such as the source from where a remote build definition was fetched.
This field aims to carry the minimum amount of metadata in *Run status so that
Tekton Chains can capture them in the provenance.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>configSource</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ConfigSource">
ConfigSource
</a>
</em>
</td>
<td>
<p>Deprecated: Use RefSource instead</p>
</td>
</tr>
<tr>
<td>
<code>refSource</code><br/>
<em>
<a href="#tekton.dev/v1beta1.RefSource">
RefSource
</a>
</em>
</td>
<td>
<p>RefSource identifies the source where a remote task/pipeline came from.</p>
</td>
</tr>
<tr>
<td>
<code>featureFlags</code><br/>
<em>
github.com/tektoncd/pipeline/pkg/apis/config.FeatureFlags
</em>
</td>
<td>
<p>FeatureFlags identifies the feature flags that were used during the task/pipeline run</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.Ref">Ref
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Step">Step</a>)
</p>
<div>
<p>Ref can be used to refer to a specific instance of a StepAction.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referenced step</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a StepAction in a remote location
like a git repo.</p>
</td>
</tr>
</tbody>
</table>
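<p>For illustration only, a sketch of a Step referencing a StepAction by name via <code>ref</code> (StepActions are gated by the “enable-step-actions” feature flag; the StepAction and parameter names are hypothetical):</p>
<pre><code># Hypothetical Step referencing a StepAction
steps:
  - name: greet
    ref:
      name: print-message      # hypothetical StepAction in the same namespace
    params:
      - name: message          # parameter declared by the StepAction
        value: hello
</code></pre>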
<h3 id="tekton.dev/v1beta1.RefSource">RefSource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Provenance">Provenance</a>)
</p>
<div>
<p>RefSource contains the information that can uniquely identify where a remote
build definition came from, e.g. Git repositories, Tekton Bundles in an OCI registry,
and the hub.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>uri</code><br/>
<em>
string
</em>
</td>
<td>
<p>URI indicates the identity of the source of the build definition.
Example: “<a href="https://github.com/tektoncd/catalog">https://github.com/tektoncd/catalog</a>”</p>
</td>
</tr>
<tr>
<td>
<code>digest</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.
Example: {“sha1”: “f99d13e554ffcb696dee719fa85b695cb5b0f428”}</p>
</td>
</tr>
<tr>
<td>
<code>entryPoint</code><br/>
<em>
string
</em>
</td>
<td>
<p>EntryPoint identifies the entry point into the build. This is often a path to a
build definition file and/or a target label within that file.
Example: “task/git-clone/0.8/git-clone.yaml”</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.ResolverName">ResolverName
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.ResolverRef">ResolverRef</a>)
</p>
<div>
<p>ResolverName is the name of a resolver from which a resource can be
requested.</p>
</div>
<h3 id="tekton.dev/v1beta1.ResolverRef">ResolverRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRef">PipelineRef</a>, <a href="#tekton.dev/v1beta1.Ref">Ref</a>, <a href="#tekton.dev/v1beta1.TaskRef">TaskRef</a>)
</p>
<div>
<p>ResolverRef can be used to refer to a Pipeline or Task in a remote
location like a git repo.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resolver</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResolverName">
ResolverName
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Resolver is the name of the resolver that should perform
resolution of the referenced Tekton resource, such as “git”.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params contains the parameters used to identify the
referenced Tekton resource. Example entries might include
“repo” or “path” but the set of params ultimately depends on
the chosen resolver.</p>
</td>
</tr>
</tbody>
</table>
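<p>For illustration only, a sketch of a <code>taskRef</code> that uses a remote resolver instead of a local name (the resolver name and its parameters depend on the resolvers installed in the cluster; the values shown assume a git-style resolver and are hypothetical):</p>
<pre><code># Hypothetical TaskRef resolved from a remote git repository
taskRef:
  resolver: git
  params:
    - name: url
      value: https://github.com/tektoncd/catalog.git
    - name: revision
      value: main
    - name: pathInRepo
      value: task/git-clone/0.9/git-clone.yaml
</code></pre>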
<h3 id="tekton.dev/v1beta1.ResultRef">ResultRef
</h3>
<div>
<p>ResultRef is a type that represents a reference to a task run result</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipelineTask</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>result</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>resultsIndex</code><br/>
<em>
int
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>property</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
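<p>ResultRef is normally not written directly; it is the parsed form of a result reference expression used in a PipelineTask. For illustration only, a sketch of one PipelineTask consuming a result produced by another (the task and result names are hypothetical):</p>
<pre><code># Hypothetical Pipeline tasks wired together through a result reference
tasks:
  - name: generate
    taskRef:
      name: generate-id        # hypothetical Task declaring a result named "id"
  - name: consume
    taskRef:
      name: print-message      # hypothetical Task
    params:
      - name: message
        # Parsed into a ResultRef with pipelineTask "generate" and result "id"
        value: $(tasks.generate.results.id)
</code></pre>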
<h3 id="tekton.dev/v1beta1.ResultsType">ResultsType
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineResult">PipelineResult</a>, <a href="#tekton.dev/v1beta1.TaskResult">TaskResult</a>, <a href="#tekton.dev/v1beta1.TaskRunResult">TaskRunResult</a>)
</p>
<div>
<p>ResultsType indicates the type of a result;
it is used to distinguish between a single string and an array of strings.
Note that there is a separate ResultType used to determine whether a
RunResult comes from a task result or not, which is different from
this ResultsType.</p>
</div>
<h3 id="tekton.dev/v1beta1.RunObject">RunObject
</h3>
<div>
<p>RunObject is implemented by CustomRun and Run</p>
</div>
<h3 id="tekton.dev/v1beta1.Sidecar">Sidecar
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>Sidecar has nearly the same data structure as Step but does not have the ability to timeout.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the Sidecar specified as a DNS_LABEL.
Each Sidecar in a Task must have a unique name (DNS_LABEL).
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image name to be used by the Sidecar.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Sidecar’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Sidecar’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>ports</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core">
[]Kubernetes core/v1.ContainerPort
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of ports to expose from the Sidecar. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default “0.0.0.0” address inside a container will be
accessible from the network.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the Sidecar.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the Sidecar is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the Sidecar.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Compute Resources required by this Sidecar.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Sidecar’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Sidecar.</p>
</td>
</tr>
<tr>
<td>
<code>livenessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of Sidecar liveness.
Container will be restarted if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>readinessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of Sidecar service readiness.
Container will be removed from service endpoints if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>startupProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
This cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
</td>
</tr>
<tr>
<td>
<code>lifecycle</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core">
Kubernetes core/v1.Lifecycle
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Actions that the management system should take in response to Sidecar lifecycle events.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Optional: Path at which the file to which the Sidecar’s termination message
will be written is mounted into the Sidecar’s filesystem.
Message written is intended to be brief final status, such as an assertion failure message.
Will be truncated by the node if greater than 4096 bytes. The total message length across
all containers will be limited to 12kb.
Defaults to /dev/termination-log.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core">
Kubernetes core/v1.TerminationMessagePolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Indicate how the termination message should be populated. File will use the contents of
terminationMessagePath to populate the Sidecar status message on both success and failure.
FallbackToLogsOnError will use the last chunk of Sidecar log output if the termination
message file is empty and the Sidecar exited with an error.
The log output is limited to 2048 bytes or 80 lines, whichever is smaller.
Defaults to File.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Sidecar should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
<tr>
<td>
<code>stdin</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the Sidecar will always result in EOF.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>stdinOnce</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the Sidecar is restarted. If this
flag is false, a container process that reads from stdin will never receive an EOF.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>tty</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Sidecar should allocate a TTY for itself, also requires ‘stdin’ to be true.
Default is false.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Sidecar cannot have a Command or Args.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceUsage">
[]WorkspaceUsage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>This is an alpha field. You must set the “enable-api-fields” feature flag to “alpha”
for this field to be supported.</p>
<p>Workspaces is a list of workspaces from the Task that this Sidecar wants
exclusive access to. Adding a workspace to this list means that any
other Step or Sidecar that does not also request this Workspace will
not have access to it.</p>
</td>
</tr>
<tr>
<td>
<code>restartPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerrestartpolicy-v1-core">
Kubernetes core/v1.ContainerRestartPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RestartPolicy refers to the Kubernetes RestartPolicy. It can only be set for an
initContainer and must have its policy set to “Always”. It is currently
left optional to help support Kubernetes versions prior to 1.29 when this feature
was introduced.</p>
</td>
</tr>
</tbody>
</table>
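<p>For illustration only, a sketch of a Task that runs a service container as a sidecar alongside its steps (the Task name, image, and port are hypothetical):</p>
<pre><code># Hypothetical Task with a sidecar providing a local service to the steps
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: integration-test
spec:
  sidecars:
    - name: redis
      image: redis:7           # hypothetical image
      ports:
        - containerPort: 6379
  steps:
    - name: run-tests
      image: alpine
      script: |
        #!/bin/sh
        echo "tests would reach the sidecar on localhost:6379"
</code></pre>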
<h3 id="tekton.dev/v1beta1.SidecarState">SidecarState
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>SidecarState reports the results of running a sidecar in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ContainerState</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core">
Kubernetes core/v1.ContainerState
</a>
</em>
</td>
<td>
<p>
(Members of <code>ContainerState</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>container</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>imageID</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.SkippedTask">SkippedTask
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunStatusFields">PipelineRunStatusFields</a>)
</p>
<div>
<p>SkippedTask is used to describe the Tasks that were skipped due to their When Expressions
evaluating to False. This is a struct because we are looking into including more details
about the When Expressions that caused this Task to be skipped.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the Pipeline Task name</p>
</td>
</tr>
<tr>
<td>
<code>reason</code><br/>
<em>
<a href="#tekton.dev/v1beta1.SkippingReason">
SkippingReason
</a>
</em>
</td>
<td>
<p>Reason is the cause of the PipelineTask being skipped.</p>
</td>
</tr>
<tr>
<td>
<code>whenExpressions</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpression">
[]WhenExpression
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.SkippingReason">SkippingReason
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.SkippedTask">SkippedTask</a>)
</p>
<div>
<p>SkippingReason explains why a PipelineTask was skipped.</p>
</div>
<h3 id="tekton.dev/v1beta1.Step">Step
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.InternalTaskModifier">InternalTaskModifier</a>, <a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>Step runs a subcomponent of a Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the Step specified as a DNS_LABEL.
Each Step in a Task must have a unique name.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name to run for this Step.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>ports</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core">
[]Kubernetes core/v1.ContainerPort
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of ports to expose from the Step’s container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default “0.0.0.0” address inside a container will be
accessible from the network.
Cannot be updated.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Compute Resources required by this Step.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Step.</p>
</td>
</tr>
<tr>
<td>
<code>livenessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of container liveness.
Step will be restarted if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>readinessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of container service readiness.
Step will be removed from service endpoints if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>startupProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartupProbe indicates that the Pod this Step runs in has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
This cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>lifecycle</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core">
Kubernetes core/v1.Lifecycle
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Actions that the management system should take in response to container lifecycle events.
Cannot be updated.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: This field will be removed in a future release and can’t be meaningfully used.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core">
Kubernetes core/v1.TerminationMessagePolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: This field will be removed in a future release and can’t be meaningfully used.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
<tr>
<td>
<code>stdin</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this container should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the container will always result in EOF.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>stdinOnce</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the container is restarted. If this
flag is false, a container process that reads from stdin will never receive an EOF.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>tty</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this container should allocate a TTY for itself, also requires ‘stdin’ to be true.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Timeout is the time after which the step times out. Defaults to never.
Refer to Go’s ParseDuration documentation for expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceUsage">
[]WorkspaceUsage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>This is an alpha field. You must set the “enable-api-fields” feature flag to “alpha”
for this field to be supported.</p>
<p>Workspaces is a list of workspaces from the Task that this Step wants
exclusive access to. Adding a workspace to this list means that any
other Step or Sidecar that does not also request this Workspace will
not have access to it.</p>
</td>
</tr>
<tr>
<td>
<code>onError</code><br/>
<em>
<a href="#tekton.dev/v1beta1.OnErrorType">
OnErrorType
</a>
</em>
</td>
<td>
<p>OnError defines the exiting behavior of a container on error.
It can be set to [ continue | stopAndFail ].</p>
</td>
</tr>
<tr>
<td>
<code>stdoutConfig</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepOutputConfig">
StepOutputConfig
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Stores configuration for the stdout stream of the step.</p>
</td>
</tr>
<tr>
<td>
<code>stderrConfig</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepOutputConfig">
StepOutputConfig
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Stores configuration for the stderr stream of the step.</p>
</td>
</tr>
<tr>
<td>
<code>ref</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Ref">
Ref
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Contains the reference to an existing StepAction.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params declares parameters passed to this step action.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results declares StepResults produced by the Step.</p>
<p>This field is at an ALPHA stability level and gated by the “enable-step-actions” feature flag.</p>
<p>It can be used in an inlined Step to store Results to $(step.results.resultName.path).
It cannot be used when referencing StepActions using [v1beta1.Step.Ref].
The Results declared by the StepActions will be stored here instead.</p>
</td>
</tr>
<tr>
<td>
<code>when</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WhenExpressions">
WhenExpressions
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
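<p>For illustration only, a sketch of a Step that uses <code>script</code>, tolerates failure with <code>onError</code>, and duplicates its stdout to a file via <code>stdoutConfig</code> (the file path is hypothetical, and stdout/stderr capture may be gated by a feature flag depending on the Tekton release):</p>
<pre><code># Hypothetical Step inside a Task spec
steps:
  - name: lint
    image: alpine
    onError: continue          # the Task keeps running even if this step fails
    script: |
      #!/bin/sh
      echo "linting..."
    stdoutConfig:
      path: /workspace/lint-stdout.txt   # hypothetical path for the duplicated stream
</code></pre>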
<h3 id="tekton.dev/v1beta1.StepActionObject">StepActionObject
</h3>
<div>
<p>StepActionObject is implemented by StepAction</p>
</div>
<h3 id="tekton.dev/v1beta1.StepActionSpec">StepActionSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.StepAction">StepAction</a>)
</p>
<div>
<p>StepActionSpec contains the actionable components of a step.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the stepaction that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image reference name to run for this StepAction.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a></p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>script</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Script is the contents of an executable file to execute.</p>
<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.</p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the stepAction.
Params must be supplied as inputs in Steps unless they declare a default value.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1.StepResult">
[]StepResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results are values that this StepAction can output</p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a>
The value set in StepAction will take precedence over the value from Task.</p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
</tbody>
</table>
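<p>For illustration only, a sketch of a StepAction that takes a parameter and writes a result (the apiVersion for StepAction depends on the Tekton release, StepActions are gated by the “enable-step-actions” feature flag, and the names are hypothetical):</p>
<pre><code># Hypothetical StepAction
apiVersion: tekton.dev/v1beta1   # may be v1alpha1 depending on the release
kind: StepAction
metadata:
  name: print-message
spec:
  image: alpine
  params:
    - name: message
      type: string
      default: hello
  results:
    - name: echoed
      type: string
  script: |
    #!/bin/sh
    echo "$(params.message)" | tee "$(step.results.echoed.path)"
</code></pre>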
<h3 id="tekton.dev/v1beta1.StepOutputConfig">StepOutputConfig
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Step">Step</a>)
</p>
<div>
<p>StepOutputConfig stores configuration for a step output stream.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>path</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Path on the container’s local filesystem to which the configured output stream (stdout or stderr) is duplicated.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.StepState">StepState
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>StepState reports the results of running a step in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ContainerState</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core">
Kubernetes core/v1.ContainerState
</a>
</em>
</td>
<td>
<p>
(Members of <code>ContainerState</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>container</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>imageID</code><br/>
<em>
string
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunResult">
[]TaskRunResult
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Provenance">
Provenance
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Artifact">
[]Artifact
</a>
</em>
</td>
<td>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.StepTemplate">StepTemplate
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>StepTemplate is a template for a Step</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Default name for each Step specified as a DNS_LABEL.
Each Step in a Task must have a unique name.
Cannot be updated.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>image</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Default image name to use for each Step.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images">https://kubernetes.io/docs/concepts/containers/images</a>
This field is optional to allow higher level config management to default or override
container images in workload controllers like Deployments and StatefulSets.</p>
</td>
</tr>
<tr>
<td>
<code>command</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Entrypoint array. Not executed within a shell.
The docker image’s ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Step’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>args</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Arguments to the entrypoint.
The image’s CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the Step’s environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. “$$(VAR_NAME)” will
produce the string literal “$(VAR_NAME)”. Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell</a></p>
</td>
</tr>
<tr>
<td>
<code>workingDir</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Step’s working directory.
If not specified, the container runtime’s default will be used, which
might be configured in the container image.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>ports</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core">
[]Kubernetes core/v1.ContainerPort
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of ports to expose from the Step’s container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default “0.0.0.0” address inside a container will be
accessible from the network.
Cannot be updated.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>envFrom</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core">
[]Kubernetes core/v1.EnvFromSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of sources to populate environment variables in the Step.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>env</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core">
[]Kubernetes core/v1.EnvVar
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>List of environment variables to set in the container.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Compute Resources required by this Step.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
</td>
</tr>
<tr>
<td>
<code>volumeMounts</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core">
[]Kubernetes core/v1.VolumeMount
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Volumes to mount into the Step’s filesystem.
Cannot be updated.</p>
</td>
</tr>
<tr>
<td>
<code>volumeDevices</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core">
[]Kubernetes core/v1.VolumeDevice
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>volumeDevices is the list of block devices to be used by the Step.</p>
</td>
</tr>
<tr>
<td>
<code>livenessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of container liveness.
Container will be restarted if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>readinessProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Periodic probe of container service readiness.
Container will be removed from service endpoints if the probe fails.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>startupProbe</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core">
Kubernetes core/v1.Probe
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartupProbe indicates that the Pod has successfully initialized.
If specified, no other probes are executed until this completes successfully.
If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
This can be used to provide different probe parameters at the beginning of a Pod’s lifecycle,
when it might take a long time to load data or warm a cache, than during steady-state operation.
This cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes</a></p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>lifecycle</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core">
Kubernetes core/v1.Lifecycle
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Actions that the management system should take in response to container lifecycle events.
Cannot be updated.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: This field will be removed in a future release and cannot be meaningfully used.</p>
</td>
</tr>
<tr>
<td>
<code>terminationMessagePolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core">
Kubernetes core/v1.TerminationMessagePolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: This field will be removed in a future release and cannot be meaningfully used.</p>
</td>
</tr>
<tr>
<td>
<code>imagePullPolicy</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core">
Kubernetes core/v1.PullPolicy
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Image pull policy.
One of Always, Never, IfNotPresent.
Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.
Cannot be updated.
More info: <a href="https://kubernetes.io/docs/concepts/containers/images#updating-images">https://kubernetes.io/docs/concepts/containers/images#updating-images</a></p>
</td>
</tr>
<tr>
<td>
<code>securityContext</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core">
Kubernetes core/v1.SecurityContext
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>SecurityContext defines the security options the Step should be run with.
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.
More info: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</td>
</tr>
<tr>
<td>
<code>stdin</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Step should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the Step will always result in EOF.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>stdinOnce</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the container is restarted. If this
flag is false, a container process that reads from stdin will never receive an EOF.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
<tr>
<td>
<code>tty</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether this Step should allocate a TTY for itself, also requires ‘stdin’ to be true.
Default is false.</p>
<p>Deprecated: This field will be removed in a future release.</p>
</td>
</tr>
</tbody>
</table>
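<p>For illustration only, a sketch of a Task whose <code>stepTemplate</code> provides defaults (an environment variable and a working directory) that every Step inherits unless it overrides them (the names and values are hypothetical):</p>
<pre><code># Hypothetical Task using a stepTemplate
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: env-demo
spec:
  stepTemplate:
    workingDir: /workspace
    env:
      - name: LOG_LEVEL
        value: debug
  steps:
    - name: show-env
      image: alpine
      script: |
        #!/bin/sh
        echo "LOG_LEVEL is $LOG_LEVEL"
</code></pre>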
<h3 id="tekton.dev/v1beta1.TaskBreakpoints">TaskBreakpoints
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunDebug">TaskRunDebug</a>)
</p>
<div>
<p>TaskBreakpoints defines the breakpoint config for a particular Task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>onFailure</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>If enabled, pause the TaskRun on failure of a step; the failed step will not exit.</p>
</td>
</tr>
<tr>
<td>
<code>beforeSteps</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
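<p>For illustration only, a sketch of a TaskRun enabling breakpoints (this assumes the TaskRunDebug field holding this struct is named <code>breakpoints</code>, that <code>onFailure</code> accepts the value <code>enabled</code>, and that debug support is enabled via the relevant feature flag; the Task and step names are hypothetical):</p>
<pre><code># Hypothetical TaskRun with debug breakpoints
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: debug-run
spec:
  taskRef:
    name: example-task
  debug:
    breakpoints:
      onFailure: enabled       # pause the TaskRun when a step fails
      beforeSteps:
        - build                # pause before this step starts
</code></pre>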
<h3 id="tekton.dev/v1beta1.TaskKind">TaskKind
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRef">TaskRef</a>)
</p>
<div>
<p>TaskKind defines the type of Task used by the pipeline.</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskModifier">TaskModifier
</h3>
<div>
<p>TaskModifier is an interface to be implemented by different PipelineResources</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskObject">TaskObject
</h3>
<div>
<p>TaskObject is implemented by Task and ClusterTask</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskRef">TaskRef
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>, <a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>, <a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRef can be used to refer to a specific instance of a task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name of the referent; More info: <a href="http://kubernetes.io/docs/user-guide/identifiers#names">http://kubernetes.io/docs/user-guide/identifiers#names</a></p>
</td>
</tr>
<tr>
<td>
<code>kind</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskKind">
TaskKind
</a>
</em>
</td>
<td>
<p>TaskKind indicates the Kind of the Task:
1. Namespaced Task when Kind is set to “Task”. If Kind is “”, it defaults to “Task”.
2. Cluster-Scoped Task when Kind is set to “ClusterTask”
3. Custom Task when Kind is non-empty and APIVersion is non-empty</p>
</td>
</tr>
<tr>
<td>
<code>apiVersion</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>API version of the referent
Note: A Task with non-empty APIVersion and Kind is considered a Custom Task</p>
</td>
</tr>
<tr>
<td>
<code>bundle</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Bundle url reference to a Tekton Bundle.</p>
<p>Deprecated: Please use ResolverRef with the bundles resolver instead.
The field remains for Go client backward compatibility, but is no longer used or allowed.</p>
</td>
</tr>
<tr>
<td>
<code>ResolverRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResolverRef">
ResolverRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ResolverRef allows referencing a Task in a remote location
like a git repo. This field is only supported when the alpha
feature gate is enabled.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskResource">TaskResource
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskResources">TaskResources</a>)
</p>
<div>
<p>TaskResource defines an input or output Resource declared as a requirement
by a Task. The Name field will be used to refer to these Resources within
the Task definition, and when provided as an Input, the Name will be the
path to the volume mounted containing this Resource as an input (e.g.
an input Resource named <code>workspace</code> will be mounted at <code>/workspace</code>).</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>ResourceDeclaration</code><br/>
<em>
<a href="#tekton.dev/v1alpha1.ResourceDeclaration">
ResourceDeclaration
</a>
</em>
</td>
<td>
<p>
(Members of <code>ResourceDeclaration</code> are embedded into this type.)
</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskResourceBinding">TaskResourceBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunInputs">TaskRunInputs</a>, <a href="#tekton.dev/v1beta1.TaskRunOutputs">TaskRunOutputs</a>, <a href="#tekton.dev/v1beta1.TaskRunResources">TaskRunResources</a>)
</p>
<div>
<p>TaskResourceBinding points to the PipelineResource that
will be used for the Task input or output called Name.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>PipelineResourceBinding</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PipelineResourceBinding">
PipelineResourceBinding
</a>
</em>
</td>
<td>
<p>
(Members of <code>PipelineResourceBinding</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>paths</code><br/>
<em>
[]string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Paths will probably be removed in #1284, and then PipelineResourceBinding can be used instead.
The optional Path field corresponds to a path on disk at which the Resource can be found
(used when providing the resource via mounted volume, overriding the default logic to fetch the Resource).</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskResources">TaskResources
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>TaskResources allows a Pipeline to declare how its DeclaredPipelineResources
should be provided to a Task as its inputs and outputs.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResource">
[]TaskResource
</a>
</em>
</td>
<td>
<p>Inputs holds the mapping from the PipelineResources declared in
DeclaredPipelineResources to the input PipelineResources required by the Task.</p>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResource">
[]TaskResource
</a>
</em>
</td>
<td>
<p>Outputs holds the mapping from the PipelineResources declared in
DeclaredPipelineResources to the output PipelineResources required by the Task.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskResult">TaskResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>TaskResult used to describe the results of a task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name the given name</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the result. The possible type
is currently “string”; “array” will be supported in future work.</p>
</td>
</tr>
<tr>
<td>
<code>properties</code><br/>
<em>
<a href="#tekton.dev/v1beta1.PropertySpec">
map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.PropertySpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Properties is the JSON Schema properties to support key-value pairs results.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a human-readable description of the result</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Value the expression used to retrieve the value of the result from an underlying Step.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunConditionType">TaskRunConditionType
(<code>string</code> alias)</h3>
<div>
<p>TaskRunConditionType is an enum used to store TaskRun custom
conditions, such as the one used in SPIRE results verification.</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskRunDebug">TaskRunDebug
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunDebug defines the breakpoint config for a particular TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>breakpoints</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskBreakpoints">
TaskBreakpoints
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunInputs">TaskRunInputs
</h3>
<div>
<p>TaskRunInputs holds the input values that this task was invoked with.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResourceBinding">
[]TaskResourceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Param">
[]Param
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunOutputs">TaskRunOutputs
</h3>
<div>
<p>TaskRunOutputs holds the output values that this task was invoked with.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResourceBinding">
[]TaskResourceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunReason">TaskRunReason
(<code>string</code> alias)</h3>
<div>
<p>TaskRunReason is an enum used to store all TaskRun reasons for
the Succeeded condition that are controlled by the TaskRun itself. Failure
reasons that emerge from underlying resources are not included here.</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskRunResources">TaskRunResources
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunResources allows a TaskRun to declare input and output TaskResourceBindings.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>inputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResourceBinding">
[]TaskResourceBinding
</a>
</em>
</td>
<td>
<p>Inputs holds the input resources this task was invoked with.</p>
</td>
</tr>
<tr>
<td>
<code>outputs</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResourceBinding">
[]TaskResourceBinding
</a>
</em>
</td>
<td>
<p>Outputs holds the output resources this task was invoked with.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunResult">TaskRunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.StepState">StepState</a>, <a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskRunResult describes a result produced by a TaskRun; TaskRunStepResult is a type alias of TaskRunResult.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name the given name</p>
</td>
</tr>
<tr>
<td>
<code>type</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ResultsType">
ResultsType
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Type is the user-specified type of the result. The possible type
is currently “string”; “array” will be supported in future work.</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamValue">
ParamValue
</a>
</em>
</td>
<td>
<p>Value the given value of the result</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunSidecarOverride">TaskRunSidecarOverride
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The name of the Sidecar to override.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>The resource requirements to apply to the Sidecar.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRun">TaskRun</a>)
</p>
<div>
<p>TaskRunSpec defines the desired state of TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>debug</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunDebug">
TaskRunDebug
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Params">
Params
</a>
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunResources">
TaskRunResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>serviceAccountName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
</td>
</tr>
<tr>
<td>
<code>taskRef</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRef">
TaskRef
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>no more than one of the TaskRef and TaskSpec may be specified.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Specifying TaskSpec inline can be disabled by setting the
<code>disable-inline-spec</code> feature flag.</p>
</td>
</tr>
<tr>
<td>
<code>status</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSpecStatus">
TaskRunSpecStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Used for cancelling a TaskRun (and maybe more later on)</p>
</td>
</tr>
<tr>
<td>
<code>statusMessage</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSpecStatusMessage">
TaskRunSpecStatusMessage
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Status message for cancellation.</p>
</td>
</tr>
<tr>
<td>
<code>retries</code><br/>
<em>
int
</em>
</td>
<td>
<em>(Optional)</em>
<p>Retries represents how many times this TaskRun should be retried in the event of Task failure.</p>
</td>
</tr>
<tr>
<td>
<code>timeout</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Time after which one retry attempt times out. Defaults to 1 hour.
Refer to Go’s ParseDuration documentation for the expected format: <a href="https://golang.org/pkg/time/#ParseDuration">https://golang.org/pkg/time/#ParseDuration</a></p>
</td>
</tr>
<tr>
<td>
<code>podTemplate</code><br/>
<em>
<a href="#tekton.dev/unversioned.Template">
Template
</a>
</em>
</td>
<td>
<p>PodTemplate holds pod specific configuration</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceBinding">
[]WorkspaceBinding
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.</p>
</td>
</tr>
<tr>
<td>
<code>stepOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStepOverride">
[]TaskRunStepOverride
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Overrides to apply to Steps in this TaskRun.
If a field is specified in both a Step and a StepOverride,
the value from the StepOverride will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>sidecarOverrides</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunSidecarOverride">
[]TaskRunSidecarOverride
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Overrides to apply to Sidecars in this TaskRun.
If a field is specified in both a Sidecar and a SidecarOverride,
the value from the SidecarOverride will be used.
This field is only supported when the alpha feature gate is enabled.</p>
</td>
</tr>
<tr>
<td>
<code>computeResources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>Compute resources to use for this TaskRun</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunSpecStatus">TaskRunSpecStatus
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSpecStatus defines the TaskRun spec status the user can provide</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskRunSpecStatusMessage">TaskRunSpecStatusMessage
(<code>string</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunSpecStatusMessage defines human readable status messages for the TaskRun.</p>
</div>
<h3 id="tekton.dev/v1beta1.TaskRunStatus">TaskRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRun">TaskRun</a>, <a href="#tekton.dev/v1beta1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus</a>, <a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskRunStatus defines the observed state of TaskRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>TaskRunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStatusFields">
TaskRunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>TaskRunStatusFields</code> are embedded into this type.)
</p>
<p>TaskRunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskRunStatus">TaskRunStatus</a>)
</p>
<div>
<p>TaskRunStatusFields holds the fields of TaskRun’s status. This is defined
separately and inlined so that other types can readily consume these fields
via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>podName</code><br/>
<em>
string
</em>
</td>
<td>
<p>PodName is the name of the pod responsible for executing this task’s steps.</p>
</td>
</tr>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>StartTime is the time the build is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<p>CompletionTime is the time the build completed.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepState">
[]StepState
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Steps describes the state of each build step container.</p>
</td>
</tr>
<tr>
<td>
<code>cloudEvents</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CloudEventDelivery">
[]CloudEventDelivery
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>CloudEvents describe the state of each cloud event requested via a
CloudEventResource.</p>
<p>Deprecated: Removed in v0.44.0.</p>
</td>
</tr>
<tr>
<td>
<code>retriesStatus</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunStatus">
[]TaskRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RetriesStatus contains the history of TaskRunStatus in case of a retry, in order to keep a record of failures.
Each TaskRunStatus stored in RetriesStatus omits its date, as it would be redundant.</p>
</td>
</tr>
<tr>
<td>
<code>resourcesResult</code><br/>
<em>
[]github.com/tektoncd/pipeline/pkg/result.RunResult
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results from Resources built during the TaskRun.
This is tomb-stoned along with the removal of pipelineResources.
Deprecated: this field is not populated and is preserved only for backwards compatibility.</p>
</td>
</tr>
<tr>
<td>
<code>taskResults</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskRunResult">
[]TaskRunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>TaskRunResults are the list of results written out by the task’s containers</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1beta1.SidecarState">
[]SidecarState
</a>
</em>
</td>
<td>
<p>The list has one entry per sidecar in the manifest. Each entry
represents the image ID of the corresponding sidecar.</p>
</td>
</tr>
<tr>
<td>
<code>taskSpec</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskSpec">
TaskSpec
</a>
</em>
</td>
<td>
<p>TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun.</p>
</td>
</tr>
<tr>
<td>
<code>provenance</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Provenance">
Provenance
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.).</p>
</td>
</tr>
<tr>
<td>
<code>spanContext</code><br/>
<em>
map[string]string
</em>
</td>
<td>
<p>SpanContext contains tracing span context fields</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskRunStepOverride">TaskRunStepOverride
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTaskRunSpec">PipelineTaskRunSpec</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>TaskRunStepOverride is used to override the values of a Step in the corresponding Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>The name of the Step to override.</p>
</td>
</tr>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core">
Kubernetes core/v1.ResourceRequirements
</a>
</em>
</td>
<td>
<p>The resource requirements to apply to the Step.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TaskSpec">TaskSpec
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.ClusterTask">ClusterTask</a>, <a href="#tekton.dev/v1beta1.Task">Task</a>, <a href="#tekton.dev/v1beta1.EmbeddedTask">EmbeddedTask</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>, <a href="#tekton.dev/v1beta1.TaskRunStatusFields">TaskRunStatusFields</a>)
</p>
<div>
<p>TaskSpec defines the desired state of Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>resources</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResources">
TaskResources
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Resources is a list of input and output resources used to run the task.
Resources are represented in TaskRuns as bindings to instances of
PipelineResources.</p>
<p>Deprecated: Unused, preserved only for backwards compatibility</p>
</td>
</tr>
<tr>
<td>
<code>params</code><br/>
<em>
<a href="#tekton.dev/v1beta1.ParamSpecs">
ParamSpecs
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Params is a list of input parameters required to run the task. Params
must be supplied as inputs in TaskRuns unless they declare a default
value.</p>
</td>
</tr>
<tr>
<td>
<code>displayName</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>DisplayName is a user-facing name of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is a user-facing description of the task that may be
used to populate a UI.</p>
</td>
</tr>
<tr>
<td>
<code>steps</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Step">
[]Step
</a>
</em>
</td>
<td>
<p>Steps are the steps of the build; each step is run sequentially with the
source mounted into /workspace.</p>
</td>
</tr>
<tr>
<td>
<code>volumes</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core">
[]Kubernetes core/v1.Volume
</a>
</em>
</td>
<td>
<p>Volumes is a collection of volumes that are available to mount into the
steps of the build.</p>
</td>
</tr>
<tr>
<td>
<code>stepTemplate</code><br/>
<em>
<a href="#tekton.dev/v1beta1.StepTemplate">
StepTemplate
</a>
</em>
</td>
<td>
<p>StepTemplate can be used as the basis for all step containers within the
Task, so that the steps inherit settings on the base container.</p>
</td>
</tr>
<tr>
<td>
<code>sidecars</code><br/>
<em>
<a href="#tekton.dev/v1beta1.Sidecar">
[]Sidecar
</a>
</em>
</td>
<td>
<p>Sidecars are run alongside the Task’s step containers. They begin before
the steps start and end after the steps complete.</p>
</td>
</tr>
<tr>
<td>
<code>workspaces</code><br/>
<em>
<a href="#tekton.dev/v1beta1.WorkspaceDeclaration">
[]WorkspaceDeclaration
</a>
</em>
</td>
<td>
<p>Workspaces are the volumes that this Task requires.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.TaskResult">
[]TaskResult
</a>
</em>
</td>
<td>
<p>Results are values that this Task can output</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.TimeoutFields">TimeoutFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>)
</p>
<div>
<p>TimeoutFields allows granular specification of pipeline, task, and finally timeouts</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>pipeline</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value.</p>
</td>
</tr>
<tr>
<td>
<code>tasks</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Tasks sets the maximum allowed duration of this pipeline’s tasks</p>
</td>
</tr>
<tr>
<td>
<code>finally</code><br/>
<em>
<a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration">
Kubernetes meta/v1.Duration
</a>
</em>
</td>
<td>
<p>Finally sets the maximum allowed duration of this pipeline’s finally</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.WhenExpression">WhenExpression
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.ChildStatusReference">ChildStatusReference</a>, <a href="#tekton.dev/v1beta1.PipelineRunRunStatus">PipelineRunRunStatus</a>, <a href="#tekton.dev/v1beta1.PipelineRunTaskRunStatus">PipelineRunTaskRunStatus</a>, <a href="#tekton.dev/v1beta1.SkippedTask">SkippedTask</a>)
</p>
<div>
<p>WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run
to determine whether the Task should be executed or skipped</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>input</code><br/>
<em>
string
</em>
</td>
<td>
<p>Input is the string for guard checking which can be a static input or an output from a parent Task</p>
</td>
</tr>
<tr>
<td>
<code>operator</code><br/>
<em>
k8s.io/apimachinery/pkg/selection.Operator
</em>
</td>
<td>
<p>Operator that represents an Input’s relationship to the values</p>
</td>
</tr>
<tr>
<td>
<code>values</code><br/>
<em>
[]string
</em>
</td>
<td>
<p>Values is an array of strings, which is compared against the input, for guard checking.
It must be non-empty.</p>
</td>
</tr>
<tr>
<td>
<code>cel</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>CEL is a Common Expression Language (CEL) string, which can be used to conditionally execute
the task based on the result of the expression evaluation.
More info about CEL syntax: <a href="https://github.com/google/cel-spec/blob/master/doc/langdef.md">https://github.com/google/cel-spec/blob/master/doc/langdef.md</a></p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.WhenExpressions">WhenExpressions
(<code>[]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1.WhenExpression</code> alias)</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>, <a href="#tekton.dev/v1beta1.Step">Step</a>)
</p>
<div>
<p>WhenExpressions are used to specify whether a Task should be executed or skipped.
All of them need to evaluate to True for a guarded Task to be executed.</p>
</div>
<h3 id="tekton.dev/v1beta1.WorkspaceBinding">WorkspaceBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1alpha1.RunSpec">RunSpec</a>, <a href="#tekton.dev/v1beta1.CustomRunSpec">CustomRunSpec</a>, <a href="#tekton.dev/v1beta1.PipelineRunSpec">PipelineRunSpec</a>, <a href="#tekton.dev/v1beta1.TaskRunSpec">TaskRunSpec</a>)
</p>
<div>
<p>WorkspaceBinding maps a Task’s declared workspace to a Volume.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace populated by the volume.</p>
</td>
</tr>
<tr>
<td>
<code>subPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>SubPath is optionally a directory on the volume which should be used
for this binding (i.e. the volume will be mounted at this sub directory).</p>
</td>
</tr>
<tr>
<td>
<code>volumeClaimTemplate</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core">
Kubernetes core/v1.PersistentVolumeClaim
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>VolumeClaimTemplate is a template for a claim that will be created in the same namespace.
The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun.</p>
</td>
</tr>
<tr>
<td>
<code>persistentVolumeClaim</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaimvolumesource-v1-core">
Kubernetes core/v1.PersistentVolumeClaimVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>PersistentVolumeClaimVolumeSource represents a reference to a
PersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used.</p>
</td>
</tr>
<tr>
<td>
<code>emptyDir</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#emptydirvolumesource-v1-core">
Kubernetes core/v1.EmptyDirVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>EmptyDir represents a temporary directory that shares a Task’s lifetime.
More info: <a href="https://kubernetes.io/docs/concepts/storage/volumes#emptydir">https://kubernetes.io/docs/concepts/storage/volumes#emptydir</a>
Either this OR PersistentVolumeClaim can be used.</p>
</td>
</tr>
<tr>
<td>
<code>configMap</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapvolumesource-v1-core">
Kubernetes core/v1.ConfigMapVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>ConfigMap represents a configMap that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>secret</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretvolumesource-v1-core">
Kubernetes core/v1.SecretVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Secret represents a secret that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>projected</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#projectedvolumesource-v1-core">
Kubernetes core/v1.ProjectedVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Projected represents a projected volume that should populate this workspace.</p>
</td>
</tr>
<tr>
<td>
<code>csi</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#csivolumesource-v1-core">
Kubernetes core/v1.CSIVolumeSource
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.WorkspaceDeclaration">WorkspaceDeclaration
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.TaskSpec">TaskSpec</a>)
</p>
<div>
<p>WorkspaceDeclaration is a declaration of a volume that a Task requires.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name by which you can bind the volume at runtime.</p>
</td>
</tr>
<tr>
<td>
<code>description</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Description is an optional human readable description of this volume.</p>
</td>
</tr>
<tr>
<td>
<code>mountPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>MountPath overrides the directory that the volume will be made available at.</p>
</td>
</tr>
<tr>
<td>
<code>readOnly</code><br/>
<em>
bool
</em>
</td>
<td>
<p>ReadOnly dictates whether a mounted volume is writable. By default this
field is false and so mounted volumes are writable.</p>
</td>
</tr>
<tr>
<td>
<code>optional</code><br/>
<em>
bool
</em>
</td>
<td>
<p>Optional marks a Workspace as not being required in TaskRuns. By default
this field is false and so declared workspaces are required.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.WorkspacePipelineTaskBinding">WorkspacePipelineTaskBinding
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.PipelineTask">PipelineTask</a>)
</p>
<div>
<p>WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be
mapped to a task’s declared workspace.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace as declared by the task</p>
</td>
</tr>
<tr>
<td>
<code>workspace</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>Workspace is the name of the workspace declared by the pipeline</p>
</td>
</tr>
<tr>
<td>
<code>subPath</code><br/>
<em>
string
</em>
</td>
<td>
<em>(Optional)</em>
<p>SubPath is optionally a directory on the volume which should be used
for this binding (i.e. the volume will be mounted at this sub directory).</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.WorkspaceUsage">WorkspaceUsage
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.Sidecar">Sidecar</a>, <a href="#tekton.dev/v1beta1.Step">Step</a>)
</p>
<div>
<p>WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access
to a Workspace defined in a Task.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name is the name of the workspace this Step or Sidecar wants access to.</p>
</td>
</tr>
<tr>
<td>
<code>mountPath</code><br/>
<em>
string
</em>
</td>
<td>
<p>MountPath is the path that the workspace should be mounted to inside the Step or Sidecar,
overriding any MountPath specified in the Task’s WorkspaceDeclaration.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CustomRunResult">CustomRunResult
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRunStatusFields">CustomRunStatusFields</a>)
</p>
<div>
<p>CustomRunResult used to describe the results of a task</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>name</code><br/>
<em>
string
</em>
</td>
<td>
<p>Name the given name</p>
</td>
</tr>
<tr>
<td>
<code>value</code><br/>
<em>
string
</em>
</td>
<td>
<p>Value the given value of the result</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CustomRunStatus">CustomRunStatus
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRun">CustomRun</a>, <a href="#tekton.dev/v1.PipelineRunRunStatus">PipelineRunRunStatus</a>, <a href="#tekton.dev/v1beta1.PipelineRunRunStatus">PipelineRunRunStatus</a>, <a href="#tekton.dev/v1beta1.CustomRunStatusFields">CustomRunStatusFields</a>)
</p>
<div>
<p>CustomRunStatus defines the observed state of CustomRun</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>Status</code><br/>
<em>
<a href="https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status">
knative.dev/pkg/apis/duck/v1.Status
</a>
</em>
</td>
<td>
<p>
(Members of <code>Status</code> are embedded into this type.)
</p>
</td>
</tr>
<tr>
<td>
<code>CustomRunStatusFields</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunStatusFields">
CustomRunStatusFields
</a>
</em>
</td>
<td>
<p>
(Members of <code>CustomRunStatusFields</code> are embedded into this type.)
</p>
<p>CustomRunStatusFields inlines the status fields.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="tekton.dev/v1beta1.CustomRunStatusFields">CustomRunStatusFields
</h3>
<p>
(<em>Appears on:</em><a href="#tekton.dev/v1beta1.CustomRunStatus">CustomRunStatus</a>)
</p>
<div>
<p>CustomRunStatusFields holds the fields of CustomRun’s status. This is defined
separately and inlined so that other types can readily consume these fields
via duck typing.</p>
</div>
<table>
<thead>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code>startTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>StartTime is the time the build is actually started.</p>
</td>
</tr>
<tr>
<td>
<code>completionTime</code><br/>
<em>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta">
Kubernetes meta/v1.Time
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>CompletionTime is the time the build completed.</p>
</td>
</tr>
<tr>
<td>
<code>results</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunResult">
[]CustomRunResult
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>Results reports any output result values to be consumed by later
tasks in a pipeline.</p>
</td>
</tr>
<tr>
<td>
<code>retriesStatus</code><br/>
<em>
<a href="#tekton.dev/v1beta1.CustomRunStatus">
[]CustomRunStatus
</a>
</em>
</td>
<td>
<em>(Optional)</em>
<p>RetriesStatus contains the history of CustomRunStatus, in case of a retry.</p>
</td>
</tr>
<tr>
<td>
<code>extraFields</code><br/>
<em>
k8s.io/apimachinery/pkg/runtime.RawExtension
</em>
</td>
<td>
<p>ExtraFields holds arbitrary fields provided by the custom task
controller.</p>
</td>
</tr>
</tbody>
</table>
<hr/>
<p><em>
Generated with <code>gen-crd-api-reference-docs</code>
.
</em></p> | tekton | title Pipeline API linkTitle Pipeline API weight 404 p Packages p ul li a href resolution tekton dev 2fv1alpha1 resolution tekton dev v1alpha1 a li li a href resolution tekton dev 2fv1beta1 resolution tekton dev v1beta1 a li li a href tekton dev 2fv1 tekton dev v1 a li li a href tekton dev 2fv1alpha1 tekton dev v1alpha1 a li li a href tekton dev 2fv1beta1 tekton dev v1beta1 a li ul h2 id resolution tekton dev v1alpha1 resolution tekton dev v1alpha1 h2 div div Resource Types ul ul h3 id resolution tekton dev v1alpha1 ResolutionRequest ResolutionRequest h3 div p ResolutionRequest is an object for requesting the content of a Tekton resource like a pipeline yaml p div table thead tr th Field th th Description th tr thead tbody tr td code metadata code br em a href https kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href resolution tekton dev v1alpha1 ResolutionRequestSpec ResolutionRequestSpec a em td td em Optional em p Spec holds the information for the request part of the resource request p br br table tr td code params code br em map string string em td td em Optional em p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested For example repo URL commit SHA path to file the kind of authentication to leverage etc p td tr table td tr tr td code status code br em a href resolution tekton dev v1alpha1 ResolutionRequestStatus ResolutionRequestStatus a em td td em Optional em p Status communicates the state of the request and ultimately the content of the resolved resource p td tr tbody table h3 id resolution tekton dev v1alpha1 ResolutionRequestSpec ResolutionRequestSpec h3 p em Appears on em a href resolution tekton dev v1alpha1 ResolutionRequest ResolutionRequest a p div p ResolutionRequestSpec are all the fields in the spec of the ResolutionRequest CRD p div table thead tr th Field th th Description th tr thead tbody tr td code params code br em map string string em td td em Optional em p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested For example repo URL commit SHA path to file the kind of authentication to leverage etc p td tr tbody table h3 id resolution tekton dev v1alpha1 ResolutionRequestStatus ResolutionRequestStatus h3 p em Appears on em a href resolution tekton dev v1alpha1 ResolutionRequest ResolutionRequest a p div p ResolutionRequestStatus are all the fields in a ResolutionRequest rsquo s status subresource p div table thead tr th Field th th Description th tr thead tbody tr td code Status code br em a href https pkg go dev knative dev pkg apis duck v1 Status knative dev pkg apis duck v1 Status a em td td p Members of code Status code are embedded into this type p td tr tr td code ResolutionRequestStatusFields code br em a href resolution tekton dev v1alpha1 ResolutionRequestStatusFields ResolutionRequestStatusFields a em td td p Members of code ResolutionRequestStatusFields code are embedded into this type p td tr tbody table h3 id resolution tekton dev v1alpha1 ResolutionRequestStatusFields ResolutionRequestStatusFields h3 p em Appears on em a href resolution tekton dev v1alpha1 ResolutionRequestStatus ResolutionRequestStatus a p div p ResolutionRequestStatusFields are the ResolutionRequest 
specific fields for the status subresource p div table thead tr th Field th th Description th tr thead tbody tr td code data code br em string em td td p Data is a string representation of the resolved content of the requested resource in lined into the ResolutionRequest object p td tr tr td code refSource code br em a href tekton dev v1 RefSource RefSource a em td td p RefSource is the source reference of the remote data that records where the remote file came from including the url digest and the entrypoint p td tr tbody table hr h2 id resolution tekton dev v1beta1 resolution tekton dev v1beta1 h2 div div Resource Types ul ul h3 id resolution tekton dev v1beta1 ResolutionRequest ResolutionRequest h3 div p ResolutionRequest is an object for requesting the content of a Tekton resource like a pipeline yaml p div table thead tr th Field th th Description th tr thead tbody tr td code metadata code br em a href https kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href resolution tekton dev v1beta1 ResolutionRequestSpec ResolutionRequestSpec a em td td em Optional em p Spec holds the information for the request part of the resource request p br br table tr td code params code br em a href tekton dev v1 Param Param a em td td em Optional em p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested For example repo URL commit SHA path to file the kind of authentication to leverage etc p td tr tr td code url code br em string em td td em Optional em p URL is the runtime url passed to the resolver to help it figure out how to resolver the resource being requested This is currently at an ALPHA stability level and subject to alpha API compatibility policies p td tr table td tr tr td code status code br em a href resolution tekton dev v1beta1 ResolutionRequestStatus ResolutionRequestStatus a em td td em Optional em p Status communicates the state of the request and ultimately the content of the resolved resource p td tr tbody table h3 id resolution tekton dev v1beta1 ResolutionRequestSpec ResolutionRequestSpec h3 p em Appears on em a href resolution tekton dev v1beta1 ResolutionRequest ResolutionRequest a p div p ResolutionRequestSpec are all the fields in the spec of the ResolutionRequest CRD p div table thead tr th Field th th Description th tr thead tbody tr td code params code br em a href tekton dev v1 Param Param a em td td em Optional em p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested For example repo URL commit SHA path to file the kind of authentication to leverage etc p td tr tr td code url code br em string em td td em Optional em p URL is the runtime url passed to the resolver to help it figure out how to resolver the resource being requested This is currently at an ALPHA stability level and subject to alpha API compatibility policies p td tr tbody table h3 id resolution tekton dev v1beta1 ResolutionRequestStatus ResolutionRequestStatus h3 p em Appears on em a href resolution tekton dev v1beta1 ResolutionRequest ResolutionRequest a p div p ResolutionRequestStatus are all the fields in a ResolutionRequest rsquo s status subresource p div table thead tr th Field th th Description th tr thead tbody tr td code Status code br em a href 
https pkg go dev knative dev pkg apis duck v1 Status knative dev pkg apis duck v1 Status a em td td p Members of code Status code are embedded into this type p td tr tr td code ResolutionRequestStatusFields code br em a href resolution tekton dev v1beta1 ResolutionRequestStatusFields ResolutionRequestStatusFields a em td td p Members of code ResolutionRequestStatusFields code are embedded into this type p td tr tbody table h3 id resolution tekton dev v1beta1 ResolutionRequestStatusFields ResolutionRequestStatusFields h3 p em Appears on em a href resolution tekton dev v1beta1 ResolutionRequestStatus ResolutionRequestStatus a p div p ResolutionRequestStatusFields are the ResolutionRequest specific fields for the status subresource p div table thead tr th Field th th Description th tr thead tbody tr td code data code br em string em td td p Data is a string representation of the resolved content of the requested resource in lined into the ResolutionRequest object p td tr tr td code source code br em a href tekton dev v1 RefSource RefSource a em td td p Deprecated Use RefSource instead p td tr tr td code refSource code br em a href tekton dev v1 RefSource RefSource a em td td p RefSource is the source reference of the remote data that records the url digest and the entrypoint p td tr tbody table hr h2 id tekton dev v1 tekton dev v1 h2 div p Package v1 contains API Schema definitions for the pipeline v1 API group p div Resource Types ul li a href tekton dev v1 Pipeline Pipeline a li li a href tekton dev v1 PipelineRun PipelineRun a li li a href tekton dev v1 Task Task a li li a href tekton dev v1 TaskRun TaskRun a li ul h3 id tekton dev v1 Pipeline Pipeline h3 div p Pipeline describes a list of Tasks to execute It expresses how outputs of tasks feed into inputs of subsequent tasks p div table thead tr th Field th th Description th tr thead tbody tr td code apiVersion code br string td td code tekton dev v1 code td tr tr td code kind code br string td td code Pipeline code td tr tr td code metadata code br em a href https kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href tekton dev v1 PipelineSpec PipelineSpec a em td td em Optional em p Spec holds the desired state of the Pipeline from the client p br br table tr td code displayName code br em string em td td em Optional em p DisplayName is a user facing name of the pipeline that may be used to populate a UI p td tr tr td code description code br em string em td td em Optional em p Description is a user facing description of the pipeline that may be used to populate a UI p td tr tr td code tasks code br em a href tekton dev v1 PipelineTask PipelineTask a em td td p Tasks declares the graph of Tasks that execute when this Pipeline is run p td tr tr td code params code br em a href tekton dev v1 ParamSpecs ParamSpecs a em td td p Params declares a list of input parameters that must be supplied when this Pipeline is run p td tr tr td code workspaces code br em a href tekton dev v1 PipelineWorkspaceDeclaration PipelineWorkspaceDeclaration a em td td em Optional em p Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun p td tr tr td code results code br em a href tekton dev v1 PipelineResult PipelineResult a em td td em Optional em p Results are values that this pipeline can output once run p td 
tr tr td code finally code br em a href tekton dev v1 PipelineTask PipelineTask a em td td p Finally declares the list of Tasks that execute just before leaving the Pipeline i e either after all Tasks are finished executing successfully or after a failure which would result in ending the Pipeline p td tr table td tr tbody table h3 id tekton dev v1 PipelineRun PipelineRun h3 div p PipelineRun represents a single execution of a Pipeline PipelineRuns are how the graph of Tasks declared in a Pipeline are executed they specify inputs to Pipelines such as parameter values and capture operational aspects of the Tasks execution such as service account and tolerations Creating a PipelineRun creates TaskRuns for Tasks in the referenced Pipeline p div table thead tr th Field th th Description th tr thead tbody tr td code apiVersion code br string td td code tekton dev v1 code td tr tr td code kind code br string td td code PipelineRun code td tr tr td code metadata code br em a href https kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href tekton dev v1 PipelineRunSpec PipelineRunSpec a em td td em Optional em br br table tr td code pipelineRef code br em a href tekton dev v1 PipelineRef PipelineRef a em td td em Optional em td tr tr td code pipelineSpec code br em a href tekton dev v1 PipelineSpec PipelineSpec a em td td em Optional em p Specifying PipelineSpec can be disabled by setting code disable inline spec code feature flag p td tr tr td code params code br em a href tekton dev v1 Params Params a em td td p Params is a list of parameter names and values p td tr tr td code status code br em a href tekton dev v1 PipelineRunSpecStatus PipelineRunSpecStatus a em td td em Optional em p Used for cancelling a pipelinerun and maybe more later on p td tr tr td code timeouts code br em a href tekton dev v1 TimeoutFields TimeoutFields a em td td em Optional em p Time after which the Pipeline times out Currently three keys are accepted in the map pipeline tasks and finally with Timeouts pipeline gt Timeouts tasks Timeouts finally p td tr tr td code taskRunTemplate code br em a href tekton dev v1 PipelineTaskRunTemplate PipelineTaskRunTemplate a em td td em Optional em p TaskRunTemplate represent template of taskrun p td tr tr td code workspaces code br em a href tekton dev v1 WorkspaceBinding WorkspaceBinding a em td td em Optional em p Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline p td tr tr td code taskRunSpecs code br em a href tekton dev v1 PipelineTaskRunSpec PipelineTaskRunSpec a em td td em Optional em p TaskRunSpecs holds a set of runtime specs p td tr table td tr tr td code status code br em a href tekton dev v1 PipelineRunStatus PipelineRunStatus a em td td em Optional em td tr tbody table h3 id tekton dev v1 Task Task h3 div p Task represents a collection of sequential steps that are run as part of a Pipeline using a set of inputs and producing a set of outputs Tasks execute when TaskRuns are created that provide the input parameters and resources and output resources the Task requires p div table thead tr th Field th th Description th tr thead tbody tr td code apiVersion code br string td td code tekton dev v1 code td tr tr td code kind code br string td td code Task code td tr tr td code metadata code br em a href https 
kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href tekton dev v1 TaskSpec TaskSpec a em td td em Optional em p Spec holds the desired state of the Task from the client p br br table tr td code params code br em a href tekton dev v1 ParamSpecs ParamSpecs a em td td em Optional em p Params is a list of input parameters required to run the task Params must be supplied as inputs in TaskRuns unless they declare a default value p td tr tr td code displayName code br em string em td td em Optional em p DisplayName is a user facing name of the task that may be used to populate a UI p td tr tr td code description code br em string em td td em Optional em p Description is a user facing description of the task that may be used to populate a UI p td tr tr td code steps code br em a href tekton dev v1 Step Step a em td td p Steps are the steps of the build each step is run sequentially with the source mounted into workspace p td tr tr td code volumes code br em a href https kubernetes io docs reference generated kubernetes api v1 24 volume v1 core Kubernetes core v1 Volume a em td td p Volumes is a collection of volumes that are available to mount into the steps of the build p td tr tr td code stepTemplate code br em a href tekton dev v1 StepTemplate StepTemplate a em td td p StepTemplate can be used as the basis for all step containers within the Task so that the steps inherit settings on the base container p td tr tr td code sidecars code br em a href tekton dev v1 Sidecar Sidecar a em td td p Sidecars are run alongside the Task rsquo s step containers They begin before the steps start and end after the steps complete p td tr tr td code workspaces code br em a href tekton dev v1 WorkspaceDeclaration WorkspaceDeclaration a em td td p Workspaces are the volumes that this Task requires p td tr tr td code results code br em a href tekton dev v1 TaskResult TaskResult a em td td p Results are values that this Task can output p td tr table td tr tbody table h3 id tekton dev v1 TaskRun TaskRun h3 div p TaskRun represents a single execution of a Task TaskRuns are how the steps specified in a Task are executed they specify the parameters and resources used to run the steps in a Task p div table thead tr th Field th th Description th tr thead tbody tr td code apiVersion code br string td td code tekton dev v1 code td tr tr td code kind code br string td td code TaskRun code td tr tr td code metadata code br em a href https kubernetes io docs reference generated kubernetes api v1 24 objectmeta v1 meta Kubernetes meta v1 ObjectMeta a em td td em Optional em Refer to the Kubernetes API documentation for the fields of the code metadata code field td tr tr td code spec code br em a href tekton dev v1 TaskRunSpec TaskRunSpec a em td td em Optional em br br table tr td code debug code br em a href tekton dev v1 TaskRunDebug TaskRunDebug a em td td em Optional em td tr tr td code params code br em a href tekton dev v1 Params Params a em td td em Optional em td tr tr td code serviceAccountName code br em string em td td em Optional em td tr tr td code taskRef code br em a href tekton dev v1 TaskRef TaskRef a em td td em Optional em p no more than one of the TaskRef and TaskSpec may be specified p td tr tr td code taskSpec code br em a href tekton dev v1 TaskSpec TaskSpec a em td td em Optional em p Specifying 
PipelineSpec can be disabled by setting code disable inline spec code feature flag p td tr tr td code status code br em a href tekton dev v1 TaskRunSpecStatus TaskRunSpecStatus a em td td em Optional em p Used for cancelling a TaskRun and maybe more later on p td tr tr td code statusMessage code br em a href tekton dev v1 TaskRunSpecStatusMessage TaskRunSpecStatusMessage a em td td em Optional em p Status message for cancellation p td tr tr td code retries code br em int em td td em Optional em p Retries represents how many times this TaskRun should be retried in the event of task failure p td tr tr td code timeout code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td em Optional em p Time after which one retry attempt times out Defaults to 1 hour Refer Go rsquo s ParseDuration documentation for expected format a href https golang org pkg time ParseDuration https golang org pkg time ParseDuration a p td tr tr td code podTemplate code br em a href tekton dev unversioned Template Template a em td td p PodTemplate holds pod specific configuration p td tr tr td code workspaces code br em a href tekton dev v1 WorkspaceBinding WorkspaceBinding a em td td em Optional em p Workspaces is a list of WorkspaceBindings from volumes to workspaces p td tr tr td code stepSpecs code br em a href tekton dev v1 TaskRunStepSpec TaskRunStepSpec a em td td em Optional em p Specs to apply to Steps in this TaskRun If a field is specified in both a Step and a StepSpec the value from the StepSpec will be used This field is only supported when the alpha feature gate is enabled p td tr tr td code sidecarSpecs code br em a href tekton dev v1 TaskRunSidecarSpec TaskRunSidecarSpec a em td td em Optional em p Specs to apply to Sidecars in this TaskRun If a field is specified in both a Sidecar and a SidecarSpec the value from the SidecarSpec will be used This field is only supported when the alpha feature gate is enabled p td tr tr td code computeResources code br em a href https kubernetes io docs reference generated kubernetes api v1 24 resourcerequirements v1 core Kubernetes core v1 ResourceRequirements a em td td p Compute resources to use for this TaskRun p td tr table td tr tr td code status code br em a href tekton dev v1 TaskRunStatus TaskRunStatus a em td td em Optional em td tr tbody table h3 id tekton dev v1 Algorithm Algorithm code string code alias h3 div p Algorithm Standard cryptographic hash algorithm p div h3 id tekton dev v1 Artifact Artifact h3 p em Appears on em a href tekton dev v1 Artifacts Artifacts a a href tekton dev v1 StepState StepState a p div p TaskRunStepArtifact represents an artifact produced or used by a step within a task run It directly uses the Artifact type for its structure p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p The artifact rsquo s identifying category name p td tr tr td code values code br em a href tekton dev v1 ArtifactValue ArtifactValue a em td td p A collection of values related to the artifact p td tr tr td code buildOutput code br em bool em td td p Indicate if the artifact is a build output or a by product p td tr tbody table h3 id tekton dev v1 ArtifactValue ArtifactValue h3 p em Appears on em a href tekton dev v1 Artifact Artifact a p div p ArtifactValue represents a specific value or data element within an Artifact p div table thead tr th Field th th Description th tr thead tbody tr td code digest code br em map github com tektoncd 
### Algorithm (`string` alias)

Algorithm: standard cryptographic hash algorithm.

### Artifact

*Appears on:* `Artifacts`, `StepState`

TaskRunStepArtifact represents an artifact produced or used by a step within a task run. It directly uses the Artifact type for its structure.

| Field | Description |
| --- | --- |
| `name` (string) | The artifact's identifying category name. |
| `values` ([]ArtifactValue) | A collection of values related to the artifact. |
| `buildOutput` (bool) | Indicates if the artifact is a build output or a by-product. |

### ArtifactValue

*Appears on:* `Artifact`

ArtifactValue represents a specific value or data element within an Artifact.

| Field | Description |
| --- | --- |
| `digest` (map[Algorithm]string) | Algorithm-specific digests for verifying the content, e.g. SHA-256. |
| `uri` (string) | |

### Artifacts

*Appears on:* `TaskRunStatusFields`

Artifacts represents the collection of input and output artifacts associated with a task run or a similar process. Artifacts in this context are units of data or resources that the process either consumes as input or produces as output.

| Field | Description |
| --- | --- |
| `inputs` ([]Artifact) | |
| `outputs` ([]Artifact) | |

### ChildStatusReference

*Appears on:* `PipelineRunStatusFields`

ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.

| Field | Description |
| --- | --- |
| `name` (string) | Name is the name of the TaskRun or Run this is referencing. |
| `displayName` (string) | DisplayName is a user-facing name of the pipelineTask that may be used to populate a UI. |
| `pipelineTaskName` (string) | PipelineTaskName is the name of the PipelineTask this is referencing. |
| `whenExpressions` ([]WhenExpression) | *Optional.* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### Combination (`map[string]string` alias)

Combination is a map holding a single combination from a Matrix, with the param name as key and the param value as value.

### Combinations (`[]Combination` alias)

Combinations is a Combination list.

### EmbeddedTask

*Appears on:* `PipelineTask`

EmbeddedTask is used to define a Task inline within a Pipeline's PipelineTasks.

| Field | Description |
| --- | --- |
| `spec` (k8s.io/apimachinery/pkg/runtime.RawExtension) | *Optional.* Spec is a specification of a custom task. `Raw` ([]byte) is the underlying serialization of this object; `Object` (runtime.Object) can hold a representation of this extension, useful for working with versioned structs. |
| `metadata` (PipelineTaskMetadata) | *Optional.* |
| `TaskSpec` (TaskSpec) | *Optional.* Members of `TaskSpec` are embedded into this type. TaskSpec is a specification of a task. |
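As a rough illustration of `EmbeddedTask`, a Pipeline can carry a task definition inline rather than referencing a separate Task object; the names below are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: inline-example                 # hypothetical name
spec:
  tasks:
    - name: say-hello
      taskSpec:                        # EmbeddedTask: metadata plus TaskSpec fields
        metadata:
          labels:
            app: demo
        steps:
          - name: echo
            image: alpine:3.19         # hypothetical image
            script: |
              echo "hello from an inline task"
```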
### IncludeParams

IncludeParams allows passing in a specific combination of Parameters into the Matrix.

| Field | Description |
| --- | --- |
| `name` (string) | Name of the specified combination. |
| `params` (Params) | Params takes only `Parameters` of type `"string"`. The names of the `params` must match the names of the `params` in the underlying `Task`. |

### Matrix

*Appears on:* `PipelineTask`

Matrix is used to fan out Tasks in a Pipeline.

| Field | Description |
| --- | --- |
| `params` (Params) | Params is a list of parameters used to fan out the pipelineTask. Params takes only `Parameters` of type `"array"`. Each array element is supplied to the `PipelineTask` by substituting `params` of type `"string"` in the underlying `Task`. The names of the `params` in the `Matrix` must match the names of the `params` in the underlying `Task` that they will be substituting. |
| `include` (IncludeParamsList) | *Optional.* Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix. |

### OnErrorType (`string` alias)

*Appears on:* `Step`

OnErrorType defines the supported exiting behaviors of a container on error.

| Value | Description |
| --- | --- |
| `"continue"` | Continue indicates to continue executing the rest of the steps irrespective of the container exit code. |
| `"stopAndFail"` | StopAndFail indicates to exit the taskRun if the container exits with a non-zero exit code. |

### Param

*Appears on:* `ResolutionRequestSpec` (resolution.tekton.dev/v1beta1)

Param declares a ParamValue to use for the parameter called `name`.

| Field | Description |
| --- | --- |
| `name` (string) | |
| `value` (ParamValue) | |

### ParamSpec

ParamSpec defines arbitrary parameters needed beyond typed inputs (such as resources). Parameter values are provided by users as inputs on a TaskRun or PipelineRun.

| Field | Description |
| --- | --- |
| `name` (string) | Name declares the name by which a parameter is referenced. |
| `type` (ParamType) | *Optional.* Type is the user-specified type of the parameter. The possible types are currently "string", "array" and "object"; "string" is the default. |
| `description` (string) | *Optional.* Description is a user-facing description of the parameter that may be used to populate a UI. |
| `properties` (map[string]PropertySpec) | *Optional.* Properties is the JSON Schema properties to support key-value pair parameters. |
| `default` (ParamValue) | *Optional.* Default is the value a parameter takes if no input value is supplied. If default is set, a Task may be executed without a supplied value for the parameter. |
| `enum` ([]string) | *Optional.* Enum declares a set of allowed param input values for tasks/pipelines that can be validated. If Enum is not set, no input validation is performed for the param. |
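To make the `Matrix` and `IncludeParams` shapes above concrete, a fan-out over a string param might be declared like this sketch within a Pipeline's `tasks` (the referenced Task and values are hypothetical):

```yaml
  tasks:
    - name: run-tests
      taskRef:
        name: run-test-suite           # hypothetical Task
      matrix:
        params:
          - name: platform             # array param, fanned out into string params
            value:
              - linux
              - darwin
        include:
          - name: windows-experimental
            params:
              - name: platform
                value: windows
```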
### ParamSpecs (`[]ParamSpec` alias)

*Appears on:* `PipelineSpec`, `TaskSpec`, `StepActionSpec` (v1alpha1), `StepActionSpec` (v1beta1)

ParamSpecs is a list of ParamSpec.

### ParamType (`string` alias)

*Appears on:* `ParamSpec`, `ParamValue`, `PropertySpec`

ParamType indicates the type of an input parameter; it is used to distinguish between a single string and an array of strings.

| Value | Description |
| --- | --- |
| `"array"` | |
| `"object"` | |
| `"string"` | |

### ParamValue

*Appears on:* `Param`, `ParamSpec`, `PipelineResult`, `PipelineRunResult`, `TaskResult`, `TaskRunResult`

ResultValue is a type alias of ParamValue.

| Field | Description |
| --- | --- |
| `Type` (ParamType) | Represents the stored type of ParamValues. |
| `StringVal` (string) | |
| `ArrayVal` ([]string) | |
| `ObjectVal` (map[string]string) | |

### Params (`[]Param` alias)

*Appears on:* `IncludeParams`, `Matrix`, `PipelineRunSpec`, `PipelineTask`, `ResolverRef`, `Step`, `TaskRunInputs`, `TaskRunSpec`

Params is a list of Param.

### PipelineRef

*Appears on:* `PipelineRunSpec`, `PipelineTask`

PipelineRef can be used to refer to a specific instance of a Pipeline.

| Field | Description |
| --- | --- |
| `name` (string) | Name of the referent. More info: <http://kubernetes.io/docs/user-guide/identifiers#names> |
| `apiVersion` (string) | *Optional.* API version of the referent. |
| `ResolverRef` (ResolverRef) | *Optional.* ResolverRef allows referencing a Pipeline in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |
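A PipelineRun using `PipelineRef` with the embedded `ResolverRef` fields (here the `git` resolver) might look like the following sketch; the repository and path are hypothetical, and the exact resolver params depend on the resolver in use:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: remote-pipeline-run            # hypothetical name
spec:
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: https://github.com/example/ci-pipelines.git   # hypothetical repo
      - name: revision
        value: main
      - name: pathInRepo
        value: pipelines/build.yaml                           # hypothetical path
```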
### PipelineResult

*Appears on:* `PipelineSpec`

PipelineResult is used to describe the results of a pipeline.

| Field | Description |
| --- | --- |
| `name` (string) | Name is the given name. |
| `type` (ResultsType) | Type is the user-specified type of the result. The possible types are 'string', 'array', and 'object', with 'string' as the default. 'array' and 'object' types are alpha features. |
| `description` (string) | *Optional.* Description is a human-readable description of the result. |
| `value` (ParamValue) | Value is the expression used to retrieve the value. |
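As a sketch, a pipeline-level result typically forwards a task result using a result expression inside the Pipeline's `spec` (the task and result names below are hypothetical):

```yaml
spec:
  results:
    - name: image-digest
      description: Digest produced by the build task
      value: $(tasks.build.results.digest)   # hypothetical task/result names
```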
### PipelineRunReason (`string` alias)

PipelineRunReason represents a reason for the pipeline run "Succeeded" condition.

| Value | Description |
| --- | --- |
| `"CELEvaluationFailed"` | The pipeline fails the CEL evaluation. |
| `"Cancelled"` | The PipelineRun was cancelled by the user. This reason may be found with a `corev1.ConditionFalse` status if the cancellation was processed successfully, or with a `corev1.ConditionUnknown` status if the cancellation is being processed or failed. |
| `"CancelledRunningFinally"` | The pipeline has been gracefully cancelled; no new Tasks will be scheduled by the controller, but final tasks are now running. |
| `"Completed"` | The PipelineRun completed successfully with one or more skipped Tasks. |
| `"PipelineRunCouldntCancel"` | A PipelineRun was cancelled, but attempting to update all of the running TaskRuns as cancelled failed. |
| `"CouldntGetPipeline"` | The associated Pipeline couldn't be retrieved. |
| `"CouldntGetPipelineResult"` | The pipeline failed to retrieve the referenced result, for example due to failed TaskRuns or Runs that were supposed to produce the results. |
| `"CouldntGetTask"` | The associated Pipeline's Tasks couldn't all be retrieved. |
| `"PipelineRunCouldntTimeOut"` | A PipelineRun was timed out, but attempting to update all of the running TaskRuns as timed out failed. |
| `"CreateRunFailed"` | The pipeline failed to create the TaskRun or other Run resources. |
| `"Failed"` | The PipelineRun completed with a failure. |
| `"PipelineValidationFailed"` | The PipelineRun failed runtime validation. |
| `"InvalidPipelineResourceBindings"` | The PipelineResources bound in the PipelineRun didn't match those declared in the Pipeline. |
| `"PipelineInvalidGraph"` | The associated Pipeline is an invalid graph (wrong order, cycle, etc.). |
| `"InvalidMatrixParameterTypes"` | A matrix contains invalid parameter types. |
| `"InvalidParamValue"` | A PipelineRun Param input value is not allowed. |
| `"InvalidPipelineResultReference"` | A pipeline result was declared by the pipeline but not initialized in the pipelineTask. |
| `"InvalidTaskResultReference"` | A task result was declared but was not initialized by that task. |
| `"InvalidTaskRunSpecs"` | `PipelineRun.Spec.TaskRunSpecs[].PipelineTaskName` refers to a task name that does not exist in the pipelineSpec. |
| `"InvalidWorkspaceBindings"` | A Pipeline expects a workspace but the PipelineRun has provided an invalid binding. |
| `"ObjectParameterMissKeys"` | The object param value provided in the PipelineRun spec is missing keys required by the object param declared in the Pipeline spec. |
| `"ParamArrayIndexingInvalid"` | A use of param array indexing is out of bounds. |
| `"ParameterMissing"` | The PipelineRun didn't provide all the required parameters. |
| `"ParameterTypeMismatch"` | Parameter(s) declared in the PipelineRun do not have the same declared type as the parameter(s) declared in the Pipeline that they are supposed to override. |
| `"PipelineRunPending"` | The PipelineRun is in the pending state. |
| `"RequiredWorkspaceMarkedOptional"` | An optional workspace has been passed to a Task that is expecting a non-optional workspace. |
| `"ResolvingPipelineRef"` | The PipelineRun is waiting for its pipelineRef to be asynchronously resolved. |
| `"ResourceVerificationFailed"` | The pipeline failed trusted resource verification: the content has changed, the signature is invalid, or the public key is invalid. |
| `"Running"` | The PipelineRun is running. |
| `"Started"` | The PipelineRun has just started. |
| `"StoppedRunningFinally"` | The pipeline has been gracefully stopped; no new Tasks will be scheduled by the controller, but final tasks are now running. |
| `"PipelineRunStopping"` | No new Tasks will be scheduled by the controller, and the pipeline will stop once all running tasks complete their work. |
| `"Succeeded"` | The PipelineRun completed successfully. |
| `"PipelineRunTimeout"` | The PipelineRun has timed out. |

### PipelineRunResult

*Appears on:* `PipelineRunStatusFields`

PipelineRunResult is used to describe the results of a pipeline.

| Field | Description |
| --- | --- |
| `name` (string) | Name is the result's name as declared by the Pipeline. |
| `value` (ParamValue) | Value is the result returned from the execution of this PipelineRun. |

### PipelineRunRunStatus

PipelineRunRunStatus contains the name of the PipelineTask for this Run and the Run's Status.

| Field | Description |
| --- | --- |
| `pipelineTaskName` (string) | PipelineTaskName is the name of the PipelineTask. |
| `status` (CustomRunStatus, v1beta1) | *Optional.* Status is the RunStatus for the corresponding Run. |
| `whenExpressions` ([]WhenExpression) | *Optional.* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineRunSpec

*Appears on:* `PipelineRun`

PipelineRunSpec defines the desired state of PipelineRun.

| Field | Description |
| --- | --- |
| `pipelineRef` (PipelineRef) | *Optional.* |
| `pipelineSpec` (PipelineSpec) | *Optional.* Specifying an inline PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `params` (Params) | Params is a list of parameter names and values. |
| `status` (PipelineRunSpecStatus) | *Optional.* Used for cancelling a PipelineRun (and maybe more later on). |
| `timeouts` (TimeoutFields) | *Optional.* Time after which the Pipeline times out. Currently three keys are accepted in the map: `pipeline`, `tasks` and `finally`, with `Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally`. |
| `taskRunTemplate` (PipelineTaskRunTemplate) | *Optional.* TaskRunTemplate represents the template applied to the TaskRuns of this PipelineRun. |
| `workspaces` ([]WorkspaceBinding) | *Optional.* Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline. |
| `taskRunSpecs` ([]PipelineTaskRunSpec) | *Optional.* TaskRunSpecs holds a set of runtime specs. |
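A PipelineRun that exercises `timeouts`, `taskRunTemplate`, and workspace bindings could look roughly like this (the pipeline name, service account, and values are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: nightly-run                    # hypothetical name
spec:
  pipelineRef:
    name: nightly-pipeline             # hypothetical Pipeline
  params:
    - name: channel
      value: stable
  timeouts:
    pipeline: 2h
    tasks: 1h30m
    finally: 30m
  taskRunTemplate:
    serviceAccountName: ci-bot         # hypothetical ServiceAccount
  workspaces:
    - name: shared
      emptyDir: {}
```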
### PipelineRunSpecStatus (`string` alias)

*Appears on:* `PipelineRunSpec`

PipelineRunSpecStatus defines the PipelineRun spec status the user can provide.

### PipelineRunStatus

*Appears on:* `PipelineRun`

PipelineRunStatus defines the observed state of PipelineRun.

| Field | Description |
| --- | --- |
| `Status` (knative.dev/pkg/apis/duck/v1.Status) | Members of `Status` are embedded into this type. |
| `PipelineRunStatusFields` (PipelineRunStatusFields) | Members of `PipelineRunStatusFields` are embedded into this type. PipelineRunStatusFields inlines the status fields. |

### PipelineRunStatusFields

*Appears on:* `PipelineRunStatus`

PipelineRunStatusFields holds the fields of PipelineRunStatus' status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
| --- | --- |
| `startTime` (Kubernetes meta/v1.Time) | StartTime is the time the PipelineRun is actually started. |
| `completionTime` (Kubernetes meta/v1.Time) | CompletionTime is the time the PipelineRun completed. |
| `results` ([]PipelineRunResult) | *Optional.* Results are the list of results written out by the pipeline task's containers. |
| `pipelineSpec` (PipelineSpec) | PipelineRunSpec contains the exact spec used to instantiate the run. |
| `skippedTasks` ([]SkippedTask) | *Optional.* List of tasks that were skipped due to when expressions evaluating to false. |
| `childReferences` ([]ChildStatusReference) | *Optional.* List of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun. |
| `finallyStartTime` (Kubernetes meta/v1.Time) | *Optional.* FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed. |
| `provenance` (Provenance) | *Optional.* Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext` (map[string]string) | SpanContext contains tracing span context fields. |
### PipelineRunTaskRunStatus

PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun's Status.

| Field | Description |
| --- | --- |
| `pipelineTaskName` (string) | PipelineTaskName is the name of the PipelineTask. |
| `status` (TaskRunStatus) | *Optional.* Status is the TaskRunStatus for the corresponding TaskRun. |
| `whenExpressions` ([]WhenExpression) | *Optional.* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineSpec

*Appears on:* `Pipeline`, `PipelineRunSpec`, `PipelineRunStatusFields`, `PipelineTask`

PipelineSpec defines the desired state of Pipeline.

| Field | Description |
| --- | --- |
| `displayName` (string) | *Optional.* DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `description` (string) | *Optional.* Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `tasks` ([]PipelineTask) | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `params` (ParamSpecs) | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `workspaces` ([]PipelineWorkspaceDeclaration) | *Optional.* Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `results` ([]PipelineResult) | *Optional.* Results are values that this pipeline can output once run. |
| `finally` ([]PipelineTask) | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks are finished executing successfully or after a failure which would result in ending the Pipeline. |
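Putting the `PipelineSpec` fields together, a small pipeline with a regular task and a `finally` task might look like this sketch (all names are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-test-cleanup             # hypothetical name
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared
  tasks:
    - name: build
      taskRef:
        name: build-task               # hypothetical Task
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: source
          workspace: shared
  finally:
    - name: cleanup
      taskRef:
        name: cleanup-task             # hypothetical Task
```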
### PipelineTask

*Appears on:* `PipelineSpec`

PipelineTask defines a task in a Pipeline, passing inputs from both Params and from the output of previous tasks.

| Field | Description |
| --- | --- |
| `name` (string) | Name is the name of this task within the context of a Pipeline. Name is used as a coordinate with the `from` and `runAfter` fields to establish the execution order of tasks relative to one another. |
| `displayName` (string) | *Optional.* DisplayName is the display name of this task within the context of a Pipeline. This display name may be used to populate a UI. |
| `description` (string) | *Optional.* Description is the description of this task within the context of a Pipeline. This description may be used to populate a UI. |
| `taskRef` (TaskRef) | *Optional.* TaskRef is a reference to a task definition. |
| `taskSpec` (EmbeddedTask) | *Optional.* TaskSpec is a specification of a task. Specifying an inline TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `when` (WhenExpressions) | *Optional.* When is a list of when expressions that need to be true for the task to run. |
| `retries` (int) | *Optional.* Retries represents how many times this task should be retried in case of task failure (ConditionSucceeded set to False). |
| `runAfter` ([]string) | *Optional.* RunAfter is the list of PipelineTask names that should be executed before this Task executes. Used to force a specific ordering in graph execution. |
| `params` (Params) | *Optional.* Params declares parameters passed to this task. |
| `matrix` (Matrix) | *Optional.* Matrix declares parameters used to fan out this task. |
| `workspaces` ([]WorkspacePipelineTaskBinding) | *Optional.* Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task. |
| `timeout` (Kubernetes meta/v1.Duration) | *Optional.* Time after which the TaskRun times out. Defaults to 1 hour. Refer to Go's [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for the expected format. |
| `pipelineRef` (PipelineRef) | *Optional.* PipelineRef is a reference to a pipeline definition. Note: PipelineRef is in preview mode and not yet supported. |
| `pipelineSpec` (PipelineSpec) | *Optional.* PipelineSpec is a specification of a pipeline. Note: PipelineSpec is in preview mode and not yet supported. Specifying an inline PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `onError` (PipelineTaskOnErrorType) | *Optional.* OnError defines the exiting behavior of a PipelineRun on error; can be set to `continue` or `stopAndFail`. |

### PipelineTaskMetadata

*Appears on:* `EmbeddedTask`, `PipelineTaskRunSpec`

PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask.

| Field | Description |
| --- | --- |
| `labels` (map[string]string) | *Optional.* |
| `annotations` (map[string]string) | *Optional.* |
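The execution-ordering and guard fields of `PipelineTask` (`runAfter`, `when`, `retries`, `onError`) compose as in this sketch within a Pipeline's `tasks`; the task names and param are hypothetical, and `onError` on a PipelineTask may be gated by your Tekton version:

```yaml
  tasks:
    - name: deploy
      taskRef:
        name: deploy-task              # hypothetical Task
      runAfter:
        - build                        # wait for the "build" task
      retries: 2
      when:
        - input: $(params.environment)
          operator: in
          values: ["staging", "production"]
      onError: continue                # keep the PipelineRun going if this task fails
```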
### PipelineTaskOnErrorType (`string` alias)

*Appears on:* `PipelineTask`

PipelineTaskOnErrorType defines the supported failure handling behaviors of a PipelineTask on error.

| Value | Description |
| --- | --- |
| `"continue"` | PipelineTaskContinue indicates to continue executing the rest of the DAG when the PipelineTask fails. |
| `"stopAndFail"` | PipelineTaskStopAndFail indicates to stop and fail the PipelineRun if the PipelineTask fails. |

### PipelineTaskParam

PipelineTaskParam is used to provide arbitrary string parameters to a Task.

| Field | Description |
| --- | --- |
| `name` (string) | |
| `value` (string) | |

### PipelineTaskRun

PipelineTaskRun reports the results of running a step in the Task. Each task has the potential to succeed or fail (based on the exit code) and produces logs.

| Field | Description |
| --- | --- |
| `name` (string) | |

### PipelineTaskRunSpec

*Appears on:* `PipelineRunSpec`

PipelineTaskRunSpec can be used to configure specific specs for a concrete Task.

| Field | Description |
| --- | --- |
| `pipelineTaskName` (string) | |
| `serviceAccountName` (string) | |
| `podTemplate` (Template) | |
| `stepSpecs` ([]TaskRunStepSpec) | |
| `sidecarSpecs` ([]TaskRunSidecarSpec) | |
| `metadata` (PipelineTaskMetadata) | *Optional.* |
| `computeResources` (Kubernetes core/v1.ResourceRequirements) | Compute resources to use for this TaskRun. |

### PipelineTaskRunTemplate

*Appears on:* `PipelineRunSpec`

PipelineTaskRunTemplate is used to specify run specifications for all Tasks in a PipelineRun.

| Field | Description |
| --- | --- |
| `podTemplate` (Template) | *Optional.* |
| `serviceAccountName` (string) | *Optional.* |

### PipelineWorkspaceDeclaration

*Appears on:* `PipelineSpec`

WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun is expected to populate with a workspace binding.

Deprecated: use PipelineWorkspaceDeclaration type instead.

| Field | Description |
| --- | --- |
| `name` (string) | Name is the name of a workspace to be provided by a PipelineRun. |
| `description` (string) | *Optional.* Description is a human-readable string describing how the workspace will be used in the Pipeline. It can be useful to include a bit of detail about which tasks are intended to have access to the data on the workspace. |
| `optional` (bool) | Optional marks a Workspace as not being required in PipelineRuns. By default this field is false, and so declared workspaces are required. |
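Per-task runtime overrides via `taskRunSpecs` (the `PipelineTaskRunSpec` type above) might look like this sketch inside a PipelineRun's `spec`; the service account, node selector, and resource values are hypothetical:

```yaml
spec:
  pipelineRef:
    name: build-test-cleanup           # hypothetical Pipeline
  taskRunSpecs:
    - pipelineTaskName: build
      serviceAccountName: builder-sa   # hypothetical ServiceAccount
      podTemplate:
        nodeSelector:
          kubernetes.io/arch: amd64
      computeResources:
        requests:
          cpu: "1"
          memory: 1Gi
```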
### PropertySpec

*Appears on:* `ParamSpec`, `StepResult`, `TaskResult`

PropertySpec defines the struct for object keys.

| Field | Description |
| --- | --- |
| `type` (ParamType) | |

### Provenance

*Appears on:* `PipelineRunStatusFields`, `StepState`, `TaskRunStatusFields`

Provenance contains metadata about resources used in the TaskRun/PipelineRun, such as the source from where a remote build definition was fetched. This field aims to carry a minimum amount of metadata in the Run status so that Tekton Chains can capture it in the provenance.

| Field | Description |
| --- | --- |
| `refSource` (RefSource) | RefSource identifies the source where a remote task/pipeline came from. |
| `featureFlags` (github.com/tektoncd/pipeline/pkg/apis/config.FeatureFlags) | FeatureFlags identifies the feature flags that were used during the task/pipeline run. |

### Ref

*Appears on:* `Step`

Ref can be used to refer to a specific instance of a StepAction.

| Field | Description |
| --- | --- |
| `name` (string) | Name of the referenced step. |
| `ResolverRef` (ResolverRef) | *Optional.* ResolverRef allows referencing a StepAction in a remote location like a git repo. |

### RefSource

*Appears on:* `Provenance`, `ResolutionRequestStatusFields` (v1alpha1), `ResolutionRequestStatusFields` (v1beta1)

RefSource contains the information that can uniquely identify where a remote built definition came from, i.e. Git repositories, Tekton Bundles in an OCI registry, and hub.

| Field | Description |
| --- | --- |
| `uri` (string) | URI indicates the identity of the source of the build definition. Example: "https://github.com/tektoncd/catalog". |
| `digest` (map[string]string) | Digest is a collection of cryptographic digests for the contents of the artifact specified by URI. Example: {"sha1": "f99d13e554ffcb696dee719fa85b695cb5b0f428"}. |
| `entryPoint` (string) | EntryPoint identifies the entry point into the build. This is often a path to a build definition file and/or a target label within that file. Example: "task/git-clone/0.8/git-clone.yaml". |

### ResolverName (`string` alias)

*Appears on:* `ResolverRef`

ResolverName is the name of a resolver from which a resource can be requested.
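As a sketch of the `Ref` type, a Step can point at an existing StepAction by name within a Task's `steps` (StepActions and `Step.ref` are feature-gated; the StepAction name and param below are hypothetical):

```yaml
  steps:
    - name: clone
      ref:
        name: git-clone-action         # hypothetical StepAction in the same namespace
      params:
        - name: url
          value: https://github.com/example/repo.git   # hypothetical repository
```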
### ResolverRef

*Appears on:* `PipelineRef`, `Ref`, `TaskRef`

ResolverRef can be used to refer to a Pipeline or Task in a remote location like a git repo. This feature is in beta, and these fields are only available when the beta feature gate is enabled.

| Field | Description |
| --- | --- |
| `resolver` (ResolverName) | *Optional.* Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource, such as "git". |
| `params` (Params) | *Optional.* Params contains the parameters used to identify the referenced Tekton resource. Example entries might include "repo" or "path", but the set of params ultimately depends on the chosen resolver. |

### ResultRef

ResultRef is a type that represents a reference to a task run result.

| Field | Description |
| --- | --- |
| `pipelineTask` (string) | |
| `result` (string) | |
| `resultsIndex` (int) | |
| `property` (string) | |

### ResultsType (`string` alias)

*Appears on:* `PipelineResult`, `StepResult`, `TaskResult`, `TaskRunResult`

ResultsType indicates the type of a result; it is used to distinguish between a single string and an array of strings. Note that there is also a ResultType used to find out whether a RunResult is from a task result or not, which is different from this ResultsType.

| Value | Description |
| --- | --- |
| `"array"` | |
| `"object"` | |
| `"string"` | |

### Sidecar

*Appears on:* `TaskSpec`

Sidecar has nearly the same data structure as Step but does not have the ability to time out.

| Field | Description |
| --- | --- |
| `name` (string) | Name of the Sidecar, specified as a DNS_LABEL. Each Sidecar in a Task must have a unique name. Cannot be updated. |
| `image` (string) | *Optional.* Image reference name. More info: <https://kubernetes.io/docs/concepts/containers/images> |
| `command` ([]string) | *Optional.* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the Sidecar's environment; double `$$` escapes the expansion. Cannot be updated. |
| `args` ([]string) | *Optional.* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references are expanded as for `command`. Cannot be updated. |
| `workingDir` (string) | *Optional.* Sidecar's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports` ([]Kubernetes core/v1.ContainerPort) | *Optional.* List of ports to expose from the Sidecar. Primarily informational; not specifying a port here does not prevent that port from being exposed. Cannot be updated. |
| `envFrom` ([]Kubernetes core/v1.EnvFromSource) | *Optional.* List of sources to populate environment variables in the Sidecar. When a key exists in multiple sources, the value associated with the last source takes precedence; values defined by an Env with a duplicate key take precedence. Cannot be updated. |
| `env` ([]Kubernetes core/v1.EnvVar) | *Optional.* List of environment variables to set in the Sidecar. Cannot be updated. |
| `computeResources` (Kubernetes core/v1.ResourceRequirements) | *Optional.* Compute resources required by this Sidecar. Cannot be updated. |
| `volumeMounts` ([]Kubernetes core/v1.VolumeMount) | *Optional.* Volumes to mount into the Sidecar's filesystem. Cannot be updated. |
| `volumeDevices` ([]Kubernetes core/v1.VolumeDevice) | *Optional.* List of block devices to be used by the Sidecar. |
| `livenessProbe` (Kubernetes core/v1.Probe) | *Optional.* Periodic probe of Sidecar liveness. The container will be restarted if the probe fails. Cannot be updated. |
| `readinessProbe` (Kubernetes core/v1.Probe) | *Optional.* Periodic probe of Sidecar service readiness. The container will be removed from service endpoints if the probe fails. Cannot be updated. |
| `startupProbe` (Kubernetes core/v1.Probe) | *Optional.* StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized. If specified, no other probes are executed until this completes successfully; if it fails, the Pod is restarted as if the livenessProbe failed. Cannot be updated. |
| `lifecycle` (Kubernetes core/v1.Lifecycle) | *Optional.* Actions that the management system should take in response to Sidecar lifecycle events. Cannot be updated. |
| `terminationMessagePath` (string) | *Optional.* Path at which the file containing the Sidecar's termination message is mounted into the Sidecar's filesystem. The message is intended to be a brief final status, such as an assertion failure message. Truncated by the node if greater than 4096 bytes; the total message length across all containers is limited to 12kb. Defaults to `/dev/termination-log`. Cannot be updated. |
| `terminationMessagePolicy` (Kubernetes core/v1.TerminationMessagePolicy) | *Optional.* How the termination message should be populated: `File` uses the contents of `terminationMessagePath` on both success and failure; `FallbackToLogsOnError` uses the last chunk of Sidecar log output (limited to 2048 bytes or 80 lines, whichever is smaller) if the termination message file is empty and the Sidecar exited with an error. Defaults to `File`. Cannot be updated. |
| `imagePullPolicy` (Kubernetes core/v1.PullPolicy) | *Optional.* Image pull policy: one of `Always`, `Never`, `IfNotPresent`. Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. Cannot be updated. |
| `securityContext` (Kubernetes core/v1.SecurityContext) | *Optional.* SecurityContext defines the security options the Sidecar should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. |
| `stdin` (bool) | *Optional.* Whether this Sidecar should allocate a buffer for stdin in the container runtime. If not set, reads from stdin in the Sidecar will always result in EOF. Default is false. |
| `stdinOnce` (bool) | *Optional.* Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream remains open across multiple attach sessions. If stdinOnce is true, stdin is opened on Sidecar start, is empty until the first client attaches, then remains open and accepts data until the client disconnects, at which time stdin is closed and stays closed until the Sidecar is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. |
| `tty` (bool) | *Optional.* Whether this Sidecar should allocate a TTY for itself; also requires `stdin` to be true. Default is false. |
| `script` (string) | *Optional.* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command or Args. |
| `workspaces` ([]WorkspaceUsage) | *Optional.* This is an alpha field; you must set the `enable-api-fields` feature flag to "alpha" for this field to be supported. Workspaces is a list of workspaces from the Task that this Sidecar wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `restartPolicy` (Kubernetes core/v1.ContainerRestartPolicy) | *Optional.* RestartPolicy refers to the Kubernetes RestartPolicy. It can only be set for an initContainer and must have its policy set to "Always". It is currently left optional to help support Kubernetes versions prior to 1.29, when this feature was introduced. |

### SidecarState

*Appears on:* `TaskRunStatusFields`

SidecarState reports the results of running a sidecar in a Task.

| Field | Description |
| --- | --- |
| `ContainerState` (Kubernetes core/v1.ContainerState) | Members of `ContainerState` are embedded into this type. |
| `name` (string) | |
| `container` (string) | |
| `imageID` (string) | |
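A Task that pairs a test step with a database sidecar (the classic use of `sidecars`) could be sketched as follows; the images and credential are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: integration-test               # hypothetical name
spec:
  sidecars:
    - name: database
      image: postgres:16               # hypothetical image
      env:
        - name: POSTGRES_PASSWORD
          value: test                  # hypothetical throwaway credential
  steps:
    - name: run-tests
      image: alpine:3.19               # hypothetical image
      script: |
        #!/bin/sh
        echo "tests would reach the sidecar on localhost:5432"
```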
looking into including more details about the When Expressions that caused this Task to be skipped p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name is the Pipeline Task name p td tr tr td code reason code br em a href tekton dev v1 SkippingReason SkippingReason a em td td p Reason is the cause of the PipelineTask being skipped p td tr tr td code whenExpressions code br em a href tekton dev v1 WhenExpression WhenExpression a em td td em Optional em p WhenExpressions is the list of checks guarding the execution of the PipelineTask p td tr tbody table h3 id tekton dev v1 SkippingReason SkippingReason code string code alias h3 p em Appears on em a href tekton dev v1 SkippedTask SkippedTask a p div p SkippingReason explains why a PipelineTask was skipped p div table thead tr th Value th th Description th tr thead tbody tr td p 34 Matrix Parameters have an empty array 34 p td td p EmptyArrayInMatrixParams means the task was skipped because Matrix parameters contain empty array p td tr tr td p 34 PipelineRun Finally timeout has been reached 34 p td td p FinallyTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts Finally p td tr tr td p 34 PipelineRun was gracefully cancelled 34 p td td p GracefullyCancelledSkip means the task was skipped because the pipeline run has been gracefully cancelled p td tr tr td p 34 PipelineRun was gracefully stopped 34 p td td p GracefullyStoppedSkip means the task was skipped because the pipeline run has been gracefully stopped p td tr tr td p 34 Results were missing 34 p td td p MissingResultsSkip means the task was skipped because it rsquo s missing necessary results p td tr tr td p 34 None 34 p td td p None means the task was not skipped p td tr tr td p 34 Parent Tasks were skipped 34 p td td p ParentTasksSkip means the task was skipped because its parent was skipped p td tr tr td p 34 PipelineRun timeout has been reached 34 p td td p PipelineTimedOutSkip means the task was skipped because the PipelineRun has passed its overall timeout p td tr tr td p 34 PipelineRun was stopping 34 p td td p StoppingSkip means the task was skipped because the pipeline run is stopping p td tr tr td p 34 PipelineRun Tasks timeout has been reached 34 p td td p TasksTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts Tasks p td tr tr td p 34 When Expressions evaluated to false 34 p td td p WhenExpressionsSkip means the task was skipped due to at least one of its when expressions evaluating to false p td tr tbody table h3 id tekton dev v1 Step Step h3 p em Appears on em a href tekton dev v1 TaskSpec TaskSpec a p div p Step runs a subcomponent of a Task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name of the Step specified as a DNS LABEL Each Step in a Task must have a unique name p td tr tr td code image code br em string em td td em Optional em p Docker image name More info a href https kubernetes io docs concepts containers images https kubernetes io docs concepts containers images a p td tr tr td code command code br em string em td td em Optional em p Entrypoint array Not executed within a shell The image rsquo s ENTRYPOINT is used if this is not provided Variable references VAR NAME are expanded using the container rsquo s environment If a variable cannot be resolved the reference in the input string will be unchanged Double are reduced to a single which allows for escaping the 
### Step

_Appears on: TaskSpec_

Step runs a subcomponent of a Task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name of the Step, specified as a DNS_LABEL. Each Step in a Task must have a unique name. |
| `image` | `string` | _Optional._ Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command` | `[]string` | _Optional._ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment; if a variable cannot be resolved, the reference in the input string is left unchanged. Double `$$` are reduced to a single `$`, which allows escaping the `$(VAR_NAME)` syntax: `"$$(VAR_NAME)"` produces the string literal `"$(VAR_NAME)"`, and escaped references are never expanded. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args` | `[]string` | _Optional._ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references are expanded and escaped as described for `command`. Cannot be updated. |
| `workingDir` | `string` | _Optional._ Step's working directory. If not specified, the container runtime's default is used, which might be configured in the container image. Cannot be updated. |
| `envFrom` | `[]Kubernetes core/v1.EnvFromSource` | _Optional._ List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER; all invalid keys are reported as an event when the Step is starting. When a key exists in multiple sources, the value from the last source takes precedence, and values defined by `env` with a duplicate key take precedence over all of them. Cannot be updated. |
| `env` | `[]Kubernetes core/v1.EnvVar` | _Optional._ List of environment variables to set in the Step. Cannot be updated. |
| `computeResources` | `Kubernetes core/v1.ResourceRequirements` | _Optional._ ComputeResources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts` | `[]Kubernetes core/v1.VolumeMount` | _Optional._ Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices` | `[]Kubernetes core/v1.VolumeDevice` | _Optional._ volumeDevices is the list of block devices to be used by the Step. |
| `imagePullPolicy` | `Kubernetes core/v1.PullPolicy` | _Optional._ Image pull policy. One of `Always`, `Never`, `IfNotPresent`. Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext` | `Kubernetes core/v1.SecurityContext` | _Optional._ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `script` | `string` | _Optional._ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a `command`, and the `args` will be passed to the Script. |
| `timeout` | `meta/v1.Duration` | _Optional._ Timeout is the time after which the step times out. Defaults to never. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces` | `[]WorkspaceUsage` | _Optional._ This is an alpha field; you must set the `enable-api-fields` feature flag to `"alpha"` for it to be supported. Workspaces is a list of workspaces from the Task that this Step wants exclusive access to; any other Step or Sidecar that does not also request a listed Workspace will not have access to it. |
| `onError` | `OnErrorType` | OnError defines the exiting behavior of a container on error; can be set to `continue` or `stopAndFail`. |
| `stdoutConfig` | `StepOutputConfig` | _Optional._ Stores configuration for the stdout stream of the step. |
| `stderrConfig` | `StepOutputConfig` | _Optional._ Stores configuration for the stderr stream of the step. |
| `ref` | `Ref` | _Optional._ Contains the reference to an existing StepAction. |
| `params` | `Params` | _Optional._ Params declares parameters passed to this step action. |
| `results` | `[]StepResult` | _Optional._ Results declares StepResults produced by the Step. This field is at an ALPHA stability level and gated by the `enable-step-actions` feature flag. It can be used in an inlined Step to store results to `$(step.results.<resultName>.path)`; it cannot be used when referencing StepActions using `ref`, in which case the Results declared by the StepAction are stored here instead. |
| `when` | `WhenExpressions` | _Optional._ When is a list of when expressions that need to be true for the task to run. |
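As a quick illustration of the fields above, here is a minimal Task sketch using `script`, `onError`, and a per-step `timeout`; the Task name, step names, and image are placeholders.

```yaml
# A minimal sketch of a Task using the Step fields described above.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: lint-and-report
spec:
  steps:
    - name: lint
      image: alpine:3.19
      timeout: 2m
      onError: continue          # keep running later steps even if this one fails
      script: |
        #!/bin/sh
        echo "running lint..."
    - name: report
      image: alpine:3.19
      command: ["echo"]
      args: ["lint finished"]
```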
### StepOutputConfig

_Appears on: Step_

StepOutputConfig stores configuration for a step output stream.

| Field | Type | Description |
|---|---|---|
| `path` | `string` | _Optional._ Path to duplicate the stdout stream to on the container's local filesystem. |

### StepResult

_Appears on: Step, v1alpha1.StepActionSpec, v1beta1.Step, v1beta1.StepActionSpec_

StepResult is used to describe the Results of a Step. This field is at a BETA stability level and gated by the `enable-step-actions` feature flag.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name the given name. |
| `type` | `ResultsType` | _Optional._ The possible types are `string`, `array`, and `object`, with `string` as the default. |
| `properties` | `map[string]PropertySpec` | _Optional._ Properties is the JSON Schema properties to support key-value pair results. |
| `description` | `string` | _Optional._ Description is a human-readable description of the result. |

### StepState

_Appears on: TaskRunStatusFields_

StepState reports the results of running a step in a Task.

| Field | Type | Description |
|---|---|---|
| `ContainerState` | `Kubernetes core/v1.ContainerState` | Members of `ContainerState` are embedded into this type. |
| `name` | `string` | |
| `container` | `string` | |
| `imageID` | `string` | |
| `results` | `[]TaskRunResult` | |
| `provenance` | `Provenance` | |
| `terminationReason` | `string` | |
| `inputs` | `[]Artifact` | |
| `outputs` | `[]Artifact` | |
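A hedged sketch of the `StepOutputConfig` path described above: the step's stdout is duplicated to a file on the container's filesystem. The path is illustrative, and depending on the Tekton version this output-capture behaviour may require an alpha feature gate.

```yaml
# Sketch: duplicating a step's stdout to a file via stdoutConfig.path.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: capture-output
spec:
  steps:
    - name: build
      image: alpine:3.19
      script: |
        echo "build log line"
      stdoutConfig:
        path: /data/build-stdout.txt   # illustrative path
```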
### StepTemplate

_Appears on: TaskSpec_

StepTemplate is a template for a Step.

| Field | Type | Description |
|---|---|---|
| `image` | `string` | _Optional._ Image reference name. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command` | `[]string` | _Optional._ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the Step's environment; unresolved references are left unchanged, and `$$` escapes the expansion. Cannot be updated. |
| `args` | `[]string` | _Optional._ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references are expanded and escaped as described for `command`. Cannot be updated. |
| `workingDir` | `string` | _Optional._ Step's working directory. If not specified, the container runtime's default is used, which might be configured in the container image. Cannot be updated. |
| `envFrom` | `[]Kubernetes core/v1.EnvFromSource` | _Optional._ List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER; all invalid keys are reported as an event when the Step is starting. When a key exists in multiple sources, the value from the last source takes precedence, and values defined by `env` with a duplicate key take precedence over all of them. Cannot be updated. |
| `env` | `[]Kubernetes core/v1.EnvVar` | _Optional._ List of environment variables to set in the Step. Cannot be updated. |
| `computeResources` | `Kubernetes core/v1.ResourceRequirements` | _Optional._ ComputeResources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts` | `[]Kubernetes core/v1.VolumeMount` | _Optional._ Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices` | `[]Kubernetes core/v1.VolumeDevice` | _Optional._ volumeDevices is the list of block devices to be used by the Step. |
| `imagePullPolicy` | `Kubernetes core/v1.PullPolicy` | _Optional._ Image pull policy. One of `Always`, `Never`, `IfNotPresent`. Defaults to `Always` if the `:latest` tag is specified, or `IfNotPresent` otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext` | `Kubernetes core/v1.SecurityContext` | _Optional._ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
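A minimal sketch of `stepTemplate`: defaults declared once are inherited by every step unless a step overrides them. The names and images are placeholders.

```yaml
# Sketch: stepTemplate supplies a default image and environment that
# each step inherits; a step can still override individual fields.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: shared-defaults
spec:
  stepTemplate:
    image: alpine:3.19
    env:
      - name: LOG_LEVEL
        value: info
  steps:
    - name: first
      script: echo "level=$LOG_LEVEL"
    - name: second
      image: busybox:1.36   # overrides the template image for this step only
      script: echo "still sees LOG_LEVEL=$LOG_LEVEL"
```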
### TaskBreakpoints

_Appears on: TaskRunDebug_

TaskBreakpoints defines the breakpoint config for a particular Task.

| Field | Type | Description |
|---|---|---|
| `onFailure` | `string` | _Optional._ If enabled, pause the TaskRun on failure of a step; the failed step will not exit. |
| `beforeSteps` | `[]string` | _Optional._ |

### TaskKind (`string` alias)

_Appears on: TaskRef_

TaskKind defines the type of Task used by the pipeline.

| Value | Description |
|---|---|
| `"ClusterTask"` | ClusterTaskRefKind is the task type for a reference to a task with cluster scope. ClusterTasks are not supported in v1, but v1 types may reference ClusterTasks. |
| `"Task"` | NamespacedTaskKind indicates that the task type has a namespaced scope. |

### TaskRef

_Appears on: PipelineTask, TaskRunSpec_

TaskRef can be used to refer to a specific instance of a task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `kind` | `TaskKind` | TaskKind indicates the Kind of the Task: 1. Namespaced Task when Kind is set to `"Task"` (if Kind is `""` it defaults to `"Task"`); 2. Custom Task when Kind is non-empty and APIVersion is non-empty. |
| `apiVersion` | `string` | _Optional._ API version of the referent. Note: a Task with non-empty APIVersion and Kind is considered a Custom Task. |
| `ResolverRef` | `ResolverRef` | _Optional._ ResolverRef allows referencing a Task in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |
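A sketch of `taskRef` using remote resolution, assuming the built-in git resolver is installed and enabled; the repository URL, revision, and path are illustrative.

```yaml
# Sketch: a TaskRun that resolves its Task from a git repository
# through the git resolver instead of a name in the cluster.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: run-remote-task
spec:
  taskRef:
    resolver: git
    params:
      - name: url
        value: https://github.com/tektoncd/catalog.git
      - name: revision
        value: main
      - name: pathInRepo
        value: task/git-clone/0.9/git-clone.yaml
```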
### TaskResult

_Appears on: TaskSpec_

TaskResult is used to describe the results of a task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name the given name. |
| `type` | `ResultsType` | _Optional._ Type is the user-specified type of the result. The possible type is currently `string`, with `array` to be supported in following work. |
| `properties` | `map[string]PropertySpec` | _Optional._ Properties is the JSON Schema properties to support key-value pair results. |
| `description` | `string` | _Optional._ Description is a human-readable description of the result. |
| `value` | `ParamValue` | _Optional._ Value is the expression used to retrieve the value of the result from an underlying Step. |
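A minimal sketch tying `TaskResult` to its runtime use: the Task declares a result and a step writes it to the path Tekton provides, after which a downstream PipelineTask could read it as `$(tasks.pick-version.results.version)`. Names are illustrative.

```yaml
# Sketch: declaring a Task result and emitting it from a step.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: pick-version
spec:
  results:
    - name: version
      description: the version selected by this task
  steps:
    - name: choose
      image: alpine:3.19
      script: |
        printf '1.2.3' > $(results.version.path)
```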
### TaskRunDebug

_Appears on: TaskRunSpec_

TaskRunDebug defines the breakpoint config for a particular TaskRun.

| Field | Type | Description |
|---|---|---|
| `breakpoints` | `TaskBreakpoints` | _Optional._ |

### TaskRunInputs

TaskRunInputs holds the input values that this task was invoked with.

| Field | Type | Description |
|---|---|---|
| `params` | `Params` | _Optional._ |

### TaskRunReason (`string` alias)

TaskRunReason is an enum used to store all TaskRun reasons for the Succeeded condition that are controlled by the TaskRun itself. Failure reasons that emerge from underlying resources are not included here.

| Value | Description |
|---|---|
| `"TaskRunCancelled"` | TaskRunReasonCancelled is the reason set when the TaskRun is cancelled by the user. |
| `"Failed"` | TaskRunReasonFailed is the reason set when the TaskRun completed with a failure. |
| `"TaskRunResolutionFailed"` | TaskRunReasonFailedResolution indicates that references within the TaskRun could not be resolved. |
| `"TaskRunValidationFailed"` | TaskRunReasonFailedValidation indicates that the TaskRun failed runtime validation. |
| `"FailureIgnored"` | TaskRunReasonFailureIgnored is the reason set when the TaskRun has failed due to a pod execution error and the failure is ignored for the owning PipelineRun. TaskRuns failed due to reconciler validation errors should not use this reason. |
| `"TaskRunImagePullFailed"` | TaskRunReasonImagePullFailed is the reason set when a step of a task fails because its image could not be pulled. |
| `"InvalidParamValue"` | TaskRunReasonInvalidParamValue indicates that a TaskRun Param input value is not allowed. |
| `"ResourceVerificationFailed"` | TaskRunReasonResourceVerificationFailed indicates that the task failed trusted resource verification: the content has changed, the signature is invalid, or the public key is invalid. |
| `"TaskRunResultLargerThanAllowedLimit"` | TaskRunReasonResultLargerThanAllowedLimit is the reason set when one of the results exceeds its maximum allowed limit of 1 KB. |
| `"Running"` | TaskRunReasonRunning is the reason set when the TaskRun is running. |
| `"Started"` | TaskRunReasonStarted is the reason set when the TaskRun has just started. |
| `"TaskRunStopSidecarFailed"` | TaskRunReasonStopSidecarFailed indicates that a sidecar was not properly stopped. |
| `"Succeeded"` | TaskRunReasonSuccessful is the reason set when the TaskRun completed successfully. |
| `"TaskValidationFailed"` | TaskRunReasonTaskFailedValidation indicates that the task failed runtime validation. |
| `"TaskRunTimeout"` | TaskRunReasonTimedOut is the reason set when a TaskRun execution has timed out. |
| `"ToBeRetried"` | TaskRunReasonToBeRetried is the reason set when the last TaskRun execution failed and will be retried. |

### TaskRunResult

_Appears on: StepState, TaskRunStatusFields_

TaskRunStepResult is a type alias of TaskRunResult.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name the given name. |
| `type` | `ResultsType` | _Optional._ Type is the user-specified type of the result. The possible type is currently `string`, with `array` to be supported in following work. |
| `value` | `ParamValue` | Value the given value of the result. |

### TaskRunSidecarSpec

_Appears on: PipelineTaskRunSpec, TaskRunSpec_

TaskRunSidecarSpec is used to override the values of a Sidecar in the corresponding Task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | The name of the Sidecar to override. |
| `computeResources` | `Kubernetes core/v1.ResourceRequirements` | The resource requirements to apply to the Sidecar. |
### TaskRunSpec

_Appears on: TaskRun_

TaskRunSpec defines the desired state of TaskRun.

| Field | Type | Description |
|---|---|---|
| `debug` | `TaskRunDebug` | _Optional._ |
| `params` | `Params` | _Optional._ |
| `serviceAccountName` | `string` | _Optional._ |
| `taskRef` | `TaskRef` | _Optional._ No more than one of `taskRef` and `taskSpec` may be specified. |
| `taskSpec` | `TaskSpec` | _Optional._ Specifying an inline spec can be disabled by setting the `disable-inline-spec` feature flag. |
| `status` | `TaskRunSpecStatus` | _Optional._ Used for cancelling a TaskRun (and maybe more later on). |
| `statusMessage` | `TaskRunSpecStatusMessage` | _Optional._ Status message for cancellation. |
| `retries` | `int` | _Optional._ Retries represents how many times this TaskRun should be retried in the event of task failure. |
| `timeout` | `meta/v1.Duration` | _Optional._ Time after which one retry attempt times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `podTemplate` | `Template` | PodTemplate holds pod-specific configuration. |
| `workspaces` | `[]WorkspaceBinding` | _Optional._ Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `stepSpecs` | `[]TaskRunStepSpec` | _Optional._ Specs to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepSpec, the value from the StepSpec is used. This field is only supported when the alpha feature gate is enabled. |
| `sidecarSpecs` | `[]TaskRunSidecarSpec` | _Optional._ Specs to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarSpec, the value from the SidecarSpec is used. This field is only supported when the alpha feature gate is enabled. |
| `computeResources` | `Kubernetes core/v1.ResourceRequirements` | Compute resources to use for this TaskRun. |

### TaskRunSpecStatus (`string` alias)

_Appears on: TaskRunSpec_

TaskRunSpecStatus defines the TaskRun spec status the user can provide.

### TaskRunSpecStatusMessage (`string` alias)

_Appears on: TaskRunSpec_

TaskRunSpecStatusMessage defines human-readable status messages for the TaskRun.

| Value | Description |
|---|---|
| `"TaskRun cancelled as the PipelineRun it belongs to has been cancelled"` | TaskRunCancelledByPipelineMsg indicates that the PipelineRun of which this TaskRun was a part has been cancelled. |
| `"TaskRun cancelled as the PipelineRun it belongs to has timed out"` | TaskRunCancelledByPipelineTimeoutMsg indicates that the TaskRun was cancelled because the PipelineRun running it timed out. |

### TaskRunStatus

_Appears on: TaskRun, PipelineRunTaskRunStatus, TaskRunStatusFields_

TaskRunStatus defines the observed state of TaskRun.

| Field | Type | Description |
|---|---|---|
| `Status` | `knative.dev/pkg/apis/duck/v1.Status` | Members of `Status` are embedded into this type. |
| `TaskRunStatusFields` | `TaskRunStatusFields` | Members of `TaskRunStatusFields` are embedded into this type. TaskRunStatusFields inlines the status fields. |
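A hedged TaskRun sketch touching several `TaskRunSpec` fields; the `stepSpecs` override assumes the alpha feature gate is enabled, and all names are placeholders.

```yaml
# Sketch: a TaskRun combining a task reference, timeout, retries,
# pod-level configuration, and a per-step resource override.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-once
spec:
  taskRef:
    name: pick-version
  timeout: 30m
  retries: 1
  podTemplate:
    nodeSelector:
      kubernetes.io/arch: amd64
  stepSpecs:                      # alpha-gated override of a named Step
    - name: choose
      computeResources:
        requests:
          cpu: 500m
          memory: 256Mi
```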
### TaskRunStatusFields

_Appears on: TaskRunStatus_

TaskRunStatusFields holds the fields of TaskRun's status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Type | Description |
|---|---|---|
| `podName` | `string` | PodName is the name of the pod responsible for executing this task's steps. |
| `startTime` | `meta/v1.Time` | StartTime is the time the build is actually started. |
| `completionTime` | `meta/v1.Time` | CompletionTime is the time the build completed. |
| `steps` | `[]StepState` | _Optional._ Steps describes the state of each build step container. |
| `retriesStatus` | `[]TaskRunStatus` | _Optional._ RetriesStatus contains the history of TaskRunStatus in case of a retry, in order to keep a record of failures. TaskRunStatus entries stored in RetriesStatus omit the date, as it is redundant. |
| `results` | `[]TaskRunResult` | _Optional._ Results are the list of results written out by the task's containers. |
| `artifacts` | `Artifacts` | _Optional._ Artifacts are the list of artifacts written out by the task's containers. |
| `sidecars` | `[]SidecarState` | The list has one entry per sidecar in the manifest. Each entry represents the imageID of the corresponding sidecar. |
| `taskSpec` | `TaskSpec` | TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun. |
| `provenance` | `Provenance` | _Optional._ Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext` | `map[string]string` | SpanContext contains tracing span context fields. |

### TaskRunStepSpec

_Appears on: PipelineTaskRunSpec, TaskRunSpec_

TaskRunStepSpec is used to override the values of a Step in the corresponding Task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | The name of the Step to override. |
| `computeResources` | `Kubernetes core/v1.ResourceRequirements` | The resource requirements to apply to the Step. |
### TaskSpec

_Appears on: Task, EmbeddedTask, TaskRunSpec, TaskRunStatusFields_

TaskSpec defines the desired state of Task.

| Field | Type | Description |
|---|---|---|
| `params` | `ParamSpecs` | _Optional._ Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `displayName` | `string` | _Optional._ DisplayName is a user-facing name of the task that may be used to populate a UI. |
| `description` | `string` | _Optional._ Description is a user-facing description of the task that may be used to populate a UI. |
| `steps` | `[]Step` | Steps are the steps of the build; each step is run sequentially with the source mounted into the workspace. |
| `volumes` | `[]Kubernetes core/v1.Volume` | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `stepTemplate` | `StepTemplate` | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `sidecars` | `[]Sidecar` | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `workspaces` | `[]WorkspaceDeclaration` | Workspaces are the volumes that this Task requires. |
| `results` | `[]TaskResult` | Results are values that this Task can output. |

### TimeoutFields

_Appears on: PipelineRunSpec_

TimeoutFields allows granular specification of pipeline, task, and finally timeouts.

| Field | Type | Description |
|---|---|---|
| `pipeline` | `meta/v1.Duration` | Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value. |
| `tasks` | `meta/v1.Duration` | Tasks sets the maximum allowed duration of this pipeline's tasks. |
| `finally` | `meta/v1.Duration` | Finally sets the maximum allowed duration of this pipeline's finally. |
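A minimal PipelineRun sketch showing how `TimeoutFields` is typically set via `spec.timeouts`; the Pipeline name and durations are illustrative.

```yaml
# Sketch: granular timeouts matching the TimeoutFields layout above;
# tasks + finally must fit within the overall pipeline timeout.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: nightly-build
spec:
  pipelineRef:
    name: build-and-notify     # hypothetical Pipeline name
  timeouts:
    pipeline: 1h
    tasks: 45m
    finally: 15m
```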
### WhenExpression

_Appears on: ChildStatusReference, PipelineRunRunStatus, PipelineRunTaskRunStatus, SkippedTask_

WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run, to determine whether the Task should be executed or skipped.

| Field | Type | Description |
|---|---|---|
| `input` | `string` | Input is the string for guard checking, which can be a static input or an output from a parent Task. |
| `operator` | `k8s.io/apimachinery/pkg/selection.Operator` | Operator that represents an Input's relationship to the values. |
| `values` | `[]string` | Values is an array of strings which is compared against the input for guard checking. It must be non-empty. |
| `cel` | `string` | _Optional._ CEL is a string of Common Expression Language which can be used to conditionally execute the task based on the result of the expression evaluation. More info about CEL syntax: https://github.com/google/cel-spec/blob/master/doc/langdef.md |

### WhenExpressions (`[]WhenExpression` alias)

_Appears on: PipelineTask, Step_

WhenExpressions are used to specify whether a Task should be executed or skipped. All of them need to evaluate to True for a guarded Task to be executed.
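A sketch of a guarded PipelineTask using the `when` fields above; the Pipeline, Task, and parameter names are placeholders.

```yaml
# Sketch: the task runs only when the guard evaluates to true;
# otherwise it is skipped and reported under skippedTasks.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: guarded-deploy
spec:
  params:
    - name: environment
      type: string
  tasks:
    - name: deploy
      taskRef:
        name: deploy-task        # hypothetical Task name
      when:
        - input: "$(params.environment)"
          operator: in
          values: ["prod"]
```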
### WorkspaceBinding

_Appears on: PipelineRunSpec, TaskRunSpec_

WorkspaceBinding maps a Task's declared workspace to a Volume.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name is the name of the workspace populated by the volume. |
| `subPath` | `string` | _Optional._ SubPath is optionally a directory on the volume which should be used for this binding (i.e. the volume will be mounted at this sub-directory). |
| `volumeClaimTemplate` | `Kubernetes core/v1.PersistentVolumeClaim` | _Optional._ VolumeClaimTemplate is a template for a claim that will be created in the same namespace. The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun. |
| `persistentVolumeClaim` | `Kubernetes core/v1.PersistentVolumeClaimVolumeSource` | _Optional._ PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used. |
| `emptyDir` | `Kubernetes core/v1.EmptyDirVolumeSource` | _Optional._ EmptyDir represents a temporary directory that shares a Task's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir. Either this OR PersistentVolumeClaim can be used. |
| `configMap` | `Kubernetes core/v1.ConfigMapVolumeSource` | _Optional._ ConfigMap represents a configMap that should populate this workspace. |
| `secret` | `Kubernetes core/v1.SecretVolumeSource` | _Optional._ Secret represents a secret that should populate this workspace. |
| `projected` | `Kubernetes core/v1.ProjectedVolumeSource` | _Optional._ Projected represents a projected volume that should populate this workspace. |
| `csi` | `Kubernetes core/v1.CSIVolumeSource` | _Optional._ CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers. |

### WorkspaceDeclaration

_Appears on: TaskSpec_

WorkspaceDeclaration is a declaration of a volume that a Task requires.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name is the name by which you can bind the volume at runtime. |
| `description` | `string` | _Optional._ Description is an optional human-readable description of this volume. |
| `mountPath` | `string` | _Optional._ MountPath overrides the directory that the volume will be made available at. |
| `readOnly` | `bool` | ReadOnly dictates whether a mounted volume is writable. By default this field is false, and so mounted volumes are writable. |
| `optional` | `bool` | Optional marks a Workspace as not being required in TaskRuns. By default this field is false, and so declared workspaces are required. |

### WorkspacePipelineTaskBinding

_Appears on: PipelineTask_

WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be mapped to a task's declared workspace.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name is the name of the workspace as declared by the task. |
| `workspace` | `string` | _Optional._ Workspace is the name of the workspace declared by the pipeline. |
| `subPath` | `string` | _Optional._ SubPath is optionally a directory on the volume which should be used for this binding (i.e. the volume will be mounted at this sub-directory). |

### WorkspaceUsage

_Appears on: Sidecar, Step_

WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access to a Workspace defined in a Task.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name is the name of the workspace this Step or Sidecar wants access to. |
| `mountPath` | `string` | MountPath is the path that the workspace should be mounted to inside the Step or Sidecar, overriding any MountPath specified in the Task's WorkspaceDeclaration. |
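A minimal sketch of a `WorkspaceBinding`: the TaskRun binds a Task-declared workspace to an `emptyDir`; a PVC, `volumeClaimTemplate`, ConfigMap, or Secret binding would follow the same shape. Names are illustrative.

```yaml
# Sketch: binding a declared workspace to a temporary emptyDir volume.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: run-with-scratch-space
spec:
  taskRef:
    name: needs-scratch          # hypothetical Task declaring workspace "scratch"
  workspaces:
    - name: scratch
      emptyDir: {}
```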
---

## tekton.dev/v1alpha1

Package v1alpha1 contains API Schema definitions for the pipeline v1alpha1 API group.

Resource Types:

- Run
- StepAction
- VerificationPolicy
- PipelineResource

### Run

Run represents a single execution of a Custom Task.

| Field | Type | Description |
|---|---|---|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `Run` |
| `metadata` | `meta/v1.ObjectMeta` | _Optional._ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `RunSpec` | _Optional._ The spec fields are described under RunSpec below. |
| `status` | `RunStatus` | _Optional._ |
### StepAction

StepAction represents the actionable components of a Step. The Step can only reference it from the cluster or using remote resolution.

| Field | Type | Description |
|---|---|---|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `StepAction` |
| `metadata` | `meta/v1.ObjectMeta` | _Optional._ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `StepActionSpec` | _Optional._ Spec holds the desired state of the Step from the client. The spec fields are described under StepActionSpec below. |
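A hedged sketch of a `StepAction` and a Task step referencing it via `ref`; this assumes the `enable-step-actions` feature flag is on, and the param, result, and resource names are illustrative.

```yaml
# Sketch: a reusable StepAction emitting a step result, plus a Task
# step that references it and passes a parameter.
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: print-greeting
spec:
  image: alpine:3.19
  params:
    - name: who
      default: world
  results:
    - name: greeting
  script: |
    printf 'hello %s' "$(params.who)" | tee $(step.results.greeting.path)
---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: use-step-action
spec:
  steps:
    - name: greet
      ref:
        name: print-greeting
      params:
        - name: who
          value: Tekton
```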
### VerificationPolicy

VerificationPolicy defines the rules to verify Tekton resources. A VerificationPolicy can configure the mapping from resources to a list of public keys, so that when verifying the resources the corresponding public keys can be used.

| Field | Type | Description |
|---|---|---|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `VerificationPolicy` |
| `metadata` | `meta/v1.ObjectMeta` | _Optional._ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `VerificationPolicySpec` | Spec holds the desired state of the VerificationPolicy. The spec fields are described under VerificationPolicySpec below. |
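A hedged `VerificationPolicy` sketch: resources matching the pattern are verified against the listed public key. The key material and repository URL are placeholders, and enforcement assumes trusted-resource verification is enabled in the cluster.

```yaml
# Sketch: verify resources fetched from one repository against a single
# public key, failing the run if verification fails.
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: trusted-catalog
spec:
  resources:
    - pattern: "https://github.com/tektoncd/catalog.git"
  authorities:
    - name: release-key
      key:
        data: |
          -----BEGIN PUBLIC KEY-----
          ...placeholder key material...
          -----END PUBLIC KEY-----
        hashAlgorithm: sha256
  mode: enforce
```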
### PipelineResource

PipelineResource describes a resource that is an input to or output from a Task.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Type | Description |
|---|---|---|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `PipelineResource` |
| `metadata` | `meta/v1.ObjectMeta` | _Optional._ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `PipelineResourceSpec` | Spec holds the desired state of the PipelineResource from the client. The spec fields are described under PipelineResourceSpec below. |
| `status` | `PipelineResourceStatus` | _Optional._ Status is used to communicate the observed state of the PipelineResource from the controller, but was unused as there is no controller for PipelineResource. |

### Authority

_Appears on: VerificationPolicySpec_

The Authority block defines the keys for validating signatures.

| Field | Type | Description |
|---|---|---|
| `name` | `string` | Name is the name for this authority. |
| `key` | `KeyRef` | Key contains the public key to validate the resource. |

### EmbeddedRunSpec

_Appears on: RunSpec_

EmbeddedRunSpec allows custom task definitions to be embedded.

| Field | Type | Description |
|---|---|---|
| `metadata` | `v1beta1.PipelineTaskMetadata` | _Optional._ |
| `spec` | `k8s.io/apimachinery/pkg/runtime.RawExtension` | _Optional._ Spec is a specification of a custom task. `Raw` (`[]byte`) is the underlying serialization of this object (TODO: determine how to detect ContentType and ContentEncoding of `Raw` data); `Object` (`k8s.io/apimachinery/pkg/runtime.Object`) can hold a representation of this extension, useful for working with versioned structs. |

### HashAlgorithm (`string` alias)

_Appears on: KeyRef_

HashAlgorithm defines the hash algorithm used for the public key.

### KeyRef

_Appears on: Authority_

KeyRef defines the reference to a public key.

| Field | Type | Description |
|---|---|---|
| `secretRef` | `Kubernetes core/v1.SecretReference` | _Optional._ SecretRef sets a reference to a secret with the key. |
| `data` | `string` | _Optional._ Data contains the inline public key. |
| `kms` | `string` | _Optional._ KMS contains the KMS URL of the public key. Supported formats differ based on the KMS system used. One example of a KMS URL could be `gcpkms://projects/[PROJECT]/locations/[LOCATION]>keyRings/[KEYRING]/cryptoKeys/[KEY]/cryptoKeyVersions/[KEY_VERSION]`. For more examples please refer to https://docs.sigstore.dev/cosign/kms_support. Note that the KMS is not supported yet. |
| `hashAlgorithm` | `HashAlgorithm` | _Optional._ HashAlgorithm always defaults to sha256 if the algorithm hasn't been explicitly set. |
### ModeType (`string` alias)

_Appears on: VerificationPolicySpec_

ModeType indicates the type of a mode for VerificationPolicy.

### ResourcePattern

_Appears on: VerificationPolicySpec_

ResourcePattern defines the pattern of the resource source.

| Field | Type | Description |
|---|---|---|
| `pattern` | `string` | Pattern defines a resource pattern. A regex is created from `Pattern` to filter resources. Example patterns: GitHub resource: `https://github.com/tektoncd/catalog.git`, `https://github.com/tektoncd/`; Bundle resource: `gcr.io/tekton-releases/catalog/upstream/git-clone`, `gcr.io/tekton-releases/catalog/upstream`; Hub resource: `https://artifacthub.io/`. |

### RunReason (`string` alias)

RunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the Run itself.

### RunSpec

_Appears on: Run_

RunSpec defines the desired state of Run.

| Field | Type | Description |
|---|---|---|
| `ref` | `v1beta1.TaskRef` | _Optional._ |
| `spec` | `EmbeddedRunSpec` | _Optional._ Spec is a specification of a custom task. |
| `params` | `v1beta1.Params` | _Optional._ |
| `status` | `RunSpecStatus` | _Optional._ Used for cancelling a run (and maybe more later on). |
| `statusMessage` | `RunSpecStatusMessage` | _Optional._ Status message for cancellation. |
| `retries` | `int` | _Optional._ Used for propagating the retries count to custom tasks. |
| `serviceAccountName` | `string` | _Optional._ |
| `podTemplate` | `Template` | _Optional._ PodTemplate holds pod-specific configuration. |
| `timeout` | `meta/v1.Duration` | _Optional._ Time after which the custom task times out. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces` | `[]v1beta1.WorkspaceBinding` | _Optional._ Workspaces is a list of WorkspaceBindings from volumes to workspaces. |

### RunSpecStatus (`string` alias)

_Appears on: RunSpec_

RunSpecStatus defines the taskrun spec status the user can provide.

### RunSpecStatusMessage (`string` alias)

_Appears on: RunSpec_

RunSpecStatusMessage defines human-readable status messages for the TaskRun.
### StepActionObject

StepActionObject is implemented by StepAction.

### StepActionSpec

_Appears on: StepAction_

StepActionSpec contains the actionable components of a step.

| Field | Type | Description |
|---|---|---|
| `description` | `string` | _Optional._ Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `image` | `string` | _Optional._ Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command` | `[]string` | _Optional._ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment; unresolved references are left unchanged, and `$$` escapes the expansion. Cannot be updated. |
| `args` | `[]string` | _Optional._ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references are expanded and escaped as described for `command`. Cannot be updated. |
| `env` | `[]Kubernetes core/v1.EnvVar` | _Optional._ List of environment variables to set in the container. Cannot be updated. |
| `script` | `string` | _Optional._ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a `command`, and the `args` will be passed to the Script. |
| `workingDir` | `string` | _Optional._ Step's working directory. If not specified, the container runtime's default is used, which might be configured in the container image. Cannot be updated. |
| `params` | `ParamSpecs` | _Optional._ Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `results` | `[]StepResult` | _Optional._ Results are values that this StepAction can output. |
| `securityContext` | `Kubernetes core/v1.SecurityContext` | _Optional._ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. The value set in the StepAction takes precedence over the value from the Task. |
| `volumeMounts` | `[]Kubernetes core/v1.VolumeMount` | _Optional._ Volumes to mount into the Step's filesystem. Cannot be updated. |
### VerificationPolicySpec

_Appears on:_ `VerificationPolicy`

VerificationPolicySpec defines the patterns and authorities.

| Field | Description |
| --- | --- |
| `resources` (_[]ResourcePattern_) | Resources defines the patterns of resource sources that should be subject to this policy. For example, we may want to apply this Policy only to resources from a certain GitHub repo; the ResourcesPattern should then be a valid regex. E.g. if using the git resolver and we want to configure keys for a certain git repo, `ResourcesPattern` can be `https://github.com/tektoncd/catalog.git`; the regex is used to filter out those resources. |
| `authorities` (_[]Authority_) | Authorities defines the rules for validating signatures. |
| `mode` (_ModeType_) | _(Optional)_ Mode controls whether a failing policy will fail the taskrun/pipelinerun, or only log warnings. `enforce` - fail the taskrun/pipelinerun if verification fails (default). `warn` - don't fail the taskrun/pipelinerun if verification fails, but log warnings. |

### PipelineResourceSpec

_Appears on:_ `PipelineResource`, `PipelineResourceBinding`

PipelineResourceSpec defines an individual resource used in the pipeline.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `description` (_string_) | _(Optional)_ Description is a user-facing description of the resource that may be used to populate a UI. |
| `type` (_string_) |  |
| `params` (_[]ResourceParam_) |  |
| `secrets` (_[]SecretParam_) | _(Optional)_ Secrets to fetch to populate some of the resource fields. |

### PipelineResourceStatus

_Appears on:_ `PipelineResource`

PipelineResourceStatus does not contain anything because PipelineResources on their own do not have a status.

Deprecated: Unused, preserved only for backwards compatibility.
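Referring back to the VerificationPolicySpec and ResourcePattern fields above, a hypothetical policy might look like the following sketch. The `authorities` entry assumes a key stored in a Secret; the Authority type itself is documented elsewhere, so its exact fields here are an assumption.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: example-policy                # hypothetical name
spec:
  resources:
    - pattern: "https://github.com/tektoncd/catalog.git"   # regex over resource sources
  authorities:
    - name: example-key               # assumed Authority shape; see the Authority type
      key:
        secretRef:
          name: verification-secrets  # placeholder Secret
          namespace: tekton-pipelines
  mode: warn                          # log a warning instead of failing the run
```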
### ResourceDeclaration

_Appears on:_ `TaskResource`

ResourceDeclaration defines an input or output PipelineResource declared as a requirement by another type such as a Task or Condition. The Name field will be used to refer to these PipelineResources within the type's definition, and when provided as an Input, the Name will be the path to the volume mounted containing this PipelineResource as an input (e.g. an input Resource named `workspace` will be mounted at `/workspace`).

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name declares the name by which a resource is referenced in the definition. Resources may be referenced by name in the definition of a Task's steps. |
| `type` (_string_) | Type is the type of this resource. |
| `description` (_string_) | _(Optional)_ Description is a user-facing description of the declared resource that may be used to populate a UI. |
| `targetPath` (_string_) | _(Optional)_ TargetPath is the path in the workspace directory where the resource will be copied. |
| `optional` (_bool_) | Optional declares the resource as optional. By default optional is set to false, which makes a resource required. `optional: true` - the resource is considered optional. `optional: false` - the resource is considered required (equivalent of not specifying it). |

### ResourceParam

_Appears on:_ `PipelineResourceSpec`

ResourceParam declares a string value to use for the parameter called Name, and is used in the specific context of PipelineResources.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name` (_string_) |  |
| `value` (_string_) |  |

### SecretParam

_Appears on:_ `PipelineResourceSpec`

SecretParam indicates which secret can be used to populate a field of the resource.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `fieldName` (_string_) |  |
| `secretKey` (_string_) |  |
| `secretName` (_string_) |  |

### RunResult

_Appears on:_ `RunStatusFields`

RunResult is used to describe the results of a task.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name the given name. |
| `value` (_string_) | Value the given value of the result. |

### RunStatus

_Appears on:_ `Run`, `RunStatusFields`

RunStatus defines the observed state of Run.

| Field | Description |
| --- | --- |
| `Status` (_knative.dev/pkg/apis/duck/v1.Status_) | Members of `Status` are embedded into this type. |
| `RunStatusFields` (_RunStatusFields_) | Members of `RunStatusFields` are embedded into this type. RunStatusFields inlines the status fields. |
### RunStatusFields

_Appears on:_ `RunStatus`

RunStatusFields holds the fields of Run's status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
| --- | --- |
| `startTime` (_Kubernetes meta/v1.Time_) | _(Optional)_ StartTime is the time the build is actually started. |
| `completionTime` (_Kubernetes meta/v1.Time_) | _(Optional)_ CompletionTime is the time the build completed. |
| `results` (_[]RunResult_) | _(Optional)_ Results reports any output result values to be consumed by later tasks in a pipeline. |
| `retriesStatus` (_[]RunStatus_) | _(Optional)_ RetriesStatus contains the history of RunStatus in case of a retry. |
| `extraFields` (_k8s.io/apimachinery/pkg/runtime.RawExtension_) | ExtraFields holds arbitrary fields provided by the custom task controller. |

---

## tekton.dev/v1beta1

Package v1beta1 contains API Schema definitions for the pipeline v1beta1 API group.

Resource Types:

- ClusterTask
- CustomRun
- Pipeline
- PipelineRun
- StepAction
- Task
- TaskRun

### ClusterTask

ClusterTask is a Task with a cluster scope. ClusterTasks are used to represent Tasks that should be publicly addressable from any namespace in the cluster.

Deprecated: Please use the cluster resolver instead.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `ClusterTask` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_TaskSpec_) | _(Optional)_ Spec holds the desired state of the Task from the client. |
| `spec.resources` (_TaskResources_) | _(Optional)_ Resources is a list of input and output resources to run the task. Resources are represented in TaskRuns as bindings to instances of PipelineResources. Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.params` (_ParamSpecs_) | _(Optional)_ Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `spec.displayName` (_string_) | _(Optional)_ DisplayName is a user-facing name of the task that may be used to populate a UI. |
| `spec.description` (_string_) | _(Optional)_ Description is a user-facing description of the task that may be used to populate a UI. |
| `spec.steps` (_[]Step_) | Steps are the steps of the build; each step is run sequentially with the source mounted into /workspace. |
| `spec.volumes` (_[]Kubernetes core/v1.Volume_) | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `spec.stepTemplate` (_StepTemplate_) | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `spec.sidecars` (_[]Sidecar_) | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `spec.workspaces` (_[]WorkspaceDeclaration_) | Workspaces are the volumes that this Task requires. |
| `spec.results` (_[]TaskResult_) | Results are values that this Task can output. |

### CustomRun

CustomRun represents a single execution of a Custom Task.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `CustomRun` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_CustomRunSpec_) | _(Optional)_ |
| `spec.customRef` (_TaskRef_) | _(Optional)_ |
| `spec.customSpec` (_EmbeddedCustomRunSpec_) | _(Optional)_ Spec is a specification of a custom task. |
| `spec.params` (_Params_) | _(Optional)_ |
| `spec.status` (_CustomRunSpecStatus_) | _(Optional)_ Used for cancelling a customrun (and maybe more later on). |
| `spec.statusMessage` (_CustomRunSpecStatusMessage_) | _(Optional)_ Status message for cancellation. |
| `spec.retries` (_int_) | _(Optional)_ Used for propagating retries count to custom tasks. |
| `spec.serviceAccountName` (_string_) | _(Optional)_ |
| `spec.timeout` (_Kubernetes meta/v1.Duration_) | _(Optional)_ Time after which the custom task times out. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `spec.workspaces` (_[]WorkspaceBinding_) | _(Optional)_ Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `status` (_CustomRunStatus_) | _(Optional)_ |
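As a sketch, a CustomRun that hands execution to a custom-task controller via `customRef` might look like this; the `example.dev` API group and the object names are placeholders.

```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
  name: example-customrun        # hypothetical name
spec:
  customRef:                     # CustomRunSpec.customRef
    apiVersion: example.dev/v1alpha1
    kind: Example
    name: my-custom-task
  params:
    - name: iterations
      value: "3"
  retries: 1                     # propagated to the custom task controller
  timeout: 30m
  serviceAccountName: default
```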
### Pipeline

Pipeline describes a list of Tasks to execute. It expresses how outputs of tasks feed into inputs of subsequent tasks.

Deprecated: Please use v1.Pipeline instead.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `Pipeline` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_PipelineSpec_) | _(Optional)_ Spec holds the desired state of the Pipeline from the client. |
| `spec.displayName` (_string_) | _(Optional)_ DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `spec.description` (_string_) | _(Optional)_ Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `spec.resources` (_[]PipelineDeclaredResource_) | Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.tasks` (_[]PipelineTask_) | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `spec.params` (_ParamSpecs_) | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `spec.workspaces` (_[]PipelineWorkspaceDeclaration_) | _(Optional)_ Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `spec.results` (_[]PipelineResult_) | _(Optional)_ Results are values that this pipeline can output once run. |
| `spec.finally` (_[]PipelineTask_) | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks are finished executing successfully, or after a failure which would result in ending the Pipeline. |
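A minimal sketch of a v1beta1 Pipeline wiring a parameter and a workspace into one task and cleaning up in `finally`; the task names and the `git-clone`/`cleanup` references are placeholders.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline          # hypothetical name
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared
  tasks:
    - name: fetch                 # PipelineSpec.tasks: the executed graph
      taskRef:
        name: git-clone           # placeholder Task
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output
          workspace: shared
  finally:
    - name: cleanup               # runs after all tasks finish or on failure
      taskRef:
        name: cleanup             # placeholder Task
```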
### PipelineRun

PipelineRun represents a single execution of a Pipeline. PipelineRuns are how the graph of Tasks declared in a Pipeline are executed; they specify inputs to Pipelines such as parameter values, and capture operational aspects of the Tasks' execution such as service account and tolerations. Creating a PipelineRun creates TaskRuns for Tasks in the referenced Pipeline.

Deprecated: Please use v1.PipelineRun instead.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `PipelineRun` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_PipelineRunSpec_) | _(Optional)_ |
| `spec.pipelineRef` (_PipelineRef_) | _(Optional)_ |
| `spec.pipelineSpec` (_PipelineSpec_) | _(Optional)_ Specifying an inline PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `spec.resources` (_[]PipelineResourceBinding_) | Resources is a list of bindings specifying which actual instances of PipelineResources to use for the resources the Pipeline has declared it needs. Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.params` (_Params_) | Params is a list of parameter names and values. |
| `spec.serviceAccountName` (_string_) | _(Optional)_ |
| `spec.status` (_PipelineRunSpecStatus_) | _(Optional)_ Used for cancelling a pipelinerun (and maybe more later on). |
| `spec.timeouts` (_TimeoutFields_) | _(Optional)_ Time after which the Pipeline times out. Currently three keys are accepted in the map: `pipeline`, `tasks`, and `finally`, with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally. |
| `spec.timeout` (_Kubernetes meta/v1.Duration_) | _(Optional)_ Timeout is the time after which the Pipeline times out. Defaults to never. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration. Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead. |
| `spec.podTemplate` (_Template_) | PodTemplate holds pod-specific configuration. |
| `spec.workspaces` (_[]WorkspaceBinding_) | _(Optional)_ Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline. |
| `spec.taskRunSpecs` (_[]PipelineTaskRunSpec_) | _(Optional)_ TaskRunSpecs holds a set of runtime specs. |
| `status` (_PipelineRunStatus_) | _(Optional)_ |
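Putting the PipelineRunSpec fields together, here is a sketch of a run of the hypothetical pipeline above; names and values are placeholders.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipelinerun       # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline        # the Pipeline sketched earlier
  params:
    - name: repo-url
      value: https://github.com/tektoncd/catalog.git
  serviceAccountName: default
  timeouts:                       # Timeouts.pipeline >= tasks + finally
    pipeline: 1h
    tasks: 50m
    finally: 10m
  workspaces:
    - name: shared
      emptyDir: {}                # WorkspaceBinding backed by an emptyDir volume
```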
### StepAction

StepAction represents the actionable components of Step. The Step can only reference it from the cluster or using remote resolution.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `StepAction` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_StepActionSpec_) | _(Optional)_ Spec holds the desired state of the Step from the client. |
| `spec.description` (_string_) | _(Optional)_ Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `spec.image` (_string_) | _(Optional)_ Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `spec.command` (_[]string_) | _(Optional)_ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment, with the same escaping rules described for `command` under the v1alpha1 StepActionSpec above. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `spec.args` (_[]string_) | _(Optional)_ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment, with the same escaping rules as `command`. Cannot be updated. |
| `spec.env` (_[]Kubernetes core/v1.EnvVar_) | _(Optional)_ List of environment variables to set in the container. Cannot be updated. |
| `spec.script` (_string_) | _(Optional)_ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `spec.workingDir` (_string_) | _(Optional)_ Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `spec.params` (_v1.ParamSpecs_) | _(Optional)_ Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `spec.results` (_[]v1.StepResult_) | _(Optional)_ Results are values that this StepAction can output. |
| `spec.securityContext` (_Kubernetes core/v1.SecurityContext_) | _(Optional)_ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ The value set in StepAction will take precedence over the value from Task. |
| `spec.volumeMounts` (_[]Kubernetes core/v1.VolumeMount_) | _(Optional)_ Volumes to mount into the Step's filesystem. Cannot be updated. |

### Task

Task represents a collection of sequential steps that are run as part of a Pipeline using a set of inputs and producing a set of outputs. Tasks execute when TaskRuns are created that provide the input parameters and resources and output resources the Task requires.

Deprecated: Please use v1.Task instead.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `Task` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_TaskSpec_) | _(Optional)_ Spec holds the desired state of the Task from the client. |
| `spec.resources` (_TaskResources_) | _(Optional)_ Resources is a list of input and output resources to run the task. Resources are represented in TaskRuns as bindings to instances of PipelineResources. Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.params` (_ParamSpecs_) | _(Optional)_ Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `spec.displayName` (_string_) | _(Optional)_ DisplayName is a user-facing name of the task that may be used to populate a UI. |
| `spec.description` (_string_) | _(Optional)_ Description is a user-facing description of the task that may be used to populate a UI. |
| `spec.steps` (_[]Step_) | Steps are the steps of the build; each step is run sequentially with the source mounted into /workspace. |
| `spec.volumes` (_[]Kubernetes core/v1.Volume_) | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `spec.stepTemplate` (_StepTemplate_) | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `spec.sidecars` (_[]Sidecar_) | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `spec.workspaces` (_[]WorkspaceDeclaration_) | Workspaces are the volumes that this Task requires. |
| `spec.results` (_[]TaskResult_) | Results are values that this Task can output. |
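A sketch of a v1beta1 Task exercising params, steps, and results; the task name and image are placeholders.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: sum-inputs                # hypothetical name
spec:
  displayName: Sum two numbers
  params:
    - name: a
      type: string
    - name: b
      type: string
  results:
    - name: sum
      description: The sum of a and b.
  steps:
    - name: add                   # steps run sequentially
      image: bash:latest          # placeholder image
      script: |
        #!/usr/bin/env bash
        echo -n $(( $(params.a) + $(params.b) )) > $(results.sum.path)
```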
### TaskRun

TaskRun represents a single execution of a Task. TaskRuns are how the steps specified in a Task are executed; they specify the parameters and resources used to run the steps in a Task.

Deprecated: Please use v1.TaskRun instead.

| Field | Description |
| --- | --- |
| `apiVersion` (_string_) | `tekton.dev/v1beta1` |
| `kind` (_string_) | `TaskRun` |
| `metadata` (_Kubernetes meta/v1.ObjectMeta_) | _(Optional)_ Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` (_TaskRunSpec_) | _(Optional)_ |
| `spec.debug` (_TaskRunDebug_) | _(Optional)_ |
| `spec.params` (_Params_) | _(Optional)_ |
| `spec.resources` (_TaskRunResources_) | _(Optional)_ Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.serviceAccountName` (_string_) | _(Optional)_ |
| `spec.taskRef` (_TaskRef_) | _(Optional)_ No more than one of the TaskRef and TaskSpec may be specified. |
| `spec.taskSpec` (_TaskSpec_) | _(Optional)_ Specifying an inline TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `spec.status` (_TaskRunSpecStatus_) | _(Optional)_ Used for cancelling a TaskRun (and maybe more later on). |
| `spec.statusMessage` (_TaskRunSpecStatusMessage_) | _(Optional)_ Status message for cancellation. |
| `spec.retries` (_int_) | _(Optional)_ Retries represents how many times this TaskRun should be retried in the event of Task failure. |
| `spec.timeout` (_Kubernetes meta/v1.Duration_) | _(Optional)_ Time after which one retry attempt times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `spec.podTemplate` (_Template_) | PodTemplate holds pod-specific configuration. |
| `spec.workspaces` (_[]WorkspaceBinding_) | _(Optional)_ Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `spec.stepOverrides` (_[]TaskRunStepOverride_) | _(Optional)_ Overrides to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepOverride, the value from the StepOverride will be used. This field is only supported when the alpha feature gate is enabled. |
| `spec.sidecarOverrides` (_[]TaskRunSidecarOverride_) | _(Optional)_ Overrides to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarOverride, the value from the SidecarOverride will be used. This field is only supported when the alpha feature gate is enabled. |
| `spec.computeResources` (_Kubernetes core/v1.ResourceRequirements_) | Compute resources to use for this TaskRun. |
| `status` (_TaskRunStatus_) | _(Optional)_ |
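A sketch of a TaskRun for the Task above, showing `taskRef`, params, and a per-attempt timeout; the names and values are placeholders.

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: sum-inputs-run-1          # hypothetical name
spec:
  taskRef:
    name: sum-inputs              # at most one of taskRef / taskSpec
  params:
    - name: a
      value: "40"
    - name: b
      value: "2"
  serviceAccountName: default
  retries: 1
  timeout: 15m                    # timeout per retry attempt
```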
### Algorithm (`string` alias)

Algorithm is a standard cryptographic hash algorithm.

### Artifact

_Appears on:_ `Artifacts`, `StepState`

TaskRunStepArtifact represents an artifact produced or used by a step within a task run. It directly uses the Artifact type for its structure.

| Field | Description |
| --- | --- |
| `name` (_string_) | The artifact's identifying category name. |
| `values` (_[]ArtifactValue_) | A collection of values related to the artifact. |
| `buildOutput` (_bool_) | Indicates whether the artifact is a build output or a by-product. |

### ArtifactValue

_Appears on:_ `Artifact`

ArtifactValue represents a specific value or data element within an Artifact.

| Field | Description |
| --- | --- |
| `digest` (_map[Algorithm]string_) |  |
| `uri` (_string_) | Algorithm-specific digests for verifying the content, e.g. SHA256. |

### Artifacts

Artifacts represents the collection of input and output artifacts associated with a task run or a similar process. Artifacts in this context are units of data or resources that the process either consumes as input or produces as output.

| Field | Description |
| --- | --- |
| `inputs` (_[]Artifact_) |  |
| `outputs` (_[]Artifact_) |  |
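As a sketch of the shape described by Artifact and ArtifactValue above, an Artifacts collection might serialize roughly as follows; the URIs and digest values are placeholders.

```yaml
inputs:
  - name: source                          # Artifact.name: identifying category
    values:
      - uri: https://github.com/tektoncd/catalog.git            # placeholder
        digest:
          sha256: "0000000000000000000000000000000000000000000000000000000000000000"
outputs:
  - name: image
    buildOutput: true                     # a build output rather than a by-product
    values:
      - uri: oci://registry.example.com/app                     # placeholder
        digest:
          sha256: "1111111111111111111111111111111111111111111111111111111111111111"
```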
### ChildStatusReference

_Appears on:_ `PipelineRunStatusFields`

ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name is the name of the TaskRun or Run this is referencing. |
| `displayName` (_string_) | DisplayName is a user-facing name of the pipelineTask that may be used to populate a UI. |
| `pipelineTaskName` (_string_) | PipelineTaskName is the name of the PipelineTask this is referencing. |
| `whenExpressions` (_[]WhenExpression_) | _(Optional)_ WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### CloudEventCondition (`string` alias)

_Appears on:_ `CloudEventDeliveryState`

CloudEventCondition is a string that represents the condition of the event.

### CloudEventDelivery

_Appears on:_ `TaskRunStatusFields`

CloudEventDelivery is the target of a cloud event along with the state of delivery.

| Field | Description |
| --- | --- |
| `target` (_string_) | Target points to an addressable. |
| `status` (_CloudEventDeliveryState_) |  |

### CloudEventDeliveryState

_Appears on:_ `CloudEventDelivery`

CloudEventDeliveryState reports the state of a cloud event to be sent.

| Field | Description |
| --- | --- |
| `condition` (_CloudEventCondition_) | Current status. |
| `sentAt` (_Kubernetes meta/v1.Time_) | _(Optional)_ SentAt is the time at which the last attempt to send the event was made. |
| `message` (_string_) | Error is the text of the error, if any. |
| `retryCount` (_int32_) | RetryCount is the number of attempts made to send the cloud event. |

### Combination (`map[string]string` alias)

Combination is a map, mainly defined to hold a single combination from a Matrix, with the key as the param name and the value as the param value.

### Combinations (`[]Combination` alias)

Combinations is a list of Combination.

### ConfigSource

_Appears on:_ `Provenance`

ConfigSource contains the information that can uniquely identify where a remote built definition came from, i.e. Git repositories, Tekton Bundles in an OCI registry, and hub.

| Field | Description |
| --- | --- |
| `uri` (_string_) | URI indicates the identity of the source of the build definition. Example: "https://github.com/tektoncd/catalog" |
| `digest` (_map[string]string_) | Digest is a collection of cryptographic digests for the contents of the artifact specified by URI. Example: {"sha1": "f99d13e554ffcb696dee719fa85b695cb5b0f428"} |
| `entryPoint` (_string_) | EntryPoint identifies the entry point into the build. This is often a path to a build definition file and/or a target label within that file. Example: "task/git-clone/0.8/git-clone.yaml" |
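Using the example values already given in the field descriptions above, a ConfigSource recorded in a run's provenance would look roughly like:

```yaml
configSource:
  uri: https://github.com/tektoncd/catalog
  digest:
    sha1: f99d13e554ffcb696dee719fa85b695cb5b0f428
  entryPoint: task/git-clone/0.8/git-clone.yaml
```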
### CustomRunReason (`string` alias)

CustomRunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the CustomRun itself.

### CustomRunSpec

_Appears on:_ `CustomRun`

CustomRunSpec defines the desired state of CustomRun.

| Field | Description |
| --- | --- |
| `customRef` (_TaskRef_) | _(Optional)_ |
| `customSpec` (_EmbeddedCustomRunSpec_) | _(Optional)_ Spec is a specification of a custom task. |
| `params` (_Params_) | _(Optional)_ |
| `status` (_CustomRunSpecStatus_) | _(Optional)_ Used for cancelling a customrun (and maybe more later on). |
| `statusMessage` (_CustomRunSpecStatusMessage_) | _(Optional)_ Status message for cancellation. |
| `retries` (_int_) | _(Optional)_ Used for propagating retries count to custom tasks. |
| `serviceAccountName` (_string_) | _(Optional)_ |
| `timeout` (_Kubernetes meta/v1.Duration_) | _(Optional)_ Time after which the custom task times out. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces` (_[]WorkspaceBinding_) | _(Optional)_ Workspaces is a list of WorkspaceBindings from volumes to workspaces. |

### CustomRunSpecStatus (`string` alias)

_Appears on:_ `CustomRunSpec`

CustomRunSpecStatus defines the taskrun spec status the user can provide.

### CustomRunSpecStatusMessage (`string` alias)

_Appears on:_ `CustomRunSpec`

CustomRunSpecStatusMessage defines human-readable status messages for the TaskRun.

### EmbeddedCustomRunSpec

_Appears on:_ `CustomRunSpec`

EmbeddedCustomRunSpec allows custom task definitions to be embedded.

| Field | Description |
| --- | --- |
| `metadata` (_PipelineTaskMetadata_) | _(Optional)_ |
| `spec` (_k8s.io/apimachinery/pkg/runtime.RawExtension_) | _(Optional)_ Spec is a specification of a custom task. The embedded RawExtension members are `Raw` (_[]byte_), the underlying serialization of this object, and `Object` (_runtime.Object_), which can hold a representation of this extension and is useful for working with versioned structs. |

### EmbeddedTask

_Appears on:_ `PipelineTask`

EmbeddedTask is used to define a Task inline within a Pipeline's PipelineTasks.

| Field | Description |
| --- | --- |
| `spec` (_k8s.io/apimachinery/pkg/runtime.RawExtension_) | _(Optional)_ Spec is a specification of a custom task. The embedded RawExtension members are `Raw` (_[]byte_), the underlying serialization of this object, and `Object` (_runtime.Object_), which can hold a representation of this extension and is useful for working with versioned structs. |
| `metadata` (_PipelineTaskMetadata_) | _(Optional)_ |
| `TaskSpec` (_TaskSpec_) | Members of `TaskSpec` are embedded into this type. _(Optional)_ TaskSpec is a specification of a task. |

### IncludeParams

IncludeParams allows passing in specific combinations of Parameters into the Matrix.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name of the specified combination. |
| `params` (_Params_) | Params takes only `Parameters` of type `"string"`. The names of the `params` must match the names of the `params` in the underlying `Task`. |

### InternalTaskModifier

InternalTaskModifier implements TaskModifier for resources that are built in to Tekton Pipelines.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `stepsToPrepend` (_[]Step_) |  |
| `stepsToAppend` (_[]Step_) |  |
| `volumes` (_[]Kubernetes core/v1.Volume_) |  |

### Matrix

_Appears on:_ `PipelineTask`

Matrix is used to fan out Tasks in a Pipeline.

| Field | Description |
| --- | --- |
| `params` (_Params_) | Params is a list of parameters used to fan out the pipelineTask. Params takes only `Parameters` of type `"array"`. Each array element is supplied to the `PipelineTask` by substituting `params` of type `"string"` in the underlying `Task`. The names of the `params` in the `Matrix` must match the names of the `params` in the underlying `Task` that they will be substituting. |
| `include` (_IncludeParamsList_) | _(Optional)_ Include is a list of IncludeParams, which allows passing in specific combinations of Parameters into the Matrix. |
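A sketch of a matrixed PipelineTask using the `params` and `include` fields described above; the task and parameter names are placeholders, and matrix support may be gated behind a feature flag depending on cluster configuration.

```yaml
tasks:
  - name: build                    # a PipelineTask fanned out by its matrix
    taskRef:
      name: build-image            # placeholder Task
    matrix:
      params:
        - name: platform           # array param: one TaskRun per element
          value:
            - linux/amd64
            - linux/arm64
      include:
        - name: experimental       # an explicit extra combination
          params:
            - name: platform
              value: linux/s390x
```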
### OnErrorType (`string` alias)

_Appears on:_ `Step`

OnErrorType defines the supported exiting behaviors of a container on error.

### Param

_Appears on:_ `TaskRunInputs`

Param declares a ParamValue to use for the parameter called name.

| Field | Description |
| --- | --- |
| `name` (_string_) |  |
| `value` (_ParamValue_) |  |

### ParamSpec

ParamSpec defines arbitrary parameters needed beyond typed inputs (such as resources). Parameter values are provided by users as inputs on a TaskRun or PipelineRun.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name declares the name by which a parameter is referenced. |
| `type` (_ParamType_) | _(Optional)_ Type is the user-specified type of the parameter. The possible types are currently "string", "array", and "object", and "string" is the default. |
| `description` (_string_) | _(Optional)_ Description is a user-facing description of the parameter that may be used to populate a UI. |
| `properties` (_map[string]PropertySpec_) | _(Optional)_ Properties is the JSON Schema properties to support key-value pair parameters. |
| `default` (_ParamValue_) | _(Optional)_ Default is the value a parameter takes if no input value is supplied. If default is set, a Task may be executed without a supplied value for the parameter. |
| `enum` (_[]string_) | _(Optional)_ Enum declares a set of allowed param input values for tasks/pipelines that can be validated. If Enum is not set, no input validation is performed for the param. |

### ParamSpecs (`[]ParamSpec` alias)

_Appears on:_ `PipelineSpec`, `TaskSpec`

ParamSpecs is a list of ParamSpec.

### ParamType (`string` alias)

_Appears on:_ `ParamSpec`, `ParamValue`, `PropertySpec`

ParamType indicates the type of an input parameter. Used to distinguish between a single string and an array of strings.

### ParamValue

_Appears on:_ `Param`, `ParamSpec`, `PipelineResult`, `PipelineRunResult`, `TaskResult`, `TaskRunResult`

ResultValue is a type alias of ParamValue.

| Field | Description |
| --- | --- |
| `Type` (_ParamType_) |  |
| `StringVal` (_string_) | Represents the stored type of ParamValues. |
| `ArrayVal` (_[]string_) |  |
| `ObjectVal` (_map[string]string_) |  |
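A sketch of ParamSpec declarations covering the three parameter types together with the `default`, `enum`, and `properties` fields above; names and values are placeholders, and `enum` and object params may require feature flags depending on the cluster.

```yaml
params:
  - name: environment
    type: string
    default: staging
    enum: [staging, production]        # input must be one of these values
  - name: regions
    type: array
    default: [us-east-1]
  - name: image-settings
    type: object
    properties:                        # JSON-Schema-style key declarations
      registry: {type: string}
      tag: {type: string}
    default:
      registry: registry.example.com   # placeholder
      tag: latest
```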
### Params (`[]Param` alias)

_Appears on:_ `RunSpec` (v1alpha1), `CustomRunSpec`, `IncludeParams`, `Matrix`, `PipelineRunSpec`, `PipelineTask`, `ResolverRef`, `Step`, `TaskRunSpec`

Params is a list of Param.

### PipelineDeclaredResource

_Appears on:_ `PipelineSpec`

PipelineDeclaredResource is used by a Pipeline to declare the types of the PipelineResources that it will require to run, and names which can be used to refer to these PipelineResources in PipelineTaskResourceBindings.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name is the name that will be used by the Pipeline to refer to this resource. It does not directly correspond to the name of any PipelineResources Task inputs or outputs, and it does not correspond to the actual names of the PipelineResources that will be bound in the PipelineRun. |
| `type` (_string_) | Type is the type of the PipelineResource. |
| `optional` (_bool_) | Optional declares the resource as optional. `optional: true` - the resource is considered optional. `optional: false` - the resource is considered required (default; equivalent of not specifying it). |

### PipelineObject

PipelineObject is implemented by Pipeline.

### PipelineRef

_Appears on:_ `PipelineRunSpec`, `PipelineTask`

PipelineRef can be used to refer to a specific instance of a Pipeline.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `apiVersion` (_string_) | _(Optional)_ API version of the referent. |
| `bundle` (_string_) | _(Optional)_ Bundle URL reference to a Tekton Bundle. Deprecated: Please use ResolverRef with the bundles resolver instead. The field remains for go client backwards compatibility, but is no longer used or allowed. |
| `ResolverRef` (_ResolverRef_) | _(Optional)_ ResolverRef allows referencing a Pipeline in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |
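A sketch of a PipelineRef that uses remote resolution instead of an in-cluster name. The resolver params shown (`url`, `revision`, `pathInRepo`) are the usual git-resolver inputs and are an assumption here, since ResolverRef itself is documented elsewhere; note the field is only supported when the alpha feature gate is enabled.

```yaml
pipelineRef:
  resolver: git                     # fields come from the embedded ResolverRef
  params:
    - name: url
      value: https://github.com/tektoncd/catalog.git    # placeholder repo
    - name: revision
      value: main
    - name: pathInRepo
      value: pipeline/example/pipeline.yaml             # placeholder path
```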
### PipelineResourceBinding

_Appears on:_ `PipelineRunSpec`, `TaskResourceBinding`

PipelineResourceBinding connects a reference to an instance of a PipelineResource with a PipelineResource dependency that the Pipeline has declared.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name is the name of the PipelineResource in the Pipeline's declaration. |
| `resourceRef` (_PipelineResourceRef_) | _(Optional)_ ResourceRef is a reference to the instance of the actual PipelineResource that should be used. |
| `resourceSpec` (_PipelineResourceSpec_) | _(Optional)_ ResourceSpec is the specification of a resource that should be created and consumed by the task. |

### PipelineResourceInterface

PipelineResourceInterface is the interface to be implemented by different PipelineResource types.

Deprecated: Unused, preserved only for backwards compatibility.

### PipelineResourceRef

_Appears on:_ `PipelineResourceBinding`

PipelineResourceRef can be used to refer to a specific instance of a Resource.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `apiVersion` (_string_) | _(Optional)_ API version of the referent. |

### PipelineResult

_Appears on:_ `PipelineSpec`

PipelineResult is used to describe the results of a pipeline.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name the given name. |
| `type` (_ResultsType_) | Type is the user-specified type of the result. The possible types are 'string', 'array', and 'object', with 'string' as the default. 'array' and 'object' types are alpha features. |
| `description` (_string_) | _(Optional)_ Description is a human-readable description of the result. |
| `value` (_ParamValue_) | Value is the expression used to retrieve the value. |

### PipelineRunReason (`string` alias)

PipelineRunReason represents a reason for the pipeline run's "Succeeded" condition.

### PipelineRunResult

_Appears on:_ `PipelineRunStatusFields`

PipelineRunResult is used to describe the results of a pipeline.

| Field | Description |
| --- | --- |
| `name` (_string_) | Name is the result's name as declared by the Pipeline. |
| `value` (_ParamValue_) | Value is the result returned from the execution of this PipelineRun. |
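As a sketch of the PipelineResult type above, a pipeline-level result whose value is pulled from a task result might be declared like this; the task and result names are placeholders.

```yaml
results:
  - name: commit                             # PipelineResult.name
    description: The SHA that was built.
    value: $(tasks.fetch.results.commit)     # expression used to retrieve the value
```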
PipelineRun p td tr tbody table h3 id tekton dev v1beta1 PipelineRunRunStatus PipelineRunRunStatus h3 p em Appears on em a href tekton dev v1beta1 PipelineRunStatusFields PipelineRunStatusFields a p div p PipelineRunRunStatus contains the name of the PipelineTask for this CustomRun or Run and the CustomRun or Run rsquo s Status p div table thead tr th Field th th Description th tr thead tbody tr td code pipelineTaskName code br em string em td td p PipelineTaskName is the name of the PipelineTask p td tr tr td code status code br em a href tekton dev v1beta1 CustomRunStatus CustomRunStatus a em td td em Optional em p Status is the CustomRunStatus for the corresponding CustomRun or Run p td tr tr td code whenExpressions code br em a href tekton dev v1beta1 WhenExpression WhenExpression a em td td em Optional em p WhenExpressions is the list of checks guarding the execution of the PipelineTask p td tr tbody table h3 id tekton dev v1beta1 PipelineRunSpec PipelineRunSpec h3 p em Appears on em a href tekton dev v1beta1 PipelineRun PipelineRun a p div p PipelineRunSpec defines the desired state of PipelineRun p div table thead tr th Field th th Description th tr thead tbody tr td code pipelineRef code br em a href tekton dev v1beta1 PipelineRef PipelineRef a em td td em Optional em td tr tr td code pipelineSpec code br em a href tekton dev v1beta1 PipelineSpec PipelineSpec a em td td em Optional em p Specifying PipelineSpec can be disabled by setting code disable inline spec code feature flag p td tr tr td code resources code br em a href tekton dev v1beta1 PipelineResourceBinding PipelineResourceBinding a em td td p Resources is a list of bindings specifying which actual instances of PipelineResources to use for the resources the Pipeline has declared it needs p p Deprecated Unused preserved only for backwards compatibility p td tr tr td code params code br em a href tekton dev v1beta1 Params Params a em td td p Params is a list of parameter names and values p td tr tr td code serviceAccountName code br em string em td td em Optional em td tr tr td code status code br em a href tekton dev v1beta1 PipelineRunSpecStatus PipelineRunSpecStatus a em td td em Optional em p Used for cancelling a pipelinerun and maybe more later on p td tr tr td code timeouts code br em a href tekton dev v1beta1 TimeoutFields TimeoutFields a em td td em Optional em p Time after which the Pipeline times out Currently three keys are accepted in the map pipeline tasks and finally with Timeouts pipeline gt Timeouts tasks Timeouts finally p td tr tr td code timeout code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td em Optional em p Timeout is the Time after which the Pipeline times out Defaults to never Refer to Go rsquo s ParseDuration documentation for expected format a href https golang org pkg time ParseDuration https golang org pkg time ParseDuration a p p Deprecated use pipelineRunSpec Timeouts Pipeline instead p td tr tr td code podTemplate code br em a href tekton dev unversioned Template Template a em td td p PodTemplate holds pod specific configuration p td tr tr td code workspaces code br em a href tekton dev v1beta1 WorkspaceBinding WorkspaceBinding a em td td em Optional em p Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline p td tr tr td code taskRunSpecs code br em a href tekton dev v1beta1 PipelineTaskRunSpec PipelineTaskRunSpec a em td td em Optional em p TaskRunSpecs holds a set of 
### PipelineRunSpecStatus (string alias)

_Appears on:_ PipelineRunSpec

PipelineRunSpecStatus defines the pipelinerun spec status the user can provide.

### PipelineRunStatus

_Appears on:_ PipelineRun

PipelineRunStatus defines the observed state of PipelineRun.

| Field | Description |
| --- | --- |
| `Status`<br>_knative.dev/pkg/apis/duck/v1.Status_ | Members of `Status` are embedded into this type. |
| `PipelineRunStatusFields`<br>_PipelineRunStatusFields_ | Members of `PipelineRunStatusFields` are embedded into this type. PipelineRunStatusFields inlines the status fields. |
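The sketch below shows roughly how these embedded fields render on a live object; every value is made up for illustration, and the duck-typed `conditions` come from the embedded knative Status.

```yaml
# Illustrative sketch of a PipelineRun .status reported by the controller
# (values are fabricated; childReferences replaces the deprecated taskRuns/runs maps).
status:
  conditions:
    - type: Succeeded
      status: "True"
      reason: Succeeded
  startTime: "2024-01-01T10:00:00Z"
  completionTime: "2024-01-01T10:05:00Z"
  childReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: build-info-run-compute-digest
      pipelineTaskName: compute-digest
  pipelineResults:
    - name: image-digest
      value: sha256:abc123
```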
### PipelineRunStatusFields

_Appears on:_ PipelineRunStatus

PipelineRunStatusFields holds the fields of PipelineRunStatus' status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
| --- | --- |
| `startTime`<br>_Kubernetes meta/v1.Time_ | StartTime is the time the PipelineRun is actually started. |
| `completionTime`<br>_Kubernetes meta/v1.Time_ | CompletionTime is the time the PipelineRun completed. |
| `taskRuns`<br>_map[string]PipelineRunTaskRunStatus_ | _(Optional)_ TaskRuns is a map of PipelineRunTaskRunStatus with the taskRun name as the key. Deprecated: use ChildReferences instead. As of v0.45.0, this field is no longer populated and is only included for backwards compatibility with older server versions. |
| `runs`<br>_map[string]PipelineRunRunStatus_ | _(Optional)_ Runs is a map of PipelineRunRunStatus with the run name as the key. Deprecated: use ChildReferences instead. As of v0.45.0, this field is no longer populated and is only included for backwards compatibility with older server versions. |
| `pipelineResults`<br>_PipelineRunResult_ | _(Optional)_ PipelineResults are the list of results written out by the pipeline task's containers. |
| `pipelineSpec`<br>_PipelineSpec_ | PipelineRunSpec contains the exact spec used to instantiate the run. |
| `skippedTasks`<br>_SkippedTask_ | _(Optional)_ List of tasks that were skipped due to when expressions evaluating to false. |
| `childReferences`<br>_ChildStatusReference_ | _(Optional)_ List of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun. |
| `finallyStartTime`<br>_Kubernetes meta/v1.Time_ | _(Optional)_ FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed. |
| `provenance`<br>_Provenance_ | _(Optional)_ Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext`<br>_map[string]string_ | SpanContext contains tracing span context fields. |

### PipelineRunTaskRunStatus

_Appears on:_ PipelineRunStatusFields

PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun's Status.

| Field | Description |
| --- | --- |
| `pipelineTaskName`<br>_string_ | PipelineTaskName is the name of the PipelineTask. |
| `status`<br>_TaskRunStatus_ | _(Optional)_ Status is the TaskRunStatus for the corresponding TaskRun. |
| `whenExpressions`<br>_WhenExpression_ | _(Optional)_ WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineSpec

_Appears on:_ Pipeline, PipelineRunSpec, PipelineRunStatusFields, PipelineTask

PipelineSpec defines the desired state of Pipeline.

| Field | Description |
| --- | --- |
| `displayName`<br>_string_ | _(Optional)_ DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `description`<br>_string_ | _(Optional)_ Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `resources`<br>_PipelineDeclaredResource_ | Deprecated: Unused, preserved only for backwards compatibility. |
| `tasks`<br>_PipelineTask_ | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `params`<br>_ParamSpecs_ | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `workspaces`<br>_PipelineWorkspaceDeclaration_ | _(Optional)_ Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `results`<br>_PipelineResult_ | _(Optional)_ Results are values that this pipeline can output once run. |
| `finally`<br>_PipelineTask_ | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks are finished executing successfully, or after a failure which would result in ending the Pipeline. |
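A minimal sketch tying these PipelineSpec fields together: params, a workspace declaration, a task graph, and a `finally` task. The task names and the referenced catalog Tasks (`git-clone`, `send-notification`) are illustrative assumptions.

```yaml
# Illustrative sketch: a Pipeline using params, workspaces, tasks and finally.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-notify
spec:
  params:
    - name: revision
      type: string
      default: main
  workspaces:
    - name: source
      description: Checked-out sources shared between tasks
  tasks:
    - name: clone
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source
      params:
        - name: revision
          value: $(params.revision)
  finally:
    - name: notify            # runs whether the graph above succeeds or fails
      taskRef:
        name: send-notification
```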
### PipelineTask

_Appears on:_ PipelineSpec

PipelineTask defines a task in a Pipeline, passing inputs from both Params and from the output of previous tasks.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name is the name of this task within the context of a Pipeline. Name is used as a coordinate with the `from` and `runAfter` fields to establish the execution order of tasks relative to one another. |
| `displayName`<br>_string_ | _(Optional)_ DisplayName is the display name of this task within the context of a Pipeline. This display name may be used to populate a UI. |
| `description`<br>_string_ | _(Optional)_ Description is the description of this task within the context of a Pipeline. This description may be used to populate a UI. |
| `taskRef`<br>_TaskRef_ | _(Optional)_ TaskRef is a reference to a task definition. |
| `taskSpec`<br>_EmbeddedTask_ | _(Optional)_ TaskSpec is a specification of a task. Specifying TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `when`<br>_WhenExpressions_ | _(Optional)_ WhenExpressions is a list of when expressions that need to be true for the task to run. |
| `retries`<br>_int_ | _(Optional)_ Retries represents how many times this task should be retried in case of task failure (ConditionSucceeded set to False). |
| `runAfter`<br>_string_ | _(Optional)_ RunAfter is the list of PipelineTask names that should be executed before this Task executes. Used to force a specific ordering in graph execution. |
| `resources`<br>_PipelineTaskResources_ | _(Optional)_ Deprecated: Unused, preserved only for backwards compatibility. |
| `params`<br>_Params_ | _(Optional)_ Parameters declares parameters passed to this task. |
| `matrix`<br>_Matrix_ | _(Optional)_ Matrix declares parameters used to fan out this task. |
| `workspaces`<br>_WorkspacePipelineTaskBinding_ | _(Optional)_ Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task. |
| `timeout`<br>_Kubernetes meta/v1.Duration_ | _(Optional)_ Time after which the TaskRun times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for expected format: https://golang.org/pkg/time/#ParseDuration |
| `pipelineRef`<br>_PipelineRef_ | _(Optional)_ PipelineRef is a reference to a pipeline definition. Note: PipelineRef is in preview mode and not yet supported. |
| `pipelineSpec`<br>_PipelineSpec_ | _(Optional)_ PipelineSpec is a specification of a pipeline. Note: PipelineSpec is in preview mode and not yet supported. Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `onError`<br>_PipelineTaskOnErrorType_ | _(Optional)_ OnError defines the exiting behavior of a PipelineRun on error; can be set to continue or stopAndFail. |

### PipelineTaskInputResource

_Appears on:_ PipelineTaskResources

PipelineTaskInputResource maps the name of a declared PipelineResource input dependency in a Task to the resource in the Pipeline's DeclaredPipelineResources that should be used. This input may come from a previous task.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name is the name of the PipelineResource as declared by the Task. |
| `resource`<br>_string_ | Resource is the name of the DeclaredPipelineResource to use. |
| `from`<br>_string_ | _(Optional)_ From is the list of PipelineTask names that the resource has to come from. Implies an ordering in the execution graph. |

### PipelineTaskMetadata

_Appears on:_ EmbeddedRunSpec, EmbeddedCustomRunSpec, EmbeddedTask, PipelineTaskRunSpec

PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask.

| Field | Description |
| --- | --- |
| `labels`<br>_map[string]string_ | _(Optional)_ |
| `annotations`<br>_map[string]string_ | _(Optional)_ |

### PipelineTaskOnErrorType (string alias)

_Appears on:_ PipelineTask

PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error.

### PipelineTaskOutputResource

_Appears on:_ PipelineTaskResources

PipelineTaskOutputResource maps the name of a declared PipelineResource output dependency in a Task to the resource in the Pipeline's DeclaredPipelineResources that should be used.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name is the name of the PipelineResource as declared by the Task. |
| `resource`<br>_string_ | Resource is the name of the DeclaredPipelineResource to use. |

### PipelineTaskParam

PipelineTaskParam is used to provide arbitrary string parameters to a Task.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | |
| `value`<br>_string_ | |
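A minimal sketch of PipelineTask entries (under a Pipeline's `tasks` list) combining `runAfter`, `when`, `matrix`, `retries` and `onError`; the task names and referenced Tasks are illustrative assumptions.

```yaml
# Illustrative sketch: PipelineTask ordering, guards, fan-out and error handling.
tasks:
  - name: unit-tests
    taskRef:
      name: run-tests
    retries: 2
  - name: build-image
    runAfter:
      - unit-tests               # forces ordering in the graph
    when:
      - input: $(params.revision)
        operator: in
        values: ["main"]
    onError: stopAndFail
    matrix:
      params:
        - name: platform
          value: ["linux/amd64", "linux/arm64"]
    taskRef:
      name: kaniko-build
```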
### PipelineTaskResources

_Appears on:_ PipelineTask

PipelineTaskResources allows a Pipeline to declare how its DeclaredPipelineResources should be provided to a Task as its inputs and outputs.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
| --- | --- |
| `inputs`<br>_PipelineTaskInputResource_ | Inputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task. |
| `outputs`<br>_PipelineTaskOutputResource_ | Outputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task. |

### PipelineTaskRun

PipelineTaskRun reports the results of running a step in the Task. Each task has the potential to succeed or fail (based on the exit code) and produces logs.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | |

### PipelineTaskRunSpec

_Appears on:_ PipelineRunSpec

PipelineTaskRunSpec can be used to configure specific specs for a concrete Task.

| Field | Description |
| --- | --- |
| `pipelineTaskName`<br>_string_ | |
| `taskServiceAccountName`<br>_string_ | |
| `taskPodTemplate`<br>_Template_ | |
| `stepOverrides`<br>_TaskRunStepOverride_ | |
| `sidecarOverrides`<br>_TaskRunSidecarOverride_ | |
| `metadata`<br>_PipelineTaskMetadata_ | _(Optional)_ |
| `computeResources`<br>_Kubernetes core/v1.ResourceRequirements_ | Compute resources to use for this TaskRun. |

### PipelineWorkspaceDeclaration

_Appears on:_ PipelineSpec

WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun is expected to populate with a workspace binding.

Deprecated: use PipelineWorkspaceDeclaration type instead.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name is the name of a workspace to be provided by a PipelineRun. |
| `description`<br>_string_ | _(Optional)_ Description is a human readable string describing how the workspace will be used in the Pipeline. It can be useful to include a bit of detail about which tasks are intended to have access to the data on the workspace. |
| `optional`<br>_bool_ | Optional marks a Workspace as not being required in PipelineRuns. By default this field is false and so declared workspaces are required. |
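A minimal sketch of per-task runtime overrides via `taskRunSpecs` inside a PipelineRun spec; the pipeline task name, service account and annotation are illustrative assumptions.

```yaml
# Illustrative sketch: PipelineTaskRunSpec entries under a PipelineRun's spec.
spec:
  pipelineRef:
    name: build-and-notify
  taskRunSpecs:
    - pipelineTaskName: build-image
      taskServiceAccountName: registry-pusher
      computeResources:
        requests:
          cpu: "1"
          memory: 2Gi
      metadata:
        annotations:
          sidecar.istio.io/inject: "false"
```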
### PropertySpec

_Appears on:_ ParamSpec, TaskResult

PropertySpec defines the struct for object keys.

| Field | Description |
| --- | --- |
| `type`<br>_ParamType_ | |

### Provenance

_Appears on:_ PipelineRunStatusFields, StepState, TaskRunStatusFields

Provenance contains metadata about resources used in the TaskRun/PipelineRun, such as the source from where a remote build definition was fetched. This field aims to carry the minimum amount of metadata in *Run status so that Tekton Chains can capture them in the provenance.

| Field | Description |
| --- | --- |
| `configSource`<br>_ConfigSource_ | Deprecated: Use RefSource instead. |
| `refSource`<br>_RefSource_ | RefSource identifies the source where a remote task/pipeline came from. |
| `featureFlags`<br>_github.com/tektoncd/pipeline/pkg/apis/config.FeatureFlags_ | FeatureFlags identifies the feature flags that were used during the task/pipeline run. |

### Ref

_Appears on:_ Step

Ref can be used to refer to a specific instance of a StepAction.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name of the referenced step. |
| `ResolverRef`<br>_ResolverRef_ | _(Optional)_ ResolverRef allows referencing a StepAction in a remote location like a git repo. |

### RefSource

_Appears on:_ Provenance

RefSource contains the information that can uniquely identify where a remote built definition came from, i.e. Git repositories, Tekton Bundles in an OCI registry, and hub.

| Field | Description |
| --- | --- |
| `uri`<br>_string_ | URI indicates the identity of the source of the build definition. Example: "https://github.com/tektoncd/catalog" |
| `digest`<br>_map[string]string_ | Digest is a collection of cryptographic digests for the contents of the artifact specified by URI. Example: "sha1": "f99d13e554ffcb696dee719fa85b695cb5b0f428" |
| `entryPoint`<br>_string_ | EntryPoint identifies the entry point into the build. This is often a path to a build definition file and/or a target label within that file. Example: "task/git-clone/0.8/git-clone.yaml" |

### ResolverName (string alias)

_Appears on:_ ResolverRef

ResolverName is the name of a resolver from which a resource can be requested.

### ResolverRef

_Appears on:_ PipelineRef, Ref, TaskRef

ResolverRef can be used to refer to a Pipeline or Task in a remote location like a git repo.

| Field | Description |
| --- | --- |
| `resolver`<br>_ResolverName_ | _(Optional)_ Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource, such as "git". |
| `params`<br>_Params_ | _(Optional)_ Params contains the parameters used to identify the referenced Tekton resource. Example entries might include "repo" or "path", but the set of params ultimately depends on the chosen resolver. |
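A minimal sketch of a ResolverRef in use: a `taskRef` resolved through the git resolver. The specific param names shown (`url`, `revision`, `pathInRepo`) are those accepted by the built-in git resolver; other resolvers take different params, so treat this as an illustrative assumption rather than a general contract.

```yaml
# Illustrative sketch: referencing a remote Task through the git resolver.
taskRef:
  resolver: git
  params:
    - name: url
      value: https://github.com/tektoncd/catalog.git
    - name: revision
      value: main
    - name: pathInRepo
      value: task/git-clone/0.9/git-clone.yaml
```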
### ResultRef

ResultRef is a type that represents a reference to a task run result.

| Field | Description |
| --- | --- |
| `pipelineTask`<br>_string_ | |
| `result`<br>_string_ | |
| `resultsIndex`<br>_int_ | |
| `property`<br>_string_ | |

### ResultsType (string alias)

_Appears on:_ PipelineResult, TaskResult, TaskRunResult

ResultsType indicates the type of a result; used to distinguish between a single string and an array of strings. Note that there is ResultType used to find out whether a RunResult is from a task result or not, which is different from this ResultsType.

### RunObject

RunObject is implemented by CustomRun and Run.
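A minimal sketch of how a ResultRef is expressed in practice: a later PipelineTask consumes an earlier task's result via the `$(tasks.<pipelineTask>.results.<result>)` variable, which the controller parses into a ResultRef. All names here are illustrative assumptions.

```yaml
# Illustrative sketch: consuming a task result in a later PipelineTask.
- name: deploy
  taskRef:
    name: deploy-image
  params:
    - name: image-digest
      value: $(tasks.compute-digest.results.digest)
```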
### Sidecar

_Appears on:_ TaskSpec

Sidecar has nearly the same data structure as Step but does not have the ability to timeout.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name of the Sidecar specified as a DNS_LABEL. Each Sidecar in a Task must have a unique name (DNS_LABEL). Cannot be updated. |
| `image`<br>_string_ | _(Optional)_ Image name to be used by the Sidecar. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>_string_ | _(Optional)_ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Sidecar's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>_string_ | _(Optional)_ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>_string_ | _(Optional)_ Sidecar's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports`<br>_Kubernetes core/v1.ContainerPort_ | _(Optional)_ List of ports to expose from the Sidecar. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. |
| `envFrom`<br>_Kubernetes core/v1.EnvFromSource_ | _(Optional)_ List of sources to populate environment variables in the Sidecar. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the Sidecar is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>_Kubernetes core/v1.EnvVar_ | _(Optional)_ List of environment variables to set in the Sidecar. Cannot be updated. |
| `resources`<br>_Kubernetes core/v1.ResourceRequirements_ | _(Optional)_ Compute Resources required by this Sidecar. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>_Kubernetes core/v1.VolumeMount_ | _(Optional)_ Volumes to mount into the Sidecar's filesystem. Cannot be updated. |
| `volumeDevices`<br>_Kubernetes core/v1.VolumeDevice_ | _(Optional)_ volumeDevices is the list of block devices to be used by the Sidecar. |
| `livenessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of Sidecar liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `readinessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of Sidecar service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `startupProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `lifecycle`<br>_Kubernetes core/v1.Lifecycle_ | _(Optional)_ Actions that the management system should take in response to Sidecar lifecycle events. Cannot be updated. |
| `terminationMessagePath`<br>_string_ | _(Optional)_ Path at which the file to which the Sidecar's termination message will be written is mounted into the Sidecar's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. |
| `terminationMessagePolicy`<br>_Kubernetes core/v1.TerminationMessagePolicy_ | _(Optional)_ Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the Sidecar status message on both success and failure. FallbackToLogsOnError will use the last chunk of Sidecar log output if the termination message file is empty and the Sidecar exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. |
| `imagePullPolicy`<br>_Kubernetes core/v1.PullPolicy_ | _(Optional)_ Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>_Kubernetes core/v1.SecurityContext_ | _(Optional)_ SecurityContext defines the security options the Sidecar should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `stdin`<br>_bool_ | _(Optional)_ Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the Sidecar will always result in EOF. Default is false. |
| `stdinOnce`<br>_bool_ | _(Optional)_ Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the Sidecar is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. |
| `tty`<br>_bool_ | _(Optional)_ Whether this Sidecar should allocate a TTY for itself; also requires 'stdin' to be true. Default is false. |
| `script`<br>_string_ | _(Optional)_ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command or Args. |
| `workspaces`<br>_WorkspaceUsage_ | _(Optional)_ This is an alpha field. You must set the "enable-api-fields" feature flag to "alpha" for this field to be supported. Workspaces is a list of workspaces from the Task that this Sidecar wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `restartPolicy`<br>_Kubernetes core/v1.ContainerRestartPolicy_ | _(Optional)_ RestartPolicy refers to kubernetes RestartPolicy. It can only be set for an initContainer and must have its policy set to "Always". It is currently left optional to help support Kubernetes versions prior to 1.29 when this feature was introduced. |

### SidecarState

_Appears on:_ TaskRunStatusFields

SidecarState reports the results of running a sidecar in a Task.

| Field | Description |
| --- | --- |
| `ContainerState`<br>_Kubernetes core/v1.ContainerState_ | Members of `ContainerState` are embedded into this type. |
| `name`<br>_string_ | |
| `container`<br>_string_ | |
| `imageID`<br>_string_ | |
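A minimal sketch of a Sidecar in a Task spec: a database started alongside the steps so the test step can reach it. The Task name, images and environment values are illustrative assumptions.

```yaml
# Illustrative sketch: a Task whose steps run against a sidecar service.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: integration-tests
spec:
  sidecars:
    - name: postgres
      image: postgres:15
      env:
        - name: POSTGRES_PASSWORD
          value: testonly
  steps:
    - name: run-tests
      image: golang:1.21
      script: |
        #!/bin/sh
        go test ./... -tags=integration
```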
### SkippedTask

_Appears on:_ PipelineRunStatusFields

SkippedTask is used to describe the Tasks that were skipped due to their When Expressions evaluating to False. This is a struct because we are looking into including more details about the When Expressions that caused this Task to be skipped.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name is the Pipeline Task name. |
| `reason`<br>_SkippingReason_ | Reason is the cause of the PipelineTask being skipped. |
| `whenExpressions`<br>_WhenExpression_ | _(Optional)_ WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### SkippingReason (string alias)

_Appears on:_ SkippedTask

SkippingReason explains why a PipelineTask was skipped.

### Step

_Appears on:_ InternalTaskModifier, TaskSpec

Step runs a subcomponent of a Task.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name of the Step specified as a DNS_LABEL. Each Step in a Task must have a unique name. |
| `image`<br>_string_ | _(Optional)_ Image reference name to run for this Step. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>_string_ | _(Optional)_ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>_string_ | _(Optional)_ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>_string_ | _(Optional)_ Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports`<br>_Kubernetes core/v1.ContainerPort_ | _(Optional)_ List of ports to expose from the Step's container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. Deprecated: This field will be removed in a future release. |
| `envFrom`<br>_Kubernetes core/v1.EnvFromSource_ | _(Optional)_ List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>_Kubernetes core/v1.EnvVar_ | _(Optional)_ List of environment variables to set in the container. Cannot be updated. |
| `resources`<br>_Kubernetes core/v1.ResourceRequirements_ | _(Optional)_ Compute Resources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>_Kubernetes core/v1.VolumeMount_ | _(Optional)_ Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices`<br>_Kubernetes core/v1.VolumeDevice_ | _(Optional)_ volumeDevices is the list of block devices to be used by the Step. |
| `livenessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of container liveness. Step will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `readinessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of container service readiness. Step will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `startupProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ StartupProbe indicates that the Pod this Step runs in has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `lifecycle`<br>_Kubernetes core/v1.Lifecycle_ | _(Optional)_ Actions that the management system should take in response to container lifecycle events. Cannot be updated. Deprecated: This field will be removed in a future release. |
| `terminationMessagePath`<br>_string_ | _(Optional)_ Deprecated: This field will be removed in a future release and can't be meaningfully used. |
| `terminationMessagePolicy`<br>_Kubernetes core/v1.TerminationMessagePolicy_ | _(Optional)_ Deprecated: This field will be removed in a future release and can't be meaningfully used. |
| `imagePullPolicy`<br>_Kubernetes core/v1.PullPolicy_ | _(Optional)_ Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>_Kubernetes core/v1.SecurityContext_ | _(Optional)_ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `stdin`<br>_bool_ | _(Optional)_ Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. Deprecated: This field will be removed in a future release. |
| `stdinOnce`<br>_bool_ | _(Optional)_ Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. Deprecated: This field will be removed in a future release. |
| `tty`<br>_bool_ | _(Optional)_ Whether this container should allocate a TTY for itself; also requires 'stdin' to be true. Default is false. Deprecated: This field will be removed in a future release. |
| `script`<br>_string_ | _(Optional)_ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `timeout`<br>_Kubernetes meta/v1.Duration_ | _(Optional)_ Timeout is the time after which the step times out. Defaults to never. Refer to Go's ParseDuration documentation for expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces`<br>_WorkspaceUsage_ | _(Optional)_ This is an alpha field. You must set the "enable-api-fields" feature flag to "alpha" for this field to be supported. Workspaces is a list of workspaces from the Task that this Step wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `onError`<br>_OnErrorType_ | OnError defines the exiting behavior of a container on error; can be set to continue or stopAndFail. |
| `stdoutConfig`<br>_StepOutputConfig_ | _(Optional)_ Stores configuration for the stdout stream of the step. |
| `stderrConfig`<br>_StepOutputConfig_ | _(Optional)_ Stores configuration for the stderr stream of the step. |
| `ref`<br>_Ref_ | _(Optional)_ Contains the reference to an existing StepAction. |
| `params`<br>_Params_ | _(Optional)_ Params declares parameters passed to this step action. |
| `results`<br>_v1.StepResult_ | _(Optional)_ Results declares StepResults produced by the Step. This field is at an ALPHA stability level and gated by the "enable-step-actions" feature flag. It can be used in an inlined Step when used to store Results to $(step.results.resultName.path). It cannot be used when referencing StepActions using v1beta1.Step.Ref. The Results declared by the StepActions will be stored here instead. |
| `when`<br>_WhenExpressions_ | |

### StepActionObject

StepActionObject is implemented by StepAction.
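A minimal sketch of a Task's `steps` list combining the Step fields above: `script` versus `command`/`args`, `onError: continue`, and stdout capture via `stdoutConfig`. Images, names and paths are illustrative assumptions.

```yaml
# Illustrative sketch: steps using script, onError and stdoutConfig.
steps:
  - name: lint
    image: golangci/golangci-lint:v1.55
    onError: continue          # record the failure but keep the TaskRun going
    script: |
      #!/bin/sh
      golangci-lint run ./...
    stdoutConfig:
      path: /workspace/lint-report.txt
  - name: build
    image: golang:1.21
    workingDir: /workspace/source
    command: ["go"]
    args: ["build", "./..."]
```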
### StepActionSpec

_Appears on:_ StepAction

StepActionSpec contains the actionable components of a step.

| Field | Description |
| --- | --- |
| `description`<br>_string_ | _(Optional)_ Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `image`<br>_string_ | _(Optional)_ Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>_string_ | _(Optional)_ Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>_string_ | _(Optional)_ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `env`<br>_Kubernetes core/v1.EnvVar_ | _(Optional)_ List of environment variables to set in the container. Cannot be updated. |
| `script`<br>_string_ | _(Optional)_ Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `workingDir`<br>_string_ | _(Optional)_ Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `params`<br>_v1.ParamSpecs_ | _(Optional)_ Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `results`<br>_v1.StepResult_ | _(Optional)_ Results are values that this StepAction can output. |
| `securityContext`<br>_Kubernetes core/v1.SecurityContext_ | _(Optional)_ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ The value set in StepAction will take precedence over the value from Task. |
| `volumeMounts`<br>_Kubernetes core/v1.VolumeMount_ | _(Optional)_ Volumes to mount into the Step's filesystem. Cannot be updated. |

### StepOutputConfig

_Appears on:_ Step

StepOutputConfig stores configuration for a step output stream.

| Field | Description |
| --- | --- |
| `path`<br>_string_ | _(Optional)_ Path to duplicate stdout stream to on container's local filesystem. |

### StepState

_Appears on:_ TaskRunStatusFields

StepState reports the results of running a step in a Task.

| Field | Description |
| --- | --- |
| `ContainerState`<br>_Kubernetes core/v1.ContainerState_ | Members of `ContainerState` are embedded into this type. |
| `name`<br>_string_ | |
| `container`<br>_string_ | |
| `imageID`<br>_string_ | |
| `results`<br>_TaskRunResult_ | |
| `provenance`<br>_Provenance_ | |
| `inputs`<br>_Artifact_ | |
| `outputs`<br>_Artifact_ | |
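A minimal sketch of a StepAction and of a Task step that references it through `ref`, supplying the declared param. This assumes the "enable-step-actions" feature flag is on; the names and values are illustrative, not from this reference.

```yaml
# Illustrative sketch: a StepAction referenced from a Task step via `ref`.
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
  name: print-message
spec:
  image: busybox
  params:
    - name: message
      type: string
      default: hello
  script: |
    echo "$(params.message)"
---
# In a Task's spec, the step would then look roughly like:
# steps:
#   - name: greet
#     ref:
#       name: print-message
#     params:
#       - name: message
#         value: from-a-stepaction
```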
### StepTemplate

_Appears on:_ TaskSpec

StepTemplate is a template for a Step.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Default name for each Step specified as a DNS_LABEL. Each Step in a Task must have a unique name. Cannot be updated. Deprecated: This field will be removed in a future release. |
| `image`<br>_string_ | _(Optional)_ Default image name to use for each Step. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. |
| `command`<br>_string_ | _(Optional)_ Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>_string_ | _(Optional)_ Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>_string_ | _(Optional)_ Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports`<br>_Kubernetes core/v1.ContainerPort_ | _(Optional)_ List of ports to expose from the Step's container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. Deprecated: This field will be removed in a future release. |
| `envFrom`<br>_Kubernetes core/v1.EnvFromSource_ | _(Optional)_ List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>_Kubernetes core/v1.EnvVar_ | _(Optional)_ List of environment variables to set in the container. Cannot be updated. |
| `resources`<br>_Kubernetes core/v1.ResourceRequirements_ | _(Optional)_ Compute Resources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>_Kubernetes core/v1.VolumeMount_ | _(Optional)_ Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices`<br>_Kubernetes core/v1.VolumeDevice_ | _(Optional)_ volumeDevices is the list of block devices to be used by the Step. |
| `livenessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `readinessProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `startupProbe`<br>_Kubernetes core/v1.Probe_ | _(Optional)_ StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes Deprecated: This field will be removed in a future release. |
| `lifecycle`<br>_Kubernetes core/v1.Lifecycle_ | _(Optional)_ Actions that the management system should take in response to container lifecycle events. Cannot be updated. Deprecated: This field will be removed in a future release. |
| `terminationMessagePath`<br>_string_ | _(Optional)_ Deprecated: This field will be removed in a future release and cannot be meaningfully used. |
| `terminationMessagePolicy`<br>_Kubernetes core/v1.TerminationMessagePolicy_ | _(Optional)_ Deprecated: This field will be removed in a future release and cannot be meaningfully used. |
| `imagePullPolicy`<br>_Kubernetes core/v1.PullPolicy_ | _(Optional)_ Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>_Kubernetes core/v1.SecurityContext_ | _(Optional)_ SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `stdin`<br>_bool_ | _(Optional)_ Whether this Step should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the Step will always result in EOF. Default is false. Deprecated: This field will be removed in a future release. |
| `stdinOnce`<br>_bool_ | _(Optional)_ Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. Deprecated: This field will be removed in a future release. |
| `tty`<br>_bool_ | _(Optional)_ Whether this Step should allocate a TTY for itself; also requires 'stdin' to be true. Default is false. Deprecated: This field will be removed in a future release. |

### TaskBreakpoints

_Appears on:_ TaskRunDebug

TaskBreakpoints defines the breakpoint config for a particular Task.

| Field | Description |
| --- | --- |
| `onFailure`<br>_string_ | _(Optional)_ If enabled, pause TaskRun on failure of a step; the failed step will not exit. |
| `beforeSteps`<br>_string_ | _(Optional)_ |

### TaskKind (string alias)

_Appears on:_ TaskRef

TaskKind defines the type of Task used by the pipeline.

### TaskModifier

TaskModifier is an interface to be implemented by different PipelineResources.

Deprecated: Unused, preserved only for backwards compatibility.

### TaskObject

TaskObject is implemented by Task and ClusterTask.

### TaskRef

_Appears on:_ RunSpec, CustomRunSpec, PipelineTask, TaskRunSpec

TaskRef can be used to refer to a specific instance of a task.

| Field | Description |
| --- | --- |
| `name`<br>_string_ | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `kind`<br>_TaskKind_ | TaskKind indicates the Kind of the Task: 1. Namespaced Task when Kind is set to "Task". If Kind is "", it defaults to "Task". 2. Cluster-Scoped Task when Kind is set to "ClusterTask". 3. Custom Task when Kind is non-empty and APIVersion is non-empty. |
| `apiVersion`<br>_string_ |
td td em Optional em p API version of the referent Note A Task with non empty APIVersion and Kind is considered a Custom Task p td tr tr td code bundle code br em string em td td em Optional em p Bundle url reference to a Tekton Bundle p p Deprecated Please use ResolverRef with the bundles resolver instead The field is staying there for go client backward compatibility but is not used allowed anymore p td tr tr td code ResolverRef code br em a href tekton dev v1beta1 ResolverRef ResolverRef a em td td em Optional em p ResolverRef allows referencing a Task in a remote location like a git repo This field is only supported when the alpha feature gate is enabled p td tr tbody table h3 id tekton dev v1beta1 TaskResource TaskResource h3 p em Appears on em a href tekton dev v1beta1 TaskResources TaskResources a p div p TaskResource defines an input or output Resource declared as a requirement by a Task The Name field will be used to refer to these Resources within the Task definition and when provided as an Input the Name will be the path to the volume mounted containing this Resource as an input e g an input Resource named code workspace code will be mounted at code workspace code p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code ResourceDeclaration code br em a href tekton dev v1alpha1 ResourceDeclaration ResourceDeclaration a em td td p Members of code ResourceDeclaration code are embedded into this type p td tr tbody table h3 id tekton dev v1beta1 TaskResourceBinding TaskResourceBinding h3 p em Appears on em a href tekton dev v1beta1 TaskRunInputs TaskRunInputs a a href tekton dev v1beta1 TaskRunOutputs TaskRunOutputs a a href tekton dev v1beta1 TaskRunResources TaskRunResources a p div p TaskResourceBinding points to the PipelineResource that will be used for the Task input or output called Name p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code PipelineResourceBinding code br em a href tekton dev v1beta1 PipelineResourceBinding PipelineResourceBinding a em td td p Members of code PipelineResourceBinding code are embedded into this type p td tr tr td code paths code br em string em td td em Optional em p Paths will probably be removed in 1284 and then PipelineResourceBinding can be used instead The optional Path field corresponds to a path on disk at which the Resource can be found used when providing the resource via mounted volume overriding the default logic to fetch the Resource p td tr tbody table h3 id tekton dev v1beta1 TaskResources TaskResources h3 p em Appears on em a href tekton dev v1beta1 TaskSpec TaskSpec a p div p TaskResources allows a Pipeline to declare how its DeclaredPipelineResources should be provided to a Task as its inputs and outputs p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code inputs code br em a href tekton dev v1beta1 TaskResource TaskResource a em td td p Inputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task p td tr tr td code outputs code br em a href tekton dev v1beta1 TaskResource TaskResource a em td td p Outputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task p td tr tbody table h3 id tekton dev v1beta1 
TaskResult TaskResult h3 p em Appears on em a href tekton dev v1beta1 TaskSpec TaskSpec a p div p TaskResult used to describe the results of a task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name the given name p td tr tr td code type code br em a href tekton dev v1beta1 ResultsType ResultsType a em td td em Optional em p Type is the user specified type of the result The possible type is currently ldquo string rdquo and will support ldquo array rdquo in following work p td tr tr td code properties code br em a href tekton dev v1beta1 PropertySpec map string github com tektoncd pipeline pkg apis pipeline v1beta1 PropertySpec a em td td em Optional em p Properties is the JSON Schema properties to support key value pairs results p td tr tr td code description code br em string em td td em Optional em p Description is a human readable description of the result p td tr tr td code value code br em a href tekton dev v1beta1 ParamValue ParamValue a em td td em Optional em p Value the expression used to retrieve the value of the result from an underlying Step p td tr tbody table h3 id tekton dev v1beta1 TaskRunConditionType TaskRunConditionType code string code alias h3 div p TaskRunConditionType is an enum used to store TaskRun custom conditions such as one used in spire results verification p div h3 id tekton dev v1beta1 TaskRunDebug TaskRunDebug h3 p em Appears on em a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunDebug defines the breakpoint config for a particular TaskRun p div table thead tr th Field th th Description th tr thead tbody tr td code breakpoints code br em a href tekton dev v1beta1 TaskBreakpoints TaskBreakpoints a em td td em Optional em td tr tbody table h3 id tekton dev v1beta1 TaskRunInputs TaskRunInputs h3 div p TaskRunInputs holds the input values that this task was invoked with p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code resources code br em a href tekton dev v1beta1 TaskResourceBinding TaskResourceBinding a em td td em Optional em td tr tr td code params code br em a href tekton dev v1beta1 Param Param a em td td em Optional em td tr tbody table h3 id tekton dev v1beta1 TaskRunOutputs TaskRunOutputs h3 div p TaskRunOutputs holds the output values that this task was invoked with p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code resources code br em a href tekton dev v1beta1 TaskResourceBinding TaskResourceBinding a em td td em Optional em td tr tbody table h3 id tekton dev v1beta1 TaskRunReason TaskRunReason code string code alias h3 div p TaskRunReason is an enum used to store all TaskRun reason for the Succeeded condition that are controlled by the TaskRun itself Failure reasons that emerge from underlying resources are not included here p div h3 id tekton dev v1beta1 TaskRunResources TaskRunResources h3 p em Appears on em a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunResources allows a TaskRun to declare inputs and outputs TaskResourceBinding p p Deprecated Unused preserved only for backwards compatibility p div table thead tr th Field th th Description th tr thead tbody tr td code inputs code br em a href tekton dev v1beta1 TaskResourceBinding TaskResourceBinding a em td td p Inputs holds the inputs resources this task was invoked with p td tr tr td code outputs code br em a 
href tekton dev v1beta1 TaskResourceBinding TaskResourceBinding a em td td p Outputs holds the inputs resources this task was invoked with p td tr tbody table h3 id tekton dev v1beta1 TaskRunResult TaskRunResult h3 p em Appears on em a href tekton dev v1beta1 StepState StepState a a href tekton dev v1beta1 TaskRunStatusFields TaskRunStatusFields a p div p TaskRunStepResult is a type alias of TaskRunResult p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name the given name p td tr tr td code type code br em a href tekton dev v1beta1 ResultsType ResultsType a em td td em Optional em p Type is the user specified type of the result The possible type is currently ldquo string rdquo and will support ldquo array rdquo in following work p td tr tr td code value code br em a href tekton dev v1beta1 ParamValue ParamValue a em td td p Value the given value of the result p td tr tbody table h3 id tekton dev v1beta1 TaskRunSidecarOverride TaskRunSidecarOverride h3 p em Appears on em a href tekton dev v1beta1 PipelineTaskRunSpec PipelineTaskRunSpec a a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p The name of the Sidecar to override p td tr tr td code resources code br em a href https kubernetes io docs reference generated kubernetes api v1 24 resourcerequirements v1 core Kubernetes core v1 ResourceRequirements a em td td p The resource requirements to apply to the Sidecar p td tr tbody table h3 id tekton dev v1beta1 TaskRunSpec TaskRunSpec h3 p em Appears on em a href tekton dev v1beta1 TaskRun TaskRun a p div p TaskRunSpec defines the desired state of TaskRun p div table thead tr th Field th th Description th tr thead tbody tr td code debug code br em a href tekton dev v1beta1 TaskRunDebug TaskRunDebug a em td td em Optional em td tr tr td code params code br em a href tekton dev v1beta1 Params Params a em td td em Optional em td tr tr td code resources code br em a href tekton dev v1beta1 TaskRunResources TaskRunResources a em td td em Optional em p Deprecated Unused preserved only for backwards compatibility p td tr tr td code serviceAccountName code br em string em td td em Optional em td tr tr td code taskRef code br em a href tekton dev v1beta1 TaskRef TaskRef a em td td em Optional em p no more than one of the TaskRef and TaskSpec may be specified p td tr tr td code taskSpec code br em a href tekton dev v1beta1 TaskSpec TaskSpec a em td td em Optional em p Specifying PipelineSpec can be disabled by setting code disable inline spec code feature flag p td tr tr td code status code br em a href tekton dev v1beta1 TaskRunSpecStatus TaskRunSpecStatus a em td td em Optional em p Used for cancelling a TaskRun and maybe more later on p td tr tr td code statusMessage code br em a href tekton dev v1beta1 TaskRunSpecStatusMessage TaskRunSpecStatusMessage a em td td em Optional em p Status message for cancellation p td tr tr td code retries code br em int em td td em Optional em p Retries represents how many times this TaskRun should be retried in the event of Task failure p td tr tr td code timeout code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td em Optional em p Time after which one retry attempt times out Defaults to 1 hour Refer Go rsquo s ParseDuration 
documentation for expected format a href https golang org pkg time ParseDuration https golang org pkg time ParseDuration a p td tr tr td code podTemplate code br em a href tekton dev unversioned Template Template a em td td p PodTemplate holds pod specific configuration p td tr tr td code workspaces code br em a href tekton dev v1beta1 WorkspaceBinding WorkspaceBinding a em td td em Optional em p Workspaces is a list of WorkspaceBindings from volumes to workspaces p td tr tr td code stepOverrides code br em a href tekton dev v1beta1 TaskRunStepOverride TaskRunStepOverride a em td td em Optional em p Overrides to apply to Steps in this TaskRun If a field is specified in both a Step and a StepOverride the value from the StepOverride will be used This field is only supported when the alpha feature gate is enabled p td tr tr td code sidecarOverrides code br em a href tekton dev v1beta1 TaskRunSidecarOverride TaskRunSidecarOverride a em td td em Optional em p Overrides to apply to Sidecars in this TaskRun If a field is specified in both a Sidecar and a SidecarOverride the value from the SidecarOverride will be used This field is only supported when the alpha feature gate is enabled p td tr tr td code computeResources code br em a href https kubernetes io docs reference generated kubernetes api v1 24 resourcerequirements v1 core Kubernetes core v1 ResourceRequirements a em td td p Compute resources to use for this TaskRun p td tr tbody table h3 id tekton dev v1beta1 TaskRunSpecStatus TaskRunSpecStatus code string code alias h3 p em Appears on em a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunSpecStatus defines the TaskRun spec status the user can provide p div h3 id tekton dev v1beta1 TaskRunSpecStatusMessage TaskRunSpecStatusMessage code string code alias h3 p em Appears on em a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunSpecStatusMessage defines human readable status messages for the TaskRun p div h3 id tekton dev v1beta1 TaskRunStatus TaskRunStatus h3 p em Appears on em a href tekton dev v1beta1 TaskRun TaskRun a a href tekton dev v1beta1 PipelineRunTaskRunStatus PipelineRunTaskRunStatus a a href tekton dev v1beta1 TaskRunStatusFields TaskRunStatusFields a p div p TaskRunStatus defines the observed state of TaskRun p div table thead tr th Field th th Description th tr thead tbody tr td code Status code br em a href https pkg go dev knative dev pkg apis duck v1 Status knative dev pkg apis duck v1 Status a em td td p Members of code Status code are embedded into this type p td tr tr td code TaskRunStatusFields code br em a href tekton dev v1beta1 TaskRunStatusFields TaskRunStatusFields a em td td p Members of code TaskRunStatusFields code are embedded into this type p p TaskRunStatusFields inlines the status fields p td tr tbody table h3 id tekton dev v1beta1 TaskRunStatusFields TaskRunStatusFields h3 p em Appears on em a href tekton dev v1beta1 TaskRunStatus TaskRunStatus a p div p TaskRunStatusFields holds the fields of TaskRun rsquo s status This is defined separately and inlined so that other types can readily consume these fields via duck typing p div table thead tr th Field th th Description th tr thead tbody tr td code podName code br em string em td td p PodName is the name of the pod responsible for executing this task rsquo s steps p td tr tr td code startTime code br em a href https kubernetes io docs reference generated kubernetes api v1 24 time v1 meta Kubernetes meta v1 Time a em td td p StartTime is the time the build is actually started 
p td tr tr td code completionTime code br em a href https kubernetes io docs reference generated kubernetes api v1 24 time v1 meta Kubernetes meta v1 Time a em td td p CompletionTime is the time the build completed p td tr tr td code steps code br em a href tekton dev v1beta1 StepState StepState a em td td em Optional em p Steps describes the state of each build step container p td tr tr td code cloudEvents code br em a href tekton dev v1beta1 CloudEventDelivery CloudEventDelivery a em td td em Optional em p CloudEvents describe the state of each cloud event requested via a CloudEventResource p p Deprecated Removed in v0 44 0 p td tr tr td code retriesStatus code br em a href tekton dev v1beta1 TaskRunStatus TaskRunStatus a em td td em Optional em p RetriesStatus contains the history of TaskRunStatus in case of a retry in order to keep record of failures All TaskRunStatus stored in RetriesStatus will have no date within the RetriesStatus as is redundant p td tr tr td code resourcesResult code br em github com tektoncd pipeline pkg result RunResult em td td em Optional em p Results from Resources built during the TaskRun This is tomb stoned along with the removal of pipelineResources Deprecated this field is not populated and is preserved only for backwards compatibility p td tr tr td code taskResults code br em a href tekton dev v1beta1 TaskRunResult TaskRunResult a em td td em Optional em p TaskRunResults are the list of results written out by the task rsquo s containers p td tr tr td code sidecars code br em a href tekton dev v1beta1 SidecarState SidecarState a em td td p The list has one entry per sidecar in the manifest Each entry is represents the imageid of the corresponding sidecar p td tr tr td code taskSpec code br em a href tekton dev v1beta1 TaskSpec TaskSpec a em td td p TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun p td tr tr td code provenance code br em a href tekton dev v1beta1 Provenance Provenance a em td td em Optional em p Provenance contains some key authenticated metadata about how a software artifact was built what sources what inputs outputs etc p td tr tr td code spanContext code br em map string string em td td p SpanContext contains tracing span context fields p td tr tbody table h3 id tekton dev v1beta1 TaskRunStepOverride TaskRunStepOverride h3 p em Appears on em a href tekton dev v1beta1 PipelineTaskRunSpec PipelineTaskRunSpec a a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p TaskRunStepOverride is used to override the values of a Step in the corresponding Task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p The name of the Step to override p td tr tr td code resources code br em a href https kubernetes io docs reference generated kubernetes api v1 24 resourcerequirements v1 core Kubernetes core v1 ResourceRequirements a em td td p The resource requirements to apply to the Step p td tr tbody table h3 id tekton dev v1beta1 TaskSpec TaskSpec h3 p em Appears on em a href tekton dev v1beta1 ClusterTask ClusterTask a a href tekton dev v1beta1 Task Task a a href tekton dev v1beta1 EmbeddedTask EmbeddedTask a a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a a href tekton dev v1beta1 TaskRunStatusFields TaskRunStatusFields a p div p TaskSpec defines the desired state of Task p div table thead tr th Field th th Description th tr thead tbody tr td code resources code br em a href tekton dev v1beta1 TaskResources TaskResources a em td td em 
Optional em p Resources is a list input and output resource to run the task Resources are represented in TaskRuns as bindings to instances of PipelineResources p p Deprecated Unused preserved only for backwards compatibility p td tr tr td code params code br em a href tekton dev v1beta1 ParamSpecs ParamSpecs a em td td em Optional em p Params is a list of input parameters required to run the task Params must be supplied as inputs in TaskRuns unless they declare a default value p td tr tr td code displayName code br em string em td td em Optional em p DisplayName is a user facing name of the task that may be used to populate a UI p td tr tr td code description code br em string em td td em Optional em p Description is a user facing description of the task that may be used to populate a UI p td tr tr td code steps code br em a href tekton dev v1beta1 Step Step a em td td p Steps are the steps of the build each step is run sequentially with the source mounted into workspace p td tr tr td code volumes code br em a href https kubernetes io docs reference generated kubernetes api v1 24 volume v1 core Kubernetes core v1 Volume a em td td p Volumes is a collection of volumes that are available to mount into the steps of the build p td tr tr td code stepTemplate code br em a href tekton dev v1beta1 StepTemplate StepTemplate a em td td p StepTemplate can be used as the basis for all step containers within the Task so that the steps inherit settings on the base container p td tr tr td code sidecars code br em a href tekton dev v1beta1 Sidecar Sidecar a em td td p Sidecars are run alongside the Task rsquo s step containers They begin before the steps start and end after the steps complete p td tr tr td code workspaces code br em a href tekton dev v1beta1 WorkspaceDeclaration WorkspaceDeclaration a em td td p Workspaces are the volumes that this Task requires p td tr tr td code results code br em a href tekton dev v1beta1 TaskResult TaskResult a em td td p Results are values that this Task can output p td tr tbody table h3 id tekton dev v1beta1 TimeoutFields TimeoutFields h3 p em Appears on em a href tekton dev v1beta1 PipelineRunSpec PipelineRunSpec a p div p TimeoutFields allows granular specification of pipeline task and finally timeouts p div table thead tr th Field th th Description th tr thead tbody tr td code pipeline code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td p Pipeline sets the maximum allowed duration for execution of the entire pipeline The sum of individual timeouts for tasks and finally must not exceed this value p td tr tr td code tasks code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td p Tasks sets the maximum allowed duration of this pipeline rsquo s tasks p td tr tr td code finally code br em a href https godoc org k8s io apimachinery pkg apis meta v1 Duration Kubernetes meta v1 Duration a em td td p Finally sets the maximum allowed duration of this pipeline rsquo s finally p td tr tbody table h3 id tekton dev v1beta1 WhenExpression WhenExpression h3 p em Appears on em a href tekton dev v1beta1 ChildStatusReference ChildStatusReference a a href tekton dev v1beta1 PipelineRunRunStatus PipelineRunRunStatus a a href tekton dev v1beta1 PipelineRunTaskRunStatus PipelineRunTaskRunStatus a a href tekton dev v1beta1 SkippedTask SkippedTask a p div p WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run to 
determine whether the Task should be executed or skipped p div table thead tr th Field th th Description th tr thead tbody tr td code input code br em string em td td p Input is the string for guard checking which can be a static input or an output from a parent Task p td tr tr td code operator code br em k8s io apimachinery pkg selection Operator em td td p Operator that represents an Input rsquo s relationship to the values p td tr tr td code values code br em string em td td p Values is an array of strings which is compared against the input for guard checking It must be non empty p td tr tr td code cel code br em string em td td em Optional em p CEL is a string of Common Language Expression which can be used to conditionally execute the task based on the result of the expression evaluation More info about CEL syntax a href https github com google cel spec blob master doc langdef md https github com google cel spec blob master doc langdef md a p td tr tbody table h3 id tekton dev v1beta1 WhenExpressions WhenExpressions code github com tektoncd pipeline pkg apis pipeline v1beta1 WhenExpression code alias h3 p em Appears on em a href tekton dev v1beta1 PipelineTask PipelineTask a a href tekton dev v1beta1 Step Step a p div p WhenExpressions are used to specify whether a Task should be executed or skipped All of them need to evaluate to True for a guarded Task to be executed p div h3 id tekton dev v1beta1 WorkspaceBinding WorkspaceBinding h3 p em Appears on em a href tekton dev v1alpha1 RunSpec RunSpec a a href tekton dev v1beta1 CustomRunSpec CustomRunSpec a a href tekton dev v1beta1 PipelineRunSpec PipelineRunSpec a a href tekton dev v1beta1 TaskRunSpec TaskRunSpec a p div p WorkspaceBinding maps a Task rsquo s declared workspace to a Volume p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name is the name of the workspace populated by the volume p td tr tr td code subPath code br em string em td td em Optional em p SubPath is optionally a directory on the volume which should be used for this binding i e the volume will be mounted at this sub directory p td tr tr td code volumeClaimTemplate code br em a href https kubernetes io docs reference generated kubernetes api v1 24 persistentvolumeclaim v1 core Kubernetes core v1 PersistentVolumeClaim a em td td em Optional em p VolumeClaimTemplate is a template for a claim that will be created in the same namespace The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun p td tr tr td code persistentVolumeClaim code br em a href https kubernetes io docs reference generated kubernetes api v1 24 persistentvolumeclaimvolumesource v1 core Kubernetes core v1 PersistentVolumeClaimVolumeSource a em td td em Optional em p PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace Either this OR EmptyDir can be used p td tr tr td code emptyDir code br em a href https kubernetes io docs reference generated kubernetes api v1 24 emptydirvolumesource v1 core Kubernetes core v1 EmptyDirVolumeSource a em td td em Optional em p EmptyDir represents a temporary directory that shares a Task rsquo s lifetime More info a href https kubernetes io docs concepts storage volumes emptydir https kubernetes io docs concepts storage volumes emptydir a Either this OR PersistentVolumeClaim can be used p td tr tr td code configMap code br em a href https kubernetes io docs reference generated kubernetes api v1 24 
configmapvolumesource v1 core Kubernetes core v1 ConfigMapVolumeSource a em td td em Optional em p ConfigMap represents a configMap that should populate this workspace p td tr tr td code secret code br em a href https kubernetes io docs reference generated kubernetes api v1 24 secretvolumesource v1 core Kubernetes core v1 SecretVolumeSource a em td td em Optional em p Secret represents a secret that should populate this workspace p td tr tr td code projected code br em a href https kubernetes io docs reference generated kubernetes api v1 24 projectedvolumesource v1 core Kubernetes core v1 ProjectedVolumeSource a em td td em Optional em p Projected represents a projected volume that should populate this workspace p td tr tr td code csi code br em a href https kubernetes io docs reference generated kubernetes api v1 24 csivolumesource v1 core Kubernetes core v1 CSIVolumeSource a em td td em Optional em p CSI Container Storage Interface represents ephemeral storage that is handled by certain external CSI drivers p td tr tbody table h3 id tekton dev v1beta1 WorkspaceDeclaration WorkspaceDeclaration h3 p em Appears on em a href tekton dev v1beta1 TaskSpec TaskSpec a p div p WorkspaceDeclaration is a declaration of a volume that a Task requires p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name is the name by which you can bind the volume at runtime p td tr tr td code description code br em string em td td em Optional em p Description is an optional human readable description of this volume p td tr tr td code mountPath code br em string em td td em Optional em p MountPath overrides the directory that the volume will be made available at p td tr tr td code readOnly code br em bool em td td p ReadOnly dictates whether a mounted volume is writable By default this field is false and so mounted volumes are writable p td tr tr td code optional code br em bool em td td p Optional marks a Workspace as not being required in TaskRuns By default this field is false and so declared workspaces are required p td tr tbody table h3 id tekton dev v1beta1 WorkspacePipelineTaskBinding WorkspacePipelineTaskBinding h3 p em Appears on em a href tekton dev v1beta1 PipelineTask PipelineTask a p div p WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be mapped to a task rsquo s declared workspace p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name is the name of the workspace as declared by the task p td tr tr td code workspace code br em string em td td em Optional em p Workspace is the name of the workspace declared by the pipeline p td tr tr td code subPath code br em string em td td em Optional em p SubPath is optionally a directory on the volume which should be used for this binding i e the volume will be mounted at this sub directory p td tr tbody table h3 id tekton dev v1beta1 WorkspaceUsage WorkspaceUsage h3 p em Appears on em a href tekton dev v1beta1 Sidecar Sidecar a a href tekton dev v1beta1 Step Step a p div p WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access to a Workspace defined in a Task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name is the name of the workspace this Step or Sidecar wants access to p td tr tr td code mountPath code br em string em td td p MountPath is the path that the workspace should be mounted to inside the Step or 
Sidecar overriding any MountPath specified in the Task rsquo s WorkspaceDeclaration p td tr tbody table h3 id tekton dev v1beta1 CustomRunResult CustomRunResult h3 p em Appears on em a href tekton dev v1beta1 CustomRunStatusFields CustomRunStatusFields a p div p CustomRunResult used to describe the results of a task p div table thead tr th Field th th Description th tr thead tbody tr td code name code br em string em td td p Name the given name p td tr tr td code value code br em string em td td p Value the given value of the result p td tr tbody table h3 id tekton dev v1beta1 CustomRunStatus CustomRunStatus h3 p em Appears on em a href tekton dev v1beta1 CustomRun CustomRun a a href tekton dev v1 PipelineRunRunStatus PipelineRunRunStatus a a href tekton dev v1beta1 PipelineRunRunStatus PipelineRunRunStatus a a href tekton dev v1beta1 CustomRunStatusFields CustomRunStatusFields a p div p CustomRunStatus defines the observed state of CustomRun p div table thead tr th Field th th Description th tr thead tbody tr td code Status code br em a href https pkg go dev knative dev pkg apis duck v1 Status knative dev pkg apis duck v1 Status a em td td p Members of code Status code are embedded into this type p td tr tr td code CustomRunStatusFields code br em a href tekton dev v1beta1 CustomRunStatusFields CustomRunStatusFields a em td td p Members of code CustomRunStatusFields code are embedded into this type p p CustomRunStatusFields inlines the status fields p td tr tbody table h3 id tekton dev v1beta1 CustomRunStatusFields CustomRunStatusFields h3 p em Appears on em a href tekton dev v1beta1 CustomRunStatus CustomRunStatus a p div p CustomRunStatusFields holds the fields of CustomRun rsquo s status This is defined separately and inlined so that other types can readily consume these fields via duck typing p div table thead tr th Field th th Description th tr thead tbody tr td code startTime code br em a href https kubernetes io docs reference generated kubernetes api v1 24 time v1 meta Kubernetes meta v1 Time a em td td em Optional em p StartTime is the time the build is actually started p td tr tr td code completionTime code br em a href https kubernetes io docs reference generated kubernetes api v1 24 time v1 meta Kubernetes meta v1 Time a em td td em Optional em p CompletionTime is the time the build completed p td tr tr td code results code br em a href tekton dev v1beta1 CustomRunResult CustomRunResult a em td td em Optional em p Results reports any output result values to be consumed by later tasks in a pipeline p td tr tr td code retriesStatus code br em a href tekton dev v1beta1 CustomRunStatus CustomRunStatus a em td td em Optional em p RetriesStatus contains the history of CustomRunStatus in case of a retry p td tr tr td code extraFields code br em k8s io apimachinery pkg runtime RawExtension em td td p ExtraFields holds arbitrary fields provided by the custom task controller p td tr tbody table hr p em Generated with code gen crd api reference docs code em p |
tekton Git Resolver Resolver Type weight 309 Simple Git Resolver | <!--
---
linkTitle: "Git Resolver"
weight: 309
---
-->
# Simple Git Resolver
## Resolver Type
This Resolver responds to type `git`.
## Parameters
| Param Name | Description | Example Value |
|--------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|
| `url` | URL of the repo to fetch and clone anonymously. Either `url`, or `repo` (with `org`) must be specified, but not both. | `https://github.com/tektoncd/catalog.git` |
| `repo` | The repository to find the resource in. Either `url`, or `repo` (with `org`) must be specified, but not both. | `pipeline`, `test-infra` |
| `org` | The organization to find the repository in. Default can be set in [configuration](#configuration). | `tektoncd`, `kubernetes` |
| `token` | An optional secret name in the `PipelineRun` namespace to fetch the token from. Defaults to empty, meaning it will try to use the configuration from the global configmap. | `secret-name`, (empty) |
| `tokenKey` | An optional key in the token secret name in the `PipelineRun` namespace to fetch the token from. Defaults to `token`. | `token` |
| `revision` | Git revision to checkout a file from. This can be commit SHA, branch or tag. | `aeb957601cf41c012be462827053a21a420befca` `main` `v0.38.2` |
| `pathInRepo` | Where to find the file in the repo. | `task/golang-build/0.3/golang-build.yaml` |
| `serverURL`  | An optional server URL (that includes the https:// prefix) to connect to for API operations. | `https://github.mycompany.com` |
| `scmType`    | An optional SCM type to use for API operations. | `github`, `gitlab`, `gitea` |
## Requirements
- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-git-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the
  `tekton-pipelines-resolvers` namespace set to `true`, as shown in the example below.
- [Beta features](./additional-configs.md#beta-features) enabled.
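For reference, here is a minimal sketch of that feature-flag setting, assuming the resolvers were installed from the default release manifests (which already create this ConfigMap, so in practice you would edit or patch the existing object rather than apply a new one):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  # Enables remote resolution of Tasks and Pipelines through the git resolver.
  enable-git-resolver: "true"
```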
## Configuration
This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/git-resolver-config.yaml`](../config/resolvers/git-resolver-config.yaml)
for the name, namespace and defaults that the resolver ships with.
### Options
| Option Name | Description | Example Values |
|------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|
| `default-revision` | The default git revision to use if none is specified | `main` |
| `fetch-timeout` | The maximum time any single git clone resolution may take. **Note**: a global maximum timeout of 1 minute is currently enforced on _all_ resolution requests. | `1m`, `2s`, `700ms` |
| `default-url` | The default git repository URL to use for anonymous cloning if none is specified. | `https://github.com/tektoncd/catalog.git` |
| `scm-type` | The SCM provider type. Required if using the authenticated API with `org` and `repo`. | `github`, `gitlab`, `gitea`, `bitbucketcloud`, `bitbucketserver` |
| `server-url` | The SCM provider's base URL for use with the authenticated API. Not needed if using github.com, gitlab.com, or BitBucket Cloud | `api.internal-github.com` |
| `api-token-secret-name` | The Kubernetes secret containing the SCM provider API token. Required if using the authenticated API with `org` and `repo`. | `bot-token-secret` |
| `api-token-secret-key` | The key within the token secret containing the actual secret. Required if using the authenticated API with `org` and `repo`. | `oauth`, `token` |
| `api-token-secret-namespace` | The namespace containing the token secret, if not `default`. | `other-namespace` |
| `default-org` | The default organization to look for repositories under when using the authenticated API, if not specified in the resolver parameters. Optional. | `tektoncd`, `kubernetes` |
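As an illustration, here is a minimal sketch of this ConfigMap that only overrides the anonymous-cloning defaults; a complete example that also configures the authenticated API and multiple providers appears in the Usage section below. The values shown are assumptions, not shipped defaults:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: git-resolver-config
  namespace: tekton-pipelines-resolvers
data:
  # Used when a TaskRun or PipelineRun omits the url/revision params.
  default-url: "https://github.com/tektoncd/catalog.git"
  default-revision: "main"
  # Upper bound for a single clone; the global 1 minute cap still applies.
  fetch-timeout: "30s"
```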
## Usage
The `git` resolver has two modes: cloning a repository anonymously, or fetching individual files via an SCM provider's API using an API token.
### Anonymous Cloning
Anonymous cloning is supported only for public repositories. This mode clones the full git repo.
#### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: git-clone-demo-tr
spec:
taskRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
value: main
- name: pathInRepo
value: task/git-clone/0.6/git-clone.yaml
```
#### Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: git-clone-demo-pr
spec:
pipelineRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
value: main
- name: pathInRepo
value: pipeline/simple/0.1/simple.yaml
params:
- name: name
value: Ranni
```
### Authenticated API
The authenticated API supports private repositories, and fetches only the file at the specified path rather than doing a full clone.
When using the authenticated API, [providers with implementations in `go-scm`](https://github.com/jenkins-x/go-scm/tree/main/scm/driver) can be used.
Note that not all `go-scm` implementations have been tested with the `git` resolver, but it is known to work with:
* github.com and GitHub Enterprise
* gitlab.com and self-hosted Gitlab
* Gitea
* BitBucket Server
* BitBucket Cloud
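Using the authenticated API through the ConfigMap-level options above requires an API token stored in a Kubernetes secret. The following is a sketch of such a secret; the name, key, and namespace are illustrative and must match the `api-token-secret-name`, `api-token-secret-key`, and `api-token-secret-namespace` options. (A per-run token can instead be supplied with the `token`/`tokenKey` params, as shown further below.)
```yaml
apiVersion: v1
kind: Secret
metadata:
  # Must match api-token-secret-name in git-resolver-config.
  name: bot-token-secret
  # Must match api-token-secret-namespace; "default" is assumed if unset.
  namespace: default
type: Opaque
stringData:
  # Key must match api-token-secret-key; the value is an SCM personal access token.
  token: "<personal-access-token>"
```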
#### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: git-api-demo-tr
spec:
taskRef:
resolver: git
params:
- name: org
value: tektoncd
- name: repo
value: catalog
- name: revision
value: main
- name: pathInRepo
value: task/git-clone/0.6/git-clone.yaml
```
#### Task Resolution with a custom token to a custom SCM provider
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: git-api-demo-tr
spec:
taskRef:
resolver: git
params:
- name: org
value: tektoncd
- name: repo
value: catalog
- name: revision
value: main
- name: pathInRepo
value: task/git-clone/0.6/git-clone.yaml
# my-secret-token should be created in the namespace where the
# pipelinerun is created and contain a GitHub personal access
# token in the token key of the secret.
- name: token
value: my-secret-token
- name: tokenKey
value: token
- name: scmType
value: github
- name: serverURL
value: https://ghe.mycompany.com
```
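The secret referenced by the `token` param above might look like the following sketch. The secret name `my-secret-token` and the namespace are taken from the example; the key name must match the `tokenKey` param, which defaults to `token`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-token
  # Must be the namespace in which the TaskRun/PipelineRun is created.
  namespace: default
type: Opaque
stringData:
  # Key must match the tokenKey param (defaults to "token").
  token: "<personal-access-token>"
```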
#### Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: git-api-demo-pr
spec:
pipelineRef:
resolver: git
params:
- name: org
value: tektoncd
- name: repo
value: catalog
- name: revision
value: main
- name: pathInRepo
value: pipeline/simple/0.1/simple.yaml
params:
- name: name
value: Ranni
```
### Specifying Configuration for Multiple Git Providers
It is possible to specify configurations for multiple providers, and even multiple configurations for the same provider, for use in
different Tekton resources. First, add the configuration details to the ConfigMap using a unique identifier as a key prefix.
To use one of these configurations in a Tekton resource, pass that unique identifier to the resolver as an extra param named
`configKey`. If no `configKey` param is passed, `default` is used. The default configuration for the git resolver is the set of
keys with no identifier prefix, or equivalently the keys prefixed with the identifier `default`.
**Note**: `configKey` must not contain `.` when specifying configurations in the ConfigMap.
### Example Configmap
Multiple configurations can be specified in the `git-resolver-config` ConfigMap as shown below. All of the option keys listed above are supported.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: git-resolver-config
namespace: tekton-pipelines-resolvers
labels:
app.kubernetes.io/component: resolvers
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
# configuration 1, default one to use if no configKey provided or provided with value default
fetch-timeout: "1m"
default-url: "https://github.com/tektoncd/catalog.git"
default-revision: "main"
scm-type: "github"
server-url: ""
api-token-secret-name: ""
api-token-secret-key: ""
api-token-secret-namespace: "default"
default-org: ""
# configuration 2, will be used if configKey param passed with value test1
test1.fetch-timeout: "5m"
test1.default-url: ""
test1.default-revision: "stable"
test1.scm-type: "github"
test1.server-url: "api.internal-github.com"
test1.api-token-secret-name: "test1-secret"
test1.api-token-secret-key: "token"
test1.api-token-secret-namespace: "test1"
test1.default-org: "tektoncd"
# configuration 3, will be used if configKey param passed with value test2
test2.fetch-timeout: "10m"
test2.default-url: ""
test2.default-revision: "stable"
test2.scm-type: "gitlab"
test2.server-url: "api.internal-gitlab.com"
test2.api-token-secret-name: "test2-secret"
test2.api-token-secret-key: "pat"
test2.api-token-secret-namespace: "test2"
test2.default-org: "tektoncd-infra"
```
#### Task Resolution
A specific configuration from the ConfigMap can be selected by passing the parameter `configKey` with a value
matching one of the configuration keys used in the ConfigMap.
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: git-api-demo-tr
spec:
taskRef:
resolver: git
params:
- name: org
value: tektoncd
- name: repo
value: catalog
- name: revision
value: main
- name: pathInRepo
value: task/git-clone/0.6/git-clone.yaml
- name: configKey
value: test1
```
#### Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: git-api-demo-pr
spec:
pipelineRef:
resolver: git
params:
- name: org
value: tektoncd
- name: repo
value: catalog
- name: revision
value: main
- name: pathInRepo
value: pipeline/simple/0.1/simple.yaml
- name: configKey
value: test2
params:
- name: name
value: Ranni
```
## `ResolutionRequest` Status
The `ResolutionRequest.Status.RefSource` field captures the source that the remote resource came from. It includes three subfields: `uri`, `digest` and `entrypoint`.
- `uri`
  - For anonymous cloning, this is the user-provided value of the `url` param, in the [SPDX download format](https://spdx.github.io/spdx-spec/package-information/#77-package-download-location-field).
  - When the SCM API is used, this is the clone URL of the repo as returned by the SCM provider's repository service, in the [SPDX download format](https://spdx.github.io/spdx-spec/package-information/#77-package-download-location-field).
- `digest`
  - The algorithm name is currently fixed to `sha1`, but it may change to `sha256` once Git transitions to SHA-256. See https://git-scm.com/docs/hash-function-transition for details.
  - The value is the actual commit SHA at the moment the resource is resolved, even if the user provides a tag or branch name for the `revision` param.
- `entrypoint`: the user-provided value of the `pathInRepo` param.
Example:
- Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: git-demo
spec:
pipelineRef:
resolver: git
params:
- name: url
value: https://github.com/<username>/<reponame>.git
- name: revision
value: main
- name: pathInRepo
value: pipeline.yaml
```
- `ResolutionRequest`
```yaml
apiVersion: resolution.tekton.dev/v1alpha1
kind: ResolutionRequest
metadata:
labels:
resolution.tekton.dev/type: git
...
spec:
params:
pathInRepo: pipeline.yaml
revision: main
url: https://github.com/<username>/<reponame>.git
status:
refSource:
uri: git+https://github.com/<username>/<reponame>.git
digest:
sha1: <The latest commit sha on main at the moment of resolving>
entrypoint: pipeline.yaml
data: a2luZDogUGxxxx...
```
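In recent Tekton Pipeline releases, the same `refSource` information is also surfaced on the run that triggered the resolution, typically under `status.provenance`. The excerpt below is a sketch of what to expect; treat the exact field layout as an assumption, since it can vary between versions:
```yaml
# Illustrative excerpt of the resulting PipelineRun status
status:
  provenance:
    refSource:
      uri: git+https://github.com/<username>/<reponame>.git
      digest:
        sha1: <the resolved commit sha>
      entrypoint: pipeline.yaml
```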
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "High Availability Support"
weight: 106
---
-->
# HA Support for Tekton Pipeline Controllers
- [Overview](#overview)
- [Controller HA](#controller-ha)
- [Configuring Controller Replicas](#configuring-controller-replicas)
- [Configuring Leader Election](#configuring-leader-election)
- [Disabling Controller HA](#disabling-controller-ha)
- [Webhook HA](#webhook-ha)
- [Configuring Webhook Replicas](#configuring-webhook-replicas)
- [Avoiding Disruptions](#avoiding-disruptions)
## Overview
This document helps Cluster Admins configure High Availability (HA) support for the Tekton Pipeline [Controller](./../config/controller.yaml) and [Webhook](./../config/webhook.yaml) components. HA support allows components to remain operational when a disruption occurs, such as nodes being drained for upgrades.
## Controller HA
For the Controller, HA is achieved by following an active/active model, where all replicas of the Controller can receive and process work items. In this HA approach the workqueue is distributed across buckets, where each replica owns a subset of those buckets and can process the load if the given replica is the leader of that bucket.
By default, only one Controller replica is configured, to reduce resource usage. This effectively disables HA for the Controller by default.
### Configuring Controller Replicas
In order to achieve HA for the Controller, the number of replicas for the Controller should be greater than one. This allows other instances to take over in case of any disruption on the current active controller.
You can modify the replicas number in the [Controller deployment](./../config/controller.yaml) under `spec.replicas`, or apply an update to a running deployment:
```sh
kubectl -n tekton-pipelines scale deployment tekton-pipelines-controller --replicas=3
```
### Configuring Leader Election
Leader election can be configured in [config-leader-election-controller.yaml](./../config/config-leader-election-controller.yaml). The ConfigMap defines the following parameters:
| Parameter | Default |
| -------------------- | -------- |
| `data.buckets` | 1 |
| `data.leaseDuration` | 15s |
| `data.renewDeadline` | 10s |
| `data.retryPeriod` | 2s |
_Note_: The maximum value of `data.buckets` at this time is 10.
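If you need to check or adjust these values on a running cluster, you can read or edit the ConfigMap directly. The sketch below assumes the ConfigMap is named after the `config-leader-election-controller.yaml` file referenced above and lives in the `tekton-pipelines` namespace:

```sh
# Inspect the current leader election settings (the ConfigMap name is assumed
# to match the config-leader-election-controller.yaml file shipped with the release).
kubectl -n tekton-pipelines get configmap config-leader-election-controller -o yaml

# Edit the settings in place, for example to raise data.buckets (maximum 10).
kubectl -n tekton-pipelines edit configmap config-leader-election-controller
```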
### Disabling Controller HA
If HA is not required, you can disable it by scaling the deployment back to one replica. You can also modify the [controller deployment](./../config/controller.yaml) by passing the `disable-ha` flag to the `tekton-pipelines-controller` container. For example:
```yaml
spec:
serviceAccountName: tekton-pipelines-controller
containers:
- name: tekton-pipelines-controller
# ...
args: [
# Other flags defined here...
"-disable-ha=true",
]
```
**Note:** If you set `-disable-ha=true` and run multiple replicas of the Controller, each replica will process work items separately, which will lead to unwanted behavior when creating resources (e.g., `TaskRuns`, etc.).

In general, running multiple replicas with `-disable-ha=true` is not recommended. Instead, to disable HA, simply run one replica of the Controller deployment.
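For example, scaling back down to a single replica uses the same `kubectl scale` command shown earlier:

```sh
kubectl -n tekton-pipelines scale deployment tekton-pipelines-controller --replicas=1
```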
## Webhook HA
The Webhook deployment is stateless, which means it can more easily be configured for HA, and even autoscale replicas in response to load.
By default, only one Webhook replica is configured, to reduce resource usage. This effectively disables HA for the Webhook by default.
### Configuring Webhook Replicas
In order to achieve HA for the Webhook deployment, you can modify the `replicas` number in the [Webhook deployment](./../config/webhook.yaml) under `spec.replicas`, or apply an update to a running deployment:
```sh
kubectl -n tekton-pipelines scale deployment tekton-pipelines-webhook --replicas=3
```
You can also modify the [HorizontalPodAutoscaler](./../config/webhook-hpa.yaml) to set a minimum number of replicas:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: tekton-pipelines-webhook
# ...
spec:
minReplicas: 1
```
<!-- wokeignore:rule=master -->
By default, the Webhook deployment is _not_ configured to block a [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) from scaling down the node that's running the only replica of the deployment using the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation.
This means that during node drains, the Webhook might be unavailable temporarily, during which time Tekton resources can't be created, updated or deleted.
To avoid this, you can add the `safe-to-evict` annotation set to `false` to block node drains during autoscaling, or, better yet, configure multiple replicas of the Webhook deployment.
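As a rough sketch, the annotation is added to the pod template of the Webhook deployment; everything other than the annotation itself is taken from the deployment referenced above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tekton-pipelines-webhook
  namespace: tekton-pipelines
spec:
  template:
    metadata:
      annotations:
        # Prevents the Cluster Autoscaler from evicting the only webhook replica
        # when scaling a node down.
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```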
### Avoiding Disruptions
To avoid the Webhook Service becoming unavailable during node unavailability (e.g., during node upgrades), you can ensure that a minimum number of Webhook replicas are available at all times by defining a [`PodDisruptionBudget`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) which sets a `minAvailable` greater than zero:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: tekton-pipelines-webhook
namespace: tekton-pipelines
labels:
app.kubernetes.io/name: webhook
app.kubernetes.io/component: webhook
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
# ...
spec:
minAvailable: 1
selector:
matchLabels:
app.kubernetes.io/name: webhook
app.kubernetes.io/component: webhook
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
```
Webhook replicas are configured to avoid being scheduled onto the same node by default, so that a single node disruption doesn't make all Webhook replicas unavailable.

<!--
---
linkTitle: "Pipelines"
weight: 203
---
-->
# Pipelines
- [Pipelines](#pipelines)
- [Overview](#overview)
- [Configuring a `Pipeline`](#configuring-a-pipeline)
- [Specifying `Workspaces`](#specifying-workspaces)
- [Specifying `Parameters`](#specifying-parameters)
- [Adding `Tasks` to the `Pipeline`](#adding-tasks-to-the-pipeline)
- [Specifying Display Name](#specifying-displayname-in-pipelinetasks)
- [Specifying Remote Tasks](#specifying-remote-tasks)
- [Specifying `Pipelines` in `PipelineTasks`](#specifying-pipelines-in-pipelinetasks)
- [Specifying `Parameters` in `PipelineTasks`](#specifying-parameters-in-pipelinetasks)
- [Specifying `Matrix` in `PipelineTasks`](#specifying-matrix-in-pipelinetasks)
- [Specifying `Workspaces` in `PipelineTasks`](#specifying-workspaces-in-pipelinetasks)
- [Tekton Bundles](#tekton-bundles)
- [Using the `runAfter` field](#using-the-runafter-field)
- [Using the `retries` field](#using-the-retries-field)
- [Using the `onError` field](#using-the-onerror-field)
- [Produce results with `OnError`](#produce-results-with-onerror)
- [Guard `Task` execution using `when` expressions](#guard-task-execution-using-when-expressions)
- [Guarding a `Task` and its dependent `Tasks`](#guarding-a-task-and-its-dependent-tasks)
- [Cascade `when` expressions to the specific dependent `Tasks`](#cascade-when-expressions-to-the-specific-dependent-tasks)
- [Compose using Pipelines in Pipelines](#compose-using-pipelines-in-pipelines)
- [Guarding a `Task` only](#guarding-a-task-only)
- [Configuring the failure timeout](#configuring-the-failure-timeout)
- [Using variable substitution](#using-variable-substitution)
- [Using the `retries` and `retry-count` variable substitutions](#using-the-retries-and-retry-count-variable-substitutions)
- [Using `Results`](#using-results)
- [Passing one Task's `Results` into the `Parameters` or `when` expressions of another](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another)
- [Emitting `Results` from a `Pipeline`](#emitting-results-from-a-pipeline)
- [Configuring the `Task` execution order](#configuring-the-task-execution-order)
- [Adding a description](#adding-a-description)
- [Adding `Finally` to the `Pipeline`](#adding-finally-to-the-pipeline)
- [Specifying Display Name](#specifying-displayname-in-finally-tasks)
- [Specifying `Workspaces` in `finally` tasks](#specifying-workspaces-in-finally-tasks)
- [Specifying `Parameters` in `finally` tasks](#specifying-parameters-in-finally-tasks)
- [Specifying `matrix` in `finally` tasks](#specifying-matrix-in-finally-tasks)
- [Consuming `Task` execution results in `finally`](#consuming-task-execution-results-in-finally)
- [Consuming `Pipeline` result with `finally`](#consuming-pipeline-result-with-finally)
- [`PipelineRun` Status with `finally`](#pipelinerun-status-with-finally)
- [Using Execution `Status` of `pipelineTask`](#using-execution-status-of-pipelinetask)
- [Using Aggregate Execution `Status` of All `Tasks`](#using-aggregate-execution-status-of-all-tasks)
- [Guard `finally` `Task` execution using `when` expressions](#guard-finally-task-execution-using-when-expressions)
- [`when` expressions using `Parameters` in `finally` `Tasks`](#when-expressions-using-parameters-in-finally-tasks)
- [`when` expressions using `Results` in `finally` 'Tasks`](#when-expressions-using-results-in-finally-tasks)
- [`when` expressions using `Execution Status` of `PipelineTask` in `finally` `tasks`](#when-expressions-using-execution-status-of-pipelinetask-in-finally-tasks)
- [`when` expressions using `Aggregate Execution Status` of `Tasks` in `finally` `tasks`](#when-expressions-using-aggregate-execution-status-of-tasks-in-finally-tasks)
- [Known Limitations](#known-limitations)
- [Cannot configure the `finally` task execution order](#cannot-configure-the-finally-task-execution-order)
- [Using Custom Tasks](#using-custom-tasks)
- [Specifying the target Custom Task](#specifying-the-target-custom-task)
- [Specifying a Custom Task Spec in-line (or embedded)](#specifying-a-custom-task-spec-in-line-or-embedded)
- [Specifying parameters](#specifying-parameters-1)
- [Specifying matrix](#specifying-matrix)
- [Specifying workspaces](#specifying-workspaces-1)
- [Using `Results`](#using-results-1)
- [Specifying `Timeout`](#specifying-timeout)
- [Specifying `Retries`](#specifying-retries)
- [Known Custom Tasks](#known-custom-tasks)
- [Code examples](#code-examples)
## Overview
A `Pipeline` is a collection of `Tasks` that you define and arrange in a specific order
of execution as part of your continuous integration flow. Each `Task` in a `Pipeline`
executes as a `Pod` on your Kubernetes cluster. You can configure various execution
conditions to fit your business needs.
## Configuring a `Pipeline`
A `Pipeline` definition supports the following fields; a minimal example is shown after the list:
- Required:
- [`apiVersion`][kubernetes-overview] - Specifies the API version, for example
`tekton.dev/v1beta1`.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `Pipeline` object.
- [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the
`Pipeline` object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for
this `Pipeline` object. This must include:
- [`tasks`](#adding-tasks-to-the-pipeline) - Specifies the `Tasks` that comprise the `Pipeline`
and the details of their execution.
- Optional:
- [`params`](#specifying-parameters) - Specifies the `Parameters` that the `Pipeline` requires.
- [`workspaces`](#specifying-workspaces) - Specifies a set of Workspaces that the `Pipeline` requires.
- [`tasks`](#adding-tasks-to-the-pipeline):
- [`name`](#adding-tasks-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.
- [`displayName`](#specifying-displayname-in-pipelinetasks) - a user-facing name of this `Task` within the context of this `Pipeline`.
- [`description`](#adding-tasks-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.
- [`taskRef`](#adding-tasks-to-the-pipeline) - a reference to a `Task` definition.
- [`taskSpec`](#adding-tasks-to-the-pipeline) - a specification of a `Task`.
- [`runAfter`](#using-the-runafter-field) - Indicates that a `Task` should execute after one or more other
`Tasks` without output linking.
- [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after
a failure. Does not apply to execution cancellations.
- [`when`](#guard-finally-task-execution-using-when-expressions) - Specifies `when` expressions that guard
the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.
- [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.
- [`params`](#specifying-parameters-in-pipelinetasks) - Specifies the `Parameters` that a `Task` requires.
- [`workspaces`](#specifying-workspaces-in-pipelinetasks) - Specifies the `Workspaces` that a `Task` requires.
- [`matrix`](#specifying-matrix-in-pipelinetasks) - Specifies the `Parameters` used to fan out a `Task` into
multiple `TaskRuns` or `Runs`.
- [`results`](#emitting-results-from-a-pipeline) - Specifies the location to which the `Pipeline` emits its execution
results.
- [`displayName`](#specifying-a-display-name) - is a user-facing name of the pipeline that may be used to populate a UI.
- [`description`](#adding-a-description) - Holds an informative description of the `Pipeline` object.
- [`finally`](#adding-finally-to-the-pipeline) - Specifies one or more `Tasks` to be executed in parallel after
all other tasks have completed.
- [`name`](#adding-finally-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.
- [`displayName`](#specifying-displayname-in-finally-tasks) - a user-facing name of this `Task` within the context of this `Pipeline`.
- [`description`](#adding-finally-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.
- [`taskRef`](#adding-finally-to-the-pipeline) - a reference to a `Task` definition.
- [`taskSpec`](#adding-finally-to-the-pipeline) - a specification of a `Task`.
- [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after
a failure. Does not apply to execution cancellations.
- [`when`](#guard-finally-task-execution-using-when-expressions) - Specifies `when` expressions that guard
the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.
- [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.
- [`params`](#specifying-parameters-in-finally-tasks) - Specifies the `Parameters` that a `Task` requires.
- [`workspaces`](#specifying-workspaces-in-finally-tasks) - Specifies the `Workspaces` that a `Task` requires.
- [`matrix`](#specifying-matrix-in-finally-tasks) - Specifies the `Parameters` used to fan out a `Task` into
multiple `TaskRuns` or `Runs`.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
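Putting the required fields together, the following is a minimal sketch of a `Pipeline`; the task name, image, and script are illustrative:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: minimal-pipeline # illustrative name
spec:
  tasks:
    - name: say-hello
      taskSpec:
        steps:
          - name: echo
            image: ubuntu # illustrative image
            script: |
              echo "hello"
```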
## Specifying `Workspaces`
`Workspaces` allow you to specify one or more volumes that each `Task` in the `Pipeline`
requires during execution. You specify one or more `Workspaces` in the `workspaces` field.
For example:
```yaml
spec:
workspaces:
- name: pipeline-ws1 # The name of the workspace in the Pipeline
tasks:
- name: use-ws-from-pipeline
taskRef:
name: gen-code # gen-code expects a workspace with name "output"
workspaces:
- name: output
workspace: pipeline-ws1
- name: use-ws-again
taskRef:
name: commit # commit expects a workspace with name "src"
runAfter:
- use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
workspaces:
- name: src
workspace: pipeline-ws1
```
For simplicity, when the name of a `Workspace` in a `PipelineTask` matches the name of a `Workspace` declared in the
`Pipeline`, you can omit the explicit `workspace` mapping.
For example:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline
spec:
workspaces:
- name: source
tasks:
- name: gen-code
taskRef:
name: gen-code # gen-code expects a Workspace named "source"
workspaces:
- name: source # <- mapping workspace name
- name: commit
taskRef:
name: commit # commit expects a Workspace named "source"
workspaces:
- name: source # <- mapping workspace name
runAfter:
- gen-code
```
For more information, see:
- [Using `Workspaces` in `Pipelines`](workspaces.md#using-workspaces-in-pipelines)
- The [`Workspaces` in a `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml) code example
- The [variables available in a `PipelineRun`](variables.md#variables-available-in-a-pipeline), including `workspaces.<name>.bound`.
- [Mapping `Workspaces`](https://github.com/tektoncd/community/blob/main/teps/0108-mapping-workspaces.md)
## Specifying `Parameters`
(See also [Specifying Parameters in Tasks](tasks.md#specifying-parameters))
You can specify global parameters, such as compilation flags or artifact names, that you want to supply
to the `Pipeline` at execution time. `Parameters` are passed to the `Pipeline` from its corresponding
`PipelineRun` and can replace template values specified within each `Task` in the `Pipeline`.
Parameter names:
- Must only contain alphanumeric characters, hyphens (`-`), and underscores (`_`).
- Must begin with a letter or an underscore (`_`).
For example, `fooIs-Bar_` is a valid parameter name, but `barIsBa$` or `0banana` are not.
Each declared parameter has a `type` field, which can be set to either `array` or `string`.
`array` is useful in cases where the number of compilation flags being supplied to the `Pipeline`
varies throughout its execution. If no value is specified, the `type` field defaults to `string`.
When the actual parameter value is supplied, its parsed type is validated against the `type` field.
The `description` and `default` fields for a `Parameter` are optional.
The following example illustrates the use of `Parameters` in a `Pipeline`.
The following `Pipeline` declares two input parameters:
- `context` which passes its value (a string) to the `Task` to set the value of the `pathToContext` parameter within the `Task`.
- `flags` which passes its value (an array) to the `Task` to set the value of
the `flags` parameter within the `Task`. The `flags` parameter within the
`Task` **must** also be an array.
If you specify a value for the `default` field and invoke this `Pipeline` in a `PipelineRun`
without specifying a value for `context`, that value will be used.
**Note:** Input parameter values can be used as variables throughout the `Pipeline`
by using [variable substitution](variables.md#variables-available-in-a-pipeline).
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
name: pipeline-with-parameters
spec:
params:
- name: context
type: string
description: Path to context
default: /some/where/or/other
- name: flags
type: array
description: List of flags
tasks:
- name: build-skaffold-web
taskRef:
name: build-push
params:
- name: pathToDockerFile
value: Dockerfile
- name: pathToContext
value: "$(params.context)"
- name: flags
value: ["$(params.flags[*])"]
```
The following `PipelineRun` supplies a value for `context`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: pipelinerun-with-parameters
spec:
pipelineRef:
name: pipeline-with-parameters
params:
- name: "context"
value: "/workspace/examples/microservices/leeroy-web"
- name: "flags"
value:
- "foo"
- "bar"
```
#### Param enum
> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.
Parameter declarations can include `enum`, which is a predefined set of valid values that the `Pipeline` `Param` accepts. If a `Param` has both an `enum` and a default value, the default value must be in the `enum` set. For example, the valid/allowed values for the `Param` "message" are limited to `v1` and `v2`:
``` yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: pipeline-param-enum
spec:
params:
- name: message
enum: ["v1", "v2"]
default: "v1"
tasks:
- name: task1
params:
- name: message
value: $(params.message)
steps:
- name: build
image: bash:3.2
script: |
echo "$(params.message)"
```
If the `Param` value passed in by `PipelineRun` is **NOT** in the predefined `enum` list, the `PipelineRun` will fail with reason `InvalidParamValue`.
If a `PipelineTask` references a `Task` with `enum`, the `enums` specified in the Pipeline `spec.params` (pipeline-level `enum`) must be
a **subset** of the `enums` specified in the referenced `Task` (task-level `enum`). An empty pipeline-level `enum` is invalid
in this scenario since an empty `enum` set indicates a "universal set" which allows all possible values. The same rules apply to `Pipelines` with embedded `Tasks`.
In the example below, the referenced `Task` accepts `v1` and `v2` as valid values, while the `Pipeline` further restricts the valid values to `v1`.
``` yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: param-enum-demo
spec:
params:
- name: message
type: string
enum: ["v1", "v2"]
steps:
- name: build
image: bash:latest
script: |
echo "$(params.message)"
```
``` yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: pipeline-param-enum
spec:
params:
- name: message
enum: ["v1"] # note that an empty enum set is invalid
tasks:
- name: task1
params:
- name: message
value: $(params.message)
taskRef:
name: param-enum-demo
```
Note that this subset restriction only applies to the task-level `params` with a **direct single** reference to pipeline-level `params`. If a task-level `param` references multiple pipeline-level `params`, the subset validation is not applied.
``` yaml
apiVersion: tekton.dev/v1
kind: Pipeline
...
spec:
params:
- name: message1
enum: ["v1"]
- name: message2
enum: ["v2"]
tasks:
- name: task1
params:
- name: message
value: "$(params.message1) and $(params.message2)"
taskSpec:
params: message
enum: [...] # the message enum is not required to be a subset of message1 or message2
...
```
Tekton validates user-provided values in a `PipelineRun` against the `enum` specified in the `PipelineSpec.params`. Tekton also validates
any resolved `param` value against the `enum` specified in each `PipelineTask` before creating the `TaskRun`.
See usage in this [example](../examples/v1/pipelineruns/alpha/param-enum.yaml)
#### Propagated Params
Like with embedded [pipelineruns](pipelineruns.md#propagated-parameters), you can propagate `params` declared in the `Pipeline` down to the inlined `pipelineTasks` and their inlined `Steps`. Wherever a resource (e.g. a `pipelineTask`) or a `StepAction` is referenced, the parameters need to be passed explicitly.
For example, the following is valid YAML:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-propagated-params
spec:
params:
- name: HELLO
default: "Hello World!"
- name: BYE
default: "Bye World!"
tasks:
- name: echo-hello
taskSpec:
steps:
- name: echo
image: ubuntu
script: |
#!/usr/bin/env bash
echo "$(params.HELLO)"
- name: echo-bye
taskSpec:
steps:
- name: echo-action
ref:
name: step-action-echo
params:
- name: msg
value: "$(params.BYE)"
```
The same rules defined in [pipelineruns](pipelineruns.md#propagated-parameters) apply here.
## Adding `Tasks` to the `Pipeline`
Your `Pipeline` definition must reference at least one [`Task`](tasks.md).
Each `Task` within a `Pipeline` must have a [valid](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names)
`name` and a `taskRef` or a `taskSpec`. For example:
```yaml
tasks:
- name: build-the-image
taskRef:
name: build-push
```
**Note:** Using both `apiVersion` and `kind` in a `taskRef` will create a [CustomRun](customruns.md); don't set `apiVersion` if you are only referring to a [`Task`](tasks.md).
or
```yaml
tasks:
- name: say-hello
taskSpec:
steps:
- image: ubuntu
script: echo 'hello there'
```
Note that any `task` specified in `taskSpec` will be the same version as the `Pipeline`.
### Specifying `displayName` in `PipelineTasks`
The `displayName` field is an optional field that lets you set a user-facing name for the `PipelineTask`, which can be
used to populate and distinguish entries in a dashboard. For example:
```yaml
spec:
tasks:
- name: scan
displayName: "Code Scan"
taskRef:
name: sonar-scan
```
The `displayName` also allows you to parameterize the human-readable name of your choice based on the
[params](#specifying-parameters), [the task results](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another),
and [the context variables](#context-variables). For example:
```yaml
spec:
params:
- name: application
tasks:
- name: scan
displayName: "Code Scan for $(params.application)"
taskRef:
name: sonar-scan
- name: upload-scan-report
displayName: "Upload Scan Report $(tasks.scan.results.report)"
taskRef:
name: upload
```
Specifying task results in the `displayName` does not introduce an inherent resource dependency among `tasks`. The
pipeline author is responsible for specifying the dependency explicitly, either by using [runAfter](#using-the-runafter-field)
or by relying on [whenExpressions](#guard-task-execution-using-when-expressions) or [task results in params](#using-results).
The fully resolved `displayName` is also available in the status as part of `pipelineRun.status.childReferences`, so
clients such as the dashboard and CLI can retrieve it from the `childReferences`. The `displayName` exists mainly to
improve the user experience; the controller does not validate its content or length.
### Specifying Remote Tasks
**([beta feature](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#beta-features))**
A `taskRef` field may specify a Task in a remote location such as git.
Support for specific types of remote will depend on the Resolvers your
cluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). The below example demonstrates referencing a Task in git:
```yaml
tasks:
- name: "go-build"
taskRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
# value can use params declared at the pipeline level or a static value like main
value: $(params.gitRevision)
- name: pathInRepo
value: task/golang-build/0.3/golang-build.yaml
```
### Specifying `Pipelines` in `PipelineTasks`
> :seedling: **Specifying `pipelines` in `PipelineTasks` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-api-fields` feature flag must be set to `"alpha"` to specify `PipelineRef` or `PipelineSpec` in a `PipelineTask`.
> This feature is in **Preview Only** mode and not yet supported/implemented.
Apart from `taskRef` and `taskSpec`, `pipelineRef` and `pipelineSpec` allow you to specify a `pipeline` in a `pipelineTask`.
This allows you to generate a child `pipelineRun` that is owned and managed by the parent `pipelineRun`.
```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: security-scans
spec:
tasks:
- name: scorecards
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Generating scorecard report ..."
- name: codeql
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Generating codeql report ..."
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-scan-notify
spec:
tasks:
- name: git-clone
taskSpec:
steps:
- image: alpine
name: step-1
script: |
echo "Cloning a repo to run security scans ..."
- name: security-scans
runAfter:
- git-clone
pipelineRef:
name: security-scans
---
```
For further information read [Pipelines in Pipelines](./pipelines-in-pipelines.md)
### Specifying `Parameters` in `PipelineTasks`
You can also provide [`Parameters`](tasks.md#specifying-parameters):
```yaml
spec:
tasks:
- name: build-skaffold-web
taskRef:
name: build-push
params:
- name: pathToDockerFile
value: Dockerfile
- name: pathToContext
value: /workspace/examples/microservices/leeroy-web
```
### Specifying `Matrix` in `PipelineTasks`
> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**
> The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.
You can also provide [`Parameters`](tasks.md#specifying-parameters) through the `matrix` field:
```yaml
spec:
tasks:
- name: browser-test
taskRef:
name: browser-test
matrix:
params:
- name: browser
value:
- chrome
- safari
- firefox
include:
- name: build-1
params:
- name: browser
value: chrome
- name: url
value: some-url
```
For further information, read [`Matrix`](./matrix.md).
### Specifying `Workspaces` in `PipelineTasks`
You can also provide [`Workspaces`](tasks.md#specifying-workspaces):
```yaml
spec:
tasks:
- name: use-workspace
taskRef:
name: gen-code # gen-code expects a workspace with name "output"
workspaces:
- name: output
workspace: shared-ws
```
### Tekton Bundles
A `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.
There is currently a hard limit of 20 objects in a bundle.
You can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](./bundle-resolver.md#pipeline-resolution). The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](./additional-configs.md#beta-features).
```yaml
spec:
tasks:
- name: hello-world
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog
- name: name
value: echo-task
- name: kind
value: Task
```
You may also specify a `tag` as you would with a Docker image which will give you a fixed,
repeatable reference to a `Task`.
```yaml
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog:v1.0.1
- name: name
value: echo-task
- name: kind
value: Task
```
You may also specify a fixed digest instead of a tag.
```yaml
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog@sha256:abc123
- name: name
value: echo-task
- name: kind
value: Task
```
Any of the above options will fetch the image using the `ImagePullSecrets` attached to the
`ServiceAccount` specified in the `PipelineRun`.
See the [Service Account](pipelineruns.md#specifying-custom-serviceaccount-credentials) section
for details on how to configure a `ServiceAccount` on a `PipelineRun`. The `PipelineRun` will then
run that `Task` without registering it in the cluster allowing multiple versions of the same named
`Task` to be run at once.
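For illustration only, a minimal setup might look like the sketch below. The Secret and ServiceAccount names are placeholders, and the snippet assumes the `v1` `taskRunTemplate.serviceAccountName` field for attaching the `ServiceAccount` to the `PipelineRun`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bundle-reader # placeholder name
imagePullSecrets:
  - name: my-registry-credentials # placeholder Secret holding registry credentials
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: hello-world-run
spec:
  taskRunTemplate:
    serviceAccountName: bundle-reader # the bundle is pulled using this ServiceAccount's imagePullSecrets
  pipelineRef:
    name: hello-world-pipeline # placeholder Pipeline whose tasks reference the bundle
```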
`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).
### Using the `runAfter` field
If you need your `Tasks` to execute in a specific order within the `Pipeline`,
use the `runAfter` field to indicate that a `Task` must execute after
one or more other `Tasks`.
In the example below, we want to test the code before we build it. Since there
is no output from the `test-app` `Task`, the `build-app` `Task` uses `runAfter`
to indicate that `test-app` must run before it, regardless of the order in which
they are referenced in the `Pipeline` definition.
```yaml
workspaces:
- name: source
tasks:
- name: test-app
taskRef:
name: make-test
workspaces:
- name: source
workspace: source
- name: build-app
taskRef:
name: kaniko-build
runAfter:
- test-app
workspaces:
- name: source
workspace: source
```
### Using the `retries` field
For each `Task` in the `Pipeline`, you can specify the number of times Tekton
should retry its execution when it fails. When a `Task` fails, the corresponding
`TaskRun` sets its `Succeeded` `Condition` to `False`. The `retries` field
instructs Tekton to retry executing the `Task` when this happens. `retries` are executed
even when other `Task`s in the `Pipeline` have failed, unless the `PipelineRun` has
been [cancelled](./pipelineruns.md#cancelling-a-pipelinerun) or
[gracefully cancelled](./pipelineruns.md#gracefully-cancelling-a-pipelinerun).
If you expect a `Task` to encounter problems during execution (for example,
you know that there will be issues with network connectivity or missing
dependencies), set its `retries` field to a suitable value greater than 0.
If you don't explicitly specify a value, Tekton does not attempt to execute
the failed `Task` again.
In the example below, the execution of the `build-the-image` `Task` will be
retried once after a failure; if the retried execution fails, too, the `Task`
execution fails as a whole.
```yaml
tasks:
- name: build-the-image
retries: 1
taskRef:
name: build-push
```
### Using the `onError` field
When a `PipelineTask` fails, the rest of the `PipelineTasks` are skipped and the `PipelineRun` is declared a failure. If you would like to
ignore such `PipelineTask` failure and continue executing the rest of the `PipelineTasks`, you can specify `onError` for such a `PipelineTask`.
`onError` can be set to `stopAndFail` (the default) or `continue`. A `PipelineTask` that fails with `stopAndFail` stops and fails the whole `PipelineRun`. A `PipelineTask` that fails with `continue` does not fail the whole `PipelineRun`, and the rest of the `PipelineTasks` continue to execute.
To ignore a `PipelineTask` failure, set `onError` to `continue`:
``` yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: demo
spec:
tasks:
- name: task1
onError: continue
taskSpec:
steps:
- name: step1
image: alpine
script: |
exit 1
```
At runtime, the failure is ignored when determining the `PipelineRun` status, and the `PipelineRun` `message` records the ignored failure:
``` yaml
status:
conditions:
- lastTransitionTime: "2023-09-28T19:08:30Z"
message: 'Tasks Completed: 1 (Failed: 1 (Ignored: 1), Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
...
```
Note that `onError` does not change the `TaskRun` status itself: a failed but ignored `TaskRun` still ends up with a `failed` status, with reason
`FailureIgnored`.
For example, the `TaskRun` created by the above `PipelineRun` has the following status:
``` bash
$ kubectl get tr demo-run-task1
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
demo-run-task1 False FailureIgnored 12m 12m
```
To specify `onError` for a `step`, please see [specifying onError for a step](./tasks.md#specifying-onerror-for-a-step).
**Note:** Setting [`Retry`](#specifying-retries) and `OnError:continue` at the same time is **NOT** allowed.
### Produce results with `OnError`
When a `PipelineTask` is set to ignore errors and it manages to initialize a result before failing, that result is made available to consuming `PipelineTasks`.
``` yaml
tasks:
- name: task1
onError: continue
taskSpec:
results:
- name: result1
steps:
- name: step1
image: alpine
script: |
echo -n 123 | tee $(results.result1.path)
exit 1
```
The consumer `PipelineTasks` can access the result by referencing `$(tasks.task1.results.result1)`.
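For illustration, a consuming `PipelineTask` (names are placeholders) added to the same `tasks` list would receive `123` even though `task1` failed and the failure was ignored:

``` yaml
# added to the same `tasks` list as task1 above
- name: task2
  params:
    - name: result-from-task1
      value: "$(tasks.task1.results.result1)"
  taskSpec:
    params:
      - name: result-from-task1
        type: string
    steps:
      - name: print
        image: alpine
        script: |
          echo "$(params.result-from-task1)"
```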
If the result is **NOT** initialized before failing, and there is a `PipelineTask` consuming it:
``` yaml
tasks:
- name: task1
onError: continue
taskSpec:
results:
- name: result1
steps:
- name: step1
image: alpine
script: |
exit 1
echo -n 123 | tee $(results.result1.path)
```
- If the consuming `PipelineTask` has `OnError:stopAndFail`, the `PipelineRun` will fail with `InvalidTaskResultReference`.
- If the consuming `PipelineTask` has `OnError:continue`, the consuming `PipelineTask` will be skipped with reason `Results were missing`,
and the `PipelineRun` will continue to execute.
### Guard `Task` execution using `when` expressions
To run a `Task` only when certain conditions are met, it is possible to _guard_ task execution using the `when` field. The `when` field allows you to list a series of references to `when` expressions.
The components of `when` expressions are `input`, `operator` and `values`:
| Component | Description | Syntax |
|------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `input` | Input for the `when` expression, defaults to an empty string if not provided. | * Static values e.g. `"ubuntu"`<br/> * Variables ([parameters](#specifying-parameters) or [results](#using-results)) e.g. `"$(params.image)"` or `"$(tasks.task1.results.image)"` or `"$(tasks.task1.results.array-results[1])"` |
| `operator` | `operator` represents an `input`'s relationship to a set of `values`, a valid `operator` must be provided. | `in` or `notin` |
| `values` | An array of string values, the `values` array must be provided and has to be non-empty. | * An array param e.g. `["$(params.images[*])"]`<br/> * An array result of a task `["$(tasks.task1.results.array-results[*])"]`<br/> * `values` can contain static values e.g. `"ubuntu"`<br/> * `values` can contain variables ([parameters](#specifying-parameters) or [results](#using-results)) or [a Workspaces's `bound` state](#specifying-workspaces) e.g. `["$(params.image)"]` or `["$(tasks.task1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` |
The [`Parameters`](#specifying-parameters) are read from the `Pipeline` and [`Results`](#using-results) are read directly from previous [`Tasks`](#adding-tasks-to-the-pipeline). Using [`Results`](#using-results) in a `when` expression in a guarded `Task` introduces a resource dependency on the previous `Task` that produced the `Result`.
The declared `when` expressions are evaluated before the `Task` is run. If all the `when` expressions evaluate to `True`, the `Task` is run. If any of the `when` expressions evaluate to `False`, the `Task` is not run and the `Task` is listed in the [`Skipped Tasks` section of the `PipelineRunStatus`](pipelineruns.md#monitoring-execution-status).
In these examples, the `first-create-file` task is only executed if the `path` parameter is `README.md`, the `echo-file-exists` task is only executed if the `exists` result from the `check-file` task is `yes`, the `run-lint` task is only executed if the optional `lint-config` workspace has been provided by a `PipelineRun`, and the `deploy-in-blue` task is only executed if `blue` is one of the values in the `deployments` array parameter.
```yaml
tasks:
- name: first-create-file
when:
- input: "$(params.path)"
operator: in
values: ["README.md"]
taskRef:
name: first-create-file
---
tasks:
- name: echo-file-exists
when:
- input: "$(tasks.check-file.results.exists)"
operator: in
values: ["yes"]
taskRef:
name: echo-file-exists
---
tasks:
- name: run-lint
when:
- input: "$(workspaces.lint-config.bound)"
operator: in
values: ["true"]
taskRef:
name: lint-source
---
tasks:
- name: deploy-in-blue
when:
- input: "blue"
operator: in
values: ["$(params.deployments[*])"]
taskRef:
name: deployment
```
For an end-to-end example, see [PipelineRun with `when` expressions](../examples/v1/pipelineruns/pipelinerun-with-when-expressions.yaml).
There are a lot of scenarios where `when` expressions can be really useful. Some of these are:
- Checking if the name of a git branch matches
- Checking if the `Result` of a previous `Task` is as expected
- Checking if a git file has changed in the previous commits
- Checking if an image exists in the registry
- Checking if the name of a CI job matches
- Checking if an optional Workspace has been provided
#### Use CEL expression in WhenExpression
> :seedling: **`CEL in WhenExpression` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-cel-in-whenexpression` feature flag must be set to `"true"` to enable the use of `CEL` in `WhenExpression`.
CEL (Common Expression Language) is a declarative language designed for simplicity, speed, safety, and portability which can be used to express a wide variety of conditions and computations.
You can define a CEL expression in a `WhenExpression` to guard the execution of a `Task`. The CEL expression must evaluate to either `true` or `false`. A single CEL string can replace the current `WhenExpression`'s `input` + `operator` + `values` combination. For example:
```yaml
# current WhenExpressions
when:
- input: "foo"
operator: "in"
values: ["foo", "bar"]
- input: "duh"
operator: "notin"
values: ["foo", "bar"]
# with cel
when:
- cel: "'foo' in ['foo', 'bar']"
- cel: "!('duh' in ['foo', 'bar'])"
```
CEL offers additional conditional functions, such as numeric comparisons (e.g. `>`, `<=`), logical operators (e.g. `||`, `&&`), and regex pattern matching. For example:
```yaml
when:
# test coverage result is larger than 90%
- cel: "'$(tasks.unit-test.results.test-coverage)' > 0.9"
# params is not empty, or params2 is 8.5 or 8.6
- cel: "'$(params.param1)' != '' || '$(params.param2)' == '8.5' || '$(params.param2)' == '8.6'"
# param branch matches pattern `release/.*`
- cel: "'$(params.branch)'.matches('release/.*')"
```
##### Variable substitution in CEL
`CEL` supports [string substitutions](https://github.com/tektoncd/pipeline/blob/main/docs/variables.md#variables-available-in-a-pipeline): you can reference a string value, an indexed array element, or an object key of a param or result. For example:
```yaml
when:
# string result
- cel: "$(tasks.unit-test.results.test-coverage) > 0.9"
# array indexing result
- cel: "$(tasks.unit-test.results.test-coverage[0]) > 0.9"
# object result key
- cel: "'$(tasks.objectTask.results.repo.url)'.matches('github.com/tektoncd/.*')"
# string param
- cel: "'$(params.foo)' == 'foo'"
# array indexing
- cel: "'$(params.branch[0])' == 'foo'"
# object param key
- cel: "'$(params.repo.url)'.matches('github.com/tektoncd/.*')"
```
**Note:** the reference needs to be wrapped with single quotes.
Whole `Array` and `Object` replacements are not supported yet. The following usage is not supported:
```yaml
when:
- cel: "'foo' in '$(params.array_params[*])'"
- cel: "'foo' in '$(params.object_params[*])'"
```
<!-- wokeignore:rule=master -->
In addition to the cases listed above, you can craft any valid CEL expression as defined by the [cel-spec language definition](https://github.com/google/cel-spec/blob/master/doc/langdef.md)
`CEL` expression is validated at admission webhook and a validation error will be returned if the expression is invalid.
**Note:** To use Tekton's [variable substitution](variables.md), you need to wrap the reference with single quotes. This also means that if you pass another CEL expression via `params` or `results`, it won't be executed. Therefore CEL injection is disallowed.
For example:
```
This is valid: '$(params.foo)' == 'foo'
This is invalid: $(params.foo) == 'foo'
CEL's variable substitution is not supported yet and thus invalid: params.foo == 'foo'
```
#### Guarding a `Task` and its dependent `Tasks`
To guard a `Task` and its dependent Tasks:
- cascade the `when` expressions to the specific dependent `Tasks` to be guarded as well
- compose the `Task` and its dependent `Tasks` as a unit to be guarded and executed together using `Pipelines` in `Pipelines`
##### Cascade `when` expressions to the specific dependent `Tasks`
Pick and choose which specific dependent `Tasks` to guard as well, and cascade the `when` expressions to those `Tasks`.
Taking the use case below, a user who wants to guard `manual-approval` and its dependent `Tasks`:
```
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
```
The user can design the `Pipeline` to solve their use case as such:
```yaml
tasks:
#...
- name: manual-approval
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
name: manual-approval
- name: build-image
when:
- input: $(params.git-action)
operator: in
values:
- merge
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
when:
- input: $(params.git-action)
operator: in
values:
- merge
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
```
##### Compose using Pipelines in Pipelines
Compose a set of `Tasks` as a unit of execution using `Pipelines` in `Pipelines`, which allows for guarding a `Task` and
its dependent `Tasks` (as a sub-`Pipeline`) using `when` expressions.
**Note:** `Pipelines` in `Pipelines` is an [experimental feature](https://github.com/tektoncd/experimental/tree/main/pipelines-in-pipelines)
Taking the use case below, a user who wants to guard `manual-approval` and its dependent `Tasks`:
```
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
```
The user can design the `Pipelines` to solve their use case as such:
```yaml
## sub pipeline (approve-build-deploy-slack)
tasks:
- name: manual-approval
runAfter:
- integration-tests
taskRef:
name: manual-approval
- name: build-image
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
---
## main pipeline
tasks:
#...
- name: approve-build-deploy-slack
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
name: approve-build-deploy-slack
```
#### Guarding a `Task` only
When `when` expressions evaluate to `False`, the `Task` will be skipped and:
- The ordering-dependent `Tasks` will be executed
- The resource-dependent `Tasks` (and their dependencies) will be skipped because of missing `Results` from the skipped
parent `Task`. When we add support for [default `Results`](https://github.com/tektoncd/community/pull/240), then the
resource-dependent `Tasks` may be executed if the default `Results` from the skipped parent `Task` are specified. In
addition, if a resource-dependent `Task` needs a file from a guarded parent `Task` in a shared `Workspace`, make sure
to handle the execution of the child `Task` in case the expected file is missing from the `Workspace` because the
guarded parent `Task` is skipped.
On the other hand, the rest of the `Pipeline` will continue executing.
```
tests
|
v
manual-approval
| |
v (approver)
build-image |
| v
v slack-msg
deploy-image
```
Taking the use case above, a user who wants to guard `manual-approval` only can design the `Pipeline` as such:
```yaml
tasks:
#...
- name: manual-approval
runAfter:
- tests
when:
- input: $(params.git-action)
operator: in
values:
- merge
taskRef:
name: manual-approval
- name: build-image
runAfter:
- manual-approval
taskRef:
name: build-image
- name: deploy-image
runAfter:
- build-image
taskRef:
name: deploy-image
- name: slack-msg
params:
- name: approver
value: $(tasks.manual-approval.results.approver)
taskRef:
name: slack-msg
```
If `manual-approval` is skipped, execution of its dependent `Tasks` (`slack-msg`, `build-image` and `deploy-image`)
would be unblocked regardless:
- `build-image` and `deploy-image` should be executed successfully
- `slack-msg` will be skipped because it is missing the `approver` `Result` from `manual-approval`
- dependents of `slack-msg` would have been skipped too if it had any of them
- if `manual-approval` specifies a default `approver` `Result`, such as "None", then `slack-msg` would be executed
([supporting default `Results` is in progress](https://github.com/tektoncd/community/pull/240))
### Configuring the failure timeout
You can use the `Timeout` field in the `Task` spec within the `Pipeline` to set the timeout
of the `TaskRun` that executes that `Task` within the `PipelineRun` that executes your `Pipeline`.
The `Timeout` value is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration)
format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`.
**Note:** If you do not specify a `Timeout` value, Tekton instead honors the timeout for the [`PipelineRun`](pipelineruns.md#configuring-a-pipelinerun).
In the example below, the `build-the-image` `Task` is configured to time out after 90 seconds:
```yaml
spec:
tasks:
- name: build-the-image
taskRef:
name: build-push
timeout: "0h1m30s"
```
## Using variable substitution
Tekton provides variables to inject values into the contents of certain fields.
The values you can inject come from a range of sources including other fields
in the Pipeline, context-sensitive information that Tekton provides, and runtime
information received from a PipelineRun.
The mechanism of variable substitution is quite simple - string replacement is
performed by the Tekton Controller when a PipelineRun is executed.
See the [complete list of variable substitutions for Pipelines](./variables.md#variables-available-in-a-pipeline)
and the [list of fields that accept substitutions](./variables.md#fields-that-accept-variable-substitutions).
For an end-to-end example, see [using context variables](../examples/v1/pipelineruns/using_context_variables.yaml).
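For instance, the minimal sketch below (the task and parameter names are illustrative) substitutes a `Pipeline` parameter and a context variable into a `PipelineTask`'s `params`:
```yaml
spec:
  params:
    - name: app-name
      type: string
  tasks:
    - name: print-details
      params:
        - name: message
          # $(params.app-name) is replaced with the value supplied by the PipelineRun;
          # $(context.pipelineRun.name) is replaced with the name of the executing PipelineRun.
          value: "Building $(params.app-name) as part of $(context.pipelineRun.name)"
      taskRef:
        name: print-message
```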
### Using the `retries` and `retry-count` variable substitutions
Tekton supports variable substitution for the [`retries`](#using-the-retries-field)
parameter of `PipelineTask`. Variables like `context.pipelineTask.retries` and
`context.task.retry-count` can be added to the parameters of a `PipelineTask`.
`context.pipelineTask.retries` will be replaced by the `retries` value of the `PipelineTask`, while
`context.task.retry-count` will be replaced by the current retry number of the `PipelineTask`.
```yaml
params:
- name: pipelineTask-retries
value: "$(context.pipelineTask.retries)"
taskSpec:
params:
- name: pipelineTask-retries
steps:
- image: ubuntu
name: print-if-retries-exhausted
script: |
if [ "$(context.task.retry-count)" == "$(params.pipelineTask-retries)" ]
then
echo "This is the last retry."
fi
exit 1
```
**Note:** Every `PipelineTask` can only access its own `retries` and `retry-count`. These
values are not accessible to other `PipelineTask`s.
## Using `Results`
Tasks can emit [`Results`](tasks.md#emitting-results) when they execute. A Pipeline can use these
`Results` for two different purposes:
1. A Pipeline can pass the `Result` of a `Task` into the `Parameters` or `when` expressions of another.
2. A Pipeline can itself emit `Results` and include data from the `Results` of its Tasks.
> **Note** Tekton does not enforce that results are produced at Task level. If a pipeline attempts to
> consume a result that was declared by a Task, but not produced, it will fail. [TEP-0048](https://github.com/tektoncd/community/blob/main/teps/0048-task-results-without-results.md)
> proposes introducing default values for results to help Pipeline authors manage this case.
### Passing one Task's `Results` into the `Parameters` or `when` expressions of another
Sharing `Results` between `Tasks` in a `Pipeline` happens via
[variable substitution](variables.md#variables-available-in-a-pipeline) - one `Task` emits
a `Result` and another receives it as a `Parameter` with a variable such as
`$(tasks.<task-name>.results.<result-name>)`. Pipelines support two new types of
results and parameters: array `[]string` and object `map[string]string`.
Array results are a beta feature and can be enabled by setting `enable-api-fields` to `alpha` or `beta`.
| Result Type | Parameter Type | Specification | `enable-api-fields` |
|-------------|----------------|--------------------------------------------------|---------------------|
| string | string | `$(tasks.<task-name>.results.<result-name>)` | stable |
| array | array | `$(tasks.<task-name>.results.<result-name>[*])` | alpha or beta |
| array | string | `$(tasks.<task-name>.results.<result-name>[i])` | alpha or beta |
| object | object | `$(tasks.<task-name>.results.<result-name>[*])` | alpha or beta |
| object | string | `$(tasks.<task-name>.results.<result-name>.key)` | alpha or beta |
**Note:** Whole Array and Object `Results` (using star notation) cannot be referenced in a `script`.
When one `Task` receives the `Results` of another, there is a dependency created between those
two `Tasks`. In order for the receiving `Task` to get data from another `Task's` `Result`,
the `Task` producing the `Result` must run first. Tekton enforces this `Task` ordering
by ensuring that the `Task` emitting the `Result` executes before any `Task` that uses it.
In the snippet below, a param is provided its value from the `commit` `Result` emitted by the
`checkout-source` `Task`. Tekton will make sure that the `checkout-source` `Task` runs
before this one.
```yaml
params:
- name: foo
value: "$(tasks.checkout-source.results.commit)"
- name: array-params
value: "$(tasks.checkout-source.results.array-results[*])"
- name: array-indexing-params
value: "$(tasks.checkout-source.results.array-results[1])"
- name: object-params
value: "$(tasks.checkout-source.results.object-results[*])"
- name: object-element-params
value: "$(tasks.checkout-source.results.object-results.objectkey)"
```
**Note:** If `checkout-source` exits successfully without initializing the `commit` `Result`,
the receiving `Task` fails and causes the `Pipeline` to fail with `InvalidTaskResultReference`:
```
unable to find result referenced by param 'foo' in 'task';: Could not find result with name 'commit' for task run 'checkout-source'
```
In the snippet below, a `when` expression is provided its value from the `exists` `Result` emitted by the
`check-file` `Task`. Tekton will make sure that the `check-file` `Task` runs before this one.
```yaml
when:
- input: "$(tasks.check-file.results.exists)"
operator: in
values: ["yes"]
```
For an end-to-end example, see [`Task` `Results` in a `PipelineRun`](../examples/v1/pipelineruns/task_results_example.yaml).
Note that `when` expressions are whitespace-sensitive. In particular, when producing `Results` intended as inputs to `when`
expressions with tools that may append a trailing newline (e.g. `cat`, `jq`), you may wish to truncate it, as in the example below.
```yaml
taskSpec:
params:
- name: jsonQuery-check
steps:
- image: ubuntu
name: store-name-in-results
script: |
curl -s https://my-json-server.typicode.com/typicode/demo/profile | jq -r .name | tr -d '\n' | tee $(results.name.path)
```
### Emitting `Results` from a `Pipeline`
A `Pipeline` can emit `Results` of its own for a variety of reasons - an external
system may need to read them when the `Pipeline` is complete, they might summarise
the most important `Results` from the `Pipeline's` `Tasks`, or they might simply
be used to expose non-critical messages generated during the execution of the `Pipeline`.
A `Pipeline's` `Results` can be composed of one or many `Task` `Results` emitted during
the course of the `Pipeline's` execution. A `Pipeline` `Result` can refer to its `Tasks'`
`Results` using a variable of the form `$(tasks.<task-name>.results.<result-name>)`.
After a `Pipeline` has executed, the `PipelineRun` will be populated with the `Results`
emitted by the `Pipeline`. These will be written to the `PipelineRun's`
`status.pipelineResults` field.
In the example below, the `Pipeline` specifies a `results` entry with the name `sum` that
references the `outputValue` `Result` emitted by the `calculate-sum` `Task`.
```yaml
results:
- name: sum
description: the sum of all three operands
value: $(tasks.calculate-sum.results.outputValue)
```
For an end-to-end example, see [`Results` in a `PipelineRun`](../examples/v1/pipelineruns/pipelinerun-results.yaml).
In the example below, the `Pipeline` collects array and object results from `Tasks`.
```yaml
results:
- name: array-results
type: array
description: whole array
value: $(tasks.task1.results.array-results[*])
- name: array-indexing-results
type: string
description: array element
value: $(tasks.task1.results.array-results[1])
- name: object-results
type: object
description: whole object
value: $(tasks.task2.results.object-results[*])
- name: object-element
type: string
description: object element
value: $(tasks.task2.results.object-results.foo)
```
For an end-to-end example see [`Array and Object Results` in a `PipelineRun`](../examples/v1/pipelineruns/pipeline-emitting-results.yaml).
A `Pipeline Result` is not emitted if any of the following are true:
- A `PipelineTask` referenced by the `Pipeline Result` failed. The `PipelineRun` will also
have failed.
- A `PipelineTask` referenced by the `Pipeline Result` was skipped.
- A `PipelineTask` referenced by the `Pipeline Result` didn't emit the referenced `Task Result`. This
should be considered a bug in the `Task` and [may fail a `PipelineTask` in future](https://github.com/tektoncd/pipeline/issues/3497).
- The `Pipeline Result` uses a variable that doesn't point to an actual `PipelineTask`. This will
result in an `InvalidTaskResultReference` validation error during `PipelineRun` execution.
- The `Pipeline Result` uses a variable that doesn't point to an actual result in a `PipelineTask`.
This will cause an `InvalidTaskResultReference` validation error during `PipelineRun` execution.
**Note:** Since a `Pipeline Result` can contain references to multiple `Task Results`, if any of those
`Task Result` references are invalid the entire `Pipeline Result` is not emitted.
**Note:** If a `PipelineTask` referenced by the `Pipeline Result` was skipped, the `Pipeline Result` will not be emitted and the `PipelineRun` will not fail due to a missing result.
## Configuring the `Task` execution order
You can connect `Tasks` in a `Pipeline` so that they execute in a Directed Acyclic Graph (DAG).
Each `Task` in the `Pipeline` becomes a node on the graph that can be connected with an edge
so that one will run before another and the execution of the `Pipeline` progresses to completion
without getting stuck in an infinite loop.
This is done using:
- _resource dependencies_:
- [`results`](#emitting-results-from-a-pipeline) of one `Task` being passed into `params` or `when` expressions of
another
- _ordering dependencies_:
- [`runAfter`](#using-the-runafter-field) clauses on the corresponding `Tasks`
For example, the `Pipeline` defined as follows
```yaml
tasks:
- name: lint-repo
taskRef:
name: pylint
- name: test-app
taskRef:
name: make-test
- name: build-app
taskRef:
name: kaniko-build-app
runAfter:
- test-app
- name: build-frontend
taskRef:
name: kaniko-build-frontend
runAfter:
- test-app
- name: deploy-all
taskRef:
name: deploy-kubectl
runAfter:
- build-app
- build-frontend
```
executes according to the following graph:
```none
        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
    \          /
     v        v
    deploy-all
```
In particular:
1. The `lint-repo` and `test-app` `Tasks` have no `runAfter` clauses
and start executing simultaneously.
2. Once `test-app` completes, both `build-app` and `build-frontend` start
executing simultaneously since they both `runAfter` the `test-app` `Task`.
3. The `deploy-all` `Task` executes once both `build-app` and `build-frontend`
complete, since it is supposed to `runAfter` them both.
4. The entire `Pipeline` completes execution once both `lint-repo` and `deploy-all`
complete execution.
## Specifying a display name
The `displayName` field is an optional field that allows you to add a user-facing name of the `Pipeline` that can be used to populate a UI. For example:
```yaml
spec:
displayName: "Code Scan"
tasks:
- name: scan
taskRef:
name: sonar-scan
```
## Adding a description
The `description` field is an optional field and can be used to provide a description of the `Pipeline`.
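For example, a minimal sketch:
```yaml
spec:
  description: >-
    Runs the integration tests for the application and reports the outcome.
  tasks:
    - name: tests
      taskRef:
        name: integration-test
```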
## Adding `Finally` to the `Pipeline`
You can specify a list of one or more final tasks under the `finally` section. `finally` tasks are guaranteed to be executed
in parallel after all `PipelineTasks` under `tasks` have completed regardless of success or error. `finally` tasks are very
similar to `PipelineTasks` under `tasks` section and follow the same syntax. Each `finally` task must have a
[valid](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) `name` and a [taskRef or
taskSpec](taskruns.md#specifying-the-target-task). For example:
```yaml
spec:
tasks:
- name: tests
taskRef:
name: integration-test
finally:
- name: cleanup-test
taskRef:
name: cleanup
```
### Specifying `displayName` in `finally` tasks
Similar to [specifying `displayName` in `pipelineTasks`](#specifying-displayname-in-pipelinetasks), `finally` tasks also
allow you to add a user-facing name of the `finally` task that can be used to populate and distinguish the task in a dashboard.
For example:
```yaml
spec:
finally:
- name: notification
displayName: "Notify"
taskRef:
name: notification
- name: notification-using-context-variable
displayName: "Notification from $(context.pipeline.name)"
taskRef:
name: notification
```
The `displayName` also allows you to parameterize the human-readable name of your choice based on the
[params](#specifying-parameters), [the task results](#consuming-task-execution-results-in-finally),
and [the context variables](#context-variables).
Fully resolved `displayName` is also available in the status as part of the `pipelineRun.status.childReferences`. The
clients such as the dashboard, CLI, etc. can retrieve the `displayName` from the `childReferences`. The `displayName` mainly
drives a better user experience and at the same time it is not validated for the content or length by the controller.
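As an illustration (field values are assumed; the exact shape depends on your Tekton version), a resolved `displayName` might surface in the `PipelineRun` status roughly as follows:
```yaml
status:
  childReferences:
    - apiVersion: tekton.dev/v1
      kind: TaskRun
      name: pipelinerun-notification # generated TaskRun name (illustrative)
      pipelineTaskName: notification
      displayName: "Notify"
```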
### Specifying `Workspaces` in `finally` tasks
`finally` tasks can specify [workspaces](workspaces.md) which `PipelineTasks` might have utilized
e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more
`Workspaces` in the `workspaces` field for the `finally` tasks similar to `tasks`.
```yaml
spec:
workspaces:
- name: shared-workspace
tasks:
- name: clone-app-source
taskRef:
name: clone-app-repo-to-workspace
workspaces:
- name: shared-workspace
workspace: shared-workspace
finally:
- name: cleanup-workspace
taskRef:
name: cleanup-workspace
workspaces:
- name: shared-workspace
workspace: shared-workspace
```
### Specifying `Parameters` in `finally` tasks
Similar to `tasks`, you can specify [`Parameters`](tasks.md#specifying-parameters) in `finally` tasks:
```yaml
spec:
tasks:
- name: tests
taskRef:
name: integration-test
finally:
- name: report-results
taskRef:
name: report-results
params:
- name: url
value: "someURL"
```
### Specifying `matrix` in `finally` tasks
> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**
> The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.
Similar to `tasks`, you can also provide [`Parameters`](tasks.md#specifying-parameters) through `matrix`
in `finally` tasks:
```yaml
spec:
tasks:
- name: tests
taskRef:
name: integration-test
finally:
- name: report-results
taskRef:
name: report-results
params:
- name: url
value: "someURL"
matrix:
params:
- name: slack-channel
value:
- "foo"
- "bar"
include:
- name: build-1
params:
- name: slack-channel
value: "foo"
- name: flags
value: "-v"
```
For further information, read [`Matrix`](./matrix.md).
### Consuming `Task` execution results in `finally`
`finally` tasks can be configured to consume `Results` of `PipelineTask` from the `tasks` section:
```yaml
spec:
tasks:
- name: clone-app-repo
taskRef:
name: git-clone
finally:
- name: discover-git-commit
params:
- name: commit
value: $(tasks.clone-app-repo.results.commit)
```
**Note:** The scheduling of such a `finally` task does not change; it will still be executed in parallel with other
`finally` tasks after all non-`finally` tasks are done.
The controller resolves task results before executing the `finally` task `discover-git-commit`. If the task
`clone-app-repo` failed before initializing `commit`, or was skipped by a [when expression](#guard-task-execution-using-when-expressions)
leaving the task result `commit` uninitialized, the `finally` task `discover-git-commit` will be included in the list of
`skippedTasks` and the rest of the `finally` tasks will continue executing. The pipeline exits with `completion` instead of
`success` if a `finally` task is added to the list of `skippedTasks`.
### Consuming `Pipeline` result with `finally`
`finally` tasks can emit `Results`, and these results can be referenced in the
[Pipeline Results](#emitting-results-from-a-pipeline). References to `Results` from `finally` follow the same naming conventions as references to `Results` from `tasks`: ```$(finally.<finally-pipelinetask-name>.results.<result-name>)```.
```yaml
results:
- name: comment-count-validate
value: $(finally.check-count.results.comment-count-validate)
finally:
- name: check-count
taskRef:
name: example-task-name
```
In this example, `pipelineResults` in `status` will show the name-value pair for the result `comment-count-validate`, which is produced by the `check-count` `finally` task (which references the `Task` `example-task-name`).
### `PipelineRun` Status with `finally`
With `finally`, the `PipelineRun` status is calculated based on the `PipelineTasks` under the `tasks` section and the `finally` tasks.
Without `finally`:
| `PipelineTasks` under `tasks` | `PipelineRun` status | Reason |
|---------------------------------------------------------------------------------------------------------|----------------------|-------------|
| all `PipelineTasks` successful | `true` | `Succeeded` |
| one or more `PipelineTasks` [skipped](#guard-task-execution-using-when-expressions) and rest successful | `true` | `Completed` |
| single failure of `PipelineTask`                                                                          | `false`              | `Failed`    |
With `finally`:
| `PipelineTasks` under `tasks` | `finally` tasks | `PipelineRun` status | Reason |
|--------------------------------------------------------------------------------------------------------|----------------------------------------|----------------------|-------------|
| all `PipelineTask` successful | all `finally` tasks successful | `true` | `Succeeded` |
| all `PipelineTask` successful | one or more failure of `finally` tasks | `false` | `Failed` |
| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | all `finally` tasks successful | `true` | `Completed` |
| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | one or more failure of `finally` tasks | `false` | `Failed` |
| single failure of `PipelineTask`                                                                         | all `finally` tasks successful          | `false`              | `Failed`    |
| single failure of `PipelineTask`                                                                         | one or more failure of `finally` tasks  | `false`              | `Failed`    |
Overall, `PipelineRun` state transitions for the respective scenarios are as follows:
* All `PipelineTask` and `finally` tasks are successful: `Started` -> `Running` -> `Succeeded`
* At least one `PipelineTask` skipped and rest successful: `Started` -> `Running` -> `Completed`
* One `PipelineTask` failed / one or more `finally` tasks failed: `Started` -> `Running` -> `Failed`
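For instance, a `PipelineRun` in which one `PipelineTask` was skipped and all other `tasks` and `finally` tasks succeeded would end with a terminal condition roughly like the sketch below (the message text is illustrative):
```yaml
status:
  conditions:
    - type: Succeeded
      status: "True"
      reason: Completed
      message: "Tasks Completed: 2 (Failed: 0, Cancelled 0), Skipped: 1" # illustrative
```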
Please refer to the [table](pipelineruns.md#monitoring-execution-status) under Monitoring Execution Status to learn about
what kind of events are triggered based on the `PipelineRun` status.
### Using Execution `Status` of `pipelineTask`
A `pipeline` can check the status of a specific `pipelineTask` from the `tasks` section in `finally` through the task
parameters:
```yaml
finally:
- name: finaltask
params:
- name: task1Status
value: "$(tasks.task1.status)"
taskSpec:
params:
- name: task1Status
steps:
- image: ubuntu
name: print-task-status
script: |
if [ $(params.task1Status) == "Failed" ]
then
echo "Task1 has failed, continue processing the failure"
fi
```
This kind of variable can have any one of the values from the following table:
| Status | Description |
|-------------|--------------------------------------------------------------------------------------------------|
| `Succeeded` | `taskRun` for the `pipelineTask` completed successfully |
| `Failed` | `taskRun` for the `pipelineTask` completed with a failure or cancelled by the user |
| `None` | the `pipelineTask` has been skipped or no execution information available for the `pipelineTask` |
For an end-to-end example, see [`status` in a `PipelineRun`](../examples/v1/pipelineruns/pipelinerun-task-execution-status.yaml).
### Using Aggregate Execution `Status` of All `Tasks`
A `pipeline` can check the aggregate status of all the `PipelineTasks` under the `tasks` section from `finally` through the task parameters:
```yaml
finally:
- name: finaltask
params:
- name: aggregateTasksStatus
value: "$(tasks.status)"
taskSpec:
params:
- name: aggregateTasksStatus
steps:
- image: ubuntu
name: check-task-status
script: |
if [ $(params.aggregateTasksStatus) == "Failed" ]
then
echo "Looks like one or more tasks returned failure, continue processing the failure"
fi
```
This kind of variable can have any one of the values from the following table:
| Status | Description |
|-------------|-----------------------------------------------------------------------------------------------------------------------------------|
| `Succeeded` | all `tasks` have succeeded |
| `Failed`    | one or more `tasks` failed                                                                                                          |
| `Completed` | all `tasks` completed successfully including one or more skipped tasks |
| `None` | no aggregate execution status available (i.e. none of the above), one or more `tasks` could be pending/running/cancelled/timedout |
For an end-to-end example, see [`$(tasks.status)` usage in a `Pipeline`](../examples/v1/pipelineruns/pipelinerun-task-execution-status.yaml).
### Guard `finally` `Task` execution using `when` expressions
Similar to `Tasks`, `finally` `Tasks` can be guarded using [`when` expressions](#guard-task-execution-using-when-expressions)
that operate on static inputs or variables. Like in `Tasks`, `when` expressions in `finally` `Tasks` can operate on
`Parameters` and `Results`. Unlike in `Tasks`, `when` expressions in `finally` `tasks` can also operate on the [`Execution
Status`](#using-execution-status-of-pipelinetask) of `Tasks`.
#### `when` expressions using `Parameters` in `finally` `Tasks`
`when` expressions in `finally` `Tasks` can utilize `Parameters` as demonstrated using [`golang-build`](https://github.com/tektoncd/catalog/tree/main/task/golang-build/0.1)
and [`send-to-channel-slack`](https://github.com/tektoncd/catalog/tree/main/task/send-to-channel-slack/0.1) Catalog
`Tasks`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pipelinerun-
spec:
pipelineSpec:
params:
- name: enable-notifications
type: string
description: a boolean indicating whether the notifications should be sent
tasks:
- name: golang-build
taskRef:
name: golang-build
# […]
finally:
- name: notify-build-failure # executed only when build task fails and notifications are enabled
when:
- input: $(tasks.golang-build.status)
operator: in
values: ["Failed"]
- input: $(params.enable-notifications)
operator: in
values: ["true"]
taskRef:
name: send-to-slack-channel
# […]
params:
- name: enable-notifications
value: true
```
#### `when` expressions using `Results` in `finally` `Tasks`
`when` expressions in `finally` `tasks` can utilize `Results`, as demonstrated using [`git-clone`](https://github.com/tektoncd/catalog/tree/main/task/git-clone/0.2)
and [`github-add-comment`](https://github.com/tektoncd/catalog/tree/main/task/github-add-comment/0.2) Catalog `Tasks`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pipelinerun-
spec:
pipelineSpec:
tasks:
- name: git-clone
taskRef:
name: git-clone
- name: go-build
# […]
finally:
- name: notify-commit-sha # executed only when commit sha is not the expected sha
when:
- input: $(tasks.git-clone.results.commit)
operator: notin
values: [$(params.expected-sha)]
taskRef:
name: github-add-comment
# […]
params:
- name: expected-sha
value: 54dd3984affab47f3018852e61a1a6f9946ecfa
```
If the `when` expressions in a `finally` `task` use `Results` from a skipped or failed non-finally `Task`, then the
`finally` `task` would also be skipped and be included in the list of `Skipped Tasks` in the `Status`, [similarly to when using
`Results` in other parts of the `finally` `task`](#consuming-task-execution-results-in-finally).
#### `when` expressions using `Execution Status` of `PipelineTask` in `finally` `tasks`
`when` expressions in `finally` `tasks` can utilize [`Execution Status` of `PipelineTasks`](#using-execution-status-of-pipelinetask),
as demonstrated using [`golang-build`](https://github.com/tektoncd/catalog/tree/main/task/golang-build/0.1) and
[`send-to-channel-slack`](https://github.com/tektoncd/catalog/tree/main/task/send-to-channel-slack/0.1) Catalog `Tasks`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: pipelinerun-
spec:
pipelineSpec:
tasks:
- name: golang-build
taskRef:
name: golang-build
# […]
finally:
- name: notify-build-failure # executed only when build task fails
when:
- input: $(tasks.golang-build.status)
operator: in
values: ["Failed"]
taskRef:
name: send-to-slack-channel
# […]
```
For an end-to-end example, see [PipelineRun with `when` expressions](../examples/v1/pipelineruns/pipelinerun-with-when-expressions.yaml).
#### `when` expressions using `Aggregate Execution Status` of `Tasks` in `finally` `tasks`
`when` expressions in `finally` `tasks` can utilize
[`Aggregate Execution Status` of `Tasks`](#using-aggregate-execution-status-of-all-tasks) as demonstrated:
```yaml
finally:
- name: notify-any-failure # executed only when one or more tasks fail
when:
- input: $(tasks.status)
operator: in
values: ["Failed"]
taskRef:
name: notify-failure
```
For an end-to-end example, see [PipelineRun with `when` expressions](../examples/v1/pipelineruns/pipelinerun-with-when-expressions.yaml).
### Known Limitations
#### Cannot configure the `finally` task execution order
It's not possible to configure or modify the execution order of the `finally` tasks. Unlike `Tasks` in a `Pipeline`,
all `finally` tasks run simultaneously and start executing once all `PipelineTasks` under `tasks` have completed
(successfully or not), which means no `runAfter` can be specified in `finally` tasks.
## Using Custom Tasks
Custom Tasks have been promoted from `v1alpha1` to `v1beta1`. From `v0.43.0` to `v0.46.0`, the Pipeline controller can create either `v1alpha1` or `v1beta1` Custom Tasks, gated by the `custom-task-version` feature flag, which defaults to `v1beta1`. You can set `custom-task-version` to `v1alpha1` or `v1beta1` to control which version is created.
Starting from `v0.47.0`, the `custom-task-version` feature flag is removed and only `v1beta1` Custom Tasks are supported. See the [migration doc](migrating-v1alpha1.Run-to-v1beta1.CustomRun.md) for details.
[Custom Tasks](https://github.com/tektoncd/community/blob/main/teps/0002-custom-tasks.md)
can implement behavior that doesn't correspond directly to running a workload in a `Pod` on the cluster.
For example, a custom task might execute some operation outside of the cluster and wait for its execution to complete.
A `PipelineRun` starts a custom task by creating a [`CustomRun`](https://github.com/tektoncd/pipeline/blob/main/docs/customruns.md) instead of a `TaskRun`.
In order for a custom task to execute, there must be a custom task controller running on the cluster
that is responsible for watching and updating `CustomRun`s which reference their type.
### Specifying the target Custom Task
To specify the custom task type you want to execute, the `taskRef` field
must include the custom task's `apiVersion` and `kind` as shown below.
Using `apiVersion` will always create a `CustomRun`. If `apiVersion` is set, `kind` is required as well.
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
```
This creates a `Run/CustomRun` of a custom task of type `Example` in the `example.dev` API group with the version `v1alpha1`.
A validation error will be returned if `apiVersion` or `kind` is missing.
You can also specify the `name` of a custom task resource object previously defined in the cluster.
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
```
If the `taskRef` specifies a name, the custom task controller should look up the
`Example` resource with that name and use that object to configure the execution.
If the `taskRef` does not specify a name, the custom task controller might support
some default behavior for executing unnamed tasks.
### Specifying a Custom Task Spec in-line (or embedded)
**For `v1alpha1.Run`**
```yaml
spec:
tasks:
- name: run-custom-task
taskSpec:
apiVersion: example.dev/v1alpha1
kind: Example
spec:
field1: value1
field2: value2
```
**For `v1beta1.CustomRun`**
```yaml
spec:
tasks:
- name: run-custom-task
taskSpec:
apiVersion: example.dev/v1alpha1
kind: Example
customSpec:
field1: value1
field2: value2
```
If the custom task controller supports the in-line or embedded task spec, this will create a `Run/CustomRun` of a custom task of
type `Example` in the `example.dev` API group with the version `v1alpha1`.
If the `taskSpec` is not supported, the custom task controller should produce proper validation errors.
Please take a look at the
developer guide for custom controllers supporting `taskSpec`:
- [guidance for `Run`](runs.md#developer-guide-for-custom-controllers-supporting-spec)
- [guidance for `CustomRun`](customruns.md#developer-guide-for-custom-controllers-supporting-customspec)
`taskSpec` support for `pipelineRun` was designed and discussed in
[TEP-0061](https://github.com/tektoncd/community/blob/main/teps/0061-allow-custom-task-to-be-embedded-in-pipeline.md)
### Specifying parameters
If a custom task supports [`parameters`](tasks.md#specifying-parameters), you can use the
`params` field to specify their values:
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
params:
- name: foo
value: bah
```
## Context Variables
The `Parameters` in the `Params` field will accept
[context variables](variables.md) that will be substituted, including:
* `PipelineRun` name, namespace and uid
* `Pipeline` name
* `PipelineTask` retries
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
params:
- name: foo
value: $(context.pipeline.name)
```
### Specifying matrix
> :seedling: **`Matrix` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-api-fields` feature flag must be set to `"alpha"` to specify `Matrix` in a `PipelineTask`.
If a custom task supports [`parameters`](tasks.md#specifying-parameters), you can use the
`matrix` field to specify their values if you want to fan out the custom task:
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
params:
- name: foo
value: bah
matrix:
params:
- name: bar
value:
- qux
- thud
include:
- name: build-1
params:
- name: common-package
value: path-to-common-pkg
```
For further information, read [`Matrix`](./matrix.md).
### Specifying workspaces
If the custom task supports it, you can provide [`Workspaces`](workspaces.md#using-workspaces-in-tasks) to share data with the custom task.
```yaml
spec:
tasks:
- name: run-custom-task
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
workspaces:
- name: my-workspace
```
Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.
### Using `Results`
If the custom task produces results, you can reference them in a Pipeline using the normal syntax,
`$(tasks.<task-name>.results.<result-name>)`.
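For example, a sketch that assumes the `Example` custom task emits a result named `output` and a hypothetical `print-message` `Task` consumes it:
```yaml
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
    - name: consume-custom-result
      params:
        - name: message
          # Assumes the Example custom task emits a result named "output".
          value: "$(tasks.run-custom-task.results.output)"
      taskRef:
        name: print-message
```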
### Specifying `Timeout`
#### `v1alpha1.Run`
If the custom task supports it as [we recommended](runs.md#developer-guide-for-custom-controllers-supporting-timeout), you can provide `timeout` to specify the maximum running time of a `CustomRun` (including all retry attempts or other operations).
#### `v1beta1.CustomRun`
If the custom task supports it as [we recommended](customruns.md#developer-guide-for-custom-controllers-supporting-timeout), you can provide `timeout` to specify the maximum running time of one `CustomRun` execution.
```yaml
spec:
tasks:
- name: run-custom-task
timeout: 2s
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
```
Consult the documentation of the custom task that you are using to determine whether it supports `Timeout`.
### Specifying `Retries`
If the custom task supports it, you can provide `retries` to specify how many times you want to retry the custom task.
```yaml
spec:
tasks:
- name: run-custom-task
retries: 2
taskRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: myexample
```
Consult the documentation of the custom task that you are using to determine whether it supports `Retries`.
### Known Custom Tasks
We try to list as many known Custom Tasks as possible here so that users can easily find what they want. Please feel free to add the Custom Task you implemented to this table.
#### v1beta1.CustomRun
| Custom Task | Description |
|:---------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|
| [Wait Task Beta][wait-task-beta] | Waits a given amount of time before succeeding, specified by an input parameter named `duration`. Supports `timeout` and `retries`. |
| [Approvals][approvals-beta]      | Pauses the execution of `PipelineRuns` and waits for manual approvals. Version 0.6.0 and up.                                      |
#### v1alpha1.Run
| Custom Task | Description |
|:-------------------------------------------------|:-----------------------------------------------------------------------------------------------------------|
| [Pipeline Loops][pipeline-loops] | Runs a `Pipeline` in a loop with varying `Parameter` values. |
| [Common Expression Language][cel] | Provides Common Expression Language support in Tekton Pipelines. |
| [Wait][wait] | Waits a given amount of time, specified by a `Parameter` named "duration", before succeeding. |
| [Approvals][approvals-alpha] | Pauses the execution of `PipelineRuns` and waits for manual approvals. Version up to (and including) 0.5.0 |
| [Pipelines in Pipelines][pipelines-in-pipelines] | Defines and executes a `Pipeline` in a `Pipeline`. |
| [Task Group][task-group] | Groups `Tasks` together as a `Task`. |
| [Pipeline in a Pod][pipeline-in-pod] | Runs `Pipeline` in a `Pod`. |
[pipeline-loops]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/pipeline-loops
[task-loops]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/task-loops
[cel]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/cel
[wait]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/wait-task
[approvals-alpha]: https://github.com/automatiko-io/automatiko-approval-task/tree/v0.5.0
[approvals-beta]: https://github.com/automatiko-io/automatiko-approval-task/tree/v0.6.1
[task-group]: https://github.com/openshift-pipelines/tekton-task-group/tree/39823f26be8f59504f242a45b9f2e791d4b36e1c
[pipelines-in-pipelines]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/pipelines-in-pipelines
[pipeline-in-pod]: https://github.com/tektoncd/experimental/tree/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c/pipeline-in-pod
[wait-task-beta]: https://github.com/tektoncd/pipeline/tree/a127323da31bcb933a04a6a1b5dbb6e0411e3dc1/test/custom-task-ctrls/wait-task-beta
## Code examples
For a better understanding of `Pipelines`, study [our code examples](https://github.com/tektoncd/pipeline/tree/main/examples).
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). | tekton | linkTitle Pipelines weight 203 Pipelines Pipelines pipelines Overview overview Configuring a Pipeline configuring a pipeline Specifying Workspaces specifying workspaces Specifying Parameters specifying parameters Adding Tasks to the Pipeline adding tasks to the pipeline Specifying Display Name specifying displayname in pipelinetasks Specifying Remote Tasks specifying remote tasks Specifying Pipelines in PipelineTasks specifying pipelines in pipelinetasks Specifying Parameters in PipelineTasks specifying parameters in pipelinetasks Specifying Matrix in PipelineTasks specifying matrix in pipelinetasks Specifying Workspaces in PipelineTasks specifying workspaces in pipelinetasks Tekton Bundles tekton bundles Using the runAfter field using the runafter field Using the retries field using the retries field Using the onError field using the onerror field Produce results with OnError produce results with onerror Guard Task execution using when expressions guard task execution using when expressions Guarding a Task and its dependent Tasks guarding a task and its dependent tasks Cascade when expressions to the specific dependent Tasks cascade when expressions to the specific dependent tasks Compose using Pipelines in Pipelines compose using pipelines in pipelines Guarding a Task only guarding a task only Configuring the failure timeout configuring the failure timeout Using variable substitution using variable substitution Using the retries and retry count variable substitutions using the retries and retry count variable substitutions Using Results using results Passing one Task s Results into the Parameters or when expressions of another passing one tasks results into the parameters or when expressions of another Emitting Results from a Pipeline emitting results from a pipeline Configuring the Task execution order configuring the task execution order Adding a description adding a description Adding Finally to the Pipeline adding finally to the pipeline Specifying Display Name specifying displayname in finally tasks Specifying Workspaces in finally tasks specifying workspaces in finally tasks Specifying Parameters in finally tasks specifying parameters in finally tasks Specifying matrix in finally tasks specifying matrix in finally tasks Consuming Task execution results in finally consuming task execution results in finally Consuming Pipeline result with finally consuming pipeline result with finally PipelineRun Status with finally pipelinerun status with finally Using Execution Status of pipelineTask using execution status of pipelinetask Using Aggregate Execution Status of All Tasks using aggregate execution status of all tasks Guard finally Task execution using when expressions guard finally task execution using when expressions when expressions using Parameters in finally Tasks when expressions using parameters in finally tasks when expressions using Results in finally Tasks when expressions using results in finally tasks when expressions using Execution Status of PipelineTask in finally tasks when expressions using execution status of pipelinetask in finally tasks when expressions using Aggregate Execution Status of Tasks in finally tasks when expressions using aggregate execution status of tasks in finally tasks Known Limitations known limitations Cannot configure the finally task execution order cannot configure the finally task execution order Using Custom Tasks using custom tasks Specifying the target Custom Task 
specifying the target custom task Specifying a Custom Task Spec in line or embedded specifying a custom task spec in line or embedded Specifying parameters specifying parameters 1 Specifying matrix specifying matrix Specifying workspaces specifying workspaces 1 Using Results using results 1 Specifying Timeout specifying timeout Specifying Retries specifying retries Known Custom Tasks known custom tasks Code examples code examples Overview A Pipeline is a collection of Tasks that you define and arrange in a specific order of execution as part of your continuous integration flow Each Task in a Pipeline executes as a Pod on your Kubernetes cluster You can configure various execution conditions to fit your business needs Configuring a Pipeline A Pipeline definition supports the following fields Required apiVersion kubernetes overview Specifies the API version for example tekton dev v1beta1 kind kubernetes overview Identifies this resource object as a Pipeline object metadata kubernetes overview Specifies metadata that uniquely identifies the Pipeline object For example a name spec kubernetes overview Specifies the configuration information for this Pipeline object This must include tasks adding tasks to the pipeline Specifies the Tasks that comprise the Pipeline and the details of their execution Optional params specifying parameters Specifies the Parameters that the Pipeline requires workspaces specifying workspaces Specifies a set of Workspaces that the Pipeline requires tasks adding tasks to the pipeline name adding tasks to the pipeline the name of this Task within the context of this Pipeline displayName specifying displayname in pipelinetasks a user facing name of this Task within the context of this Pipeline description adding tasks to the pipeline a description of this Task within the context of this Pipeline taskRef adding tasks to the pipeline a reference to a Task definition taskSpec adding tasks to the pipeline a specification of a Task runAfter using the runafter field Indicates that a Task should execute after one or more other Tasks without output linking retries using the retries field Specifies the number of times to retry the execution of a Task after a failure Does not apply to execution cancellations when guard finally task execution using when expressions Specifies when expressions that guard the execution of a Task allow execution only when all when expressions evaluate to true timeout configuring the failure timeout Specifies the timeout before a Task fails params specifying parameters in pipelinetasks Specifies the Parameters that a Task requires workspaces specifying workspaces in pipelinetasks Specifies the Workspaces that a Task requires matrix specifying matrix in pipelinetasks Specifies the Parameters used to fan out a Task into multiple TaskRuns or Runs results emitting results from a pipeline Specifies the location to which the Pipeline emits its execution results displayName specifying a display name is a user facing name of the pipeline that may be used to populate a UI description adding a description Holds an informative description of the Pipeline object finally adding finally to the pipeline Specifies one or more Tasks to be executed in parallel after all other tasks have completed name adding finally to the pipeline the name of this Task within the context of this Pipeline displayName specifying displayname in finally tasks a user facing name of this Task within the context of this Pipeline description adding finally to the pipeline a description of this 
Task within the context of this Pipeline taskRef adding finally to the pipeline a reference to a Task definition taskSpec adding finally to the pipeline a specification of a Task retries using the retries field Specifies the number of times to retry the execution of a Task after a failure Does not apply to execution cancellations when guard finally task execution using when expressions Specifies when expressions that guard the execution of a Task allow execution only when all when expressions evaluate to true timeout configuring the failure timeout Specifies the timeout before a Task fails params specifying parameters in finally tasks Specifies the Parameters that a Task requires workspaces specifying workspaces in finally tasks Specifies the Workspaces that a Task requires matrix specifying matrix in finally tasks Specifies the Parameters used to fan out a Task into multiple TaskRuns or Runs kubernetes overview https kubernetes io docs concepts overview working with objects kubernetes objects required fields Specifying Workspaces Workspaces allow you to specify one or more volumes that each Task in the Pipeline requires during execution You specify one or more Workspaces in the workspaces field For example yaml spec workspaces name pipeline ws1 The name of the workspace in the Pipeline tasks name use ws from pipeline taskRef name gen code gen code expects a workspace with name output workspaces name output workspace pipeline ws1 name use ws again taskRef name commit commit expects a workspace with name src runAfter use ws from pipeline important use ws from pipeline writes to the workspace first workspaces name src workspace pipeline ws1 For simplicity you can also map the name of the Workspace in PipelineTask to match with the Workspace from the Pipeline For example yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind Pipeline metadata name pipeline spec workspaces name source tasks name gen code taskRef name gen code gen code expects a Workspace named source workspaces name source mapping workspace name name commit taskRef name commit commit expects a Workspace named source workspaces name source mapping workspace name runAfter gen code For more information see Using Workspaces in Pipelines workspaces md using workspaces in pipelines The Workspaces in a PipelineRun examples v1 pipelineruns workspaces yaml code example The variables available in a PipelineRun variables md variables available in a pipeline including workspaces name bound Mapping Workspaces https github com tektoncd community blob main teps 0108 mapping workspaces md Specifying Parameters See also Specifying Parameters in Tasks tasks md specifying parameters You can specify global parameters such as compilation flags or artifact names that you want to supply to the Pipeline at execution time Parameters are passed to the Pipeline from its corresponding PipelineRun and can replace template values specified within each Task in the Pipeline Parameter names Must only contain alphanumeric characters hyphens and underscores Must begin with a letter or an underscore For example fooIs Bar is a valid parameter name but barIsBa or 0banana are not Each declared parameter has a type field which can be set to either array or string array is useful in cases where the number of compilation flags being supplied to the Pipeline varies throughout its execution If no value is specified the type field defaults to string When the actual parameter value is supplied its parsed type is validated against the type field The description and 
default fields for a Parameter are optional The following example illustrates the use of Parameters in a Pipeline The following Pipeline declares two input parameters context which passes its value a string to the Task to set the value of the pathToContext parameter within the Task flags which passes its value an array to the Task to set the value of the flags parameter within the Task The flags parameter within the Task must also be an array If you specify a value for the default field and invoke this Pipeline in a PipelineRun without specifying a value for context that value will be used Note Input parameter values can be used as variables throughout the Pipeline by using variable substitution variables md variables available in a pipeline yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind Pipeline metadata name pipeline with parameters spec params name context type string description Path to context default some where or other name flags type array description List of flags tasks name build skaffold web taskRef name build push params name pathToDockerFile value Dockerfile name pathToContext value params context name flags value params flags The following PipelineRun supplies a value for context yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind PipelineRun metadata name pipelinerun with parameters spec pipelineRef name pipeline with parameters params name context value workspace examples microservices leeroy web name flags value foo bar Param enum seedling enum is an alpha additional configs md alpha features feature The enable param enum feature flag must be set to true to enable this feature Parameter declarations can include enum which is a predefine set of valid values that can be accepted by the Pipeline Param If a Param has both enum and default value the default value must be in the enum set For example the valid allowed values for Param message is bounded to v1 and v2 yaml apiVersion tekton dev v1 kind Pipeline metadata name pipeline param enum spec params name message enum v1 v2 default v1 tasks name task1 params name message value params message steps name build image bash 3 2 script echo params message If the Param value passed in by PipelineRun is NOT in the predefined enum list the PipelineRun will fail with reason InvalidParamValue If a PipelineTask references a Task with enum the enums specified in the Pipeline spec params pipeline level enum must be a subset of the enums specified in the referenced Task task level enum An empty pipeline level enum is invalid in this scenario since an empty enum set indicates a universal set which allows all possible values The same rules apply to Pipelines with embbeded Tasks In the below example the referenced Task accepts v1 and v2 as valid values the Pipeline further restricts the valid value to v1 yaml apiVersion tekton dev v1 kind Task metadata name param enum demo spec params name message type string enum v1 v2 steps name build image bash latest script echo params message yaml apiVersion tekton dev v1 kind Pipeline metadata name pipeline param enum spec params name message enum v1 note that an empty enum set is invalid tasks name task1 params name message value params message taskRef name param enum demo Note that this subset restriction only applies to the task level params with a direct single reference to pipeline level params If a task level param references multiple pipeline level params the subset validation is not applied yaml apiVersion tekton dev v1 kind Pipeline spec params name message1 enum v1 name message2 
enum v2 tasks name task1 params name message value params message1 and params message2 taskSpec params message enum the message enum is not required to be a subset of message1 or message2 Tekton validates user provided values in a PipelineRun against the enum specified in the PipelineSpec params Tekton also validates any resolved param value against the enum specified in each PipelineTask before creating the TaskRun See usage in this example examples v1 pipelineruns alpha param enum yaml Propagated Params Like with embedded pipelineruns pipelineruns md propagated parameters you can propagate params declared in the pipeline down to the inlined pipelineTasks and its inlined Steps Wherever a resource e g a pipelineTask or a StepAction is referenced the parameters need to be passed explicitly For example the following is a valid yaml yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind Pipeline metadata name pipelien propagated params spec params name HELLO default Hello World name BYE default Bye World tasks name echo hello taskSpec steps name echo image ubuntu script usr bin env bash echo params HELLO name echo bye taskSpec steps name echo action ref name step action echo params name msg value params BYE The same rules defined in pipelineruns pipelineruns md propagated parameters apply here Adding Tasks to the Pipeline Your Pipeline definition must reference at least one Task tasks md Each Task within a Pipeline must have a valid https kubernetes io docs concepts overview working with objects names names name and a taskRef or a taskSpec For example yaml tasks name build the image taskRef name build push Note Using both apiVersion and kind will create CustomRun customruns md don t set apiVersion if only referring to Task tasks md or yaml tasks name say hello taskSpec steps image ubuntu script echo hello there Note that any task specified in taskSpec will be the same version as the Pipeline Specifying displayName in PipelineTasks The displayName field is an optional field that allows you to add a user facing name of the PipelineTask that can be used to populate and distinguish in the dashboard For example yaml spec tasks name scan displayName Code Scan taskRef name sonar scan The displayName also allows you to parameterize the human readable name of your choice based on the params specifying parameters the task results passing one tasks results into the parameters or when expressions of another and the context variables context variables For example yaml spec params name application tasks name scan displayName Code Scan for params application taskRef name sonar scan name upload scan report displayName Upload Scan Report tasks scan results report taskRef name upload Specifying task results in the displayName does not introduce an inherent resource dependency among tasks The pipeline author is responsible for specifying dependency explicitly either using runAfter using the runafter field or rely on whenExpressions guard task execution using when expressions or task results in params using results Fully resolved displayName is also available in the status as part of the pipelineRun status childReferences The clients such as the dashboard CLI etc can retrieve the displayName from the childReferences The displayName mainly drives a better user experience and at the same time it is not validated for the content or length by the controller Specifying Remote Tasks beta feature https github com tektoncd pipeline blob main docs install md beta features A taskRef field may specify a Task in a remote 
---
linkTitle: "Trusted Resources"
weight: 312
---
-->
# Trusted Resources
- [Overview](#overview)
- [Instructions](#instructions)
- [Sign Resources](#sign-resources)
- [Enable Trusted Resources](#enable-trusted-resources)
## Overview
Trusted Resources is a feature that can be used to sign Tekton Resources and verify them. Details of the design can be found at [TEP-0091](https://github.com/tektoncd/community/blob/main/teps/0091-trusted-resources.md). This is an alpha feature and supports the `v1beta1` and `v1` versions of `Task` and `Pipeline`.
**Note**: Currently, trusted resources only support verifying Tekton resources that come from remote sources, i.e. git, an OCI registry, or Artifact Hub. To use the [cluster resolver](./cluster-resolver.md) for in-cluster resources, make sure to set all default values on the resources before applying them to the cluster, because the mutating webhook will otherwise populate the default fields and cause verification to fail.
A verification failure marks the corresponding taskrun/pipelinerun as `Failed` and stops the execution.
## Instructions
### Sign Resources
We have added `sign` and `verify` into the [Tekton CLI](https://github.com/tektoncd/cli) as subcommands in release [v0.28.0 and later](https://github.com/tektoncd/cli/releases/tag/v0.28.0). Please refer to the [cli docs](https://github.com/tektoncd/cli/blob/main/docs/cmd/tkn_task_sign.md) to sign and verify Tekton resources.
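Before signing, you need a key pair. A minimal sketch using cosign, assuming cosign is installed (the namespace and secret name below are placeholders, not from this doc); the generated private key is what you pass to `tkn task sign`, and the public key is what goes into the verification configuration described later:
```bash
# Generate cosign.key / cosign.pub in the current directory
cosign generate-key-pair

# Alternatively, store the generated key pair directly in a Kubernetes secret
cosign generate-key-pair k8s://tekton-trusted-resources/signing-secrets
```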
A signed task example:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
annotations:
tekton.dev/signature: MEYCIQDM8WHQAn/yKJ6psTsa0BMjbI9IdguR+Zi6sPTVynxv6wIhAMy8JSETHP7A2Ncw7MyA7qp9eLsu/1cCKOjRL1mFXIKV
creationTimestamp: null
name: example-task
namespace: tekton-trusted-resources
spec:
steps:
- image: ubuntu
name: echo
```
### Enable Trusted Resources
#### Enable feature flag
Update the config map:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feature-flags
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
trusted-resources-verification-no-match-policy: "fail"
```
`trusted-resources-verification-no-match-policy` configurations:
* `ignore`: if no matching policies are found, skip the verification, don't log, and don't fail the taskrun/pipelinerun
* `warn`: if no matching policies are found, skip the verification, log a warning, and don't fail the taskrun/pipelinerun
* `fail`: if no matching policies are found, fail the taskrun/pipelinerun.
**Notes:**
* To skip the verification: make sure no policies exist and `trusted-resources-verification-no-match-policy` is set to `warn` or `ignore`.
* To enable the verification: install [VerificationPolicy](#config-key-at-verificationpolicy) to match the resources.
Or patch the new values:
```bash
kubectl patch configmap feature-flags -n tekton-pipelines -p='{"data":{"trusted-resources-verification-no-match-policy":"fail"}}'
```
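Either way, you can read the flag back to confirm it took effect (a sketch; the jsonpath expression prints just that one key):
```bash
kubectl get configmap feature-flags -n tekton-pipelines \
  -o jsonpath='{.data.trusted-resources-verification-no-match-policy}'
```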
#### TaskRun and PipelineRun status update
<!-- wokeignore:rule=master -->
Trusted resources will update the taskrun's [condition](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) to indicate if it passes verification or not.
The following tables illustrate how the conditions are affected by the feature flag and the verification result. A cell that contains neither `True` nor `False` means the corresponding condition is not updated in that case.
**No Matching Policies:**
| | `Conditions.TrustedResourcesVerified` | `Conditions.Succeeded` |
|-----------------------------|---------------------------------------|------------------------|
| `no-match-policy`: "ignore" | | |
| `no-match-policy`: "warn" | False | |
| `no-match-policy`: "fail" | False | False |
**Matching Policies (no matter what the `trusted-resources-verification-no-match-policy` value is):**
| | `Conditions.TrustedResourcesVerified` | `Conditions.Succeeded` |
|--------------------------|---------------------------------------|------------------------|
| all policies pass | True | |
| any enforce policy fails | False | False |
| only warn policies fail | False | |
A successful sample `TrustedResourcesVerified` condition is:
```yaml
status:
conditions:
- lastTransitionTime: "2023-03-01T18:17:05Z"
message: Trusted resource verification passed
status: "True"
type: TrustedResourcesVerified
```
Failed sample `TrustedResourcesVerified` and `Succeeded` conditions are:
```yaml
status:
conditions:
- lastTransitionTime: "2023-03-01T18:17:05Z"
message: resource verification failed # This will be filled with detailed error message.
status: "False"
type: TrustedResourcesVerified
- lastTransitionTime: "2023-03-01T18:17:10Z"
message: resource verification failed
status: "False"
type: Succeeded
```
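To check the verification outcome of a particular run you can read this condition directly; a sketch assuming a TaskRun named `my-taskrun` (a hypothetical name, not from this doc):
```bash
kubectl get taskrun my-taskrun \
  -o jsonpath='{.status.conditions[?(@.type=="TrustedResourcesVerified")].status}'
```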
#### Config key at VerificationPolicy
VerificationPolicy supports SecretRef or encoded public key data.
How does VerificationPolicy work?
You can create multiple `VerificationPolicy` objects and apply them to the cluster.
1. Trusted resources will look up policies from the resource namespace (usually this is the same as taskrun/pipelinerun namespace).
2. If multiple policies are found, then for each policy Tekton checks whether the resource URL matches any of the `patterns` in the `resources` list. If it matches, that policy is used for verification.
3. If multiple policies are matched, the resource must pass all the "enforce" mode policies. If the resource only matches policies in "warn" mode and fails them, the taskrun/pipelinerun does not fail; a warning is logged instead.
4. To pass a policy, the resource only needs to be verified by one of the public keys in that policy.
Take the following `VerificationPolicies` as an example: a resource from "https://github.com/tektoncd/catalog.git" needs to pass both `verification-policy-a` and `verification-policy-b`; to pass `verification-policy-a`, the resource needs to pass either `key1` or `key2`.
Example:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
name: verification-policy-a
namespace: resource-namespace
spec:
# resources defines a list of patterns
resources:
- pattern: "https://github.com/tektoncd/catalog.git" #git resource pattern
- pattern: "gcr.io/tekton-releases/catalog/upstream/git-clone" # bundle resource pattern
- pattern: " https://artifacthub.io/" # hub resource pattern
# authorities defines a list of public keys
authorities:
- name: key1
key:
# secretRef refers to a secret in the cluster, this secret should contain public keys data
secretRef:
name: secret-name-a
namespace: secret-namespace
hashAlgorithm: sha256
- name: key2
key:
# data stores the inline public key data
data: "STRING_ENCODED_PUBLIC_KEY"
# mode can be set to "enforce" (default) or "warn".
mode: enforce
```
```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
name: verification-policy-b
namespace: resource-namespace
spec:
resources:
- pattern: "https://github.com/tektoncd/catalog.git"
authorities:
- name: key3
key:
# data stores the inline public key data
data: "STRING_ENCODED_PUBLIC_KEY"
```
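The `secretRef` in `verification-policy-a` above assumes that a secret containing the public key data already exists in the cluster. A minimal sketch of creating it from a cosign-generated public key file (the key file name is an assumption; secret name and namespace match the example above):
```bash
kubectl create secret generic secret-name-a \
  --from-file=cosign.pub=cosign.pub \
  -n secret-namespace
```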
`namespace` should be the same as the namespace of the corresponding resources.
`pattern` is used to filter remote resources by their source URL, e.g. a git resource pattern can be set to https://github.com/tektoncd/catalog.git. The `pattern` should follow regex syntax; Tekton uses the Go regexp library's [`Match`](https://pkg.go.dev/regexp#Match) to match the pattern from the VerificationPolicy against the `ConfigSource` URL resolved by remote resolution. Note that `.*` will match all resources.
To learn more about regex syntax please refer to [syntax](https://pkg.go.dev/regexp/syntax).
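For illustration, a single pattern can cover every repository under one organization; this pattern is an example and is not part of the policies above:
```yaml
resources:
  - pattern: "https://github.com/tektoncd/.*"  # any repository under the tektoncd org
```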
To learn more about `ConfigSource`, please refer to the resolver docs for more context, e.g. [gitresolver](./git-resolver.md).
`key` is used to store the public key. It can be configured with `secretRef`, `data`, or `kms`; note that only one of these three fields may be set.
* `secretRef`: refers to a secret in the cluster that stores the public key.
* `data`: contains the inline data of the public key in "PEM-encoded byte slice" format.
* `kms`: refers to the URI of the public key; it should follow the format defined in [sigstore](https://docs.sigstore.dev/cosign/kms_support) (see the sketch below).
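A minimal sketch of a `kms`-backed authority, assuming a key held in Google Cloud KMS; the project, key ring, and key names are hypothetical, and the exact URI scheme depends on your KMS provider (see the sigstore docs linked above):
```yaml
authorities:
  - name: kms-key
    key:
      # sigstore-style URI pointing at the provider-managed key
      kms: "gcpkms://projects/my-project/locations/global/keyRings/my-keyring/cryptoKeys/verification-key"
```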
`hashAlgorithm` is the hash algorithm for the public key; the default is `sha256`. It also supports `SHA224`, `SHA384` and `SHA512`.
`mode` controls whether a failing policy will fail the taskrun/pipelinerun, or only log a warning:
* enforce (default) - fail the taskrun/pipelinerun if verification fails
* warn - don't fail the taskrun/pipelinerun if verification fails but log a warning
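For example, a minimal policy running in `warn` mode only logs verification failures for matching resources instead of failing the run; the names below are placeholders and the key data uses the same inline form as the examples above:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: verification-policy-warn
  namespace: resource-namespace
spec:
  resources:
    - pattern: ".*"  # match all remote resources
  authorities:
    - name: key1
      key:
        data: "STRING_ENCODED_PUBLIC_KEY"
  mode: warn  # log a warning instead of failing the taskrun/pipelinerun
```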
#### Migrate Config key at configmap to VerificationPolicy
**Note:** key configuration in the configmap is deprecated.
The following usage of public keys in a configmap can be migrated to a VerificationPolicy:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-trusted-resources
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
publickeys: "/etc/verification-secrets/cosign.pub, /etc/verification-secrets/cosign2.pub"
```
To migrate to VerificationPolicy: store the public key files in a secret, and configure the secret ref in the VerificationPolicy:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
name: verification-policy-name
namespace: resource-namespace
spec:
authorities:
- name: key1
key:
# secretRef refers to a secret in the cluster, this secret should contain public keys data
secretRef:
name: secret-name-cosign
namespace: secret-namespace
hashAlgorithm: sha256
- name: key2
key:
secretRef:
name: secret-name-cosign2
namespace: secret-namespace
hashAlgorithm: sha256
```
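The policy above references `secret-name-cosign` and `secret-name-cosign2`. A sketch of creating those secrets from the key files that the deprecated configmap pointed at (paths taken from the configmap example; secret and namespace names match the policy above):
```bash
kubectl create secret generic secret-name-cosign \
  --from-file=cosign.pub=/etc/verification-secrets/cosign.pub \
  -n secret-namespace

kubectl create secret generic secret-name-cosign2 \
  --from-file=cosign2.pub=/etc/verification-secrets/cosign2.pub \
  -n secret-namespace
```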
successful sample TrustedResourcesVerified condition is yaml status conditions lastTransitionTime 2023 03 01T18 17 05Z message Trusted resource verification passed status True type TrustedResourcesVerified Failed sample TrustedResourcesVerified and Succeeded conditions are yaml status conditions lastTransitionTime 2023 03 01T18 17 05Z message resource verification failed This will be filled with detailed error message status False type TrustedResourcesVerified lastTransitionTime 2023 03 01T18 17 10Z message resource verification failed status False type Succeeded Config key at VerificationPolicy VerificationPolicy supports SecretRef or encoded public key data How does VerificationPolicy work You can create multiple VerificationPolicy and apply them to the cluster 1 Trusted resources will look up policies from the resource namespace usually this is the same as taskrun pipelinerun namespace 2 If multiple policies are found For each policy we will check if the resource url is matching any of the patterns in the resources list If matched then this policy will be used for verification 3 If multiple policies are matched the resource must pass all the enforce mode policies If the resource only matches policies in warn mode and fails to pass the warn policy it will not fail the taskrun or pipelinerun but log a warning instead 4 To pass one policy the resource can pass any public keys in the policy Take the following VerificationPolicies for example a resource from https github com tektoncd catalog git needs to pass both verification policy a and verification policy b to pass verification policy a the resource needs to pass either key1 or key2 Example yaml apiVersion tekton dev v1alpha1 kind VerificationPolicy metadata name verification policy a namespace resource namespace spec resources defines a list of patterns resources pattern https github com tektoncd catalog git git resource pattern pattern gcr io tekton releases catalog upstream git clone bundle resource pattern pattern https artifacthub io hub resource pattern authorities defines a list of public keys authorities name key1 key secretRef refers to a secret in the cluster this secret should contain public keys data secretRef name secret name a namespace secret namespace hashAlgorithm sha256 name key2 key data stores the inline public key data data STRING ENCODED PUBLIC KEY mode can be set to enforce default or warn mode enforce yaml apiVersion tekton dev v1alpha1 kind VerificationPolicy metadata name verification policy b namespace resource namespace spec resources pattern https github com tektoncd catalog git authorities name key3 key data stores the inline public key data data STRING ENCODED PUBLIC KEY namespace should be the same of corresponding resources namespace pattern is used to filter out remote resources by their sources URL e g git resources pattern can be set to https github com tektoncd catalog git The pattern should follow regex schema we use go regex library s Match https pkg go dev regexp Match to match the pattern from VerificationPolicy to the ConfigSource URL resolved by remote resolution Note that will match all resources To learn more about regex syntax please refer to syntax https pkg go dev regexp syntax To learn more about ConfigSource please refer to resolvers doc for more context e g gitresolver git resolver md key is used to store the public key key can be configured with secretRef data kms note that only 1 of these 3 fields can be configured secretRef refers to secret in cluster to store the public key data 
contains the inline data of the pubic key in PEM encoded byte slice format kms refers to the uri of the public key it should follow the format defined in sigstore https docs sigstore dev cosign kms support hashAlgorithm is the algorithm for the public key by default is sha256 It also supports SHA224 SHA384 SHA512 mode controls whether a failing policy will fail the taskrun pipelinerun or only log the a warning enforce default fail the taskrun pipelinerun if verification fails warn don t fail the taskrun pipelinerun if verification fails but log a warning Migrate Config key at configmap to VerificationPolicy Note key configuration in configmap is deprecated The following usage of public keys in configmap can be migrated to VerificationPolicy yaml apiVersion v1 kind ConfigMap metadata name config trusted resources namespace tekton pipelines labels app kubernetes io instance default app kubernetes io part of tekton pipelines data publickeys etc verification secrets cosign pub etc verification secrets cosign2 pub To migrate to VerificationPolicy Stores the public key files in a secret and configure the secret ref in VerificationPolicy yaml apiVersion tekton dev v1alpha1 kind VerificationPolicy metadata name verification policy name namespace resource namespace spec authorities name key1 key secretRef refers to a secret in the cluster this secret should contain public keys data secretRef name secret name cosign namespace secret namespace hashAlgorithm sha256 name key2 key secretRef name secret name cosign2 namespace secret namespace hashAlgorithm sha256 |
<!--
---
linkTitle: "Migrating from Tekton v1beta1"
weight: 4000
---
-->
# Migrating From Tekton `v1beta1` to Tekton `v1`
- [Changes to fields](#changes-to-fields)
- [Upgrading `PipelineRun.Timeout` to `PipelineRun.Timeouts`](#upgrading-pipelinerun.timeout-to-pipelinerun.timeouts)
- [Replacing Resources from Task, TaskRun, Pipeline and PipelineRun](#replacing-resources-from-task,-taskrun,-pipeline-and-pipelinerun)
- [Replacing `taskRef.bundle` and `pipelineRef.bundle` with Bundle Resolver](#replacing-taskRef.bundle-and-pipelineRef.bundle-with-bundle-resolver)
- [Replacing ClusterTask with Remote Resolution](#replacing-clustertask-with-remote-resolution)
- [Adding ServiceAccountName and PodTemplate under `TaskRunTemplate` in `PipelineRun.Spec`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec)
This document describes the differences between `v1beta1` Tekton entities and their
`v1` counterparts. It also describes the fields that have changed and the fields that are deprecated in `v1`.
## Changes to fields
In Tekton `v1`, the following fields have been changed:
| Old field | Replacement |
| --------- | ----------|
| `pipelineRun.spec.Timeout`| `pipelineRun.spec.timeouts.pipeline` |
| `pipelineRun.spec.taskRunSpecs.taskServiceAccountName` | `pipelineRun.spec.taskRunSpecs.serviceAccountName` |
| `pipelineRun.spec.taskRunSpecs.taskPodTemplate` | `pipelineRun.spec.taskRunSpecs.podTemplate` |
| `taskRun.status.taskResults` | `taskRun.status.results` |
| `pipelineRun.status.pipelineResults` | `pipelineRun.status.results` |
| `taskRun.spec.taskRef.bundle` | `taskRun.spec.taskRef.resolver` |
| `pipelineRun.spec.pipelineRef.bundle` | `pipelineRun.spec.pipelineRef.resolver` |
| `task.spec.resources` | removed from `Task` |
| `taskrun.spec.resources` | removed from `TaskRun` |
| `taskRun.status.cloudEvents` | removed from `TaskRun` |
| `taskRun.status.resourcesResult` | removed from `TaskRun` |
| `pipeline.spec.resources` | removed from `Pipeline` |
| `pipelineRun.spec.resources` | removed from `PipelineRun` |
| `pipelineRun.spec.serviceAccountName` | [`pipelineRun.spec.taskRunTemplate.serviceAccountName`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec) |
| `pipelineRun.spec.podTemplate` | [`pipelineRun.spec.taskRunTemplate.podTemplate`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec) |
| `task.spec.steps[].resources` | `task.spec.steps[].computeResources` |
| `task.spec.stepTemplate.resources` | `task.spec.stepTemplate.computeResources` |
| `task.spec.sidecars[].resources` | `task.spec.sidecars[].computeResources` |
| `taskRun.spec.sidecarOverrides`| `taskRun.spec.sidecarSpecs` |
| `taskRun.spec.stepOverrides` | `taskRun.spec.stepSpecs` |
| `taskRun.spec.sidecarSpecs[].resources` | `taskRun.spec.sidecarSpecs[].computeResources` |
| `taskRun.spec.stepSpecs[].resources` | `taskRun.spec.stepSpecs[].computeResources` |
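For example, based on the first row of this table, a `v1beta1` `PipelineRun` that sets `timeout` maps to `timeouts.pipeline` in `v1`. The sketch below uses hypothetical resource names:

```yaml
# Before in v1beta1:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: timeout-example
spec:
  pipelineRef:
    name: example-pipeline
  timeout: 1h0m0s
---
# After in v1:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: timeout-example
spec:
  pipelineRef:
    name: example-pipeline
  timeouts:
    pipeline: 1h0m0s
```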
## Replacing `resources` from Task, TaskRun, Pipeline and PipelineRun <a id='replacing-resources-from-task,-taskrun,-pipeline-and-pipelinerun'> </a>
`PipelineResources` and the `resources` fields of Task, TaskRun, Pipeline and PipelineRun have been removed. Please use `Tasks` instead. For more information, see [Replacing PipelineResources](https://github.com/tektoncd/pipeline/blob/main/docs/pipelineresources.md).
## Replacing `taskRef.bundle` and `pipelineRef.bundle` with Bundle Resolver <a id='replacing-taskRef.bundle-and-pipelineRef.bundle-with-bundle-resolver'> </a>
**Note: `taskRef.bundle` and `pipelineRef.bundle` have been removed from `v1beta1`. This section is kept for historical reference.**
Bundle resolver in remote resolution should be used instead of `taskRun.spec.taskRef.bundle` and `pipelineRun.spec.pipelineRef.bundle`.
The [`enable-bundles-resolver`](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#customizing-the-pipelines-controller-behavior) feature flag must be enabled to use this feature.
```yaml
# Before in v1beta1:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
spec:
taskRef:
name: example-task
bundle: python:3-alpine
---
# After in v1:
apiVersion: tekton.dev/v1
kind: TaskRun
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: python:3-alpine
- name: name
value: taskName
- name: kind
value: Task
```
## Replacing ClusterTask with Remote Resolution
`ClusterTask` is deprecated. Please use the `cluster` resolver instead.
The [`enable-cluster-resolver`](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#customizing-the-pipelines-controller-behavior) feature flag must be enabled to use this feature.
The `cluster` resolver allows `Pipeline`s, `PipelineRun`s, and `TaskRun`s to refer
to `Pipeline`s and `Task`s defined in other namespaces in the cluster.
```yaml
# Before in v1beta1:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: cluster-task-reference
spec:
taskRef:
name: example-task
kind: ClusterTask
---
# After in v1:
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: cluster-task-reference
spec:
taskRef:
resolver: cluster
params:
- name: kind
value: task
- name: name
value: example-task
- name: namespace
value: example-namespace
```
For more information, see [Remote resolution](https://github.com/tektoncd/community/blob/main/teps/0060-remote-resource-resolution.md).
## Adding `ServiceAccountName` and `PodTemplate` under TaskRunTemplate in PipelineRun.Spec <a id='adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec'></a>
`ServiceAccountName` and `PodTemplate` are moved to `TaskRunTemplate` as `TaskRunTemplate.ServiceAccountName` and `TaskRunTemplate.PodTemplate` so that users can specify common configuration in `TaskRunTemplate` which will apply to all the TaskRuns.
```yaml
# Before in v1beta1:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: template-pr
spec:
pipelineRef:
name: clone-test-build
serviceAccountName: build
podTemplate:
securityContext:
fsGroup: 65532
---
# After in v1:
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
name: template-pr
spec:
pipelineRef:
name: clone-test-build
taskRunTemplate:
serviceAccountName: build
podTemplate:
securityContext:
fsGroup: 65532
```
For more information, see [TEP-119](https://github.com/tektoncd/community/blob/main/teps/0119-add-taskrun-template-in-pipelinerun.md).
<!--
---
linkTitle: "Workspaces"
weight: 405
---
-->
# Workspaces
- [Overview](#overview)
- [`Workspaces` in `Tasks` and `TaskRuns`](#workspaces-in-tasks-and-taskruns)
- [`Workspaces` in `Pipelines` and `PipelineRuns`](#workspaces-in-pipelines-and-pipelineruns)
- [Optional `Workspaces`](#optional-workspaces)
- [Isolated `Workspaces`](#isolated-workspaces)
- [Configuring `Workspaces`](#configuring-workspaces)
- [Using `Workspaces` in `Tasks`](#using-workspaces-in-tasks)
- [Isolating `Workspaces` to Specific `Steps` or `Sidecars`](#isolating-workspaces-to-specific-steps-or-sidecars)
- [Setting a default `TaskRun` `Workspace Binding`](#setting-a-default-taskrun-workspace-binding)
- [Using `Workspace` variables in `Tasks`](#using-workspace-variables-in-tasks)
- [Mapping `Workspaces` in `Tasks` to `TaskRuns`](#mapping-workspaces-in-tasks-to-taskruns)
- [Examples of `TaskRun` definition using `Workspaces`](#examples-of-taskrun-definition-using-workspaces)
- [Using `Workspaces` in `Pipelines`](#using-workspaces-in-pipelines)
- [Specifying `Workspace` order in a `Pipeline` and Affinity Assistants](#specifying-workspace-order-in-a-pipeline-and-affinity-assistants)
- [Specifying `Workspaces` in `PipelineRuns`](#specifying-workspaces-in-pipelineruns)
- [Example `PipelineRun` definition using `Workspaces`](#example-pipelinerun-definition-using-workspaces)
- [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces)
- [Using `PersistentVolumeClaims` as `VolumeSource`](#using-persistentvolumeclaims-as-volumesource)
- [Using other types of `VolumeSources`](#using-other-types-of-volumesources)
- [Using Persistent Volumes within a `PipelineRun`](#using-persistent-volumes-within-a-pipelinerun)
- [More examples](#more-examples)
## Overview
`Workspaces` allow `Tasks` to declare parts of the filesystem that need to be provided
at runtime by `TaskRuns`. A `TaskRun` can make these parts of the filesystem available
in many ways: using a read-only `ConfigMap` or `Secret`, an existing `PersistentVolumeClaim`
shared with other Tasks, a `PersistentVolumeClaim` created from a provided `volumeClaimTemplate`,
or simply an `emptyDir` that is discarded when the `TaskRun` completes.
`Workspaces` are similar to `Volumes` except that they allow a `Task` author
to defer to users and their `TaskRuns` when deciding which class of storage to use.
`Workspaces` can serve the following purposes:
- Storage of inputs and/or outputs
- Sharing data among `Tasks`
- A mount point for credentials held in `Secrets`
- A mount point for configurations held in `ConfigMaps`
- A mount point for common tools shared by an organization
- A cache of build artifacts that speed up jobs
### `Workspaces` in `Tasks` and `TaskRuns`
`Tasks` specify where a `Workspace` resides on disk for its `Steps`. At
runtime, a `TaskRun` provides the specific details of the `Volume` that is
mounted into that `Workspace`.
This separation of concerns allows for a lot of flexibility. For example, in isolation,
a single `TaskRun` might simply provide an `emptyDir` volume that mounts quickly
and disappears at the end of the run. In a more complex system, however, a `TaskRun`
might use a `PersistentVolumeClaim` which is pre-populated with
data for the `Task` to process. In both scenarios the `Task's`
`Workspace` declaration remains the same and only the runtime
information in the `TaskRun` changes.
`Tasks` can also share `Workspaces` with their `Sidecars`, though there's a little more
configuration involved to add the required `volumeMount`. This allows for a
long-running process in a `Sidecar` to share data with the executing `Steps` of a `Task`.
**Note**: If the `enable-api-fields` feature-flag is set to `"beta"` then workspaces
will automatically be available to `Sidecars` too!
### `Workspaces` in `Pipelines` and `PipelineRuns`
A `Pipeline` can use `Workspaces` to show how storage will be shared through
its `Tasks`. For example, `Task` A might clone a source repository onto a `Workspace`
and `Task` B might compile the code that it finds in that `Workspace`. It's
the `Pipeline's` job to ensure that the `Workspace` these two `Tasks` use is the
same, and more importantly, that the order in which they access the `Workspace` is
correct.
`PipelineRuns` perform mostly the same duties as `TaskRuns` - they provide the
specific `Volume` information to use for the `Workspaces` used by each `Pipeline`.
`PipelineRuns` have the added responsibility of ensuring that whatever `Volume` type they
provide can be safely and correctly shared across multiple `Tasks`.
### Optional `Workspaces`
Both `Tasks` and `Pipelines` can declare a `Workspace` "optional". When an optional `Workspace`
is declared the `TaskRun` or `PipelineRun` may omit a `Workspace` Binding for that `Workspace`.
The `Task` or `Pipeline` behaviour may change when the Binding is omitted. This feature has
many uses:
- A `Task` may optionally accept credentials to run authenticated commands.
- A `Pipeline` may accept optional configuration that changes the linting or compilation
parameters used.
- An optional build cache may be provided to speed up compile times.
See the section [Using `Workspaces` in `Tasks`](#using-workspaces-in-tasks) for more info on
the `optional` field.
### Isolated `Workspaces`
This is a beta feature. The `enable-api-fields` feature flag [must be set to `"beta"`](./install.md)
for Isolated Workspaces to function.
Certain kinds of data are more sensitive than others. To reduce exposure of sensitive data Task
authors can isolate `Workspaces` to only those `Steps` and `Sidecars` that require access to
them. The primary use-case for this is credentials but it can apply to any data that should have
its access strictly limited to only specific container images.
See the section [Isolating `Workspaces` to Specific `Steps` or `Sidecars`](#isolating-workspaces-to-specific-steps-or-sidecars)
for more info on this feature.
## Configuring `Workspaces`
This section describes how to configure one or more `Workspaces` in a `TaskRun`.
### Using `Workspaces` in `Tasks`
To configure one or more `Workspaces` in a `Task`, add a `workspaces` list with each entry using the following fields:
- `name` - (**required**) A **unique** string identifier that can be used to refer to the workspace
- `description` - An informative string describing the purpose of the `Workspace`
- `readOnly` - A boolean declaring that the `Task` only reads from the `Workspace` and will not write to it. Defaults to `false`.
- `optional` - A boolean indicating whether a TaskRun can omit the `Workspace`. Defaults to `false`.
- `mountPath` - A path to a location on disk where the workspace will be available to `Steps`. If a
`mountPath` is not provided the workspace will be placed by default at `/workspace/<name>` where `<name>`
is the workspace's unique name.
Note the following:
- A `Task` definition can include as many `Workspaces` as it needs. It is recommended that `Tasks` use
**at most** one _writeable_ `Workspace`.
- A `readOnly` `Workspace` will have its volume mounted as read-only. Attempting to write
to a `readOnly` `Workspace` will result in errors and failed `TaskRuns`.
Below is an example `Task` definition that includes a `Workspace` called `messages` to which the `Task` writes a message:
```yaml
spec:
steps:
- name: write-message
image: ubuntu
script: |
#!/usr/bin/env bash
set -xe
if [ "$(workspaces.messages.bound)" == "true" ] ; then
echo hello! > $(workspaces.messages.path)/message
fi
workspaces:
- name: messages
description: |
The folder where we write the message to. If no workspace
is provided then the message will not be written.
optional: true
mountPath: /custom/path/relative/to/root
```
#### Sharing `Workspaces` with `Sidecars`
A `Task's` `Sidecars` are also able to access the `Workspaces` the `Task` defines but must have their
`volumeMount` configuration set explicitly. Below is an example `Task` that shares a `Workspace` between
its `Steps` and its `Sidecar`. In the example a `Sidecar` sleeps for a short amount of time and then writes
a `ready` file which the `Step` is waiting for:
```yaml
spec:
workspaces:
- name: signals
steps:
- image: alpine
script: |
while [ ! -f "$(workspaces.signals.path)/ready" ]; do
echo "Waiting for ready file..."
sleep 1
done
echo "Saw ready file!"
sidecars:
- image: alpine
# Note: must explicitly include volumeMount for the workspace to be accessible in the Sidecar
volumeMounts:
- name: $(workspaces.signals.volume)
mountPath: $(workspaces.signals.path)
script: |
sleep 3
touch "$(workspaces.signals.path)/ready"
```
**Note:** Starting in Pipelines v0.24.0 `Sidecars` automatically get access to `Workspaces`. This is a
beta feature and requires Pipelines to have [the "beta" feature gate enabled](./install.md#beta-features).
If a Sidecar already has a `volumeMount` at the location expected for a `workspace` then that `workspace` is
not bound to the Sidecar. This preserves backwards-compatibility with any existing uses of the `volumeMount`
trick described above.
#### Isolating `Workspaces` to Specific `Steps` or `Sidecars`
This is a beta feature. The `enable-api-fields` feature flag [must be set to `"beta"`](./install.md#beta-features)
for Isolated Workspaces to function.
To limit access to a `Workspace` to a subset of a `Task's` `Steps` or `Sidecars`, add a `workspaces`
declaration to those sections. In the following example a `Task` has several
`Steps` but only the one that performs a `git clone` will be able to access the SSH credentials
passed into it:
```yaml
spec:
workspaces:
- name: ssh-credentials
description: An .ssh directory with keys, known_host and config files used to clone the repo.
steps:
- name: clone-repo
workspaces:
- name: ssh-credentials # This Step receives the sensitive workspace; the others do not.
image: git
script: # git clone ...
- name: build-source
image: third-party-source-builder:latest # This image doesn't get access to ssh-credentials.
- name: lint-source
image: third-party-source-linter:latest # This image doesn't get access to ssh-credentials.
```
It can potentially be useful to mount `Workspaces` to different locations on a per-`Step` or
per-`Sidecar` basis and this is also supported:
```yaml
kind: Task
spec:
workspaces:
- name: ws
mountPath: /workspaces/ws
steps:
- name: edit-files-1
workspaces:
- name: ws
mountPath: /foo # overrides mountPath
- name: edit-files-2
workspaces:
- name: ws # no mountPath specified so will use /workspaces/ws
sidecars:
- name: watch-files-on-workspace
workspaces:
- name: ws
mountPath: /files # overrides mountPath
```
#### Setting a default `TaskRun` `Workspace Binding`
An organization may want to specify default `Workspace` configuration for `TaskRuns`. This allows users to
use `Tasks` without having to know the specifics of `Workspaces` - they can simply rely on the platform
to use the default configuration when a `Workspace` is missing. To support this Tekton allows a default
`Workspace Binding` to be specified for `TaskRuns`. When the `TaskRun` executes, any `Workspaces` that
a `Task` requires but which are not provided by the `TaskRun` will be bound with the default configuration.
The configuration for the default `Workspace Binding` is added to the `config-defaults` `ConfigMap`, under
the `default-task-run-workspace-binding` key. For an example, see the [Customizing basic execution
parameters](./additional-configs.md#customizing-basic-execution-parameters) section of the install doc.
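For illustration, a minimal sketch of such a default binding, assuming an `emptyDir` fallback is acceptable in your cluster, might look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # Any required Workspace that a TaskRun does not bind would receive this binding.
  default-task-run-workspace-binding: |
    emptyDir: {}
```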
**Note:** the default configuration is used for any _required_ `Workspace` declared by a `Task`. Optional
`Workspaces` are not populated with the default binding. This is because a `Task's` behaviour will typically
differ slightly when an optional `Workspace` is bound.
#### Using `Workspace` variables in `Tasks`
The following variables make information about `Workspaces` available to `Tasks`:
- `$(workspaces.<name>.path)` - specifies the path to a `Workspace`
where `<name>` is the name of the `Workspace`. This will be an
empty string when a Workspace is declared optional and not provided
by a TaskRun.
- `$(workspaces.<name>.bound)` - either `true` or `false`, specifies
whether a workspace was bound. Always `true` if the workspace is required.
- `$(workspaces.<name>.claim)` - specifies the name of the `PersistentVolumeClaim` used as a volume source for the `Workspace`
where `<name>` is the name of the `Workspace`. If a volume source other than `PersistentVolumeClaim` is used, an empty string is returned.
- `$(workspaces.<name>.volume)`- specifies the name of the `Volume`
provided for a `Workspace` where `<name>` is the name of the `Workspace`.
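For example, a `Step` could use the `claim` and `volume` variables to report what is backing a `Workspace`. This is only a sketch; the workspace name `data` is hypothetical:

```yaml
spec:
  workspaces:
    - name: data
  steps:
    - name: report-backing-volume
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # $(workspaces.data.claim) resolves to an empty string unless the TaskRun
        # binds this workspace to a PersistentVolumeClaim.
        if [ -n "$(workspaces.data.claim)" ] ; then
          echo "Workspace 'data' uses PVC $(workspaces.data.claim) via volume $(workspaces.data.volume)"
        else
          echo "Workspace 'data' is not backed by a PersistentVolumeClaim"
        fi
```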
#### Mapping `Workspaces` in `Tasks` to `TaskRuns`
A `TaskRun` that executes a `Task` containing a `workspaces` list must bind
those `workspaces` to actual physical `Volumes`. To do so, the `TaskRun` includes
its own `workspaces` list. Each entry in the list contains the following fields:
- `name` - (**required**) The name of the `Workspace` within the `Task` for which the `Volume` is being provided
- `subPath` - An optional subdirectory on the `Volume` to store data for that `Workspace`
The entry must also include one `VolumeSource`. See [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces) for more information.
**Caution:**
- The `Workspaces` declared in a `Task` must be available when executing the associated `TaskRun`.
Otherwise, the `TaskRun` will fail.
#### Examples of `TaskRun` definition using `Workspaces`
The following example illustrates how to specify `Workspaces` in your `TaskRun` definition. Here,
an [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
is provided for a Task's `workspace` called `myworkspace`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: example-taskrun-
spec:
taskRef:
name: example-task
workspaces:
- name: myworkspace # this workspace name must be declared in the Task
emptyDir: {} # emptyDir volumes can be used for TaskRuns,
# but consider using a PersistentVolumeClaim for PipelineRuns
```
For examples of using other types of volume sources, see [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces).
For a more in-depth example, see [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml).
### Using `Workspaces` in `Pipelines`
While individual `Tasks` declare the `Workspaces` they need to run, the `Pipeline` decides
which `Workspaces` are shared among its `Tasks`. To declare shared `Workspaces` in a `Pipeline`,
you must add the following information to your `Pipeline` definition:
- A list of `Workspaces` that your `PipelineRuns` will be providing. Use the `workspaces` field to
specify the target `Workspaces` in your `Pipeline` definition as shown below. Each entry in the
list must have a unique name.
- A mapping of `Workspace` names between the `Pipeline` and the `Task` definitions.
The example below defines a `Pipeline` with a `Workspace` named `pipeline-ws1`. This
`Workspace` is bound in two `Tasks` - first as the `output` workspace declared by the `gen-code`
`Task`, then as the `src` workspace declared by the `commit` `Task`. If the `Workspace`
provided by the `PipelineRun` is a `PersistentVolumeClaim` then these two `Tasks` can share
data within that `Workspace`.
```yaml
spec:
workspaces:
- name: pipeline-ws1 # Name of the workspace in the Pipeline
- name: pipeline-ws2
optional: true
tasks:
- name: use-ws-from-pipeline
taskRef:
name: gen-code # gen-code expects a workspace named "output"
workspaces:
- name: output
workspace: pipeline-ws1
- name: use-ws-again
taskRef:
name: commit # commit expects a workspace named "src"
workspaces:
- name: src
workspace: pipeline-ws1
runAfter:
- use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
```
Include a `subPath` in the `Workspace Binding` to mount different parts of the same volume for different Tasks. See [a full example of this kind of Pipeline](../examples/v1/pipelineruns/pipelinerun-using-different-subpaths-of-workspace.yaml) which writes data to two adjacent directories on the same Volume.
The `subPath` specified in a `Pipeline` will be appended to any `subPath` specified as part of the `PipelineRun` workspace declaration. So a `PipelineRun` declaring a `Workspace` with a `subPath` of `/foo` for a `Pipeline` that binds it to a `Task` with a `subPath` of `/bar` will end up mounting the `Volume`'s `/foo/bar` directory.
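A minimal sketch of that combination, using hypothetical resource and workspace names:

```yaml
# The PipelineRun provides the volume with a subPath of "foo"...
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: subpath-example-
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: shared
      subPath: foo
      persistentVolumeClaim:
        claimName: mypvc
---
# ...and the Pipeline binds that Workspace to a Task with a subPath of "bar",
# so the Task's Steps see the Volume's /foo/bar directory.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  workspaces:
    - name: shared
  tasks:
    - name: write-data
      taskRef:
        name: example-task
      workspaces:
        - name: output
          workspace: shared
          subPath: bar
```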
#### Specifying `Workspace` order in a `Pipeline` and Affinity Assistants
Sharing a `Workspace` between `Tasks` requires you to define the order in which those `Tasks`
write to or read from that `Workspace`. Use the `runAfter` field in your `Pipeline` definition
to define when a `Task` should be executed. For more information, see the [`runAfter` documentation](pipelines.md#using-the-runafter-parameter).
When a `PersistentVolumeClaim` is used as volume source for a `Workspace` in a `PipelineRun`,
an Affinity Assistant will be created. For more information, see the [`Affinity Assistants` documentation](affinityassistants.md).
**Note**: When `coschedule` is set to `workspaces` or `disabled`, it is not allowed to bind multiple [`PersistentVolumeClaim` based workspaces](#using-persistentvolumeclaims-as-volumesource) to the same `TaskRun` in a `PipelineRun` due to potential Availability Zone conflicts.
See more details in [Availability Zones](#availability-zones).
#### Specifying `Workspaces` in `PipelineRuns`
For a `PipelineRun` to execute a `Pipeline` that includes one or more `Workspaces`, it needs to
bind the `Workspace` names to volumes using its own `workspaces` field. Each entry in
this list must correspond to a `Workspace` declaration in the `Pipeline`. Each entry in the
`workspaces` list must specify the following:
- `name` - (**required**) the name of the `Workspace` specified in the `Pipeline` definition for which a volume is being provided.
- `subPath` - (optional) a directory on the volume that will store that `Workspace's` data. This directory must exist at the
time the `TaskRun` executes, otherwise the execution will fail.
The entry must also include one `VolumeSource`. See [Using `VolumeSources` with `Workspaces`](#specifying-volumesources-in-workspaces) for more information.
**Note:** If the `Workspaces` specified by a `Pipeline` are not provided at runtime by a `PipelineRun`, that `PipelineRun` will fail.
You can pass in extra `Workspaces` if needed depending on your use cases. An example use
case is when your CI system autogenerates `PipelineRuns` and it has `Workspaces` it wants to
provide to all `PipelineRuns`. Because you can pass in extra `Workspaces`, you don't have to
go through the complexity of checking each `Pipeline` and providing only the required `Workspaces`.
#### Example `PipelineRun` definition using `Workspaces`
In the example below, a `volumeClaimTemplate` is provided for how a `PersistentVolumeClaim` should be created for a workspace named
`myworkspace` declared in a `Pipeline`. When using `volumeClaimTemplate` a new `PersistentVolumeClaim` is created for
each `PipelineRun` and it allows the user to specify e.g. size and StorageClass for the volume.
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: example-pipelinerun-
spec:
pipelineRef:
name: example-pipeline
workspaces:
- name: myworkspace # this workspace name must be declared in the Pipeline
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce # access mode may affect how you can use this volume in parallel tasks
resources:
requests:
storage: 1Gi
```
For examples of using other types of volume sources, see [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces).
For a more in-depth example, see the [`Workspaces` in `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml) YAML sample.
### Specifying `VolumeSources` in `Workspaces`
You can only use a single type of `VolumeSource` per `Workspace` entry. The configuration
options differ for each type. `Workspaces` support the following fields:
#### Using `PersistentVolumeClaims` as `VolumeSource`
`PersistentVolumeClaim` volumes are a good choice for sharing data among `Tasks` within a `Pipeline`.
Beware that the [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
configured for the `PersistentVolumeClaim` affects how you can use the volume for parallel `Tasks` in a `Pipeline`. See
[Specifying `workspace` order in a `Pipeline` and Affinity Assistants](#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) for more information about this.
There are two ways of using `PersistentVolumeClaims` as a `VolumeSource`.
##### `volumeClaimTemplate`
The `volumeClaimTemplate` is a template of a [`PersistentVolumeClaim` volume](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim),
created for each `PipelineRun` or `TaskRun`. When the volume is created from a template in a `PipelineRun` or `TaskRun`
it will be deleted when the `PipelineRun` or `TaskRun` is deleted.
```yaml
workspaces:
- name: myworkspace
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
```
##### `persistentVolumeClaim`
The `persistentVolumeClaim` field references an *existing* [`persistentVolumeClaim` volume](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim). The example below exposes only the subdirectory `my-subdir` from that `PersistentVolumeClaim`:
```yaml
workspaces:
- name: myworkspace
persistentVolumeClaim:
claimName: mypvc
subPath: my-subdir
```
#### Using other types of `VolumeSources`
##### `emptyDir`
The `emptyDir` field references an [`emptyDir` volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) which holds
a temporary directory that only lives as long as the `TaskRun` that invokes it. `emptyDir` volumes are **not** suitable for sharing data among `Tasks` within a `Pipeline`.
However, they work well for single `TaskRuns` where the data stored in the `emptyDir` needs to be shared among the `Steps` of the `Task` and discarded after execution.
```yaml
workspaces:
- name: myworkspace
emptyDir: {}
```
##### `configMap`
The `configMap` field references a [`configMap` volume](https://kubernetes.io/docs/concepts/storage/volumes/#configmap).
Using a `configMap` as a `Workspace` has the following limitations:
- `configMap` volume sources are always mounted as read-only. `Steps` cannot write to them and will error out if they try.
- The `configMap` you want to use as a `Workspace` must exist prior to submitting the `TaskRun`.
- `configMaps` are [size-limited to 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042).
```yaml
workspaces:
- name: myworkspace
    configMap:
name: my-configmap
```
##### `secret`
The `secret` field references a [`secret` volume](https://kubernetes.io/docs/concepts/storage/volumes/#secret).
Using a `secret` volume has the following limitations:
- `secret` volume sources are always mounted as read-only. `Steps` cannot write to them and will error out if they try.
- The `secret` you want to use as a `Workspace` must exist prior to submitting the `TaskRun`.
- `secrets` are [size-limited to 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042).
```yaml
workspaces:
- name: myworkspace
secret:
secretName: my-secret
```
##### `projected`
The `projected` field references a [`projected` volume](https://kubernetes.io/docs/concepts/storage/projected-volumes).
`projected` volume workspaces are a [beta feature](./additional-configs.md#beta-features).
Using a `projected` volume has the following limitations:
- `projected` volume sources are always mounted as read-only. `Steps` cannot write to them and will error out if they try.
- The volumes you want to project as a `Workspace` must exist prior to submitting the `TaskRun`.
- The following volumes can be projected: `configMap`, `secret`, `serviceAccountToken` and `downwardApi`
```yaml
workspaces:
- name: myworkspace
projected:
sources:
- configMap:
name: my-configmap
- secret:
name: my-secret
```
##### `csi`
The `csi` field references a [`csi` volume](https://kubernetes.io/docs/concepts/storage/volumes/#csi).
`csi` workspaces are a [beta feature](./additional-configs.md#beta-features).
Using a `csi` volume has the following limitations:
<!-- wokeignore:rule=master -->
- `csi` volume sources require a volume driver, which must correspond to the driver name returned by the CSI driver as defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#getplugininfo).
```yaml
workspaces:
- name: my-credentials
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "vault-database"
```
Example of CSI workspace using Hashicorp Vault:
- Install the required CSI driver, e.g. [secrets-store-csi-driver](https://github.com/hashicorp/vault-csi-provider#using-yaml)
- Install the `vault` Provider onto the Kubernetes cluster. [Reference](https://learn.hashicorp.com/tutorials/vault/kubernetes-raft-deployment-guide)
- Deploy a provider via [example](https://gist.github.com/JeromeJu/cc8e4e758029b6694806604750b8911c)
- Create a SecretProviderClass Provider using the following [yaml](https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/no-ci/csi-workspace.yaml#L1-L19)
- Specify the ServiceAccount via vault:
```
vault write auth/kubernetes/role/database \
bound_service_account_names=default \
bound_service_account_namespaces=default \
policies=internal-app \
ttl=20m
```
If you need support for a `VolumeSource` type not listed above, [open an issue](https://github.com/tektoncd/pipeline/issues) or
a [pull request](https://github.com/tektoncd/pipeline/blob/main/CONTRIBUTING.md).
## Using Persistent Volumes within a `PipelineRun`
When using a workspace with a [`PersistentVolumeClaim` as `VolumeSource`](#using-persistentvolumeclaims-as-volumesource),
a Kubernetes [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) is used within the `PipelineRun`.
There are some details that are good to know when using Persistent Volumes within a `PipelineRun`.
### Storage Class
`PersistentVolumeClaims` specify a [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the underlying Persistent Volume. Storage Classes have specific
characteristics. If a StorageClassName is not specified for your `PersistentVolumeClaim`, the cluster-defined _default_
Storage Class is used. For _regional_ clusters - clusters that typically consist of Nodes located in multiple Availability
Zones - it is important to know whether your Storage Class is available to all Nodes. Default Storage Classes are typically
only available to Nodes within *one* Availability Zone. There is usually an option to use a _regional_ Storage Class,
but they have trade-offs, e.g. you need to pay for multiple volumes since they are replicated and your volume may have
substantially higher latency.
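If the default Storage Class is not appropriate, you can set one explicitly in a `volumeClaimTemplate`. In the sketch below, `my-regional-storageclass` is a hypothetical Storage Class name:

```yaml
workspaces:
  - name: myworkspace
    volumeClaimTemplate:
      spec:
        storageClassName: my-regional-storageclass
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```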
### Access Modes
A `PersistentVolumeClaim` specifies an [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
Available Access Modes are `ReadWriteOnce`, `ReadWriteMany` and `ReadOnlyMany`. Which Access Mode you can use depends on
the storage solution that you are using.
* `ReadWriteOnce` is the most commonly available Access Mode. A volume with this Access Mode can only be mounted on one
Node at a time. This can be problematic for a `Pipeline` that has parallel `Tasks` that access the volume concurrently.
The Affinity Assistant helps with this problem by scheduling all `Tasks` that use the same `PersistentVolumeClaim` to
the same Node.
* `ReadOnlyMany` is read-only and is less common in a CI/CD-pipeline. These volumes often need to be "prepared" with data
in some way before use. Dynamically provisioned volumes usually cannot be used in read-only mode.
* `ReadWriteMany` is the least commonly available Access Mode. If you use this access mode and these volumes are available
to all Nodes within your cluster, you may want to disable the Affinity Assistant.
### Availability Zones
`Persistent Volumes` are "zonal" in some cloud providers like GKE (i.e. they live within a single Availability Zone and cannot be accessed from a `pod` living in another Availability Zone). When using a workspace backed by a `PersistentVolumeClaim` (typically only available within a Data Center), the `TaskRun` `pods` can be scheduled to any Availability Zone in a regional cluster. This results in potential Availability Zone scheduling conflict when two `pods` requiring the same Volume are scheduled to different Availability Zones (see issue [#3480](https://github.com/tektoncd/pipeline/issues/3480) and [#5275](https://github.com/tektoncd/pipeline/issues/5275)).
To avoid such conflicts in `PipelineRuns`, Tekton provides [Affinity Assistants](affinityassistants.md), which, depending on the `coschedule` mode, schedule all `TaskRun` `pods` in a `PipelineRun`, or all `TaskRun` `pods` sharing a `PersistentVolumeClaim`, to the same Node.
Specifically, if you use zonal clusters like GKE or use `PersistentVolumeClaims` with the `ReadWriteOnce` access mode, set `coschedule: workspaces` to schedule each `TaskRun` `pod` to the same zone as its associated `PersistentVolumeClaim`. If you want to bind multiple `PersistentVolumeClaims` to a single `TaskRun`, set `coschedule: pipelineruns` to schedule all `TaskRun` `pods` and `PersistentVolumeClaims` in a `PipelineRun` to the same zone.
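As a sketch, and assuming the `coschedule` flag is configured in the `feature-flags` ConfigMap as described in the [Affinity Assistants documentation](affinityassistants.md), the mode could be set like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # "workspaces" co-schedules TaskRuns sharing a PersistentVolumeClaim;
  # "pipelineruns" co-schedules all TaskRuns in a PipelineRun.
  coschedule: "pipelineruns"
```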
## More examples
See the following in-depth examples of configuring `Workspaces`:
- [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml)
- [`Workspaces` in a `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml)
- [`Workspaces` from a volumeClaimTemplate in a `PipelineRun`](../examples/v1/pipelineruns/workspace-from-volumeclaimtemplate.yaml)
1MB https github com kubernetes kubernetes blob f16bfb069a22241a5501f6fe530f5d4e2a82cf0e pkg apis core validation validation go L5042 yaml workspaces name myworkspace secret secretName my secret projected The projected field references a projected volume https kubernetes io docs concepts storage projected volumes projected volume workspaces are a beta feature additional configs md beta features Using a projected volume has the following limitations projected volume sources are always mounted as read only Steps cannot write to them and will error out if they try The volumes you want to project as a Workspace must exist prior to submitting the TaskRun The following volumes can be projected configMap secret serviceAccountToken and downwardApi yaml workspaces name myworkspace projected sources configMap name my configmap secret name my secret csi The csi field references a csi volume https kubernetes io docs concepts storage volumes csi csi workspaces are a beta feature additional configs md beta features Using a csi volume has the following limitations wokeignore rule master csi volume sources require a volume driver to use which must correspond to the value by the CSI driver as defined in the CSI spec https github com container storage interface spec blob master spec md getplugininfo yaml workspaces name my credentials csi driver secrets store csi k8s io readOnly true volumeAttributes secretProviderClass vault database Example of CSI workspace using Hashicorp Vault Install the required csi driver eg secrets store csi driver https github com hashicorp vault csi provider using yaml Install the vault Provider onto the kubernetes cluster Reference https learn hashicorp com tutorials vault kubernetes raft deployment guide Deploy a provider via example https gist github com JeromeJu cc8e4e758029b6694806604750b8911c Create a SecretProviderClass Provider using the following yaml https github com tektoncd pipeline blob main examples v1 pipelineruns no ci csi workspace yaml L1 L19 Specify the ServiceAccount via vault vault write auth kubernetes role database bound service account names default bound service account namespaces default policies internal app ttl 20m If you need support for a VolumeSource type not listed above open an issue https github com tektoncd pipeline issues or a pull request https github com tektoncd pipeline blob main CONTRIBUTING md Using Persistent Volumes within a PipelineRun When using a workspace with a PersistentVolumeClaim as VolumeSource using persistentvolumeclaims as volumesource a Kubernetes Persistent Volumes https kubernetes io docs concepts storage persistent volumes is used within the PipelineRun There are some details that are good to know when using Persistent Volumes within a PipelineRun Storage Class PersistentVolumeClaims specify a Storage Class https kubernetes io docs concepts storage storage classes for the underlying Persistent Volume Storage Classes have specific characteristics If a StorageClassName is not specified for your PersistentVolumeClaim the cluster defined default Storage Class is used For regional clusters clusters that typically consist of Nodes located in multiple Availability Zones it is important to know whether your Storage Class is available to all Nodes Default Storage Classes are typically only available to Nodes within one Availability Zone There is usually an option to use a regional Storage Class but they have trade offs e g you need to pay for multiple volumes since they are replicated and your volume may have substantially higher 
latency Access Modes A PersistentVolumeClaim specifies an Access Mode https kubernetes io docs concepts storage persistent volumes access modes Available Access Modes are ReadWriteOnce ReadWriteMany and ReadOnlyMany What Access Mode you can use depend on the storage solution that you are using ReadWriteOnce is the most commonly available Access Mode A volume with this Access Mode can only be mounted on one Node at a time This can be problematic for a Pipeline that has parallel Tasks that access the volume concurrently The Affinity Assistant helps with this problem by scheduling all Tasks that use the same PersistentVolumeClaim to the same Node ReadOnlyMany is read only and is less common in a CI CD pipeline These volumes often need to be prepared with data in some way before use Dynamically provided volumes can usually not be used in read only mode ReadWriteMany is the least commonly available Access Mode If you use this access mode and these volumes are available to all Nodes within your cluster you may want to disable the Affinity Assistant Availability Zones Persistent Volumes are zonal in some cloud providers like GKE i e they live within a single Availability Zone and cannot be accessed from a pod living in another Availability Zone When using a workspace backed by a PersistentVolumeClaim typically only available within a Data Center the TaskRun pods can be scheduled to any Availability Zone in a regional cluster This results in potential Availability Zone scheduling conflict when two pods requiring the same Volume are scheduled to different Availability Zones see issue 3480 https github com tektoncd pipeline issues 3480 and 5275 https github com tektoncd pipeline issues 5275 To avoid such conflict in PipelineRuns Tekton provides Affinity Assistants affinityassistants md which schedule all TaskRun pods or all TaskRun sharing a PersistentVolumeClaim in a PipelineRun to the same Node depending on the coschedule mode Specifically for users use zonal clusters like GKE or use PersistentVolumeClaim in ReadWriteOnce access modes please set coschedule workspaces to schedule each of the TaskRun pod to the same zone as the associated PersistentVolumeClaim In addition for users want to bind multiple PersistentVolumeClaims to a single TaskRun please set coschedule pipelineruns to schedule all TaskRun pods and PersistentVolumeClaim in a PipelineRun to the same zone More examples See the following in depth examples of configuring Workspaces Workspaces in a TaskRun examples v1 taskruns workspace yaml Workspaces in a PipelineRun examples v1 pipelineruns workspaces yaml Workspaces from a volumeClaimTemplate in a PipelineRun examples v1 pipelineruns workspace from volumeclaimtemplate yaml |
<!--
---
linkTitle: "Pipelines in Pipelines"
weight: 406
---
-->
# Pipelines in Pipelines
- [Overview](#overview)
- [Specifying `pipelineRef` in `PipelineTasks`](#specifying-pipelineref-in-pipelinetasks)
- [Specifying `pipelineSpec` in `PipelineTasks`](#specifying-pipelinespec-in-pipelinetasks)
- [Specifying `Parameters`](#specifying-parameters)
## Overview
Pipelines in Pipelines provides a mechanism to define and execute a Pipeline within another Pipeline, alongside Tasks and Custom Tasks. For more in-depth background and inspiration, refer to the proposal [TEP-0056](https://github.com/tektoncd/community/blob/main/teps/0056-pipelines-in-pipelines.md "Proposal").
> :seedling: **Pipelines in Pipelines is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-api-fields` feature flag must be set to `"alpha"` to specify `pipelineRef` or `pipelineSpec` in a `pipelineTask`.
> This feature is in Preview Only mode and not yet supported/implemented.
## Specifying `pipelineRef` in `pipelineTasks`
You can define Pipelines in Pipelines at authoring time by specifying either the `pipelineRef` or the `pipelineSpec` field on a `PipelineTask`, alongside the existing `taskRef` and `taskSpec` fields.
For example, the following defines a Pipeline named `security-scans` that is run within a Pipeline named `clone-scan-notify` by using `pipelineRef`:
```
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: security-scans
spec:
tasks:
- name: scorecards
taskRef:
name: scorecards
- name: codeql
taskRef:
name: codeql
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-scan-notify
spec:
tasks:
- name: git-clone
taskRef:
name: git-clone
- name: security-scans
pipelineRef:
name: security-scans
- name: notification
taskRef:
name: notification
```
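Assuming the `enable-api-fields: "alpha"` gate is set, the outer Pipeline could in principle be triggered with an ordinary `PipelineRun`; the sketch below is illustrative only, since the feature is still preview-only and not yet implemented:
```
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-scan-notify-run-
spec:
  pipelineRef:
    name: clone-scan-notify
```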
## Specifying `pipelineSpec` in `pipelineTasks`
The `pipelineRef` [example](#specifying-pipelineref-in-pipelinetasks) above can be modified to embed the Pipeline specification directly by using `pipelineSpec` instead of `pipelineRef`:
```
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-scan-notify
spec:
tasks:
- name: git-clone
taskRef:
name: git-clone
- name: security-scans
pipelineSpec:
tasks:
- name: scorecards
taskRef:
name: scorecards
- name: codeql
taskRef:
name: codeql
- name: notification
taskRef:
name: notification
```
## Specifying `Parameters`
Pipelines in Pipelines consume `Parameters` in the same way as Tasks in Pipelines do:
```
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-scan-notify
spec:
params:
- name: repo
      type: string
tasks:
- name: git-clone
params:
- name: repo
value: $(params.repo)
taskRef:
name: git-clone
- name: security-scans
params:
- name: repo
value: $(params.repo)
pipelineRef:
name: security-scans
- name: notification
taskRef:
name: notification
```
<!--
---
linkTitle: "CustomRuns"
weight: 206
---
-->
# CustomRuns
- [Overview](#overview)
- [Configuring a `CustomRun`](#configuring-a-customrun)
- [Specifying the target Custom Task](#specifying-the-target-custom-task)
- [Cancellation](#cancellation)
- [Specifying `Timeout`](#specifying-timeout)
- [Specifying `Retries`](#specifying-retries)
- [Specifying Parameters](#specifying-parameters)
- [Specifying Workspaces](#specifying-workspaces)
- [Specifying Service Account](#specifying-a-serviceaccount)
- [Monitoring execution status](#monitoring-execution-status)
- [Status Reporting](#status-reporting)
- [Monitoring `Results`](#monitoring-results)
- [Code examples](#code-examples)
- [Example `CustomRun` with a referenced custom task](#example-customrun-with-a-referenced-custom-task)
- [Example `CustomRun` with an unnamed custom task](#example-customrun-with-an-unnamed-custom-task)
- [Example of specifying parameters](#example-of-specifying-parameters)
## Overview
*`v1beta1.CustomRun` has replaced `v1alpha1.Run` for executing `Custom Tasks`. Please refer to the [migration doc](migrating-v1alpha1.Run-to-v1beta1.CustomRun.md) for details
on updating `v1alpha1.Run` to `v1beta1.CustomRun` before upgrading to a release that does not support `v1alpha1.Run`.*
A `CustomRun` allows you to instantiate and execute a [Custom
Task](https://github.com/tektoncd/community/blob/main/teps/0002-custom-tasks.md),
which can be implemented by a custom task controller running on-cluster. Custom
Tasks can implement behavior that's independent of how Tekton `TaskRuns` are implemented.
In order for a `CustomRun` to actually execute, there must be a custom task
controller running on the cluster that is responsible for watching and updating
`CustomRun`s which reference their type. If no such controller is running, `CustomRun`s
will have no `.status` value and no further action will be taken.
## Configuring a `CustomRun`
A `CustomRun` definition supports the following fields:
- Required:
  - [`apiVersion`][kubernetes-overview] - Specifies the API version. Currently, only
`tekton.dev/v1beta1` is supported.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `CustomRun` object.
- [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the
`CustomRun`, such as a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration for the `CustomRun`.
    - [`customRef`](#specifying-the-target-custom-task-with-customref) - Specifies the type and
(optionally) name of the custom task type to execute.
    - [`customSpec`](#specifying-the-target-custom-task-by-embedding-its-customspec) - Embed the custom task resource spec
directly in a `CustomRun`.
- Optional:
- [`timeout`](#specifying-timeout) - specifies the maximum duration of a single execution
of a `CustomRun`.
  - [`retries`](#specifying-retries) - specifies the number of retries to execute upon `CustomRun` failure.
- [`params`](#specifying-parameters) - Specifies the desired execution
parameters for the custom task.
- [`serviceAccountName`](#specifying-a-serviceaccount) - Specifies a `ServiceAccount`
object for executing the `CustomRun`.
- [`workspaces`](#specifying-workspaces) - Specifies the physical volumes to use for the
[`Workspaces`](workspaces.md) required by a custom task.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
### Specifying the target Custom Task
A custom task resource's `CustomSpec` may be directly embedded in the `CustomRun`, or it may
be referred to by a `CustomRef`, but not both at the same time.
1. [Specifying the target Custom Task with customRef](#specifying-the-target-custom-task-with-customref)
   Referring to a custom task (i.e. `CustomRef`) promotes reuse of custom task definitions.
2. [Specifying the target Custom Task by embedding its spec](#specifying-the-target-custom-task-by-embedding-its-customspec)
   Embedding a custom task (i.e. `CustomSpec`) helps avoid name collisions with other users within the same namespace.
   Additionally, in a pipeline with multiple embedded custom tasks, the details of the entire pipeline can be fetched in a
   single API request.
#### Specifying the target Custom Task with customRef
To specify the custom task type you want to execute in your `CustomRun`, use the
`customRef` field as shown below:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: my-example-run
spec:
customRef:
apiVersion: example.dev/v1beta1
kind: MyCustomKind
params:
- name: duration
value: 10s
```
When this `CustomRun` is created, the Custom Task controller responsible for
reconciling objects of kind "MyCustomKind" in the "example.dev/v1beta1" api group
will execute it based on the input params.
You can also specify the `name` and optional `namespace` (default is `default`)
of a custom task resource object previously defined in the cluster.
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: my-example-run
spec:
customRef:
apiVersion: example.dev/v1beta1
kind: Example
name: an-existing-example-task
```
If the `customRef` specifies a name, the custom task controller should look up the
`Example` resource with that name, and use that object to configure the
execution.
If the `customRef` does not specify a name, the custom task controller might support
some default behavior for executing unnamed tasks.
In either case, if the named resource cannot be found, or if unnamed tasks are
not supported, the custom task controller should update the `CustomRun`'s status to
indicate the error.
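For instance, a controller might surface a missing `Example` resource with a failed condition; the `reason` and `message` values below are purely illustrative, since each custom task controller chooses its own:
```yaml
status:
  conditions:
  - type: Succeeded
    status: False
    reason: ExampleNotFound # hypothetical reason chosen by the controller
    message: Example "an-existing-example-task" was not found # illustrative
```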
#### Specifying the target Custom Task by embedding its customSpec
To specify the custom task spec, it can be embedded directly into a
`CustomRun`'s spec as shown below:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: embedded-run
spec:
customSpec:
apiVersion: example.dev/v1beta1
kind: Example
spec:
field1: value1
field2: value2
```
When this `CustomRun` is created, the custom task controller responsible for
reconciling objects of kind `Example` in the `example.dev` api group will
execute it.
#### Developer guide for custom controllers supporting `customSpec`
1. A custom controller may or may not support a `Spec`. In cases where it is
   not supported, the custom controller should respond with a proper validation
   error.
2. Validation of the fields of the custom task is delegated to the custom
   task controller. It is recommended to implement validations asynchronously
   (i.e. at reconcile time), rather than as part of the webhook. Using a webhook
   for validation is problematic because it is not possible to filter custom
   task resource objects before the validation step; as a result, each custom
   task resource has to undergo validation by all the installed custom task
   controllers.
3. A custom task may have an empty spec, but cannot have an empty
   `ApiVersion` and `Kind`. Custom task controllers should handle
   an empty spec, either with a default behaviour or, when no default
   behaviour is supported, by reporting an appropriate validation error
   on the `CustomRun`'s status (see the sketch below).
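As a minimal sketch of point 3, the embedded `customSpec` below keeps its `apiVersion` and `kind` but carries no further fields; the `Example` kind is a placeholder, as in the other examples:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
  name: empty-spec-run
spec:
  customSpec:
    apiVersion: example.dev/v1beta1 # required even when the spec is empty
    kind: Example                   # required even when the spec is empty
    # no further fields: the controller must either apply a default behaviour
    # or report a validation error on the CustomRun's status
```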
### Cancellation
The custom task is responsible for implementing `cancellation` to support `PipelineRun`-level `timeouts` and `cancellation`. If the Custom Task implementor does not support cancellation via `.spec.status`, the `Pipeline` **cannot** time out within the specified interval/duration and **cannot** be cancelled as expected upon request.
The Pipeline controller sets `spec.Status` and `spec.StatusMessage` to signal `CustomRuns` about the `Cancellation`, and the `CustomRun` controller updates its `status.conditions` as follows once it notices the change to `spec.Status`:
```yaml
status:
conditions:
- type: Succeeded
status: False
reason: CustomRunCancelled
```
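From the custom task controller's perspective, the cancellation signal arrives on the `CustomRun`'s spec roughly as follows; the `statusMessage` text is illustrative:
```yaml
spec:
  status: RunCancelled
  statusMessage: CustomRun cancelled because the PipelineRun was cancelled # illustrative
```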
### Specifying `Timeout`
A custom task specification can be created with `Timeout` as follows:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
generateName: simpleexample
spec:
timeout: 10s # set timeouts here.
params:
- name: searching
value: the purpose of my existence
customRef:
apiVersion: custom.tekton.dev/v1alpha1
kind: Example
name: exampleName
```
Supporting timeouts is optional but recommended.
#### Developer guide for custom controllers supporting `Timeout`
1. Tekton controllers will never directly update the status of the `CustomRun`;
   it is the responsibility of the custom task controller to support timeouts.
   If timeout is not supported, it's the responsibility of the custom task
   controller to reject `CustomRun`s that specify a timeout value.
2. When `CustomRun.Spec.Status` is updated to `RunCancelled`, the custom task controller
MUST cancel the `CustomRun`. Otherwise, pipeline-level `Timeout` and
`Cancellation` won't work for the Custom Task.
3. A Custom Task controller can watch for this status update
   (i.e. `CustomRun.Spec.Status == RunCancelled`) and/or `CustomRun.HasTimedOut()`
and take any corresponding actions (i.e. a clean up e.g., cancel a cloud build,
stop the waiting timer, tear down the approval listener).
4. Once resources or timers are cleaned up, it is **REQUIRED** to set a
   `Succeeded/False` condition on the `CustomRun`'s `status`, with an optional
   `Reason` of `CustomRunTimedOut` (see the sketch below).
5. `Timeout` is specified for each `retry attempt` rather than for all `retries` combined.
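Following points 2 and 4, a custom task controller that observed a timeout might leave the `CustomRun` status looking roughly like this; the `message` text is illustrative:
```yaml
status:
  conditions:
  - type: Succeeded
    status: False
    reason: CustomRunTimedOut # optional
    message: CustomRun exceeded the 10s timeout # illustrative
```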
### Specifying `Retries`
A custom task specification can be created with `Retries` as follows:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
generateName: simpleexample
spec:
retries: 3 # set retries
params:
- name: searching
value: the purpose of my existence
customRef:
apiVersion: custom.tekton.dev/v1alpha1
kind: Example
name: exampleName
```
Supporting retries is optional but recommended.
#### Developer guide for custom controllers supporting `retries`
1. The Tekton controller only depends on `ConditionSucceeded` to determine the
   termination status of a `CustomRun`; therefore, custom task implementors
   MUST NOT set `ConditionSucceeded` to `False` until all retries are exhausted.
2. Custom tasks that do not wish to support retries can simply ignore the field.
3. It is recommended that the custom task update the `RetriesStatus` field of a
   `CustomRun` on each retry it performs (see the sketch after this list).
4. The Tekton controller does not validate that the number of entries in
   `RetriesStatus` matches the specified retries count.
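As a rough sketch of point 3, a controller that succeeded on its second attempt might record the first, failed attempt under `retriesStatus`; the exact layout is chosen by the implementor, so treat the field values here as illustrative:
```yaml
status:
  conditions:
  - type: Succeeded
    status: True
    reason: ExampleComplete
  retriesStatus:
  - conditions:
    - type: Succeeded
      status: False
      reason: ExampleFailed # the first, failed attempt
```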
### Specifying `Parameters`
If a custom task supports [`parameters`](tasks.md#parameters), you can use the
`params` field in the `CustomRun` to specify their values:
```yaml
spec:
params:
- name: my-param
value: chicken
```
If the custom task controller knows how to interpret the parameter value, it
will do so. It might enforce that some parameter values must be specified, or
reject unknown parameter values.
### Specifying workspaces
If the custom task supports it, you can provide [`Workspaces`](workspaces.md) to share data with the custom task.
```yaml
spec:
workspaces:
- name: my-workspace
emptyDir: {}
```
Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.
### Specifying a ServiceAccount
If the custom task supports it, you can execute the `CustomRun` with a specific set of credentials by
specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `CustomRun`
definition. If you do not explicitly specify this, the `CustomRun` executes with the service account
specified in the `configmap-defaults` `ConfigMap`. If this default is not specified, `CustomRuns`
will execute with the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
set for the target [`namespace`](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
```yaml
spec:
serviceAccountName: my-account
```
Consult the documentation of the custom task that you are using to determine whether it supports a service account name.
## Monitoring execution status
As your `CustomRun` executes, its `status` field accumulates information on the
execution of the `CustomRun`. This information includes the current state of the
`CustomRun`, start and completion times, and any output `results` reported by
the custom task controller.
### Status Reporting
When the `CustomRun<Example>` is validated and created, the Custom Task controller will be notified and is expected to begin doing some operation. When the operation begins, the controller **MUST** update the `CustomRun`'s `.status.conditions` to report that it's ongoing:
```yaml
status:
conditions:
- type: Succeeded
status: Unknown
```
When the operation completes, if it was successful, the condition **MUST** report `status: True`, and optionally a brief `reason` and human-readable `message`:
```yaml
status:
conditions:
- type: Succeeded
status: True
reason: ExampleComplete # optional
message: Yay, good times # optional
```
If the operation was _unsuccessful_, the condition **MUST** report `status: False`, and optionally a `reason` and human-readable `message`:
```yaml
status:
conditions:
- type: Succeeded
status: False
reason: ExampleFailed # optional
message: Oh no bad times # optional
```
If the `CustomRun` was _cancelled_, the condition **MUST** report `status: False`, `reason: CustomRunCancelled`, and optionally a human-readable `message`:
```yaml
status:
conditions:
- type: Succeeded
status: False
reason: CustomRunCancelled
message: Oh it's cancelled # optional
```
The following table shows the overall status of a `CustomRun`:
`status`|Description
:-------|:-----------
`<unset>`|The custom task controller has not taken any action on the `CustomRun`.
Unknown|The custom task controller has started execution and the `CustomRun` is ongoing.
True|The `CustomRun` completed successfully.
False|The `CustomRun` completed unsuccessfully, and all retries were exhausted.
The `CustomRun` type's `.status` will also allow controllers to report other fields, such as `startTime`, `completionTime`, `results` (see below), and arbitrary context-dependent fields the Custom Task author wants to report. A fully-specified `CustomRun` status might look like:
```
status:
conditions:
- type: Succeeded
status: True
reason: ExampleComplete
message: Yay, good times
completionTime: "2020-06-18T11:55:01Z"
startTime: "2020-06-18T11:55:01Z"
results:
- name: first-name
value: Bob
- name: last-name
value: Smith
arbitraryField: hello world
arbitraryStructuredField:
listOfThings: ["a", "b", "c"]
```
### Monitoring `Results`
After the `CustomRun` completes, the custom task controller can report output
values in the `results` field:
```
results:
- name: my-result
value: chicken
```
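When the custom task runs as a `PipelineTask` (say, named `example-custom-task`), other tasks in the same Pipeline can consume its results with the usual variable syntax; the task and result names below are assumptions for illustration:
```yaml
tasks:
- name: print-result
  params:
  - name: input
    value: $(tasks.example-custom-task.results.my-result)
  taskRef:
    name: print-task # hypothetical Task that echoes its "input" param
```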
## Code examples
To better understand `CustomRuns`, study the following code examples:
- [Example `CustomRun` with a referenced custom task](#example-customrun-with-a-referenced-custom-task)
- [Example `CustomRun` with an unnamed custom task](#example-customrun-with-an-unnamed-custom-task)
- [Example of specifying parameters](#example-of-specifying-parameters)
### Example `CustomRun` with a referenced custom task
In this example, a `CustomRun` named `my-example-run` invokes a custom task of the `v1alpha1`
version of the `Example` kind in the `example.dev` API group, with the name
`my-example-task`.
In this case the custom task controller is expected to look up the `Example`
resource named `my-example-task` and to use that configuration to configure the
execution of the `CustomRun`.
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: my-example-run
spec:
customRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: my-example-task
```
### Example `CustomRun` with an unnamed custom task
In this example, a `CustomRun` named `my-example-run` invokes a custom task of the `v1alpha1`
version of the `Example` kind in the `example.dev` API group, without a specified name.
In this case the custom task controller is expected to provide some default
behavior when the referenced task is unnamed.
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: my-example-run
spec:
customRef:
apiVersion: example.dev/v1alpha1
kind: Example
```
### Example of specifying parameters
In this example, a `CustomRun` named `my-example-run` invokes a custom task, and
specifies some parameter values to further configure the execution's behavior.
In this case the custom task controller is expected to validate and interpret
these parameter values and use them to configure the `CustomRun`'s execution.
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: my-example-run
spec:
customRef:
apiVersion: example.dev/v1alpha1
kind: Example
name: my-example-task
params:
- name: my-first-param
value: i'm number one
- name: my-second-param
value: close second
```
<!--
---
linkTitle: "Windows"
weight: 306
---
-->
# Windows
- [Overview](#overview)
- [Scheduling Tasks on Windows Nodes](#scheduling-tasks-on-windows-nodes)
- [Node Selectors](#node-selectors)
- [Node Affinity](#node-affinity)
## Overview
If you need a Windows environment as part of a Tekton Task or Pipeline, you can include Windows container images in your Task steps. Because Windows containers can only run on a Windows host, you will need to have Windows nodes available in your Kubernetes cluster. You should read [Windows support in Kubernetes](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/) to understand the functionality and limitations of Kubernetes on Windows.
Some important things to note about **Windows containers and Kubernetes**:
- Windows containers cannot run on a Linux host.
- Kubernetes does not support *Windows only* clusters. The Kubernetes control plane components can only run on Linux.
- Kubernetes currently only supports process isolated containers, which means a container's base image OS version **must** match that of the host OS. See [Windows container version compatibility](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-20H2%2Cwindows-10-20H2) for more information.
- A Kubernetes Pod cannot contain both Windows and Linux containers.
Some important things to note about **Windows support in Tekton**:
- A Task can only have Windows or Linux containers, but not both.
- A Pipeline can contain both Windows and Linux Tasks.
- In a mixed-OS cluster, TaskRuns and PipelineRuns will need to be scheduled to the correct node using one of the methods [described below](#scheduling-tasks-on-windows-nodes).
- Tekton's controller components can only run on Linux nodes.
## Scheduling Tasks on Windows Nodes
In order to ensure that Tasks are scheduled to a node with the correct host OS, you will need to update the TaskRun or PipelineRun spec with rules to define this behaviour. This can be done in a couple of different ways, but the simplest option is to specify a node selector.
### Node Selectors
Node selectors are the simplest way to schedule pods to a Windows or Linux node. By default, Kubernetes nodes include a label `kubernetes.io/os` to identify the host OS. The Kubelet populates this with `runtime.GOOS` as defined by Go. Use `spec.podTemplate.nodeSelector` (or `spec.taskRunSpecs[i].podTemplate.nodeSelector` in a PipelineRun) to schedule Tasks to a node with a specific label and value.
For example:
``` yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: windows-taskrun
spec:
taskRef:
name: windows-task
podTemplate:
nodeSelector:
kubernetes.io/os: windows
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: linux-taskrun
spec:
taskRef:
name: linux-task
podTemplate:
nodeSelector:
kubernetes.io/os: linux
```
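The same labels can also be applied per Task in a `PipelineRun` through `spec.taskRunSpecs`, as mentioned above; the Pipeline and task names in this sketch are placeholders:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: mixed-os-pipelinerun
spec:
  pipelineRef:
    name: mixed-os-pipeline # placeholder Pipeline with one Windows and one Linux Task
  taskRunSpecs:
  - pipelineTaskName: windows-build # placeholder task name
    podTemplate:
      nodeSelector:
        kubernetes.io/os: windows
  - pipelineTaskName: linux-test # placeholder task name
    podTemplate:
      nodeSelector:
        kubernetes.io/os: linux
```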
### Node Affinity
Node affinity can be used as an alternative method of defining the OS requirement of a Task. These rules can be set under `spec.podTemplate.affinity.nodeAffinity` in a TaskRun definition. The example below produces the same result as the previous example which used node selectors.
For example:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: windows-taskrun
spec:
taskRef:
name: windows-task
podTemplate:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- windows
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: linux-taskrun
spec:
taskRef:
name: linux-task
podTemplate:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
```
operator In values linux |
<!--
---
linkTitle: "Artifacts"
weight: 201
---
-->
# Artifacts
- [Overview](#overview)
- [Artifact Provenance Data](#artifact-provenance-data)
- [Passing Artifacts between Steps](#passing-artifacts-between-steps)
- [Passing Artifacts between Tasks](#passing-artifacts-between-tasks)
## Overview
> :seedling: **`Artifacts` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-artifacts` feature flag must be set to `"true"` to read or write artifacts in a step.
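If you manage the feature flags yourself, a minimal sketch of enabling the flag is shown below; it assumes the standard `feature-flags` ConfigMap in the `tekton-pipelines` namespace and shows only the relevant key (merge it into your existing ConfigMap rather than replacing it):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines   # assumes the default installation namespace
data:
  enable-artifacts: "true"      # required to read or write artifacts in a step
```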
Artifacts provide a way to track the origin of data produced and consumed within your Tekton Tasks.
## Artifact Provenance Data
Artifacts fall into two categories:
- Inputs: Artifacts downloaded and used by the Step/Task.
- Outputs: Artifacts created and uploaded by the Step/Task.
Example Structure:
```json
{
"inputs":[
{
"name": "<input-category-name>",
"values": [
{
"uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
"digest": { "sha256": "b35caccc..." }
}
]
}
],
"outputs": [
{
"name": "<output-category-name>",
"values": [
{
"uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest": {
"sha256": "df85b9e3...",
"sha1": "95588b8f..."
}
}
]
}
]
}
```
The content is written by the `Step` to a file `$(step.artifacts.path)`:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-artifacts-
spec:
taskSpec:
description: |
A simple task that populates artifacts to TaskRun stepState
steps:
- name: artifacts-producer
image: bash:latest
script: |
cat > $(step.artifacts.path) << EOF
{
"inputs":[
{
"name":"source",
"values":[
{
"uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
"digest":{
"sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
}
}
]
}
],
"outputs":[
{
"name":"image",
"values":[
{
"uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
}
]
}
EOF
```
The content can also be written by the `Step` to the `Task`-level artifacts file `$(artifacts.path)`:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-artifacts-
spec:
taskSpec:
description: |
A simple task that populates artifacts to TaskRun stepState
steps:
- name: artifacts-producer
image: bash:latest
script: |
cat > $(artifacts.path) << EOF
{
"inputs":[
{
"name":"source",
"values":[
{
"uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
"digest":{
"sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
}
}
]
}
],
"outputs":[
{
"name":"image",
"values":[
{
"uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
}
]
}
EOF
```
It is recommended to use the [purl format](https://github.com/package-url/purl-spec/blob/master/PURL-SPECIFICATION.rst) for the artifact `uri`, as shown in the examples above.
### Output Artifacts in SLSA Provenance
Artifacts are classified as either:
- Build Outputs - packages, images, etc. that are being published by the build.
- Build Byproducts - logs, caches, etc. that are incidental artifacts that are produced by the build.
By default, Tekton Chains considers all output artifacts to be `byProducts` when generating the [SLSA provenance](https://slsa.dev/spec/v1.0/provenance). In order to treat an artifact as a [subject](https://slsa.dev/spec/v1.0/provenance#schema) of the build, you must set the boolean field `"buildOutput": true` on the output artifact.
e.g.
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-artifacts-
spec:
taskSpec:
description: |
A simple task that populates artifacts to TaskRun stepState
steps:
- name: artifacts-producer
image: bash:latest
script: |
cat > $(artifacts.path) << EOF
{
"outputs":[
{
"name":"image",
"buildOutput": true,
"values":[
{
"uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
}
]
}
EOF
```
This tells Tekton Chains how you want the artifact to be handled.
> [!TIP]
> When authoring a `StepAction` or a `Task`, you can parameterize this field so that users can indicate how an artifact should be treated depending on what they are uploading - this can be useful for actions that may produce either a build output or a byproduct depending on the context! A sketch of this approach follows.
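For illustration, here is a minimal, hypothetical sketch of such a parameterized `Task`; the param name `is-build-output` and the artifact values are assumptions for the example, not part of the Tekton API:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: parameterized-artifact-producer   # hypothetical name
spec:
  params:
    - name: is-build-output
      type: string
      default: "false"    # callers set "true" when the artifact is a build output
  steps:
    - name: produce
      image: bash:latest
      script: |
        cat > $(step.artifacts.path) << EOF
        {
          "outputs": [
            {
              "name": "image",
              "buildOutput": $(params.is-build-output),
              "values": [
                {
                  "uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
                  "digest": { "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48" }
                }
              ]
            }
          ]
        }
        EOF
```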
### Passing Artifacts between Steps
You can pass artifacts from one step to the next using:
- Specific Artifact: `$(steps.<step-name>.inputs.<artifact-category-name>)` or `$(steps.<step-name>.outputs.<artifact-category-name>)`
The example below shows how to access the previous step's artifacts from another step in the same task:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-artifacts-
spec:
taskSpec:
description: |
A simple task that populates artifacts to TaskRun stepState
steps:
- name: artifacts-producer
image: bash:latest
script: |
# the script is for creating the output artifacts
cat > $(step.artifacts.path) << EOF
{
"inputs":[
{
"name":"source",
"values":[
{
"uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
"digest":{
"sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
}
}
]
}
],
"outputs":[
{
"name":"image",
"values":[
{
"uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
}
]
}
EOF
- name: artifacts-consumer
image: bash:latest
script: |
echo $(steps.artifacts-producer.outputs.image)
```
The resolved value of `$(steps.<step-name>.outputs.<artifact-category-name>)` is the `values` of the named artifact. For this example,
`$(steps.artifacts-producer.outputs.image)` resolves to:
```json
[
{
"uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
```
Upon resolution and execution of the `TaskRun`, the `Status` will look something like:
```json
{
"artifacts": {
"inputs": [
{
"name": "source",
"values": [
{
"digest": {
"sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
},
"uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
}
]
}
],
"outputs": [
{
"name": "image",
"values": [
{
"digest": {
"sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
"sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
},
"uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library"
}
]
}
]
},
"steps": [
{
"container": "step-artifacts-producer",
"imageID": "docker.io/library/bash@sha256:5353512b79d2963e92a2b97d9cb52df72d32f94661aa825fcfa0aede73304743",
"inputs": [
{
"name": "source",
"values": [
{
"digest": {
"sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
},
"uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
}
]
}
],
"name": "artifacts-producer",
"outputs": [
{
"name": "image",
"values": [
{
"digest": {
"sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
"sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
},
"uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library"
}
]
}
],
"terminated": {
"containerID": "containerd://010f02d103d1db48531327a1fe09797c87c1d50b6a216892319b3af93e0f56e7",
"exitCode": 0,
"finishedAt": "2024-03-18T17:05:06Z",
"message": "...",
"reason": "Completed",
"startedAt": "2024-03-18T17:05:06Z"
},
"terminationReason": "Completed"
},
{
"container": "step-artifacts-consumer",
"imageID": "docker.io/library/bash@sha256:5353512b79d2963e92a2b97d9cb52df72d32f94661aa825fcfa0aede73304743",
"name": "artifacts-consumer",
"terminated": {
"containerID": "containerd://42428aa7e5a507eba924239f213d185dd4bc0882b6f217a79e6792f7fec3586e",
"exitCode": 0,
"finishedAt": "2024-03-18T17:05:06Z",
"reason": "Completed",
"startedAt": "2024-03-18T17:05:06Z"
},
"terminationReason": "Completed"
}
]
}
```
### Passing Artifacts between Tasks
You can pass artifacts from one task to another using:
- Specific Artifact: `$(tasks.<task-name>.inputs.<artifact-category-name>)` or `$(tasks.<task-name>.outputs.<artifact-category-name>)`
The example below shows how to access the previous task's artifacts from another task in a pipeline:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: pipelinerun-consume-tasks-artifacts
spec:
pipelineSpec:
tasks:
- name: produce-artifacts-task
taskSpec:
description: |
A simple task that produces artifacts
steps:
- name: produce-artifacts
image: bash:latest
script: |
#!/usr/bin/env bash
cat > $(artifacts.path) << EOF
{
"inputs":[
{
"name":"input-artifacts",
"values":[
{
"uri":"pkg:example.github.com/inputs",
"digest":{
"sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
}
}
]
}
],
"outputs":[
{
"name":"image",
"values":[
{
"uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
"digest":{
"sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
"sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
}
}
]
}
]
}
EOF
- name: consume-artifacts
runAfter:
- produce-artifacts-task
taskSpec:
steps:
- name: artifacts-consumer-python
image: python:latest
script: |
#!/usr/bin/env python3
import json
data = json.loads('$(tasks.produce-artifacts-task.outputs.image)')
if data[0]['uri'] != "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c":
exit(1)
```
As with step artifacts, the resolved value of `$(tasks.<task-name>.outputs.<artifact-category-name>)` is the `values` of the named artifact. For this example,
`$(tasks.produce-artifacts-task.outputs.image)` resolves to:
```json
[
  {
    "uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
    "digest":{
      "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
      "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
    }
  }
]
```
Upon resolution and execution of the `TaskRun`, the `Status` will look something like:
```json
{
"artifacts": {
"inputs": [
{
"name": "input-artifacts",
"values": [
{
"digest": {
"sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
},
"uri": "pkg:example.github.com/inputs"
}
]
}
],
"outputs": [
{
"name": "image",
"values": [
{
"digest": {
"sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
"sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
},
"uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
}
]
}
]
},
"completionTime": "2024-05-28T14:10:58Z",
"conditions": [
{
"lastTransitionTime": "2024-05-28T14:10:58Z",
"message": "All Steps have completed executing",
"reason": "Succeeded",
"status": "True",
"type": "Succeeded"
}
],
"podName": "pipelinerun-consume-tasks-a41ee44e4f964e95adfd3aea417d52f90-pod",
"provenance": {
"featureFlags": {
"AwaitSidecarReadiness": true,
"Coschedule": "workspaces",
"DisableAffinityAssistant": false,
"DisableCredsInit": false,
"DisableInlineSpec": "",
"EnableAPIFields": "beta",
"EnableArtifacts": true,
"EnableCELInWhenExpression": false,
"EnableConciseResolverSyntax": false,
"EnableKeepPodOnCancel": false,
"EnableParamEnum": false,
"EnableProvenanceInStatus": true,
"EnableStepActions": true,
"EnableTektonOCIBundles": false,
"EnforceNonfalsifiability": "none",
"MaxResultSize": 4096,
"RequireGitSSHSecretKnownHosts": false,
"ResultExtractionMethod": "termination-message",
"RunningInEnvWithInjectedSidecars": true,
"ScopeWhenExpressionsToTask": false,
"SendCloudEventsForRuns": false,
"SetSecurityContext": false,
"VerificationNoMatchPolicy": "ignore"
}
},
"startTime": "2024-05-28T14:10:48Z",
"steps": [
{
"container": "step-produce-artifacts",
"imageID": "docker.io/library/bash@sha256:23f90212fd89e4c292d7b41386ef1a6ac2b8a02bbc6947680bfe184cbc1a2899",
"name": "produce-artifacts",
"terminated": {
"containerID": "containerd://1291ce07b175a7897beee6ba62eaa1528427bacb1f76b31435eeba68828c445a",
"exitCode": 0,
"finishedAt": "2024-05-28T14:10:57Z",
"message": "...",
"reason": "Completed",
"startedAt": "2024-05-28T14:10:57Z"
},
"terminationReason": "Completed"
}
],
"taskSpec": {
"description": "A simple task that produces artifacts\n",
"steps": [
{
"computeResources": {},
"image": "bash:latest",
"name": "produce-artifacts",
"script": "#!/usr/bin/env bash\ncat > /tekton/artifacts/provenance.json << EOF\n{\n \"inputs\":[\n {\n \"name\":\"input-artifacts\",\n \"values\":[\n {\n \"uri\":\"pkg:example.github.com/inputs\",\n \"digest\":{\n \"sha256\":\"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0\"\n }\n }\n ]\n }\n ],\n \"outputs\":[\n {\n \"name\":\"image\",\n \"values\":[\n {\n \"uri\":\"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c\",\n \"digest\":{\n \"sha256\":\"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48\",\n \"sha1\":\"95588b8f34c31eb7d62c92aaa4e6506639b06ef2\"\n }\n }\n ]\n }\n ]\n}\nEOF\n"
}
]
}
}
```
<!--
---
title: "Install Tekton Pipelines"
linkTitle: "Install Tekton Pipelines"
weight: 101
description: >
Install Tekton Pipelines on your cluster
---
-->
To view the full contents of this page, go to the
<a href="http://tekton.dev/docs/installation/pipelines/">Tekton website</a>.
This guide explains how to install Tekton Pipelines.
## Prerequisites
- A [Kubernetes cluster][k8s] running version 1.28 or later.
- [Kubectl][].
- Grant `cluster-admin` privileges to the current user. See the [Kubernetes
role-based access control (RBAC) docs][rbac] for more information.
- (Optional) Install a [Metrics Server][metrics] if you need support for high
availability use cases.
See the [local installation guide][local-install] if you want to test Tekton on
your computer.
## Installation
To install Tekton Pipelines on a Kubernetes cluster:
1. Run one of the following commands depending on which version of Tekton
Pipelines you want to install:
- **Latest official release:**
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```
Note: These instructions are intended as a quick-start installation guide for Tekton Pipelines and are not meant for production use. Please refer to the [operator](https://github.com/tektoncd/operator) to install, upgrade, and manage Tekton projects.
- **Nightly release:**
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases-nightly/pipeline/latest/release.yaml
```
- **Specific release:**
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/previous/<version_number>/release.yaml
```
Replace `<version_number>` with the numbered version you want to install.
For example, `v0.26.0`.
- **Untagged release:**
If your container runtime does not support `image-reference:tag@digest`:
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.notags.yaml
```
Multi-tenant installation is only partially supported today; read the [guide](./developers/multi-tenant-support.md)
for reference.
1. Monitor the installation:
```bash
kubectl get pods --namespace tekton-pipelines --watch
```
When all components show `1/1` under the `READY` column, the installation is
complete. Hit *Ctrl + C* to stop monitoring.
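As an alternative to watching, you can run a one-shot check; the deployment names below assume a default installation from the release manifest:
```bash
# Verify that the core Tekton Pipelines deployments are available
kubectl get deployments tekton-pipelines-controller tekton-pipelines-webhook \
  --namespace tekton-pipelines
```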
Congratulations! You have successfully installed Tekton Pipelines on your
Kubernetes cluster.
## Additional configuration options
You can enable additional alpha and beta features, customize execution
parameters, configure high availability, and much more. See the
[additional configuration options](./additional-configs.md) for more information.
## Next steps
To get started with Tekton check the [Introductory tutorials][quickstarts],
the [how-to guides][howtos], and the [examples folder][examples].
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License][cca4], and code samples are licensed
under the [Apache 2.0 License][apache2l].
[quickstarts]: https://tekton.dev/docs/getting-started/
[howtos]: https://tekton.dev/docs/how-to-guides/
[examples]: https://github.com/tektoncd/pipeline/tree/main/examples/
[cca4]: https://creativecommons.org/licenses/by/4.0/
[apache2l]: https://www.apache.org/licenses/LICENSE-2.0
[k8s]: https://www.downloadkubernetes.com/
[kubectl]: https://www.downloadkubernetes.com/
[rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[metrics]: https://github.com/kubernetes-sigs/metrics-server
[local-install]: https://tekton.dev/docs/installation/local-installation/
<!--
---
linkTitle: "Compute Resources Limits"
weight: 408
---
-->
# Compute Resources in Tekton
## Background: Resource Requirements in Kubernetes
Kubernetes allows users to specify CPU, memory, and ephemeral storage constraints
for [containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
Resource requests determine the resources reserved for a pod when it's scheduled,
and affect likelihood of pod eviction. Resource limits constrain the maximum amount of
a resource a container can use. A container that exceeds its memory limits will be killed,
and a container that exceeds its CPU limits will be throttled.
A pod's [effective resource requests and limits](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#resources)
are the higher of:
- the sum of all app containers request/limit for a resource
- the effective init container request/limit for a resource
This formula exists because Kubernetes runs init containers sequentially and app containers
in parallel. (There is no distinction made between app containers and sidecar containers
in Kubernetes; a sidecar is used in the following example to illustrate this.)
For example, consider a pod with the following containers:
| Container | CPU request | CPU limit |
| ------------------- | ----------- | --------- |
| init container 1 | 1 | 2 |
| init container 2 | 2 | 3 |
| app container 1 | 1 | 2 |
| app container 2 | 2 | 3 |
| sidecar container 1 | 3 | no limit |
The sum of all app container CPU requests is 6 (including the sidecar container), which is
greater than the maximum init container CPU request (2). Therefore, the pod's effective CPU
request will be 6.
Since the sidecar container has no CPU limit, this is treated as the highest CPU limit.
Therefore, the pod will have no effective CPU limit.
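For reference, a plain Kubernetes Pod matching the table above (names and images are placeholders) looks like the sketch below; its effective CPU request is 6 and it has no effective CPU limit:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: effective-resources-example
spec:
  initContainers:
    - name: init-1
      image: busybox    # placeholder image
      resources:
        requests: { cpu: "1" }
        limits: { cpu: "2" }
    - name: init-2
      image: busybox
      resources:
        requests: { cpu: "2" }
        limits: { cpu: "3" }
  containers:
    - name: app-1
      image: busybox
      resources:
        requests: { cpu: "1" }
        limits: { cpu: "2" }
    - name: app-2
      image: busybox
      resources:
        requests: { cpu: "2" }
        limits: { cpu: "3" }
    - name: sidecar-1
      image: busybox
      resources:
        requests: { cpu: "3" }   # no limit set, so the pod has no effective CPU limit
```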
## Task-level Compute Resources Configuration
**([beta](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#beta-features))**
Tekton allows users to specify resource requirements of [`Steps`](./tasks.md#defining-steps),
which run sequentially. However, the pod's effective resource requirements are still the
sum of its containers' resource requirements. This means that when specifying resource
requirements for `Step` containers, they must be treated as if they are running in parallel.
Tekton adjusts `Step` resource requirements to comply with [LimitRanges](#limitrange-support).
[ResourceQuotas](#resourcequota-support) are not currently supported.
Instead of specifying resource requirements on each `Step`, users can choose to specify resource requirements at the Task-level. If users specify a Task-level resource request, it will ensure that the kubelet reserves only that amount of resources to execute the `Task`'s `Steps`.
If users specify a Task-level resource limit, no `Step` may use more than that amount of resources.
Each of these details is explained in more depth below.
Some points to note:
- Task-level resource requests and limits do not apply to sidecars, which can be configured separately.
- If only limits are configured at the task level, they are also applied as the task-level requests.
- Resource requirements configured in a `Step` or `StepTemplate` of the referenced `Task` will be overridden by the task-level requirements.
- `TaskRun` configured with both `StepOverrides` and task-level requirements will be rejected.
### Configure Task-level Compute Resources
Task-level resource requirements can be configured in `TaskRun.ComputeResources`, or `PipelineRun.TaskRunSpecs.ComputeResources`.
e.g.
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: foo
spec:
computeResources:
requests:
cpu: 1
limits:
cpu: 2
```
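The same requirements can also be set per pipeline task in a `PipelineRun` via `taskRunSpecs`; the sketch below assumes a pipeline named `foo-pipeline` with a pipeline task named `foo`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: foo-run
spec:
  pipelineRef:
    name: foo-pipeline          # assumed pipeline name
  taskRunSpecs:
    - pipelineTaskName: foo     # assumed pipeline task name
      computeResources:
        requests:
          cpu: 1
        limits:
          cpu: 2
```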
The following `TaskRun` and `PipelineRun` will be rejected, because they configure both `stepOverrides` and task-level compute resource requirements:
```yaml
kind: TaskRun
spec:
stepOverrides:
- name: foo
resources:
requests:
cpu: 1
computeResources:
requests:
cpu: 2
```
```yaml
kind: PipelineRun
spec:
taskRunSpecs:
- pipelineTaskName: foo
stepOverrides:
- name: foo
resources:
requests:
cpu: 1
computeResources:
requests:
cpu: 2
```
### Configure Resource Requirements with Sidecar
Users can specify compute resources separately for a sidecar while configuring task-level resource requirements on TaskRun.
e.g.
```yaml
kind: TaskRun
spec:
sidecarOverrides:
- name: sidecar
resources:
requests:
cpu: 750m
limits:
cpu: 1
computeResources:
requests:
cpu: 2
```
## LimitRange Support
Kubernetes allows users to configure [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/),
which constrain compute resources of pods, containers, or PVCs running in the same namespace.
LimitRanges can:
- Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
Tekton applies the resource requirements specified by users directly to the containers
in a `Task's` pod, unless there is a LimitRange present in the namespace.
Tekton supports LimitRange minimum, maximum, and default resource requirements for containers,
but does not support LimitRange ratios between requests and limits ([#4230](https://github.com/tektoncd/pipeline/issues/4230)).
LimitRange types other than "Container" are not considered for purposes of resource requirements.
Tekton doesn't allow users to configure init containers for a `Task`, but any `default` and `defaultRequest` from a LimitRange
will be applied to the init containers that Tekton injects into a `TaskRun`'s pod.
### Requests
If a Step container does not have requests defined, Tekton will divide a LimitRange's `defaultRequest` by the number of Step containers and apply the result to each Step.
This results in a TaskRun with overall requests equal to the LimitRange `defaultRequest`.
If this value is less than the LimitRange minimum, the LimitRange minimum will be used instead.
The LimitRange `defaultRequest` is applied as-is to init containers or Sidecar containers that don't specify requests.
Containers that do specify requests will not be modified. If these requests are lower than LimitRange minimums, Kubernetes will reject the resulting TaskRun's pod.
### Limits
Tekton does not adjust container limits, regardless of whether a container is a Step, Sidecar, or init container.
If a container does not have limits defined, Kubernetes will apply the LimitRange `default` to the container's limits.
If a container does define limits, and they are less than the LimitRange `default`, Kubernetes will reject the resulting TaskRun's pod.
### Examples
Consider the following LimitRange:
```
apiVersion: v1
kind: LimitRange
metadata:
name: limitrange-example
spec:
limits:
- default: # The default limits
cpu: 2
defaultRequest: # The default requests
cpu: 1
max: # The maximum limits
cpu: 3
min: # The minimum requests
cpu: 300m
type: Container
```
A `Task` with 2 `Steps` and no resources specified would result in a pod with the following containers:
| Container | CPU request | CPU limit |
| ------------ | ----------- | --------- |
| container 1 | 500m | 2 |
| container 2 | 500m | 2 |
Here, the default CPU request was divided among the step containers, and this value was used since it was greater
than the minimum request specified by the LimitRange.
The CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.
A `Task` with 2 `Steps` and 1 `Sidecar` and no resources specified would result in a pod with the following containers:
| Container | CPU request | CPU limit |
| ------------ | ----------- | --------- |
| container 1 | 500m | 2 |
| container 2 | 500m | 2 |
| container 3 | 1 | 2 |
For the first two containers, the default CPU request was divided among the step containers, and this value was used since it was greater
than the minimum request specified by the LimitRange. The third container is a sidecar; since it is not a step container, it gets the full
default CPU request of 1. As before, the CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.
Now, consider a `Task` with the following `Step`s:
| Step | CPU request | CPU limit |
| ------ | ----------- | --------- |
| step 1 | 200m | 2 |
| step 2 | 1 | 4 |
The resulting pod would have the following containers:
| Container | CPU request | CPU limit |
| ------------ | ----------- | --------- |
| container 1 | 300m | 2 |
| container 2 | 1 | 3 |
Here, the first `Step's` request was less than the LimitRange minimum, so the output request is the minimum (300m).
The second `Step's` request is unchanged. The first `Step's` limit is less than the maximum, so it is unchanged,
while the second `Step's` limit is greater than the maximum, so the maximum (3) is used.
### Support for multiple LimitRanges
Tekton supports running `TaskRuns` in namespaces with multiple LimitRanges.
For a given resource, the minimum used will be the largest of any of the LimitRanges' minimum values,
and the maximum used will be the smallest of any of the LimitRanges' maximum values.
The default limit and default request used will be the smallest of any of the default values defined.
If the resulting default value is less than the resulting minimum value, the minimum value will be used as the default.
It's possible for multiple LimitRanges to be defined which are not compatible with each other, preventing pods from being scheduled.
#### Example
Consider a namespace with the following LimitRanges defined:
```
apiVersion: v1
kind: LimitRange
metadata:
name: limitrange-1
spec:
limits:
- default: # The default limits
cpu: 2
defaultRequest: # The default requests
cpu: 750m
max: # The maximum limits
cpu: 3
min: # The minimum requests
cpu: 500m
type: Container
```
```
apiVersion: v1
kind: LimitRange
metadata:
name: limitrange-2
spec:
limits:
- default: # The default limits
cpu: 1.5
defaultRequest: # The default requests
cpu: 1
max: # The maximum limits
cpu: 2.5
min: # The minimum requests
cpu: 300m
type: Container
```
A namespace with limitrange-1 and limitrange-2 would be treated as if it contained only the following LimitRange:
```
apiVersion: v1
kind: LimitRange
metadata:
name: aggregate-limitrange
spec:
limits:
- default: # The default limits
cpu: 1.5
defaultRequest: # The default requests
cpu: 750m
max: # The maximum limits
cpu: 2.5
min: # The minimum requests
cpu: 300m
type: Container
```
Here, the minimum of the "max" values is the output "max" value, and likewise for "default" and "defaultRequest".
The maximum of the "min" values is the output "min" value.
## ResourceQuota Support
Kubernetes allows users to define [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/),
which restrict the maximum resource requests and limits of all pods running in a namespace.
To deploy Tekton TaskRuns or PipelineRuns in namespaces with ResourceQuotas, compute resource requirements
must be set for all containers in a `TaskRun`'s pod, including the init containers injected by Tekton.
`Step` and `Sidecar` resource requirements can be configured directly through the API, as described in
[Task-level Compute Resources Configuration](#task-level-compute-resources-configuration). To configure resource requirements for Tekton's init containers,
deploy a LimitRange in the same namespace. The LimitRange's `default` and `defaultRequest` will be applied to the init containers,
and its `defaultRequest` will be divided among the `Steps`, as described in [LimitRange Support](#limitrange-support).
[#2933](https://github.com/tektoncd/pipeline/issues/2933) tracks support for running `TaskRuns` in a namespace with a ResourceQuota
without having to use LimitRanges.
ResourceQuotas consider the effective resource requests and limits of a pod, which Kubernetes determines by summing the resource requirements
of its containers (under the assumption that they run in parallel). When using LimitRanges to set compute resources for `TaskRun` pods,
LimitRange default requests are divided among `Step` containers, meaning that the pod's effective requests reflect the actual requests
that the pod needs. However, LimitRange default limits are not divided among containers, meaning the pod's effective limits are much larger
than the limits applied during execution of any given `Step`. For example, if a ResourceQuota restricts a namespace to a limit of 10 CPU,
and a user creates a TaskRun with 20 steps with a limit of 1 CPU each, the pod would not be schedulable even though it is
limited to 1 CPU at each point in time. Therefore, it is recommended to use ResourceQuotas to restrict only requests of `TaskRun` pods,
not limits (tracked in [#4976](https://github.com/tektoncd/pipeline/issues/4976)).
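As a sketch of that recommendation (the name and values are hypothetical), a ResourceQuota that constrains only requests could look like:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: taskrun-requests-only   # hypothetical name
spec:
  hard:
    requests.cpu: "10"          # caps total requested CPU in the namespace
    requests.memory: 20Gi       # caps total requested memory
    # intentionally no limits.cpu / limits.memory, per the recommendation above
```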
## Quality of Service (QoS)
By default, pods that run Tekton TaskRuns will have a [Quality of Service (QoS)](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/)
of "BestEffort". If compute resource requirements are set for any Step or Sidecar, the pod will have a "Burstable" QoS.
To get a "Guaranteed" QoS, a TaskRun pod must have compute resources set for all of its containers, including init containers which are
injected by Tekton, and all containers must have their requests equal to their limits.
This can be achieved by using LimitRanges to apply default requests and limits.
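As a minimal sketch, assuming a single-`Step` `Task` with no resources set (with one Step, the divided `defaultRequest` equals the full value, so requests end up equal to limits for the Step and the injected init containers), such a LimitRange could look like:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: guaranteed-qos-defaults   # hypothetical name
spec:
  limits:
    - type: Container
      default:            # default limits
        cpu: "1"
        memory: 1Gi
      defaultRequest:     # default requests, equal to the limits above
        cpu: "1"
        memory: 1Gi
```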
## References
- [LimitRange in k8s docs](https://kubernetes.io/docs/concepts/policy/limit-range/)
- [Configure default memory requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
- [Configure default CPU requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
- [Configure Minimum and Maximum CPU constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
- [Configure Minimum and Maximum Memory constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
- [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
- [Kubernetes best practices: Resource requests and limits](https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits)
- [Restrict resource consumption with limit ranges](https://docs.openshift.com/container-platform/4.8/nodes/clusters/nodes-cluster-limit-ranges.html)
<!--
---
linkTitle: "Pod templates"
weight: 409
---
-->
# Pod templates
A Pod template defines a portion of a [`PodSpec`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#pod-v1-core)
configuration that Tekton can use as "boilerplate" for a Pod that runs your `Tasks` and `Pipelines`.
You can specify a Pod template for `TaskRuns` and `PipelineRuns`. In the template, you can specify custom values for fields governing
the execution of individual `Tasks` or for all `Tasks` executed by a given `PipelineRun`.
You also have the option to define a global Pod template [in your Tekton config](./additional-configs.md#customizing-basic-execution-parameters) using the key `default-pod-template`.
However, this global template is merged with any templates you specify in your `TaskRuns` and `PipelineRuns`.<br>
Except for the `env` and `volumes` fields, any field that exists in both the global template and the `TaskRun`'s or
`PipelineRun`'s template will be taken from the `TaskRun` or `PipelineRun`.
The `env` and `volumes` fields are merged by the `name` value of the array elements: if an item with the same `name` appears in both, the item from the `TaskRun` or `PipelineRun` is used.
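For illustration, here is a minimal sketch of how the merge plays out, assuming the global template is set through the `default-pod-template` key of the `config-defaults` ConfigMap (the variable names and values are illustrative):

```yaml
# Assumed global template (config-defaults ConfigMap):
#   default-pod-template: |
#     env:
#       - name: HTTP_PROXY
#         value: "http://proxy.example.com:3128"
#       - name: TZ
#         value: "UTC"
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: pod-template-merge-demo
spec:
  taskRef:
    name: my-task            # hypothetical Task
  podTemplate:
    env:
      - name: TZ             # same name as in the global template
        value: "Europe/Paris"
# Effective Pod template: HTTP_PROXY is kept from the global template,
# while TZ takes the TaskRun's value ("Europe/Paris").
```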
See the following for examples of specifying a Pod template:
- [Specifying a Pod template for a `TaskRun`](./taskruns.md#specifying-a-pod-template)
- [Specifying a Pod template for a `PipelineRun`](./pipelineruns.md#specifying-a-pod-template)
## Supported fields
Pod templates support fields listed in the table below.
<table>
<thead>
<th>Field</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>env</code></td>
<td>Environment variables defined in the Pod template at <code>TaskRun</code> and <code>PipelineRun</code> level take precedence over the ones defined in <code>steps</code> and <code>stepTemplate</code></td>
</tr>
<tr>
<td><code>nodeSelector</code></td>
<td>Specifies a set of node labels that <a href=https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>must be present on a node</a> for the Pod to be scheduled onto it.</td>
</tr>
<tr>
<td><code>tolerations</code></td>
<td>Allows (but does not require) the Pods to schedule onto nodes with matching taints.</td>
</tr>
<tr>
<td><code>affinity</code></td>
<td>Allows constraining the set of nodes for which the Pod can be scheduled based on the labels present on the node.</td>
</tr>
<tr>
<td><code>securityContext</code></td>
<td>Specifies Pod-level security attributes and common container settings such as <code>runAsUser</code> and <code>selinux</code>.</td>
</tr>
<tr>
<td><code>volumes</code></td>
<td>Specifies a list of volumes that containers within the Pod can mount. This allows you to specify a volume type for each <code>volumeMount</code> in a <code>Task</code>.</td>
</tr>
<tr>
<td><code>runtimeClassName</code></td>
<td>Specifies the <a href=https://kubernetes.io/docs/concepts/containers/runtime-class/>runtime class</a> for the Pod.</td>
</tr>
<tr>
<td><code>automountServiceAccountToken</code></td>
<td><b>Default:</b> <code>true</code>. Determines whether Tekton automatically provides the token for the service account used by the Pod inside containers at a predefined path.</td>
</tr>
<tr>
<td><code>dnsPolicy</code></td>
<td><b>Default:</b> <code>ClusterFirst</code>. Specifies the <a href=https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy>DNS policy</a>
for the Pod. Legal values are <code>ClusterFirst</code>, <code>Default</code>, and <code>None</code>. Does <b>not</b> support <code>ClusterFirstWithHostNet</code>
because Tekton Pods cannot run with host networking.</td>
</tr>
<tr>
<td><code>dnsConfig</code></td>
<td>Specifies <a href=https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config>additional DNS configuration for the Pod</a>, such as name servers and search domains.</td>
</tr>
<tr>
<td><code>enableServiceLinks</code></td>
<td><b>Default:</b> <code>true</code>. Determines whether services in the Pod's namespace are exposed as environment variables to the Pod, similarly to Docker service links.</td>
</tr>
<tr>
<td><code>priorityClassName</code></td>
<td>Specifies the <a href=https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/>priority class</a> for the Pod. Allows you to selectively enable preemption on lower-priority workloads.</td>
</tr>
<tr>
<td><code>schedulerName</code></td>
<td>Specifies the <a href=https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/>scheduler</a> to use when dispatching the Pod. You can specify different schedulers for different types of
workloads, such as <code>volcano.sh</code> for machine learning workloads.</td>
</tr>
<tr>
<td><code>imagePullSecrets</code></td>
<td>Specifies the <a href=https://kubernetes.io/docs/concepts/configuration/secret/>secret</a> to use when <a href=https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/>
pulling a container image</a>.</td>
</tr>
<tr>
<td><code>hostNetwork</code></td>
<td><b>Default:</b> <code>false</code>. Determines whether to use the host network namespace.</td>
</tr>
<tr>
<td><code>hostAliases</code></td>
<td>Adds entries to the Pod's <code>/etc/hosts</code> file to provide Pod-level overrides of hostnames. For further info see <a href=https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/>the Kubernetes docs for this field</a>.</td>
</tr>
<tr>
<td><code>topologySpreadConstraints</code></td>
<td>Specifies how Pods are spread across your cluster among topology domains.</td>
</tr>
</tbody>
</table>
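The sketch below combines several of these fields in a single `TaskRun` Pod template; the node label, taint key, and secret name are assumptions for illustration:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: pod-template-fields-demo
spec:
  taskRef:
    name: my-task                      # hypothetical Task
  podTemplate:
    nodeSelector:
      disktype: ssd                    # assumed node label
    tolerations:
      - key: "ci-only"                 # assumed taint on the target nodes
        operator: "Exists"
        effect: "NoSchedule"
    securityContext:
      runAsNonRoot: true
    imagePullSecrets:
      - name: my-registry-credentials  # assumed Secret with registry credentials
    enableServiceLinks: false
```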
## Use `imagePullSecrets` to look up the entrypoint
If no command is configured in the `task` and `imagePullSecrets` is configured in the `podTemplate`, the Tekton controller looks up the entrypoint of the image using those `imagePullSecrets`. The Tekton controller's service account is given access to secrets by default; see [this](https://github.com/tektoncd/pipeline/blob/main/config/200-clusterrole.yaml) for reference. If the Tekton controller's service account has not been granted access to secrets in a different namespace, you need to grant the access via a `RoleBinding`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: creds-getter
namespace: my-ns
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["creds"]
verbs: ["get"]
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: creds-getter-binding
namespace: my-ns
subjects:
- kind: ServiceAccount
name: tekton-pipelines-controller
namespace: tekton-pipelines
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: creds-getter
apiGroup: rbac.authorization.k8s.io
```
## Affinity Assistant Pod templates
The Pod templates specified in `TaskRuns` and `PipelineRuns` also apply to
the [affinity assistant Pods](./workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants)
that are created when using Workspaces, but only for selected fields.
The supported fields for affinity assistant pods are: `tolerations`, `nodeSelector`, `securityContext`,
`priorityClassName` and `imagePullSecrets` (see the table above for more details about the fields).
Similarly to the global Pod template, you have the option to define a global affinity
assistant Pod template [in your Tekton config](./additional-configs.md#customizing-basic-execution-parameters)
using the key `default-affinity-assistant-pod-template`. The merge strategy is
the same as the one described above for the supported fields.
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Bundles Resolver"
weight: 308
---
-->
# Bundles Resolver
## Resolver Type
This Resolver responds to type `bundles`.
## Parameters
| Param Name | Description | Example Value |
|------------------|-------------------------------------------------------------------------------|------------------------------------------------------------|
| `secret` | The name of the secret to use when constructing registry credentials | `default` |
| `bundle` | The bundle url pointing at the image to fetch | `gcr.io/tekton-releases/catalog/upstream/golang-build:0.1` |
| `name` | The name of the resource to pull out of the bundle | `golang-build` |
| `kind` | The resource kind to pull out of the bundle | `task` |
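The `secret` parameter is only needed when the bundle lives in a registry that requires authentication. A sketch, assuming a `docker-registry`-type secret named `registry-credentials` exists in the `TaskRun`'s namespace and that the bundle reference below is private:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: private-bundle-reference
spec:
  taskRef:
    resolver: bundles
    params:
      - name: secret
        value: registry-credentials                        # assumed Secret name
      - name: bundle
        value: registry.example.com/tasks/hello-world:0.1  # assumed private bundle
      - name: name
        value: hello-world
      - name: kind
        value: task
```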
## Requirements
- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-bundles-resolver` feature flag in the `resolvers-feature-flags` ConfigMap
in the `tekton-pipelines-resolvers` namespace set to `true`.
- [Beta features](./additional-configs.md#beta-features) enabled.
## Configuration
This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/bundleresolver-config.yaml`](../config/resolvers/bundleresolver-config.yaml)
for the name, namespace and defaults that the resolver ships with.
### Options
| Option Name | Description | Example Values |
|---------------------------|--------------------------------------------------------------|-----------------------|
| `default-kind` | The default layer kind in the bundle image. | `task`, `pipeline` |
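For example, a sketch of a `ConfigMap` that changes the default kind to `pipeline`; the name and namespace below follow the shipped defaults referenced above and may differ in your installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: bundleresolver-config
  namespace: tekton-pipelines-resolvers
data:
  default-kind: "pipeline"   # used when a reference omits the `kind` parameter
```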
## Usage
### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/ptasci67/example-oci@sha256:053a6cb9f3711d4527dd0d37ac610e8727ec0288a898d5dfbd79b25bcaa29828
- name: name
value: hello-world
- name: kind
value: task
```
### Pipeline Resolution
Unfortunately, the Tekton Catalog does not currently publish pipelines. Here's an example
`PipelineRun` that talks to a private registry, but it won't work unless you tweak the
`bundle` field to point to a registry that contains a pipeline:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: bundle-demo
spec:
pipelineRef:
resolver: bundles
params:
- name: bundle
value: 10.96.190.208:5000/simple/pipeline:latest
- name: name
value: hello-pipeline
- name: kind
value: pipeline
params:
- name: username
value: "tekton pipelines"
```
## `ResolutionRequest` Status
The `ResolutionRequest.Status.RefSource` field captures the source the remote resource came from. It includes the 3 subfields: `uri`, `digest` and `entrypoint`.
- `uri`: The image repository URI.
- `digest`: A map from the digest algorithm to the hex-encoded digest of the image.
- `entrypoint`: The resource name in the OCI bundle image.
Example:
- TaskRun Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: gcr.io/tekton-releases/catalog/upstream/git-clone:0.7
- name: name
value: git-clone
- name: kind
value: task
params:
- name: url
value: https://github.com/octocat/Hello-World
workspaces:
- name: output
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
```
- `ResolutionRequest`
```yaml
apiVersion: resolution.tekton.dev/v1beta1
kind: ResolutionRequest
metadata:
...
labels:
resolution.tekton.dev/type: bundles
name: bundles-21ad80ec13f3e8b73fed5880a64d4611
...
spec:
params:
- name: bundle
value: gcr.io/tekton-releases/catalog/upstream/git-clone:0.7
- name: name
value: git-clone
- name: kind
value: task
status:
annotations: ...
...
data: xxx
observedGeneration: 1
refSource:
digest:
sha256: f51ca50f1c065acba8290ef14adec8461915ecc5f70a8eb26190c6e8e0ededaf
entryPoint: git-clone
uri: gcr.io/tekton-releases/catalog/upstream/git-clone
```
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Hub Resolver"
weight: 311
---
-->
# Hub Resolver
Use resolver type `hub`.
## Parameters
| Param Name | Description | Example Value |
|------------------|-------------------------------------------------------------------------------|------------------------------------------------------------|
| `catalog` | The catalog from where to pull the resource (Optional) | Default: `tekton-catalog-tasks` (for `task` kind); `tekton-catalog-pipelines` (for `pipeline` kind) |
| `type` | The type of Hub from where to pull the resource (Optional). Either `artifact` or `tekton` | Default: `artifact` |
| `kind` | Either `task` or `pipeline` (Optional) | Default: `task` |
| `name` | The name of the task or pipeline to fetch from the hub | `golang-build` |
| `version`        | Version or a constraint (see [below](#version-constraint)) of a task or a pipeline to pull in. Wrap the number in quotes! | `"0.5.0"`, `">= 0.5.0"` |
The catalogs in the Artifact Hub follow full semVer (i.e. `<major-version>.<minor-version>.0`), while the catalogs in the Tekton Hub follow a simplified semVer (i.e. `<major-version>.<minor-version>`). Both full and simplified semantic versions are accepted by the `version` parameter; the Hub Resolver maps the version to the format expected by the target Hub `type`.
## Requirements
- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-hub-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the
`tekton-pipelines-resolvers` namespace set to `true`.
- [Beta features](./additional-configs.md#beta-features) enabled.
## Configuration
This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/hubresolver-config.yaml`](../config/resolvers/hubresolver-config.yaml)
for the name, namespace and defaults that the resolver ships with.
### Options
| Option Name | Description | Example Values |
|-----------------------------|------------------------------------------------------|------------------------|
| `default-tekton-hub-catalog`| The default tekton hub catalog from where to pull the resource.| `Tekton` |
| `default-artifact-hub-task-catalog`| The default artifact hub catalog from where to pull the resource for task kind.| `tekton-catalog-tasks` |
| `default-artifact-hub-pipeline-catalog`| The default artifact hub catalog from where to pull the resource for pipeline kind. | `tekton-catalog-pipelines` |
| `default-kind` | The default object kind for references. | `task`, `pipeline` |
| `default-type` | The default hub from where to pull the resource. | `artifact`, `tekton` |
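For example, a sketch of a `ConfigMap` that makes the Tekton Hub the default source; the name and namespace follow the shipped defaults referenced above and may differ in your installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hubresolver-config
  namespace: tekton-pipelines-resolvers
data:
  default-type: "tekton"                 # pull from the Tekton Hub unless `type` is given
  default-tekton-hub-catalog: "Tekton"
  default-kind: "task"
```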
### Configuring the Hub API endpoint
The Hub Resolver supports resolving resources from the [Artifact Hub](https://artifacthub.io/) and the [Tekton Hub](https://hub.tekton.dev/),
which can be selected by setting the `type` field of the resolver.
*(Please note that the [Tekton Hub](https://hub.tekton.dev/) will be deprecated after [migration to the Artifact Hub](https://github.com/tektoncd/hub/issues/667) is done.)*
When setting the `type` field to `artifact`, the resolver will hit the public hub api at https://artifacthub.io/ by default
but you can configure your own (for example to use a private hub
instance) by setting the `ARTIFACT_HUB_API` environment variable in
[`../config/resolvers/resolvers-deployment.yaml`](../config/resolvers/resolvers-deployment.yaml). Example:
```yaml
env:
  - name: ARTIFACT_HUB_API
    value: "https://artifacthub.io/"
```
When setting the `type` field to `tekton`, the resolver will hit the public
tekton catalog api at https://api.hub.tekton.dev by default but you can configure
your own instance of the Tekton Hub by setting the `TEKTON_HUB_API` environment
variable in
[`../config/resolvers/resolvers-deployment.yaml`](../config/resolvers/resolvers-deployment.yaml). Example:
```yaml
env:
  - name: TEKTON_HUB_API
    value: "https://api.private.hub.instance.dev"
```
The Tekton Hub deployment guide can be found [here](https://github.com/tektoncd/hub/blob/main/docs/DEPLOYMENT.md).
## Usage
### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: hub
params:
- name: catalog # optional
value: tekton-catalog-tasks
- name: type # optional
value: artifact
- name: kind
value: task
- name: name
value: git-clone
- name: version
value: "0.6"
```
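For comparison, a sketch of the same reference resolved from the Tekton Hub instead of the Artifact Hub (this assumes the default Tekton Hub catalog and uses the simplified two-part version form):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: tekton-hub-task-reference
spec:
  taskRef:
    resolver: hub
    params:
      - name: type            # pull from the Tekton Hub
        value: tekton
      - name: kind
        value: task
      - name: name
        value: git-clone
      - name: version
        value: "0.6"
```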
### Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: hub-demo
spec:
pipelineRef:
resolver: hub
params:
- name: catalog # optional
value: tekton-catalog-pipelines
- name: type # optional
value: artifact
- name: kind
value: pipeline
- name: name
value: buildpacks
- name: version
value: "0.1"
# Note: the buildpacks pipeline requires parameters.
# Resolution of the pipeline will succeed but the PipelineRun
# overall will not succeed without those parameters.
```
### Version constraint
Instead of a version you can specify a constraint to choose from. The constraint is a string as documented in the [go-version](https://github.com/hashicorp/go-version) library.
Some examples:
```yaml
params:
- name: name
value: git-clone
- name: version
value: ">=0.7.0"
```
Will select the latest `git-clone` task whose version is greater than or equal to `0.7.0`.
```yaml
params:
- name: name
value: git-clone
- name: version
value: ">=0.7.0, < 2.0.0"
```
Will select the **latest** `git-clone` task whose version is greater than or equal to `0.7.0` and
less than `2.0.0`; for example, if the latest matching task is version `0.9.0`, it will
be selected.
Other operators for selection are available for comparisons, see the
[go-version](https://github.com/hashicorp/go-version/blob/644291d14038339745c2d883a1a114488e30b702/constraint.go#L40C2-L48)
source code.
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Tasks"
weight: 201
---
-->
# Tasks
- [Overview](#overview)
- [Configuring a `Task`](#configuring-a-task)
- [`Task` vs. `ClusterTask`](#task-vs-clustertask)
- [Defining `Steps`](#defining-steps)
- [Reserved directories](#reserved-directories)
- [Running scripts within `Steps`](#running-scripts-within-steps)
- [Windows scripts](#windows-scripts)
- [Specifying a timeout](#specifying-a-timeout)
- [Specifying `onError` for a `step`](#specifying-onerror-for-a-step)
- [Accessing Step's `exitCode` in subsequent `Steps`](#accessing-steps-exitcode-in-subsequent-steps)
- [Produce a task result with `onError`](#produce-a-task-result-with-onerror)
- [Breakpoint on failure with `onError`](#breakpoint-on-failure-with-onerror)
- [Redirecting step output streams with `stdoutConfig` and `stderrConfig`](#redirecting-step-output-streams-with-stdoutConfig-and-stderrConfig)
- [Specifying `Parameters`](#specifying-parameters)
- [Specifying `Workspaces`](#specifying-workspaces)
- [Emitting `Results`](#emitting-results)
- [Larger `Results` using sidecar logs](#larger-results-using-sidecar-logs)
- [Specifying `Volumes`](#specifying-volumes)
- [Specifying a `Step` template](#specifying-a-step-template)
- [Specifying `Sidecars`](#specifying-sidecars)
- [Specifying a `DisplayName`](#specifying-a-display-name)
- [Adding a description](#adding-a-description)
- [Using variable substitution](#using-variable-substitution)
- [Substituting parameters and resources](#substituting-parameters-and-resources)
- [Substituting `Array` parameters](#substituting-array-parameters)
- [Substituting `Workspace` paths](#substituting-workspace-paths)
- [Substituting `Volume` names and types](#substituting-volume-names-and-types)
- [Substituting in `Script` blocks](#substituting-in-script-blocks)
- [Code examples](#code-examples)
- [Building and pushing a Docker image](#building-and-pushing-a-docker-image)
- [Mounting multiple `Volumes`](#mounting-multiple-volumes)
- [Mounting a `ConfigMap` as a `Volume` source](#mounting-a-configmap-as-a-volume-source)
- [Using a `Secret` as an environment source](#using-a-secret-as-an-environment-source)
- [Using a `Sidecar` in a `Task`](#using-a-sidecar-in-a-task)
- [Debugging](#debugging)
- [Inspecting the file structure](#inspecting-the-file-structure)
- [Inspecting the `Pod`](#inspecting-the-pod)
- [Running Step Containers as a Non Root User](#running-step-containers-as-a-non-root-user)
- [`Task` Authoring Recommendations](#task-authoring-recommendations)
## Overview
A `Task` is a collection of `Steps` that you
define and arrange in a specific order of execution as part of your continuous integration flow.
A `Task` executes as a Pod on your Kubernetes cluster. A `Task` is available within a specific
namespace, while a `ClusterTask` is available across the entire cluster.
A `Task` declaration includes the following elements:
- [Parameters](#specifying-parameters)
- [Steps](#defining-steps)
- [Workspaces](#specifying-workspaces)
- [Results](#emitting-results)
## Configuring a `Task`
A `Task` definition supports the following fields:
- Required:
- [`apiVersion`][kubernetes-overview] - Specifies the API version. For example,
`tekton.dev/v1beta1`.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `Task` object.
- [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the
`Task` resource object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for
this `Task` resource object.
- [`steps`](#defining-steps) - Specifies one or more container images to run in the `Task`.
- Optional:
- [`description`](#adding-a-description) - An informative description of the `Task`.
- [`params`](#specifying-parameters) - Specifies execution parameters for the `Task`.
- [`workspaces`](#specifying-workspaces) - Specifies paths to volumes required by the `Task`.
- [`results`](#emitting-results) - Specifies the names under which `Tasks` write execution results.
- [`volumes`](#specifying-volumes) - Specifies one or more volumes that will be available to the `Steps` in the `Task`.
- [`stepTemplate`](#specifying-a-step-template) - Specifies a `Container` step definition to use as the basis for all `Steps` in the `Task`.
- [`sidecars`](#specifying-sidecars) - Specifies `Sidecar` containers to run alongside the `Steps` in the `Task`.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
The non-functional example below demonstrates the use of most of the above-mentioned fields:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: example-task-name
spec:
params:
- name: pathToDockerFile
type: string
description: The path to the dockerfile to build
default: /workspace/workspace/Dockerfile
- name: builtImageUrl
type: string
description: location to push the built image to
steps:
- name: ubuntu-example
image: ubuntu
args: ["ubuntu-build-example", "SECRETS-example.md"]
- image: gcr.io/example-builders/build-example
command: ["echo"]
args: ["$(params.pathToDockerFile)"]
- name: dockerfile-pushexample
image: gcr.io/example-builders/push-example
args: ["push", "$(params.builtImageUrl)"]
volumeMounts:
- name: docker-socket-example
mountPath: /var/run/docker.sock
volumes:
- name: example-volume
emptyDir: {}
```
### `Task` vs. `ClusterTask`
**Note: ClusterTasks are deprecated.** Please use the [cluster resolver](./cluster-resolver.md) instead.
A `ClusterTask` is a `Task` scoped to the entire cluster instead of a single namespace.
A `ClusterTask` behaves identically to a `Task` and therefore everything in this document
applies to both.
**Note:** When using a `ClusterTask`, you must explicitly set the `kind` sub-field in the `taskRef` field to `ClusterTask`.
If not specified, the `kind` sub-field defaults to `Task`.
Below is an example of a Pipeline declaration that uses a `ClusterTask`:
**Note**:
- There is no `v1` API specification for `ClusterTask`, but a `v1beta1` `ClusterTask` can still be referenced in a `v1` `Pipeline`.
- The cluster resolver syntax below can be used to reference any task, not just a clustertask.
```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: demo-pipeline
spec:
tasks:
- name: build-skaffold-web
taskRef:
resolver: cluster
params:
- name: kind
value: task
- name: name
value: build-push
- name: namespace
value: default
```
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: demo-pipeline
namespace: default
spec:
tasks:
- name: build-skaffold-web
taskRef:
name: build-push
kind: ClusterTask
params: ....
```
### Defining `Steps`
A `Step` is a reference to a container image that executes a specific tool on a
specific input and produces a specific output. To add `Steps` to a `Task` you
define a `steps` field (required) containing a list of desired `Steps`. The order in
which the `Steps` appear in this list is the order in which they will execute.
The following requirements apply to each container image referenced in a `steps` field:
- The container image must abide by the [container contract](./container-contract.md).
- Each container image runs to completion or until the first failure occurs.
- The CPU, memory, and ephemeral storage resource requests set on `Step`s
will be adjusted to comply with any [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)s
present in the `Namespace`. In addition, Kubernetes determines a pod's effective resource
requests and limits by summing the requests and limits for all its containers, even
though Tekton runs `Steps` sequentially.
For more detail, see [Compute Resources in Tekton](./compute-resources.md).
**Note:** If the image referenced in the `step` field is from a private registry, `TaskRuns` or `PipelineRuns` that consume the task
must provide the `imagePullSecrets` in a [podTemplate](./podtemplates.md).
Below is an example of setting the resource requests and limits for a step:
With `tekton.dev/v1`, compute resources are set via the `computeResources` field:

```yaml
spec:
  steps:
    - name: step-with-limits
      computeResources:
        requests:
          memory: 1Gi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 800m
```

With `tekton.dev/v1beta1`, the equivalent field is `resources`:

```yaml
spec:
  steps:
    - name: step-with-limits
      resources:
        requests:
          memory: 1Gi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 800m
```
#### Reserved directories
There are several directories that all `Tasks` run by Tekton will treat as special
* `/workspace` - This directory is where [resources](#specifying-resources) and [workspaces](#specifying-workspaces)
are mounted. Paths to these are available to `Task` authors via [variable substitution](variables.md)
* `/tekton` - This directory is used for Tekton specific functionality:
* `/tekton/results` is where [results](#emitting-results) are written to.
The path is available to `Task` authors via [`$(results.name.path)`](variables.md)
* There are other subfolders which are [implementation details of Tekton](developers/README.md#reserved-directories)
and **users should not rely on their specific behavior as it may change in the future**
#### Running scripts within `Steps`
A step can specify a `script` field, which contains the body of a script. That script is
invoked as if it were stored inside the container image, and any `args` are passed directly
to it.
**Note:** If the `script` field is present, the step cannot also contain a `command` field.
Scripts that do not start with a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix))
line will have the following default preamble prepended:
```bash
#!/bin/sh
set -e
```
You can override this default preamble by prepending a shebang that specifies the desired parser.
This parser must be present within that `Step's` container image.
The example below executes a Bash script:
```yaml
steps:
- image: ubuntu # contains bash
script: |
#!/usr/bin/env bash
echo "Hello from Bash!"
```
The example below executes a Python script:
```yaml
steps:
- image: python # contains python
script: |
#!/usr/bin/env python3
print("Hello from Python!")
```
The example below executes a Node script:
```yaml
steps:
- image: node # contains node
script: |
#!/usr/bin/env node
console.log("Hello from Node!")
```
You can execute scripts directly in the workspace:
```yaml
steps:
- image: ubuntu
script: |
#!/usr/bin/env bash
/workspace/my-script.sh # provided by an input resource
```
You can also execute scripts within the container image:
```yaml
steps:
- image: my-image # contains /bin/my-binary
script: |
#!/usr/bin/env bash
/bin/my-binary
```
##### Windows scripts
Scripts in `Tasks` that will run on Windows nodes need a custom shebang line so that Tekton knows how to run the script. The format of the shebang line is:
`#!win <interpreter command> <args>`
Unlike on Linux, the interpreter for the script file generated by Tekton must be specified explicitly. The example below shows how to execute a PowerShell script:
```yaml
steps:
- image: mcr.microsoft.com/windows/servercore:1809
script: |
#!win powershell.exe -File
echo 'Hello from PowerShell'
```
Microsoft provides `powershell` images, which contain PowerShell Core (slightly different from the PowerShell found in standard Windows images). The example below shows how to use these images:
```yaml
steps:
- image: mcr.microsoft.com/powershell:nanoserver
script: |
#!win pwsh.exe -File
echo 'Hello from PowerShell Core'
```
As can be seen, the command is different. The Windows shebang can be used for any interpreter, as long as it exists in the image and can interpret commands from a file. The example below executes a Python script:
```yaml
steps:
- image: python
script: |
#!win python
print("Hello from Python!")
```
Note that, other than the `#!win` shebang, the example is identical to the earlier Linux example.
Finally, if no interpreter is specified on the `#!win` line, the script will be treated as a Windows `.cmd` file and executed. The example below shows this:
```yaml
steps:
- image: mcr.microsoft.com/powershell:lts-nanoserver-1809
script: |
#!win
echo Hello from the default cmd file
```
#### Specifying a timeout
A `Step` can specify a `timeout` field.
If the `Step` execution time exceeds the specified timeout, the `Step` kills
its running process and any subsequent `Steps` in the `TaskRun` will not be
executed. The `TaskRun` is placed into a `Failed` condition. An accompanying log
describing which `Step` timed out is written as the `Failed` condition's message.
The timeout specification follows the duration format as specified in the [Go time package](https://golang.org/pkg/time/#ParseDuration) (e.g. 1s or 1ms).
The example `Step` below is supposed to sleep for 60 seconds but will be canceled by the specified 5 second timeout.
```yaml
steps:
- name: sleep-then-timeout
image: ubuntu
script: |
#!/usr/bin/env bash
echo "I am supposed to sleep for 60 seconds!"
sleep 60
timeout: 5s
```
#### Specifying `onError` for a `step`
When a `step` in a `task` results in a failure, the rest of the steps in the `task` are skipped and the `taskRun` is
declared a failure. If you would like to ignore such step errors and continue executing the rest of the steps in
the task, you can specify `onError` for such a `step`.
`onError` can be set to either `continue` or `stopAndFail` as part of the step definition. If `onError` is
set to `continue`, the entrypoint sets the original failed exit code of the [script](#running-scripts-within-steps)
in the container terminated state. A `step` with `onError` set to `continue` does not fail the `taskRun` and continues
executing the rest of the steps in a task.
To ignore a step error, set `onError` to `continue`:
```yaml
steps:
- image: docker.io/library/golang:latest
name: ignore-unit-test-failure
onError: continue
script: |
go test .
```
The original failed exit code of the [script](#running-scripts-within-steps) is available in the terminated state of
the container.
```
kubectl get tr taskrun-unit-test-t6qcl -o json | jq .status
{
"conditions": [
{
"message": "All Steps have completed executing",
"reason": "Succeeded",
"status": "True",
"type": "Succeeded"
}
],
"steps": [
{
"container": "step-ignore-unit-test-failure",
"imageID": "...",
"name": "ignore-unit-test-failure",
"terminated": {
"containerID": "...",
"exitCode": 1,
"reason": "Completed",
}
},
],
```
For an end-to-end example, see [the taskRun ignoring a step error](../examples/v1/taskruns/ignore-step-error.yaml)
and [the pipelineRun ignoring a step error](../examples/v1/pipelineruns/ignore-step-error.yaml).
#### Accessing Step's `exitCode` in subsequent `Steps`
A step can access the exit code of any previous step by reading the file pointed to by the `exitCode` path variable:
```shell
cat $(steps.step-<step-name>.exitCode.path)
```
The `exitCode` of a step without any name can be referenced using:
```shell
cat $(steps.step-unnamed-<step-index>.exitCode.path)
```
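For example, a minimal sketch of a follow-up step reacting to the exit code of an earlier step named `unit-test`; both step names and images are illustrative, and `onError: continue` keeps the `TaskRun` going when the tests fail:

```yaml
steps:
  - name: unit-test
    image: docker.io/library/golang:latest
    onError: continue          # keep going even if the tests fail
    script: |
      go test ./...
  - name: report
    image: docker.io/library/bash:latest
    script: |
      #!/usr/bin/env bash
      echo "unit-test exited with code $(cat $(steps.step-unit-test.exitCode.path))"
```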
#### Produce a task result with `onError`
When a step is set to ignore errors and that step manages to write a result file before failing,
that result is made available to any task that consumes it.
```yaml
steps:
- name: ignore-failure-and-produce-a-result
onError: continue
image: busybox
script: |
echo -n 123 | tee $(results.result1.path)
exit 1
```
The task consuming the result using the result reference `$(tasks.task1.results.result1)` in a `pipeline` will be able
to access the result and run with the resolved value.
A step can also fail before initializing a result, and the `pipeline` can still ignore such a step failure. However, the `pipeline`
will fail with `InvalidTaskResultReference` if it has a task consuming that missing task result. For example, any task
consuming `$(tasks.task1.results.result2)` will cause the pipeline to fail.
```yaml
steps:
- name: ignore-failure-and-produce-a-result
onError: continue
image: busybox
script: |
echo -n 123 | tee $(results.result1.path)
exit 1
echo -n 456 | tee $(results.result2.path)
```
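For illustration, a sketch of a `Pipeline` fragment where a downstream task consumes the first result; the task names and the `echo-task` reference are assumptions:

```yaml
tasks:
  - name: task1
    taskRef:
      name: ignore-failure-task          # hypothetical Task containing the step above
  - name: task2
    runAfter: ["task1"]
    params:
      - name: echoed-result
        value: "$(tasks.task1.results.result1)"   # resolves to "123"
    taskRef:
      name: echo-task                    # hypothetical Task that echoes its param
```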
#### Breakpoint on failure with `onError`
[Debugging](taskruns.md#debugging-a-taskrun) a taskRun is supported to debug a container and comes with a set of
[tools](taskruns.md#debug-environment) to declare the step as a failure or a success. Specifying
[breakpoint](taskruns.md#breakpoint-on-failure) at the `taskRun` level overrides ignoring a step error using `onError`.
#### Redirecting step output streams with `stdoutConfig` and `stderrConfig`
This is an alpha feature. The `enable-api-fields` feature flag [must be set to `"alpha"`](./install.md)
for Redirecting Step Output Streams to function.
This feature defines the optional `Step` fields `stdoutConfig` and `stderrConfig`, which can be used to redirect the output streams `stdout` and `stderr` respectively:
```yaml
- name: ...
...
stdoutConfig:
path: ...
stderrConfig:
path: ...
```
Once `stdoutConfig.path` or `stderrConfig.path` is specified, the corresponding output stream will be duplicated to both the given file and the standard output stream of the container, so users can still view the output through the Pod log API. If both `stdoutConfig.path` and `stderrConfig.path` are set to the same value, outputs from both streams will be interleaved in the same file, but there will be no ordering guarantee on the data. If multiple `Step`'s `stdoutConfig.path` fields are set to the same value, the file content will be overwritten by the last outputting step.
Variable substitution will be applied to the new fields, so one could specify `$(results.<name>.path)` to the `stdoutConfig.path` or `stderrConfig.path` field to extract the stdout of a step into a Task result.
##### Example Usage
Redirecting stdout of `boskosctl` to `jq` and publish the resulting `project-id` as a Task result:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: boskos-acquire
spec:
results:
- name: project-id
steps:
- name: boskosctl
image: gcr.io/k8s-staging-boskos/boskosctl
args:
- acquire
- --server-url=http://boskos.test-pods.svc.cluster.local
- --owner-name=christie-test-boskos
- --type=gke-project
- --state=free
- --target-state=busy
stdoutConfig:
path: /data/boskosctl-stdout
volumeMounts:
- name: data
mountPath: /data
- name: parse-project-id
image: imega/jq
args:
- -r
- .name
- /data/boskosctl-stdout
stdoutConfig:
path: $(results.project-id.path)
volumeMounts:
- name: data
mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```
> NOTE:
>
> - If the intent is to share output between `Step`s via a file, the user must ensure that the paths provided are shared between the `Step`s (e.g via `volumes`).
> - There is currently a limit on the overall size of the `Task` results. If the stdout/stderr of a step is set to the path of a `Task` result and the step prints too much data, the result manifest would become too large. Currently the entrypoint binary will fail if that happens.
> - If the stdout/stderr of a `Step` is set to the path of a `Task` result, e.g. `$(results.empty.path)`, but that result is not defined for the `Task`, the `Step` will run but the output will be captured in a file named `$(results.empty.path)` in the current working directory. Similarly, any substitution that is not valid, e.g. `$(some.invalid.path)/out.txt`, will be left as-is and will result in a file path `$(some.invalid.path)/out.txt` relative to the current working directory.
### Specifying `Parameters`
You can specify parameters, such as compilation flags or artifact names, that you want to supply to the `Task` at execution time.
`Parameters` are passed to the `Task` from its corresponding `TaskRun`.
#### Parameter name
Parameter name format:
- Must only contain alphanumeric characters, hyphens (`-`), underscores (`_`), and dots (`.`). However, `object` parameter name and its key names can't contain dots (`.`). See the reasons in the third item added in this [PR](https://github.com/tektoncd/community/pull/711).
- Must begin with a letter or an underscore (`_`).
For example, `foo.Is-Bar_` is a valid parameter name for string or array type, but is invalid for object parameter because it contains dots. On the other hand, `barIsBa$` or `0banana` are invalid for all types.
> NOTE:
> 1. Parameter names are **case insensitive**. For example, `APPLE` and `apple` will be treated as equal. If they appear in the same TaskSpec's params, it will be rejected as invalid.
> 2. If a parameter name contains dots (.), it must be referenced by using the [bracket notation](#substituting-parameters-and-resources) with either single or double quotes i.e. `$(params['foo.bar'])`, `$(params["foo.bar"])`. See the following example for more information.
#### Parameter type
Each declared parameter has a `type` field, which can be set to `string`, `array` or `object`.
##### `object` type
`object` type is useful in cases where users want to group related parameters. For example, an object parameter called `gitrepo` can contain both the `url` and the `commit` to group related information:
```yaml
spec:
params:
- name: gitrepo
type: object
properties:
url:
type: string
commit:
type: string
```
Refer to the [TaskRun example](../examples/v1/taskruns/object-param-result.yaml) and the [PipelineRun example](../examples/v1/pipelineruns/pipeline-object-param-and-result.yaml) in which `object` parameters are demonstrated.
> NOTE:
> - `object` param must specify the `properties` section to define the schema i.e. what keys are available for this object param. See how to define `properties` section in the following example and the [TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#defaulting-to-string-types-for-values).
> - When providing value for an `object` param, one may provide values for just a subset of keys in spec's `default`, and provide values for the rest of keys at runtime ([example](../examples/v1/taskruns/object-param-result.yaml)).
> - When using object in variable replacement, users can only access its individual key ("child" member) of the object by its name i.e. `$(params.gitrepo.url)`. Using an entire object as a value is only allowed when the value is also an object like [this example](../examples/v1/pipelineruns/pipeline-object-param-and-result.yaml). See more details about using object param from the [TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#using-objects-in-variable-replacement).
##### `array` type
`array` type is useful in cases where the number of compilation flags being supplied to a task varies throughout the `Task's` execution.
An `array` param can be defined by setting `type` to `array`. Note that `array` params only support `string` arrays, i.e.
each array element has to be of type `string`.
```yaml
spec:
params:
- name: flags
type: array
```
##### `string` type
If not specified, the `type` field defaults to `string`. When the actual parameter value is supplied, its parsed type is validated against the `type` field.
The following example illustrates the use of `Parameters` in a `Task`. The `Task` declares 3 input parameters named `gitrepo` (of type `object`), `flags`
(of type `array`) and `someURL` (of type `string`). These parameters are used in the `steps.args` list:
- For `object` parameters, you can only use individual members (aka keys).
- You can expand parameters of type `array` inside an existing array using the star operator. In this example, `flags` contains the star operator: `$(params.flags[*])`.
**Note:** Input parameter values can be used as variables throughout the `Task` by using [variable substitution](#using-variable-substitution).
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: task-with-parameters
spec:
params:
- name: gitrepo
type: object
properties:
url:
type: string
commit:
type: string
- name: flags
type: array
- name: someURL
type: string
- name: foo.bar
description: "the name contains dot character"
default: "test"
steps:
- name: do-the-clone
image: some-git-image
args: [
"-url=$(params.gitrepo.url)",
"-revision=$(params.gitrepo.commit)"
]
- name: build
image: my-builder
args: [
"build",
"$(params.flags[*])",
# It would be equivalent to use $(params["someURL"]) here,
# which is necessary when the parameter name contains '.'
# characters (e.g. `$(params["some.other.URL"])`). See the example in step "echo-param"
'url=$(params.someURL)',
]
- name: echo-param
image: bash
args: [
"echo",
"$(params['foo.bar'])",
]
```
The following `TaskRun` supplies the value for the parameter `gitrepo`, `flags` and `someURL`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: run-with-parameters
spec:
taskRef:
name: task-with-parameters
params:
- name: gitrepo
value:
url: "abc.com"
commit: "c12b72"
- name: flags
value:
- "--set"
- "arg1=foo"
- "--randomflag"
- "--someotherflag"
- name: someURL
value: "http://google.com"
```
#### Default value
Parameter declarations (within Tasks and Pipelines) can include default values which will be used if the parameter is
not specified, for example to specify defaults for both string params and array params
([full example](../examples/v1/taskruns/array-default.yaml)) :
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: task-with-array-default
spec:
params:
- name: flags
type: array
default:
- "--set"
- "arg1=foo"
- "--randomflag"
- "--someotherflag"
```
#### Param enum
> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.
Parameter declarations can include `enum`, which is a predefined set of valid values that the `Param` accepts. If a `Param` has both `enum` and a default value, the default value must be in the `enum` set. For example, the valid/allowed values for the `Param` "message" are bounded to `v1`, `v2` and `v3`:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: param-enum-demo
spec:
params:
- name: message
type: string
enum: ["v1", "v2", "v3"]
default: "v1"
steps:
- name: build
image: bash:latest
script: |
echo "$(params.message)"
```
If the `Param` value passed in by `TaskRuns` is **NOT** in the predefined `enum` list, the `TaskRuns` will fail with reason `InvalidParamValue`.
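For example, a `TaskRun` like the following sketch (the run name is illustrative), which passes a value outside the `enum` set declared by `param-enum-demo` above, fails with `InvalidParamValue`:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: param-enum-demo-run
spec:
  taskRef:
    name: param-enum-demo
  params:
    - name: message
      value: "v4" # not in ["v1", "v2", "v3"], so the TaskRun is rejected
```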
See usage in this [example](../examples/v1/taskruns/alpha/param-enum.yaml).
### Specifying `Workspaces`
[`Workspaces`](workspaces.md#using-workspaces-in-tasks) allow you to specify
one or more volumes that your `Task` requires during execution. It is recommended that `Tasks` use **at most**
one writeable `Workspace`. For example:
```yaml
spec:
steps:
- name: write-message
image: ubuntu
script: |
#!/usr/bin/env bash
set -xe
echo hello! > $(workspaces.messages.path)/message
workspaces:
- name: messages
description: The folder where we write the message to
mountPath: /custom/path/relative/to/root
```
For more information, see [Using `Workspaces` in `Tasks`](workspaces.md#using-workspaces-in-tasks)
and the [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml) example YAML file.
### Propagated `Workspaces`
Workspaces can be propagated to embedded task specs, not referenced Tasks. For more information, see [Propagated Workspaces](taskruns.md#propagated-workspaces).
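As a minimal sketch of the idea (names are illustrative; see the linked page for the full behavior), the `messages` workspace below is declared only on the `TaskRun` and is used by the embedded `taskSpec` without being re-declared there:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: propagated-workspace-run-
spec:
  workspaces:
    - name: messages
      emptyDir: {}
  taskSpec:
    steps:
      - name: write-message
        image: ubuntu
        script: |
          #!/usr/bin/env bash
          echo hello! > $(workspaces.messages.path)/message
```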
### Emitting `Results`
A Task is able to emit string results that can be viewed by users and passed to other Tasks in a Pipeline. These
results have a wide variety of potential uses. To highlight just a few examples from the Tekton Catalog: the
[`git-clone` Task](https://github.com/tektoncd/catalog/blob/main/task/git-clone/0.1/git-clone.yaml) emits a
cloned commit SHA as a result, the [`generate-build-id` Task](https://github.com/tektoncd/catalog/blob/main/task/generate-build-id/0.1/generate-build-id.yaml)
emits a randomized ID as a result, and the [`kaniko` Task](https://github.com/tektoncd/catalog/tree/main/task/kaniko/0.1)
emits a container image digest as a result. In each case these results convey information for users to see when
looking at their TaskRuns and can also be used in a Pipeline to pass data along from one Task to the next.
`Task` results are best suited for holding small amounts of data, such as commit SHAs, branch names,
ephemeral namespaces, and so on.
To define a `Task's` results, use the `results` field.
In the example below, the `Task` specifies two files in the `results` field:
`current-date-unix-timestamp` and `current-date-human-readable`.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: print-date
annotations:
description: |
A simple task that prints the date
spec:
results:
- name: current-date-unix-timestamp
description: The current date in unix timestamp format
- name: current-date-human-readable
description: The current date in human readable format
steps:
- name: print-date-unix-timestamp
image: bash:latest
script: |
#!/usr/bin/env bash
date +%s | tee $(results.current-date-unix-timestamp.path)
- name: print-date-human-readable
image: bash:latest
script: |
#!/usr/bin/env bash
date | tee $(results.current-date-human-readable.path)
```
In this example, [`$(results.name.path)`](https://github.com/tektoncd/pipeline/blob/main/docs/variables.md#variables-available-in-a-task)
is replaced with the path where Tekton will store the Task's results.
When this Task is executed in a TaskRun, the results will appear in the TaskRun's status:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
# ...
status:
# ...
results:
- name: current-date-human-readable
value: |
Wed Jan 22 19:47:26 UTC 2020
- name: current-date-unix-timestamp
value: |
1579722445
```
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
# ...
status:
# ...
taskResults:
- name: current-date-human-readable
value: |
Wed Jan 22 19:47:26 UTC 2020
- name: current-date-unix-timestamp
value: |
1579722445
```
Tekton does not perform any processing on the contents of results; they are emitted
verbatim from your Task including any leading or trailing whitespace characters. Make sure to write only the
precise string you want returned from your `Task` into the result files that your `Task` creates.
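For example, a plain `echo` appends a trailing newline that becomes part of the result; using `echo -n` (as in the later sketches in this section) or `printf '%s'` avoids that. A minimal sketch, with a hypothetical `branch-name` result:

```yaml
spec:
  results:
    - name: branch-name
      description: The exact branch name, without a trailing newline
  steps:
    - name: write-exact-result
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        # echo -n writes the value without a trailing newline
        echo -n "main" | tee $(results.branch-name.path)
```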
The stored results can be used [at the `Task` level](./pipelines.md#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another)
or [at the `Pipeline` level](./pipelines.md#emitting-results-from-a-pipeline).
> **Note** Tekton does not enforce Task results unless there is a consumer: when a Task declares a result,
> it may complete successfully even if no result was actually produced. When a Task that declares results is
> used in a Pipeline, and a component of the Pipeline attempts to consume the Task's result, if the result
> was not produced the pipeline will fail. [TEP-0048](https://github.com/tektoncd/community/blob/main/teps/0048-task-results-without-results.md)
> proposes introducing default values for results to help Pipeline authors manage this case.
#### Emitting Object `Results`
Emitting a task result of type `object` is implemented based on the
[TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#emitting-object-results).
You can initialize `object` results from a `Task` using a JSON-escaped string. For example, to assign the following data to an object result:
```
{"url":"abc.dev/sampler","digest":"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791"}
```
You will need to write escaped JSON to the pod termination message:
```
{\"url\":\"abc.dev/sampler\",\"digest\":\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\"}
```
An example of a task definition producing an object result:
```yaml
kind: Task
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
metadata:
name: write-object
annotations:
description: |
A simple task that writes object
spec:
results:
- name: object-results
type: object
description: The object results
properties:
url:
type: string
digest:
type: string
steps:
- name: write-object
image: bash:latest
script: |
#!/usr/bin/env bash
echo -n "{\"url\":\"abc.dev/sampler\",\"digest\":\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\"}" | tee $(results.object-results.path)
```
> **Note:**
> - The opening and closing braces are mandatory, along with the escaped JSON.
> - An object result must specify the `properties` section to define the schema, i.e. what keys are available for this object result. Failing to emit keys from the defined object result will result in a validation error at runtime.
#### Emitting Array `Results`
Tekton `Tasks` also support defining results of type `array` and `object` in addition to `string`.
Emitting a task result of type `array` is a `beta` feature implemented based on the
[TEP-0076](https://github.com/tektoncd/community/blob/main/teps/0076-array-result-types.md#emitting-array-results).
You can initialize `array` results from a `task` using a JSON-escaped string. For example, to assign the following
list of animals to an array result:
```
["cat", "dog", "squirrel"]
```
You will have to write the list to the pod termination message as escaped JSON:
```
[\"cat\", \"dog\", \"squirrel\"]
```
An example of a task definition producing an array result with the greetings `["hello", "world"]`:
```yaml
kind: Task
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
metadata:
name: write-array
annotations:
description: |
A simple task that writes array
spec:
results:
- name: array-results
type: array
description: The array results
steps:
- name: write-array
image: bash:latest
script: |
#!/usr/bin/env bash
echo -n "[\"hello\",\"world\"]" | tee $(results.array-results.path)
```
**Note** that the opening and closing square brackets are mandatory, along with the escaped JSON.
Similar to zero-valued slices in Go, an array result is considered uninitialized (i.e. `nil`) if it is set to an empty
array, i.e. `[]`. For example, `echo -n "[]" | tee $(results.result.path);` is equivalent to `result := []string{}`.
A result initialized this way has zero length. Trying to access such an array with the star notation, i.e.
`$(tasks.write-array-results.results.result[*])`, or accessing an element of it, i.e. `$(tasks.write-array-results.results.result[0])`,
results in `InvalidTaskResultReference` with `index out of range`.
Depending on your use case, you might have to initialize a result array to the desired length, much like using the `make()` function in Go.
`make()` allocates an array and returns a slice of the specified length, so `result := make([]string, 5)` yields
`["", "", "", "", ""]`. Similarly, set the array result to one of the following JSON-escaped
expressions to allocate an array of size 2:
```
echo -n "[\"\", \"\"]" | tee $(results.array-results.path) # an array of size 2 with empty string
echo -n "[\"first-array-element\", \"\"]" | tee $(results.array-results.path) # an array of size 2 with only first element initialized
echo -n "[\"\", \"second-array-element\"]" | tee $(results.array-results.path) # an array of size 2 with only second element initialized
echo -n "[\"first-array-element\", \"second-array-element\"]" | tee $(results.array-results.path) # an array of size 2 with both elements initialized
```
Initializing the array this way is also important for maintaining the order of its elements: the order in which the task result is
initialized is the order in which the result is consumed by dependent tasks. For example, suppose a task produces
two array results, `images` and `configmaps`. The pipeline author can implement deployment by indexing into each array result:
```yaml
- name: deploy-stage-1
taskRef:
name: deploy
params:
- name: image
value: $(tasks.setup.results.images[0])
- name: configmap
value: $(tasks.setup.results.configmaps[0])
...
- name: deploy-stage-2
taskRef:
name: deploy
params:
- name: image
value: $(tasks.setup.results.images[1])
- name: configmap
value: $(tasks.setup.results.configmaps[1])
```
As a task author, make sure the task's array results are initialized accordingly, setting an entry to a zero value when there is no
corresponding `image` or `configmap`, so that the ordering is maintained.
**Note**: Tekton uses [termination
messages](https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#writing-and-reading-a-termination-message). As
written in
[tektoncd/pipeline#4808](https://github.com/tektoncd/pipeline/issues/4808),
the maximum size of a `Task's` results is limited by the container termination message feature of Kubernetes.
At present, the limit is ["4096 bytes"](https://github.com/kubernetes/kubernetes/blob/96e13de777a9eb57f87889072b68ac40467209ac/pkg/kubelet/container/runtime.go#L632).
This also means that the number of Steps in a Task affects the maximum size of a Result,
as each Step is implemented as a container in the TaskRun's pod.
The more containers we have in our pod, *the smaller the allowed size of each container's
message*, meaning that the **more steps you have in a Task, the smaller the result for each step can be**.
For example, if you have 10 steps, each step's Result is limited to less than 1KB.
If your `Task` writes a large number of small results, you can work around this limitation
by writing each result from a separate `Step` so that each `Step` has its own termination message.
If a termination message is detected as being too large, the TaskRun is placed into a failed state
with the following message: `Termination message is above max allowed size 4096, caused by large task
result`. Since Tekton also uses the termination message for some internal information, the real
available size is less than 4096 bytes.
As a general rule-of-thumb, if a result needs to be larger than a kilobyte, you should likely use a
[`Workspace`](#specifying-workspaces) to store and pass it between `Tasks` within a `Pipeline`.
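For instance, rather than emitting a large file as a result, a `Task` could write it to a workspace and emit only a small pointer; in this sketch the `output` workspace, the `report-path` result, and the `generate-large-report` command are all illustrative:

```yaml
spec:
  workspaces:
    - name: output
  results:
    - name: report-path
      description: Path of the full report inside the output workspace
  steps:
    - name: generate-report
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # Hypothetical command that produces a large payload; write it to the shared workspace...
        generate-large-report > $(workspaces.output.path)/report.json
        # ...and emit only its relative path as the (small) result.
        echo -n "report.json" | tee $(results.report-path.path)
```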
#### Larger `Results` using sidecar logs
This is a beta feature which is guarded behind its own feature flag. The `results-from` feature flag must be set to
[`"sidecar-logs"`](./install.md#enabling-larger-results-using-sidecar-logs) to enable larger results using sidecar logs.
Instead of using termination messages to store results, the taskrun controller injects a sidecar container which monitors
the results of all the steps. The sidecar mounts the volume where the results of all the steps are stored. As soon as it
finds a new result, it logs it to stdout. The controller has access to the logs of the sidecar container.
**CAUTION**: you must enable access to [kubernetes pod/logs](./install.md#enabling-larger-results-using-sidecar-logs).
This feature allows users to store up to 4 KB per result by default. Because we are not limited by the size of the
termination messages, users can have as many results as they require (or until the CRD reaches its limit). If the size
of a result exceeds this limit, then the TaskRun will be placed into a failed state with the following message: `Result
exceeded the maximum allowed limit.`
**Note**: If you require even larger results, you can specify a different upper limit per result by setting
`max-result-size` feature flag to your desired size in bytes ([see instructions](./install.md#enabling-larger-results-using-sidecar-logs)).
**CAUTION**: the larger you make the size, the more likely the CRD will reach the maximum limit enforced by the `etcd` server,
leading to a bad user experience.
Refer to the detailed instructions listed in [additional config](additional-configs.md#enabling-larger-results-using-sidecar-logs)
to learn how to enable this feature.
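As a rough sketch of the configuration (the linked instructions are authoritative; the `tekton-pipelines` namespace assumes a default installation), both settings live in the `feature-flags` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  results-from: "sidecar-logs" # store results via sidecar logs instead of termination messages
  max-result-size: "8192" # optional: per-result limit in bytes
```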
### Specifying `Volumes`
Specifies one or more [`Volumes`](https://kubernetes.io/docs/concepts/storage/volumes/) that the `Steps` in your
`Task` require to execute in addition to volumes that are implicitly created for input and output resources.
For example, you can use `Volumes` to do the following:
- [Mount a Kubernetes `Secret`](auth.md).
- Create an `emptyDir` persistent `Volume` that caches data across multiple `Steps`.
- Mount a [Kubernetes `ConfigMap`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
as `Volume` source.
- Mount a host's Docker socket to use a `Dockerfile` for building container images.
**Note:** Building a container image on-cluster using `docker build` is **very
unsafe** and is mentioned only for the sake of the example. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead.
### Specifying a `Step` template
The `stepTemplate` field specifies a [`Container`](https://kubernetes.io/docs/concepts/containers/)
configuration that will be used as the starting point for all of the `Steps` in your
`Task`. Individual configurations specified within `Steps` supersede the template wherever
overlap occurs.
In the example below, the `Task` specifies a `stepTemplate` field with the environment variable
`FOO` set to `bar`. The first `Step` in the `Task` uses that value for `FOO`, but the second `Step`
overrides the value set in the template with `baz`. Additionally, the `stepTemplate` sets the environment variable
`TOKEN` to `public`. The last `Step` in the `Task` uses the value `private` from the referenced secret to override
the value set in the template.
```yaml
stepTemplate:
env:
- name: "FOO"
value: "bar"
- name: "TOKEN"
value: "public"
steps:
- image: ubuntu
command: [echo]
args: ["FOO is ${FOO}"]
- image: ubuntu
command: [echo]
args: ["FOO is ${FOO}"]
env:
- name: "FOO"
value: "baz"
- image: ubuntu
command: [echo]
args: ["TOKEN is ${TOKEN}"]
env:
- name: "TOKEN"
valueFrom:
secretKeyRef:
key: "token"
name: "test"
---
# The 'data' section of the Secret 'test' is as follows.
data:
# The decoded value of 'cHJpdmF0ZQo=' is 'private'.
token: "cHJpdmF0ZQo="
```
### Specifying `Sidecars`
The `sidecars` field specifies a list of [`Containers`](https://kubernetes.io/docs/concepts/containers/)
to run alongside the `Steps` in your `Task`. You can use `Sidecars` to provide auxiliary functionality, such as
[Docker in Docker](https://hub.docker.com/_/docker) or running a mock API server that your app can hit during testing.
`Sidecars` spin up before your `Task` executes and are deleted after the `Task` execution completes.
For further information, see [`Sidecars` in `TaskRuns`](taskruns.md#specifying-sidecars).
**Note**: Starting in v0.62 you can enable native Kubernetes sidecar support using the `enable-kubernetes-sidecar` feature flag ([see instructions](./additional-configs.md#customizing-the-pipelines-controller-behavior)). If kubernetes does not wait for your sidecar application to be ready, use a `startupProbe` to help kubernetes identify when it is ready.
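For example, a sidecar serving a mock API could declare a `startupProbe` so Kubernetes knows when it is ready; this is only a sketch, and the image, port, and endpoint are illustrative:

```yaml
sidecars:
  - name: mock-api
    image: my-mock-api:latest # illustrative image exposing an HTTP health endpoint
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
steps:
  - name: run-tests
    image: ubuntu
    script: |
      #!/usr/bin/env bash
      curl http://localhost:8080/healthz
```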
In the example below, a `Step` uses a Docker-in-Docker `Sidecar` to build a Docker image:
```yaml
steps:
- image: docker
name: client
script: |
#!/usr/bin/env bash
cat > Dockerfile << EOF
FROM ubuntu
RUN apt-get update
ENTRYPOINT ["echo", "hello"]
EOF
docker build -t hello . && docker run hello
docker images
volumeMounts:
- mountPath: /var/run/
name: dind-socket
sidecars:
- image: docker:18.05-dind
name: server
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /var/run/
name: dind-socket
volumes:
- name: dind-storage
emptyDir: {}
- name: dind-socket
emptyDir: {}
```
Sidecars, just like `Steps`, can also run scripts:
```yaml
sidecars:
- image: busybox
name: hello-sidecar
script: |
echo 'Hello from sidecar!'
```
**Note:** Tekton's current `Sidecar` implementation contains a bug.
Tekton uses a container image named `nop` to terminate `Sidecars`.
That image is configured by passing a flag to the Tekton controller.
If the configured `nop` image contains the exact command the `Sidecar`
was executing before receiving a "stop" signal, the `Sidecar` keeps
running, eventually causing the `TaskRun` to time out with an error.
For more information, see [issue 1347](https://github.com/tektoncd/pipeline/issues/1347).
### Specifying a display name
The optional `displayName` field allows you to add a user-facing name to the `Task` that may be used to populate a UI.
### Adding a description
The optional `description` field allows you to add an informative description to the `Task`.
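A minimal sketch showing both optional fields on a `Task` (the task name and step are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: release-artifacts
spec:
  displayName: "Release artifacts"
  description: "Packages the build outputs and publishes them to the artifact store."
  steps:
    - name: release
      image: ubuntu
      script: |
        echo "releasing..."
```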
### Using variable substitution
Tekton provides variables to inject values into the contents of certain fields.
The values you can inject come from a range of sources including other fields
in the Task, context-sensitive information that Tekton provides, and runtime
information received from a TaskRun.
The mechanism of variable substitution is quite simple: string replacement is
performed by the Tekton Controller when a TaskRun is executed.
`Tasks` allow you to substitute variable names for the following entities:
- [Parameters and resources](#substituting-parameters-and-resources)
- [`Array` parameters](#substituting-array-parameters)
- [`Workspaces`](#substituting-workspace-paths)
- [`Volume` names and types](#substituting-volume-names-and-paths)
See the [complete list of variable substitutions for Tasks](./variables.md#variables-available-in-a-task)
and the [list of fields that accept substitutions](./variables.md#fields-that-accept-variable-substitutions).
#### Substituting parameters and resources
You can substitute the values of [`params`](#specifying-parameters) and [`resources`](#specifying-resources) attributes
into other fields as follows:
- To reference a parameter in a `Task`, use the following syntax, where `<name>` is the name of the parameter:
```shell
# dot notation
# Here, the name cannot contain dots (e.g. foo.bar is not allowed). If the name contains dots, it can only be accessed via the bracket notation.
$(params.<name>)
# or bracket notation (wrapping <name> with either single or double quotes):
# Here, the name can contain dots (e.g. foo.bar is allowed).
$(params['<name>'])
$(params["<name>"])
```
- To access parameter values from resources, see [variable substitution](resources.md#variable-substitution)
#### Substituting `Array` parameters
You can expand referenced parameters of type `array` using the star operator. To do so, add the operator (`[*]`)
to the named parameter to insert the array elements in the spot of the reference string.
For example, given a `params` field with the contents listed below, you can expand
`command: ["first", "$(params.array-param[*])", "last"]` to `command: ["first", "some", "array", "elements", "last"]`:
```yaml
params:
- name: array-param
value:
- "some"
- "array"
- "elements"
```
You **must** reference parameters of type `array` in a completely isolated string within a larger `string` array.
Referencing an `array` parameter in any other way will result in an error. For example, if `build-args` is a parameter of
type `array`, then the following example is an invalid `Step` because the string isn't isolated:
```yaml
- name: build-step
image: gcr.io/cloud-builders/some-image
args: ["build", "additionalArg $(params.build-args[*])"]
```
Similarly, referencing `build-args` in a non-`array` field is also invalid:
```yaml
- name: build-step
image: "$(params.build-args[*])"
args: ["build", "args"]
```
A valid reference to the `build-args` parameter is isolated and in an eligible field (`args`, in this case):
```yaml
- name: build-step
image: gcr.io/cloud-builders/some-image
args: ["build", "$(params.build-args[*])", "additionalArg"]
```
When an `array` param is referenced in the `args` section of a `step`, its elements can be used in the `script` as command-line arguments:
```yaml
- name: build-step
image: gcr.io/cloud-builders/some-image
args: ["$(params.flags[*])"]
script: |
#!/usr/bin/env bash
echo "The script received $# flags."
echo "The first command line argument is $1."
```
Indexing into an array to reference an individual array element is supported as an **alpha** feature (`enable-api-fields: alpha`).
Referencing an individual array element in `args`:
```yaml
- name: build-step
image: gcr.io/cloud-builders/some-image
args: ["$(params.flags[0])"]
```
Referencing an individual array element in `script`:
```yaml
- name: build-step
image: gcr.io/cloud-builders/some-image
script: |
#!/usr/bin/env bash
echo "$(params.flags[0])"
```
#### Substituting `Workspace` paths
You can substitute paths to `Workspaces` specified within a `Task` as follows:
```yaml
$(workspaces.myworkspace.path)
```
Since the `Volume` name is randomized and only set when the `Task` executes, you can also
substitute the volume name as follows:
```yaml
$(workspaces.myworkspace.volume)
```
#### Substituting `Volume` names and types
You can substitute `Volume` names and [types](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes)
by parameterizing them. Tekton supports popular `Volume` types such as `ConfigMap`, `Secret`, and `PersistentVolumeClaim`.
See this [example](#mounting-a-configmap-as-a-volume-source) to find out how to perform this type of substitution
in your `Task`.
#### Substituting in `Script` blocks
Variables can contain any string, including snippets of script that can
be injected into a Task's `Script` field. If you are using Tekton's variables
in your Task's `Script` field, be aware that the strings you're interpolating
could include executable instructions.
Preventing a substituted variable from executing as code depends on the container
image, language or shell that your Task uses. Here's an example of interpolating
a Tekton variable into a `bash` `Script` block that prevents the variable's string
contents from being executed:
```yaml
# Task.yaml
spec:
steps:
- image: an-image-that-runs-bash
env:
- name: SCRIPT_CONTENTS
value: $(params.script)
script: |
printf '%s' "${SCRIPT_CONTENTS}" > input-script
```
This works by injecting Tekton's variable as an environment variable into the Step's
container. The `printf` program is then used to write the environment variable's
content to a file.
## Code examples
Study the following code examples to better understand how to configure your `Tasks`:
- [Building and pushing a Docker image](#building-and-pushing-a-docker-image)
- [Mounting multiple `Volumes`](#mounting-multiple-volumes)
- [Mounting a `ConfigMap` as a `Volume` source](#mounting-a-configmap-as-a-volume-source)
- [Using a `Secret` as an environment source](#using-a-secret-as-an-environment-source)
- [Using a `Sidecar` in a `Task`](#using-a-sidecar-in-a-task)
_Tip: See the collection of Tasks in the
[Tekton community catalog](https://github.com/tektoncd/catalog) for
more examples._
### Building and pushing a Docker image
The following example `Task` builds and pushes a `Dockerfile`-built image.
**Note:** Building a container image using `docker build` on-cluster is **very
unsafe** and is shown here only as a demonstration. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead.
```yaml
spec:
params:
# This may be overridden, but is a sensible default.
- name: dockerfileName
type: string
description: The name of the Dockerfile
default: Dockerfile
- name: image
type: string
description: The image to build and push
workspaces:
- name: source
steps:
- name: dockerfile-build
image: gcr.io/cloud-builders/docker
workingDir: "$(workspaces.source.path)"
args:
[
"build",
"--no-cache",
"--tag",
"$(params.image)",
"--file",
"$(params.dockerfileName)",
".",
]
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
- name: dockerfile-push
image: gcr.io/cloud-builders/docker
args: ["push", "$(params.image)"]
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
# As an implementation detail, this Task mounts the host's daemon socket.
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock
type: Socket
```
### Mounting multiple `Volumes`
The example below illustrates mounting multiple `Volumes`:
```yaml
spec:
steps:
- image: ubuntu
script: |
#!/usr/bin/env bash
curl https://foo.com > /var/my-volume/foo.html
volumeMounts:
- name: my-volume
mountPath: /var/my-volume
- image: ubuntu
script: |
#!/usr/bin/env bash
cat /etc/my-volume/foo.html
volumeMounts:
- name: my-volume
mountPath: /etc/my-volume
volumes:
- name: my-volume
emptyDir: {}
```
### Mounting a `ConfigMap` as a `Volume` source
The example below illustrates how to mount a `ConfigMap` to act as a `Volume` source:
```yaml
spec:
params:
- name: CFGNAME
type: string
description: Name of config map
- name: volumeName
type: string
description: Name of volume
steps:
- image: ubuntu
script: |
#!/usr/bin/env bash
cat /var/configmap/test
volumeMounts:
- name: "$(params.volumeName)"
mountPath: /var/configmap
volumes:
- name: "$(params.volumeName)"
configMap:
name: "$(params.CFGNAME)"
```
### Using a `Secret` as an environment source
The example below illustrates how to use a `Secret` as an environment source:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: goreleaser
spec:
params:
- name: package
type: string
description: base package to build in
- name: github-token-secret
type: string
description: name of the secret holding the github-token
default: github-token
workspaces:
- name: source
steps:
- name: release
image: goreleaser/goreleaser
workingDir: $(workspaces.source.path)/$(params.package)
command:
- goreleaser
args:
- release
env:
- name: GOPATH
value: /workspace
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
name: $(params.github-token-secret)
key: bot-token
```
### Using a `Sidecar` in a `Task`
The example below illustrates how to use a `Sidecar` in your `Task`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: with-sidecar-task
spec:
params:
- name: sidecar-image
type: string
description: Image name of the sidecar container
- name: sidecar-env
type: string
description: Environment variable value
sidecars:
- name: sidecar
image: $(params.sidecar-image)
env:
- name: SIDECAR_ENV
value: $(params.sidecar-env)
steps:
- name: test
image: hello-world
```
## Debugging
This section describes techniques for debugging the most common issues in `Tasks`.
### Inspecting the file structure
A common issue when configuring `Tasks` stems from not knowing the location of your data.
For the most part, files ingested and output by your `Task` live in the `/workspace` directory,
but the specifics can vary. To inspect the file structure of your `Task`, add a step that outputs
the name of every file stored in the `/workspace` directory to the build log. For example:
```yaml
- name: build-and-push-1
image: ubuntu
command:
- /bin/bash
args:
- -c
- |
set -ex
find /workspace
```
You can also choose to examine the *contents* of every file used by your `Task`:
```yaml
- name: build-and-push-1
image: ubuntu
command:
- /bin/bash
args:
- -c
- |
set -ex
find /workspace | xargs cat
```
### Inspecting the `Pod`
To inspect the contents of the `Pod` used by your `Task` at a specific stage in the `Task's` execution,
add a `Step` that pauses the `Task` at the desired stage, then log into the `Pod`. For example:
```yaml
- name: pause
image: docker
args: ["sleep", "6000"]
```
### Running Step Containers as a Non Root User
All steps that do not need to run as the root user should make use of TaskRun features to
ensure that the container for a step runs as a user without root permissions. As a best practice,
running containers as non-root should be built into the container image to avoid any possibility
of the container being run as root. However, as a further measure of enforcing this practice,
steps can make use of a `securityContext` to specify how the container should run.
An example of running Task steps as a non root user is shown below:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: show-non-root-steps
spec:
steps:
# no securityContext specified so will use
# securityContext from TaskRun podTemplate
- name: show-user-1001
image: ubuntu
command:
- ps
args:
- "aux"
# securityContext specified so will run as
# user 2000 instead of 1001
- name: show-user-2000
image: ubuntu
command:
- ps
args:
- "aux"
securityContext:
runAsUser: 2000
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: show-non-root-steps-run-
spec:
taskRef:
name: show-non-root-steps
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
```
In the example above, the step `show-user-2000` specifies via a `securityContext` that the container
for the step should run as user 2000. A `securityContext` must still be specified via a TaskRun `podTemplate`
for this TaskRun to run in a Kubernetes environment that enforces running containers as non root as a requirement.
The `runAsNonRoot` property specified via the `podTemplate` above validates that the steps that are part of this TaskRun are
running as non-root users and will fail to start any step container that attempts to run as root. Specifying only
`runAsNonRoot: true` will not actually run containers as non-root, as the property simply validates that steps are not
running as root. It is the `runAsUser` property that is actually used to set the non-root user ID for the container.
If a step defines its own `securityContext`, it will be applied for the step container over the `securityContext`
specified at the pod level via the TaskRun `podTemplate`.
More information about Pod and Container Security Contexts can be found via the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).
The example Task/TaskRun above can be found as a [TaskRun example](../examples/v1/taskruns/run-steps-as-non-root.yaml).
## `Task` Authoring Recommendations
Recommendations for authoring `Tasks` are available in the [Tekton Catalog][recommendations].
[recommendations]: https://github.com/tektoncd/catalog/blob/main/recommendations.md
---
Except as otherwise noted, the contents of this page are licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/).
Code samples are licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). | tekton | linkTitle Tasks weight 201 Tasks Overview overview Configuring a Task configuring a task Task vs ClusterTask task vs clustertask Defining Steps defining steps Reserved directories reserved directories Running scripts within Steps running scripts within steps Windows scripts windows scripts Specifying a timeout specifying a timeout Specifying onError for a step specifying onerror for a step Accessing Step s exitCode in subsequent Steps accessing steps exitcode in subsequent steps Produce a task result with onError produce a task result with onerror Breakpoint on failure with onError breakpoint on failure with onerror Redirecting step output streams with stdoutConfig and stderrConfig redirecting step output streams with stdoutConfig and stderrConfig Specifying Parameters specifying parameters Specifying Workspaces specifying workspaces Emitting Results emitting results Larger Results using sidecar logs larger results using sidecar logs Specifying Volumes specifying volumes Specifying a Step template specifying a step template Specifying Sidecars specifying sidecars Specifying a DisplayName specifying a display name Adding a description adding a description Using variable substitution using variable substitution Substituting parameters and resources substituting parameters and resources Substituting Array parameters substituting array parameters Substituting Workspace paths substituting workspace paths Substituting Volume names and types substituting volume names and types Substituting in Script blocks substituting in script blocks Code examples code examples Building and pushing a Docker image building and pushing a docker image Mounting multiple Volumes mounting multiple volumes Mounting a ConfigMap as a Volume source mounting a configmap as a volume source Using a Secret as an environment source using a secret as an environment source Using a Sidecar in a Task using a sidecar in a task Debugging debugging Inspecting the file structure inspecting the file structure Inspecting the Pod inspecting the pod Running Step Containers as a Non Root User running step containers as a non root user Task Authoring Recommendations task authoring recommendations Overview A Task is a collection of Steps that you define and arrange in a specific order of execution as part of your continuous integration flow A Task executes as a Pod on your Kubernetes cluster A Task is available within a specific namespace while a ClusterTask is available across the entire cluster A Task declaration includes the following elements Parameters specifying parameters Steps defining steps Workspaces specifying workspaces Results emitting results Configuring a Task A Task definition supports the following fields Required apiVersion kubernetes overview Specifies the API version For example tekton dev v1beta1 kind kubernetes overview Identifies this resource object as a Task object metadata kubernetes overview Specifies metadata that uniquely identifies the Task resource object For example a name spec kubernetes overview Specifies the configuration information for this Task resource object steps defining steps Specifies one or more container images to run in the Task Optional description adding a description An informative description of the Task params specifying parameters Specifies execution parameters for the Task workspaces specifying workspaces Specifies paths to volumes required by the Task results emitting 
results Specifies the names under which Tasks write execution results volumes specifying volumes Specifies one or more volumes that will be available to the Steps in the Task stepTemplate specifying a step template Specifies a Container step definition to use as the basis for all Steps in the Task sidecars specifying sidecars Specifies Sidecar containers to run alongside the Steps in the Task kubernetes overview https kubernetes io docs concepts overview working with objects kubernetes objects required fields The non functional example below demonstrates the use of most of the above mentioned fields yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind Task metadata name example task name spec params name pathToDockerFile type string description The path to the dockerfile to build default workspace workspace Dockerfile name builtImageUrl type string description location to push the built image to steps name ubuntu example image ubuntu args ubuntu build example SECRETS example md image gcr io example builders build example command echo args params pathToDockerFile name dockerfile pushexample image gcr io example builders push example args push params builtImageUrl volumeMounts name docker socket example mountPath var run docker sock volumes name example volume emptyDir Task vs ClusterTask Note ClusterTasks are deprecated Please use the cluster resolver cluster resolver md instead A ClusterTask is a Task scoped to the entire cluster instead of a single namespace A ClusterTask behaves identically to a Task and therefore everything in this document applies to both Note When using a ClusterTask you must explicitly set the kind sub field in the taskRef field to ClusterTask If not specified the kind sub field defaults to Task Below is an example of a Pipeline declaration that uses a ClusterTask Note There is no v1 API specification for ClusterTask but a v1beta1 clustertask can still be referenced in a v1 pipeline The cluster resolver syntax below can be used to reference any task not just a clustertask yaml apiVersion tekton dev v1 kind Pipeline metadata name demo pipeline spec tasks name build skaffold web taskRef resolver cluster params name kind value task name name value build push name namespace value default yaml apiVersion tekton dev v1beta1 kind Pipeline metadata name demo pipeline namespace default spec tasks name build skaffold web taskRef name build push kind ClusterTask params Defining Steps A Step is a reference to a container image that executes a specific tool on a specific input and produces a specific output To add Steps to a Task you define a steps field required containing a list of desired Steps The order in which the Steps appear in this list is the order in which they will execute The following requirements apply to each container image referenced in a steps field The container image must abide by the container contract container contract md Each container image runs to completion or until the first failure occurs The CPU memory and ephemeral storage resource requests set on Step s will be adjusted to comply with any LimitRange https kubernetes io docs concepts policy limit range s present in the Namespace In addition Kubernetes determines a pod s effective resource requests and limits by summing the requests and limits for all its containers even though Tekton runs Steps sequentially For more detail see Compute Resources in Tekton compute resources md Note If the image referenced in the step field is from a private registry TaskRuns or PipelineRuns that consume the task 
must provide the imagePullSecrets in a podTemplate podtemplates md Below is an example of setting the resource requests and limits for a step yaml spec steps name step with limts computeResources requests memory 1Gi cpu 500m limits memory 2Gi cpu 800m yaml spec steps name step with limts resources requests memory 1Gi cpu 500m limits memory 2Gi cpu 800m Reserved directories There are several directories that all Tasks run by Tekton will treat as special workspace This directory is where resources specifying resources and workspaces specifying workspaces are mounted Paths to these are available to Task authors via variable substitution variables md tekton This directory is used for Tekton specific functionality tekton results is where results emitting results are written to The path is available to Task authors via results name path variables md There are other subfolders which are implementation details of Tekton developers README md reserved directories and users should not rely on their specific behavior as it may change in the future Running scripts within Steps A step can specify a script field which contains the body of a script That script is invoked as if it were stored inside the container image and any args are passed directly to it Note If the script field is present the step cannot also contain a command field Scripts that do not start with a shebang https en wikipedia org wiki Shebang Unix line will have the following default preamble prepended bash bin sh set e You can override this default preamble by prepending a shebang that specifies the desired parser This parser must be present within that Step s container image The example below executes a Bash script yaml steps image ubuntu contains bash script usr bin env bash echo Hello from Bash The example below executes a Python script yaml steps image python contains python script usr bin env python3 print Hello from Python The example below executes a Node script yaml steps image node contains node script usr bin env node console log Hello from Node You can execute scripts directly in the workspace yaml steps image ubuntu script usr bin env bash workspace my script sh provided by an input resource You can also execute scripts within the container image yaml steps image my image contains bin my binary script usr bin env bash bin my binary Windows scripts Scripts in tasks that will eventually run on windows nodes need a custom shebang line so that Tekton knows how to run the script The format of the shebang line is win interpreter command args Unlike linux we need to specify how to interpret the script file which is generated by Tekton The example below shows how to execute a powershell script yaml steps image mcr microsoft com windows servercore 1809 script win powershell exe File echo Hello from PowerShell Microsoft provide powershell images which contain Powershell Core which is slightly different from powershell found in standard windows images The example below shows how to use these images yaml steps image mcr microsoft com powershell nanoserver script win pwsh exe File echo Hello from PowerShell Core As can be seen the command is different The windows shebang can be used for any interpreter as long as it exists in the image and can interpret commands from a file The example below executes a Python script yaml steps image python script win python print Hello from Python Note that other than the win shebang the example is identical to the earlier linux example Finally if no interpreter is specified on the win line then the 
script will be treated as a windows cmd file which will be excecuted The example below shows this yaml steps image mcr microsoft com powershell lts nanoserver 1809 script win echo Hello from the default cmd file Specifying a timeout A Step can specify a timeout field If the Step execution time exceeds the specified timeout the Step kills its running process and any subsequent Steps in the TaskRun will not be executed The TaskRun is placed into a Failed condition An accompanying log describing which Step timed out is written as the Failed condition s message The timeout specification follows the duration format as specified in the Go time package https golang org pkg time ParseDuration e g 1s or 1ms The example Step below is supposed to sleep for 60 seconds but will be canceled by the specified 5 second timeout yaml steps name sleep then timeout image ubuntu script usr bin env bash echo I am supposed to sleep for 60 seconds sleep 60 timeout 5s Specifying onError for a step When a step in a task results in a failure the rest of the steps in the task are skipped and the taskRun is declared a failure If you would like to ignore such step errors and continue executing the rest of the steps in the task you can specify onError for such a step onError can be set to either continue or stopAndFail as part of the step definition If onError is set to continue the entrypoint sets the original failed exit code of the script running scripts within steps in the container terminated state A step with onError set to continue does not fail the taskRun and continues executing the rest of the steps in a task To ignore a step error set onError to continue yaml steps image docker io library golang latest name ignore unit test failure onError continue script go test The original failed exit code of the script running scripts within steps is available in the terminated state of the container kubectl get tr taskrun unit test t6qcl o json jq status conditions message All Steps have completed executing reason Succeeded status True type Succeeded steps container step ignore unit test failure imageID name ignore unit test failure terminated containerID exitCode 1 reason Completed For an end to end example see the taskRun ignoring a step error examples v1 taskruns ignore step error yaml and the pipelineRun ignoring a step error examples v1 pipelineruns ignore step error yaml Accessing Step s exitCode in subsequent Steps A step can access the exit code of any previous step by reading the file pointed to by the exitCode path variable shell cat steps step step name exitCode path The exitCode of a step without any name can be referenced using shell cat steps step unnamed step index exitCode path Produce a task result with onError When a step is set to ignore the step error and if that step is able to initialize a result file before failing that result is made available to its consumer task yaml steps name ignore failure and produce a result onError continue image busybox script echo n 123 tee results result1 path exit 1 The task consuming the result using the result reference tasks task1 results result1 in a pipeline will be able to access the result and run with the resolved value Now a step can fail before initializing a result and the pipeline can ignore such step failure But the pipeline will fail with InvalidTaskResultReference if it has a task consuming that task result For example any task consuming tasks task1 results result2 will cause the pipeline to fail yaml steps name ignore failure and produce a result onError 
continue image busybox script echo n 123 tee results result1 path exit 1 echo n 456 tee results result2 path Breakpoint on failure with onError Debugging taskruns md debugging a taskrun a taskRun is supported to debug a container and comes with a set of tools taskruns md debug environment to declare the step as a failure or a success Specifying breakpoint taskruns md breakpoint on failure at the taskRun level overrides ignoring a step error using onError Redirecting step output streams with stdoutConfig and stderrConfig This is an alpha feature The enable api fields feature flag must be set to alpha install md for Redirecting Step Output Streams to function This feature defines optional Step fields stdoutConfig and stderrConfig which can be used to redirection the output streams stdout and stderr respectively yaml name stdoutConfig path stderrConfig path Once stdoutConfig path or stderrConfig path is specified the corresponding output stream will be duplicated to both the given file and the standard output stream of the container so users can still view the output through the Pod log API If both stdoutConfig path and stderrConfig path are set to the same value outputs from both streams will be interleaved in the same file but there will be no ordering guarantee on the data If multiple Step s stdoutConfig path fields are set to the same value the file content will be overwritten by the last outputting step Variable substitution will be applied to the new fields so one could specify results name path to the stdoutConfig path or stderrConfig path field to extract the stdout of a step into a Task result Example Usage Redirecting stdout of boskosctl to jq and publish the resulting project id as a Task result yaml apiVersion tekton dev v1 or tekton dev v1beta1 kind Task metadata name boskos acquire spec results name project id steps name boskosctl image gcr io k8s staging boskos boskosctl args acquire server url http boskos test pods svc cluster local owner name christie test boskos type gke project state free target state busy stdoutConfig path data boskosctl stdout volumeMounts name data mountPath data name parse project id image imega jq args r name data boskosctl stdout stdoutConfig path results project id path volumeMounts name data mountPath data volumes name data NOTE If the intent is to share output between Step s via a file the user must ensure that the paths provided are shared between the Step s e g via volumes There is currently a limit on the overall size of the Task results If the stdout stderr of a step is set to the path of a Task result and the step prints too many data the result manifest would become too large Currently the entrypoint binary will fail if that happens If the stdout stderr of a Step is set to the path of a Task result e g results empty path but that result is not defined for the Task the Step will run but the output will be captured in a file named results empty path in the current working directory Similarly any stubstition that is not valid e g some invalid path out txt will be left as is and will result in a file path some invalid path out txt relative to the current working directory Specifying Parameters You can specify parameters such as compilation flags or artifact names that you want to supply to the Task at execution time Parameters are passed to the Task from its corresponding TaskRun Parameter name Parameter name format Must only contain alphanumeric characters hyphens underscores and dots However object parameter name and its key names can t contain 
<!--
---
linkTitle: "Cluster Resolver"
weight: 310
---
-->
# Cluster Resolver
## Resolver Type
This Resolver responds to type `cluster`.
## Parameters
| Param Name | Description | Example Value |
|-------------|-------------------------------------------------------|----------------------------------|
| `kind` | The kind of resource to fetch. | `task`, `pipeline`, `stepaction` |
| `name` | The name of the resource to fetch. | `some-pipeline`, `some-task` |
| `namespace` | The namespace in the cluster containing the resource. | `default`, `other-namespace` |
## Requirements
- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-cluster-resolver` feature flag in the `resolvers-feature-flags` ConfigMap
in the `tekton-pipelines-resolvers` namespace set to `true`, as shown in the example after this list.
- [Beta features](./additional-configs.md#beta-features) enabled.
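A minimal sketch of setting the `enable-cluster-resolver` flag with `kubectl`, assuming the default installation ConfigMap and namespace named in the requirement above:

```shell
# Merge the flag into the resolvers feature-flags ConfigMap.
kubectl patch configmap resolvers-feature-flags \
  --namespace tekton-pipelines-resolvers \
  --type merge \
  --patch '{"data":{"enable-cluster-resolver":"true"}}'
```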
## Configuration
This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/cluster-resolver-config.yaml`](../config/resolvers/cluster-resolver-config.yaml)
for the name, namespace and defaults that the resolver ships with.
### Options
| Option Name | Description | Example Values |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|
| `default-kind` | The default resource kind to fetch if not specified in parameters. | `task`, `pipeline`, `stepaction` |
| `default-namespace` | The default namespace to fetch resources from if not specified in parameters. | `default`, `some-namespace` |
| `allowed-namespaces` | An optional comma-separated list of namespaces which the resolver is allowed to access. Defaults to empty, meaning all namespaces are allowed. | `default,some-namespace`, (empty) |
| `blocked-namespaces` | An optional comma-separated list of namespaces which the resolver is blocked from accessing. If the value is `*`, all namespaces are disallowed, and any allowed namespaces must be explicitly listed in `allowed-namespaces`. Defaults to empty, meaning no namespaces are blocked. | `default,other-namespace`, `*`, (empty) |
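For reference, the options above map onto keys in that `ConfigMap`. The following is a sketch only; the ConfigMap name is taken from the `cluster-resolver-config.yaml` file referenced above, and the namespace is assumed to be the resolvers' installation namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Name from config/resolvers/cluster-resolver-config.yaml; namespace assumed.
  name: cluster-resolver-config
  namespace: tekton-pipelines-resolvers
data:
  default-kind: "task"
  default-namespace: "default"
  allowed-namespaces: ""   # empty: resources may be fetched from any namespace
  blocked-namespaces: ""   # empty: no namespaces are blocked
```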
## Usage
### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: cluster
params:
- name: kind
value: task
- name: name
value: some-task
- name: namespace
value: namespace-containing-task
```
### StepAction Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: remote-stepaction-reference
spec:
steps:
- name: step-action-example
ref:
resolver: cluster
params:
- name: kind
value: stepaction
- name: name
value: some-stepaction
- name: namespace
value: namespace-containing-stepaction
```
### Pipeline resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: remote-pipeline-reference
spec:
pipelineRef:
resolver: cluster
params:
- name: kind
value: pipeline
- name: name
value: some-pipeline
- name: namespace
value: namespace-containing-pipeline
```
## `ResolutionRequest` Status
The `ResolutionRequest.Status.RefSource` field captures the source that the remote resource came from. It includes three subfields: `url`, `digest` and `entrypoint`.
- `url`: the unique, full identifier for the resource in the cluster, in the format `<resource uri>@<uid>`. The resource URI part is the namespace-scoped URI, i.e. `/apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME`. See [K8s Resource URIs](https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-uris) for more details.
- `digest`: the hex-encoded sha256 checksum of the content of the in-cluster resource's `spec` field. The checksum covers only the `spec` rather than the whole object because the metadata of in-cluster resources (for example, annotations) may legitimately be modified; the checksum of the `spec` content is therefore sufficient for source verifiers to detect malicious changes even when the metadata is modified with good intentions.
- `entrypoint`: ***empty*** because the path information is already available in the url field.
Example:
- TaskRun Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: cluster-demo
spec:
taskRef:
resolver: cluster
params:
- name: kind
value: task
- name: name
value: a-simple-task
- name: namespace
value: default
```
- `ResolutionRequest`
```yaml
apiVersion: resolution.tekton.dev/v1beta1
kind: ResolutionRequest
metadata:
labels:
resolution.tekton.dev/type: cluster
name: cluster-7a04be6baa3eeedd232542036b7f3b2d
namespace: default
ownerReferences: ...
spec:
params:
- name: kind
value: task
- name: name
value: a-simple-task
- name: namespace
value: default
status:
annotations: ...
conditions: ...
data: xxx
refSource:
digest:
sha256: 245b1aa918434cc8195b4d4d026f2e43df09199e2ed31d4dfd9c2cbea1c7ce54
uri: /apis/tekton.dev/v1beta1/namespaces/default/task/a-simple-task@3b82d8c4-f89e-47ea-a49d-3be0dca4c038
```
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). | tekton | linkTitle Cluster Resolver weight 310 Cluster Resolver Resolver Type This Resolver responds to type cluster Parameters Param Name Description Example Value kind The kind of resource to fetch task pipeline stepaction name The name of the resource to fetch some pipeline some task namespace The namespace in the cluster containing the resource default other namespace Requirements A cluster running Tekton Pipeline v0 41 0 or later The built in remote resolvers installed install md installing and configuring remote task and pipeline resolution The enable cluster resolver feature flag in the resolvers feature flags ConfigMap in the tekton pipelines resolvers namespace set to true Beta features additional configs md beta features enabled Configuration This resolver uses a ConfigMap for its settings See config resolvers cluster resolver config yaml config resolvers cluster resolver config yaml for the name namespace and defaults that the resolver ships with Options Option Name Description Example Values default kind The default resource kind to fetch if not specified in parameters task pipeline stepaction default namespace The default namespace to fetch resources from if not specified in parameters default some namespace allowed namespaces An optional comma separated list of namespaces which the resolver is allowed to access Defaults to empty meaning all namespaces are allowed default some namespace empty blocked namespaces An optional comma separated list of namespaces which the resolver is blocked from accessing If the value is a all namespaces will be disallowed and allowed namespace will need to be explicitely listed in allowed namespaces Defaults to empty meaning all namespaces are allowed default other namespace empty Usage Task Resolution yaml apiVersion tekton dev v1beta1 kind TaskRun metadata name remote task reference spec taskRef resolver cluster params name kind value task name name value some task name namespace value namespace containing task StepAction Resolution yaml apiVersion tekton dev v1beta1 kind Task metadata name remote stepaction reference spec steps name step action example ref resolver cluster params name kind value stepaction name name value some stepaction name namespace value namespace containing stepaction Pipeline resolution yaml apiVersion tekton dev v1beta1 kind PipelineRun metadata name remote pipeline reference spec pipelineRef resolver cluster params name kind value pipeline name name value some pipeline name namespace value namespace containing pipeline ResolutionRequest Status ResolutionRequest Status RefSource field captures the source where the remote resource came from It includes the 3 subfields url digest and entrypoint url url is the unique full identifier for the resource in the cluster It is in the format of resource uri uid Resource URI part is the namespace scoped uri i e apis GROUP VERSION namespaces NAMESPACE RESOURCETYPE NAME See K8s Resource URIs https kubernetes io docs reference using api api concepts resource uris for more details digest hex encoded sha256 checksum of the content in the in cluster resource s spec field The reason why it s the checksum of the spec content rather than the whole object is because the metadata of in cluster resources might be modified i e annotations Therefore the checksum of the spec content should be sufficient for source verifiers to verify if things have been changed maliciously even though the metadata is modified with good 
intentions entrypoint empty because the path information is already available in the url field Example TaskRun Resolution yaml apiVersion tekton dev v1beta1 kind TaskRun metadata name cluster demo spec taskRef resolver cluster params name kind value task name name value a simple task name namespace value default ResolutionRequest yaml apiVersion resolution tekton dev v1beta1 kind ResolutionRequest metadata labels resolution tekton dev type cluster name cluster 7a04be6baa3eeedd232542036b7f3b2d namespace default ownerReferences spec params name kind value task name name value a simple task name namespace value default status annotations conditions data xxx refSource digest sha256 245b1aa918434cc8195b4d4d026f2e43df09199e2ed31d4dfd9c2cbea1c7ce54 uri apis tekton dev v1beta1 namespaces default task a simple task 3b82d8c4 f89e 47ea a49d 3be0dca4c038 Except as otherwise noted the content of this page is licensed under the Creative Commons Attribution 4 0 License https creativecommons org licenses by 4 0 and code samples are licensed under the Apache 2 0 License https www apache org licenses LICENSE 2 0 |
tekton based on so that it possible for the taskruns to execute parallel while sharing volume weight 405 Affinity Assistants Affinity Assistant is a feature to coschedule to the same node Affinity Assistants | <!--
---
linkTitle: "Affinity Assistants"
weight: 405
---
-->
# Affinity Assistants
Affinity Assistant is a feature to coschedule `PipelineRun` `pods` to the same node
based on [kubernetes pod affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) so that it is possible for the `TaskRuns` to execute in parallel while sharing a volume.
Available Affinity Assistant Modes are **coschedule workspaces**, **coschedule pipelineruns**,
**isolate pipelinerun** and **disabled**.
> :seedling: **coschedule pipelineruns** and **isolate pipelinerun** modes are [**alpha features**](./additional-configs.md#alpha-features).
> **coschedule workspaces** is a **stable feature**.
* **coschedule workspaces** - When a `PersistentVolumeClaim` is used as volume source for a `Workspace` in a `PipelineRun`,
all `TaskRun` pods within the `PipelineRun` that share the `Workspace` will be scheduled to the same Node. (**Note:** Only one pvc-backed workspace can be mounted to each TaskRun in this mode.)
* **coschedule pipelineruns** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node.
* **isolate pipelinerun** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node,
and only one PipelineRun is allowed to run on a node at a time.
* **disabled** - The Affinity Assistant is disabled. No pod coscheduling behavior.
Because the Affinity Assistant applies its own pod-affinity rules, it is incompatible with other affinity rules
configured for the `TaskRun` pods (i.e. any affinity rules specified in a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) will be overwritten by the Affinity Assistant).
If the `PipelineRun` has a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) configured, the `NodeSelector` and `Tolerations` fields will also be set on the Affinity Assistant pod. The Affinity Assistant
is deleted when the `PipelineRun` is completed.
Currently, the Affinity Assistant Modes can be configured by the `disable-affinity-assistant` and `coschedule` feature flags.
The `disable-affinity-assistant` feature flag is now deprecated and will be removed in release `v0.60`. At that point, the Affinity Assistant Modes will be determined solely by the `coschedule` feature flag.
The following chart summarizes the Affinity Assistant Modes with different combinations of the `disable-affinity-assistant` and `coschedule` feature flags during migration (when both feature flags are present) and after the migration (when only the `coschedule` flag is present).
<table>
<thead>
<tr>
<th>disable-affinity-assistant</th>
<th>coschedule</th>
<th>behavior during migration</th>
<th>behavior after migration</th>
</tr>
</thead>
<tbody>
<tr>
<td>false (default)</td>
<td>disabled</td>
<td>N/A: invalid</td>
<td>disabled</td>
</tr>
<tr>
<td>false (default)</td>
<td>workspaces (default)</td>
<td>coschedule workspaces</td>
<td>coschedule workspaces</td>
</tr>
<tr>
<td>false (default)</td>
<td>pipelineruns</td>
<td>N/A: invalid</td>
<td>coschedule pipelineruns</td>
</tr>
<tr>
<td>false (default)</td>
<td>isolate-pipelinerun</td>
<td>N/A: invalid</td>
<td>isolate pipelinerun</td>
</tr>
<tr>
<td>true</td>
<td>disabled</td>
<td>disabled</td>
<td>disabled</td>
</tr>
<tr>
<td>true</td>
<td>workspaces (default)</td>
<td>disabled</td>
<td>coschedule workspaces</td>
</tr>
<tr>
<td>true</td>
<td>pipelineruns</td>
<td>coschedule pipelineruns</td>
<td>coschedule pipelineruns</td>
</tr>
<tr>
<td>true</td>
<td>isolate-pipelinerun</td>
<td>isolate pipelinerun</td>
<td>isolate pipelinerun</td>
</tr>
</tbody>
</table>
**Note:** Users who previously accepted the default behavior (`disable-affinity-assistant`: `false`) but now want one of the new features must set `disable-affinity-assistant` to `"true"` and then turn on the new behavior by setting the `coschedule` flag. Users who previously disabled the affinity assistant and want one of the new features only need to set the `coschedule` flag accordingly.
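As an illustration, the following sketch opts into the **coschedule pipelineruns** mode by editing the Pipelines `feature-flags` ConfigMap; the ConfigMap name and the `tekton-pipelines` namespace are the usual installation defaults and should be treated as assumptions here:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed default name/namespace for the Tekton Pipelines feature flags.
  name: feature-flags
  namespace: tekton-pipelines
data:
  # During migration, the deprecated flag must be "true" for the new
  # coschedule modes to take effect (see the table above).
  disable-affinity-assistant: "true"
  coschedule: "pipelineruns"
```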
**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
which requires a substantial amount of processing and can significantly slow down scheduling in large clusters.
We do not recommend using the Affinity Assistant in clusters larger than several hundred nodes.
**Note:** Pod anti-affinity requires nodes to be consistently labelled, in other words every
node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes
are missing the specified `topologyKey` label, it can lead to unintended behavior.
**Note:** At any time during the execution of a `pipelineRun`, if the node hosting the placeholder Affinity Assistant pod and
the `taskRun` pods sharing a `workspace` is `cordoned` or disabled for scheduling anything new (`tainted`), the
`pipelineRun` controller deletes the placeholder pod. The `taskRun` pods on the `cordoned` node continue running
until completion. The deletion of the placeholder pod triggers the creation of a new placeholder pod on any available node
such that the rest of the `pipelineRun` can continue without any disruption until it finishes.

<!--
---
linkTitle: "Authentication"
weight: 301
---
-->
# Authentication at Run Time
This document describes how Tekton handles authentication when executing
`TaskRuns` and `PipelineRuns`. Since authentication concepts and processes
apply to both of those entities in the same manner, this document collectively
refers to `TaskRuns` and `PipelineRuns` as `Runs` for the sake of brevity.
- [Overview](#overview)
- [Understanding credential selection](#understanding-credential-selection)
- [Using `Secrets` as a non-root user](#using-secrets-as-a-non-root-user)
- [Limiting `Secret` access to specific `Steps`](#limiting-secret-access-to-specific-steps)
- [Configuring authentication for Git](#configuring-authentication-for-git)
- [Configuring `basic-auth` authentication for Git](#configuring-basic-auth-authentication-for-git)
- [Configuring `ssh-auth` authentication for Git](#configuring-ssh-auth-authentication-for-git)
- [Using a custom port for SSH authentication](#using-a-custom-port-for-ssh-authentication)
- [Using SSH authentication in `git` type `Tasks`](#using-ssh-authentication-in-git-type-tasks)
- [Configuring authentication for Docker](#configuring-authentication-for-docker)
- [Configuring `basic-auth` authentication for Docker](#configuring-basic-auth-authentication-for-docker)
- [Configuring `docker*` authentication for Docker](#configuring-docker-authentication-for-docker)
- [Technical reference](#technical-reference)
- [`basic-auth` for Git](#basic-auth-for-git)
- [`ssh-auth` for Git](#ssh-auth-for-git)
- [`basic-auth` for Docker](#basic-auth-for-docker)
- [Errors and their meaning](#errors-and-their-meaning)
- ["unsuccessful cred copy" Warning](#unsuccessful-cred-copy-warning)
- [Multiple Steps with varying UIDs](#multiple-steps-with-varying-uids)
- [A Workspace or Volume is also Mounted for the same credentials](#a-workspace-or-volume-is-also-mounted-for-the-same-credentials)
- [A Task employs a read-only Workspace or Volume for `$HOME`](#a-task-employs-a-read-only-workspace-or-volume-for-home)
- [The Step is named `image-digest-exporter`](#the-step-is-named-image-digest-exporter)
- [Disabling Tekton's Built-In Auth](#disabling-tektons-built-in-auth)
- [Why would an organization want to do this?](#why-would-an-organization-want-to-do-this)
- [What are the effects of making this change?](#what-are-the-effects-of-making-this-change)
- [How to disable the built-in auth](#how-to-disable-the-built-in-auth)
## Overview
Tekton supports authentication via the Kubernetes first-class `Secret` types listed below.
<table>
<thead>
<th>Git</th>
<th>Docker</th>
</thead>
<tbody>
<tr>
<td><code>kubernetes.io/basic-auth</code><br>
<code>kubernetes.io/ssh-auth</code>
</td>
<td><code>kubernetes.io/basic-auth</code><br>
<code>kubernetes.io/dockercfg</code><br>
<code>kubernetes.io/dockerconfigjson</code>
</td>
</tbody>
</table>
A `Run` gains access to these `Secrets` through its associated `ServiceAccount`. Tekton requires that each
supported `Secret` includes a [Tekton-specific annotation](#understanding-credential-selection).
Tekton converts properly annotated `Secrets` of the supported types and stores them in a `Step's` container as follows:
- **Git:** Tekton produces a `~/.gitconfig` file or a `~/.ssh` directory.
- **Docker:** Tekton produces a `~/.docker/config.json` file.
Each `Secret` type supports multiple credentials covering multiple domains and establishes specific rules governing
credential formatting and merging. Tekton follows those rules when merging credentials of each supported type.
To consume these `Secrets`, Tekton performs credential initialization within every `Pod` it instantiates, before executing
any `Steps` in the `Run`. During credential initialization, Tekton accesses each `Secret` associated with the `Run` and
aggregates them into a `/tekton/creds` directory. Tekton then copies or symlinks files from this directory into the user's
`$HOME` directory.
TODO(#5357): Update docs to explain recommended methods of passing secrets in via workspaces
## Understanding credential selection
A `Run` might require multiple types of authentication. For example, a `Run` might require access to
multiple private Git and Docker repositories. You must properly annotate each `Secret` to specify the
domains for which Tekton can use the credentials that the `Secret` contains. Tekton **ignores** all
`Secrets` that are not properly annotated.
A credential annotation key must begin with `tekton.dev/git-` or `tekton.dev/docker-` and its value is the
URL of the host for which you want Tekton to use that credential. In the following example, Tekton uses a
`basic-auth` (username/password pair) `Secret` to access Git repositories at `github.com` and `gitlab.com`
as well as Docker repositories at `gcr.io`:
```yaml
apiVersion: v1
kind: Secret
metadata:
annotations:
tekton.dev/git-0: https://github.com
tekton.dev/git-1: https://gitlab.com
tekton.dev/docker-0: https://gcr.io
type: kubernetes.io/basic-auth
stringData:
username: <cleartext username>
password: <cleartext password>
```
And in this example, Tekton uses an `ssh-auth` `Secret` to access Git repositories
at `github.com` only:
```yaml
apiVersion: v1
kind: Secret
metadata:
annotations:
tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: <private-key>
# This is non-standard, but its use is encouraged to make this more secure.
# Omitting this results in the server's public key being blindly accepted.
known_hosts: <known-hosts>
```
## Using `Secrets` as a non-root user
In certain scenarios you might need to use `Secrets` as a non-root user. For example:
- Your platform randomizes the user and/or groups that your containers use to execute.
- The `Steps` in your `Task` define a non-root `securityContext`.
- Your `Task` specifies a global non-root `securityContext` that applies to all `Steps` in the `Task`.
The following are considerations for executing `Runs` as a non-root user:
- `ssh-auth` for Git requires the user to have a valid home directory configured in `/etc/passwd`.
Specifying a UID that has no valid home directory results in authentication failure.
- Since SSH authentication ignores the `$HOME` environment variable, you must either move or symlink
the appropriate `Secret` files from the `$HOME` directory defined by Tekton (`/tekton/home`) to
the non-root user's valid home directory to use SSH authentication for either Git or Docker.
For an example of configuring SSH authentication in a non-root `securityContext`,
see [`authenticating-git-commands`](../examples/v1/taskruns/authenticating-git-commands.yaml).
## Limiting `Secret` access to specific `Steps`
As described earlier in this document, Tekton stores supported `Secrets` in the
`$HOME` directory it defines (`/tekton/home`) and makes them available to all `Steps` within a `Task`.
If you want to limit a `Secret` to only be accessible to specific `Steps` but not
others, you must explicitly specify a `Volume` using the `Secret` definition and
manually `VolumeMount` it into the desired `Steps` instead of using the procedures
described later in this document.
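The following sketch illustrates this approach; the `Secret` name, `Step` names, and mount path are placeholders. Only the first `Step` mounts the `Volume`, so only it can read the `Secret`:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: limited-secret-access
spec:
  steps:
    - name: uses-secret
      image: ubuntu
      script: |
        # The Secret is visible only here because only this Step mounts the Volume.
        ls /etc/deploy-key
      volumeMounts:
        - name: deploy-key
          mountPath: /etc/deploy-key
          readOnly: true
    - name: no-secret
      image: ubuntu
      script: |
        # This Step does not mount the Volume, so the Secret is not available.
        ls /etc/deploy-key || echo "no credentials here"
  volumes:
    - name: deploy-key
      secret:
        secretName: deploy-key
```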
## Configuring authentication for Git
This section describes how to configure the following authentication schemes for use with Git:
- [Configuring `basic-auth` authentication for Git](#configuring-basic-auth-authentication-for-git)
- [Configuring `ssh-auth` authentication for Git](#configuring-ssh-auth-authentication-for-git)
- [Using a custom port for SSH authentication](#using-a-custom-port-for-ssh-authentication)
- [Using SSH authentication in `git` type `Tasks`](#using-ssh-authentication-in-git-type-tasks)
### Configuring `basic-auth` authentication for Git
This section describes how to configure a `basic-auth` type `Secret` for use with Git. In the example below,
before executing any `Steps` in the `Run`, Tekton creates a `~/.gitconfig` file containing the credentials
specified in the `Secret`.
Note: GitHub deprecated basic authentication with username and password. You can still use basic authentication, but you will need to use a personal access token instead of the cleartext password in the following example. You can find out how to create such a token on the [GitHub documentation site](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token).
1. In `secret.yaml`, define a `Secret` that specifies the username and password that you want Tekton
to use to access the target Git repository:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: basic-user-pass
annotations:
tekton.dev/git-0: https://github.com # Described below
type: kubernetes.io/basic-auth
stringData:
username: <cleartext username>
password: <cleartext password>
```
In the above example, the value for `tekton.dev/git-0` specifies the URL for which Tekton will use this `Secret`,
as described in [Understanding credential selection](#understanding-credential-selection).
1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: basic-user-pass
```
1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:
- Associate the `ServiceAccount` with your `TaskRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: build-push-task-run-2
spec:
serviceAccountName: build-bot
taskRef:
name: build-push
```
- Associate the `ServiceAccount` with your `PipelineRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: demo-pipeline
namespace: default
spec:
serviceAccountName: build-bot
pipelineRef:
name: demo-pipeline
```
1. Execute the `Run`:
```shell
kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml
```
### Configuring `ssh-auth` authentication for Git
This section describes how to configure an `ssh-auth` type `Secret` for use with Git. In the example below,
before executing any `Steps` in the `Run`, Tekton creates a `~/.ssh/config` file containing the SSH key
specified in the `Secret`.
1. In `secret.yaml`, define a `Secret` that specifies your SSH private key:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: ssh-key
annotations:
tekton.dev/git-0: github.com # Described below
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: <private-key>
# This is non-standard, but its use is encouraged to make this more secure.
# If it is not provided then the git server's public key will be requested
# when the repo is first fetched.
known_hosts: <known-hosts>
```
In the above example, the value for `tekton.dev/git-0` specifies the URL for which Tekton will use this `Secret`,
as described in [Understanding credential selection](#understanding-credential-selection).
1. Generate the `ssh-privatekey` value. For example:
`cat ~/.ssh/id_rsa`
1. Set the value of the `known_hosts` field to the public host key entries for your Git host, for example the relevant lines from your `~/.ssh/known_hosts` file or the output of `ssh-keyscan github.com`.
1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: ssh-key
```
1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:
- Associate the `ServiceAccount` with your `TaskRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: build-push-task-run-2
spec:
serviceAccountName: build-bot
taskRef:
name: build-push
```
- Associate the `ServiceAccount` with your `PipelineRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: demo-pipeline
namespace: default
spec:
serviceAccountName: build-bot
pipelineRef:
name: demo-pipeline
```
1. Execute the `Run`:
```shell
kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml
```
### Using a custom port for SSH authentication
You can specify a custom SSH port in your `Secret`.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: ssh-key-custom-port
annotations:
tekton.dev/git-0: example.com:2222
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: <private-key>
known_hosts: <known-hosts>
```
### Using SSH authentication in `git` type `Tasks`
You can use SSH authentication as described earlier in this document when invoking `git` commands
directly in the `Steps` of a `Task`. Since `ssh` ignores the `$HOME` variable and only uses the
user's home directory specified in `/etc/passwd`, each `Step` must symlink `/tekton/home/.ssh`
to the home directory of its associated user.
**Note:** This explicit symlinking is not necessary when using the
[`git-clone` `Task`](https://github.com/tektoncd/catalog/tree/v1beta1/git) from Tekton Catalog.
For example usage, see [`authenticating-git-commands`](../examples/v1/taskruns/authenticating-git-commands.yaml).
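A minimal sketch of such a `Step`, assuming the container runs as a user whose home directory in `/etc/passwd` is `/home/nonroot` (the image, UID, and repository URL are illustrative):
```yaml
steps:
  - name: clone-over-ssh
    image: alpine/git
    securityContext:
      runAsUser: 1000
    script: |
      #!/bin/sh
      set -e
      # Make the SSH files that Tekton placed in /tekton/home visible to this
      # Step's user, whose home directory comes from /etc/passwd, not $HOME.
      ln -s /tekton/home/.ssh /home/nonroot/.ssh
      git clone git@github.com:example/repo.git
```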
## Configuring authentication for Docker
This section describes how to configure the following authentication schemes for use with Docker:
- [Configuring `basic-auth` authentication for Docker](#configuring-basic-auth-authentication-for-docker)
- [Configuring `docker*` authentication for Docker](#configuring-docker-authentication-for-docker)
### Configuring `basic-auth` authentication for Docker
This section describes how to configure the `basic-auth` (username/password pair) type `Secret` for use with Docker.
In the example below, before executing any `Steps` in the `Run`, Tekton creates a `~/.docker/config.json` file containing
the credentials specified in the `Secret`.
1. In `secret.yaml`, define a `Secret` that specifies the username and password that you want Tekton
to use to access the target Docker registry:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: basic-user-pass
annotations:
tekton.dev/docker-0: https://gcr.io # Described below
type: kubernetes.io/basic-auth
stringData:
username: <cleartext username>
password: <cleartext password>
```
In the above example, the value for `tekton.dev/docker-0` specifies the URL for which Tekton will use this `Secret`,
as described in [Understanding credential selection](#understanding-credential-selection).
1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: basic-user-pass
```
1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:
- Associate the `ServiceAccount` with your `TaskRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: build-push-task-run-2
spec:
serviceAccountName: build-bot
taskRef:
name: build-push
```
- Associate the `ServiceAccount` with your `PipelineRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: demo-pipeline
namespace: default
spec:
serviceAccountName: build-bot
pipelineRef:
name: demo-pipeline
```
1. Execute the `Run`:
```shell
kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml
```
### Configuring `docker*` authentication for Docker
This section describes how to configure authentication using the `dockercfg` and `dockerconfigjson` type
`Secrets` for use with Docker. In the example below, before executing any `Steps` in the `Run`, Tekton creates
a `~/.docker/config.json` file containing the credentials specified in the `Secret`. When the `Steps` execute,
Tekton uses those credentials to access the target Docker registry.
**Note:** If you specify both the Tekton `basic-auth` and the above Kubernetes `Secrets`, Tekton merges all
credentials from all specified `Secrets` but Tekton's `basic-auth` `Secret` overrides either of the
Kubernetes `Secrets`.
1. Define a `Secret` based on your Docker client configuration file.
```bash
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
```
For more information, see [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)
in the Kubernetes documentation.
1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: build-bot
secrets:
- name: regcred
```
1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:
- Associate the `ServiceAccount` with your `TaskRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: build-with-basic-auth
spec:
serviceAccountName: build-bot
steps:
# ...
```
- Associate the `ServiceAccount` with your `PipelineRun`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: demo-pipeline
namespace: default
spec:
serviceAccountName: build-bot
pipelineRef:
name: demo-pipeline
```
1. Execute the `Run`:
```shell
kubectl apply --filename serviceaccount.yaml --filename run.yaml
```
## Technical reference
This section provides a technical reference for the implementation of the authentication mechanisms
described earlier in this document.
### `basic-auth` for Git
Given URLs, usernames, and passwords of the form: `https://url{n}.com`,
`user{n}`, and `pass{n}`, Tekton generates the following:
```
=== ~/.gitconfig ===
[credential]
helper = store
[credential "https://url1.com"]
username = "user1"
[credential "https://url2.com"]
username = "user2"
...
=== ~/.git-credentials ===
https://user1:pass1@url1.com
https://user2:pass2@url2.com
...
```
### `ssh-auth` for Git
Given hostnames, private keys, and `known_hosts` of the form: `url{n}.com`,
`key{n}`, and `known_hosts{n}`, Tekton generates the following.
By default, if no value is specified for `known_hosts`, Tekton configures SSH to accept
**any public key** returned by the server on first query. Tekton does this
by setting Git's `core.sshCommand` variable to `ssh -o StrictHostKeyChecking=accept-new`.
This behaviour can be disabled
[using the `require-git-ssh-secret-known-hosts` feature flag](./install.md#customizing-the-pipelines-controller-behavior).
When the flag is set to `true`, all Git SSH `Secrets` _must_ include a `known_hosts` entry.
```
=== ~/.ssh/id_key1 ===
{contents of key1}
=== ~/.ssh/id_key2 ===
{contents of key2}
...
=== ~/.ssh/config ===
Host url1.com
HostName url1.com
IdentityFile ~/.ssh/id_key1
Host url2.com
HostName url2.com
IdentityFile ~/.ssh/id_key2
...
=== ~/.ssh/known_hosts ===
{contents of known_hosts1}
{contents of known_hosts2}
...
```
### `basic-auth` for Docker
Given URLs, usernames, and passwords of the form: `https://url{n}.com`,
`user{n}`, and `pass{n}`, Tekton generates the following. Since Docker doesn't
support the `kubernetes.io/ssh-auth` type `Secret`, Tekton ignores annotations
on `Secrets` of that type.
```
=== ~/.docker/config.json ===
{
"auths": {
"https://url1.com": {
"auth": "$(echo -n user1:pass1 | base64)",
"email": "not@val.id",
},
"https://url2.com": {
"auth": "$(echo -n user2:pass2 | base64)",
"email": "not@val.id",
},
...
}
}
```
## Errors and their meaning
### "unsuccessful cred copy" Warning
This message has the following format:
> `warning: unsuccessful cred copy: ".docker" from "/tekton/creds" to
> "/tekton/home": unable to open destination: open
> /tekton/home/.docker/config.json: permission denied`
The precise credential and paths mentioned can vary. This message is only a
warning but can be indicative of the following problems:
#### Multiple Steps with varying UIDs
Multiple Steps with different users / UIDs are trying to initialize docker
or git credentials in the same Task. If those Steps need access to the
credentials then they may fail as they might not have permission to access them.
This happens because, by default, `/tekton/home` is set to be a Step user's home
directory and Tekton makes this directory a shared volume that all Steps in a
Task have access to. Any credentials initialized by one Step are overwritten
by subsequent Steps also initializing credentials.
If the Steps reporting this warning do not use the credentials mentioned
in the message then you can safely ignore it.
This can most easily be resolved by ensuring that each Step executing in your
Task and TaskRun runs with the same UID. A blanket UID can be set with [a
TaskRun's `Pod template` field](./taskruns.md#specifying-a-pod-template).
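For example, a `TaskRun` along these lines would run every `Step` as the same user (the UID and `Task` name are illustrative):
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: same-uid-for-all-steps
spec:
  taskRef:
    name: build-push
  # Run all containers in the TaskRun's Pod as one user so that every Step can
  # read the credentials that Tekton initializes.
  podTemplate:
    securityContext:
      runAsUser: 1001
```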
If you require Steps to run with different UIDs then you should disable
Tekton's built-in credential initialization and use Workspaces to mount
credentials from Secrets instead. See [the section on disabling Tekton's
credential initialization](#disabling-tektons-built-in-auth).
#### A Workspace or Volume is also Mounted for the same credentials
A Task has mounted both a Workspace (or Volume) for credentials and the TaskRun
has attached a service account with git or docker credentials that Tekton will
try to initialize.
The simplest solution to this problem is to not mix credentials mounted via
Workspace with those initialized using the process described in this document.
See [the section on disabling Tekton's credential initialization](#disabling-tektons-built-in-auth).
#### A Task employs a read-only Workspace or Volume for `$HOME`
A Task has mounted a read-only Workspace (or Volume) for the user's `HOME`
directory and the TaskRun attaches a service account with git or docker
credentials that Tekton will try to initialize.
The simplest solution to this problem is to not mix credentials mounted via
Workspace with those initialized using the process described in this document.
See [the section on disabling Tekton's credential initialization](#disabling-tektons-built-in-auth).
#### The contents of `$HOME` are `chown`ed to a different user
A Task Step that modifies the ownership of files in the user home directory
may prevent subsequent Steps from initializing credentials in that same home
directory. The simplest solution to this problem is to avoid running chown
on files and directories under `/tekton`. Another option is to run all Steps
with the same UID.
#### The Step is named `image-digest-exporter`
If you see this warning reported specifically by an `image-digest-exporter` Step
you can safely ignore this message. The reason it appears is that this Step is
injected by Tekton and it runs with a non-root UID
that can differ from those of the Steps in the Task. The Step does not use
these credentials.
---
## Disabling Tekton's Built-In Auth
### Why would an organization want to do this?
There are a number of reasons that an organization may want to disable
Tekton's built-in credential handling:
1. The mechanism can be quite difficult to debug.
2. Only an extremely limited set of credential types is supported.
3. Tasks with Steps that have different UIDs can break if multiple Steps
are trying to share access to the same credentials.
4. Tasks with Steps that have different UIDs can log more warning messages,
creating more noise in TaskRun logs. Again this is because multiple Steps
with differing UIDs cannot share access to the same credential files.
### What are the effects of making this change?
1. Credentials must now be passed explicitly to Tasks either with [Workspaces](./workspaces.md#using-workspaces-in-tasks),
environment variables (using [`envFrom`](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables) in your Steps and a Task param to
specify a Secret), or a custom volume and volumeMount definition.
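For instance, a `TaskRun` could bind a `Secret` to a `Task` workspace roughly as follows. The `Task` name and workspace name are illustrative; the `Task` itself must declare the workspace and read the credential files from it:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: run-with-explicit-credentials
spec:
  taskRef:
    name: clone-with-workspace-creds
  workspaces:
    # Surface the Secret's files to the Task as a workspace instead of relying
    # on Tekton's built-in credential initialization.
    - name: ssh-credentials
      secret:
        secretName: ssh-key
```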
### How to disable the built-in auth
To disable Tekton's built-in auth, edit the `feature-flags` `ConfigMap` in the
`tekton-pipelines` namespace and update the value of `disable-creds-init`
from `"false"` to `"true"`.
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Matrix"
weight: 406
---
-->
# Matrix
- [Overview](#overview)
- [Configuring a Matrix](#configuring-a-matrix)
- [Generating Combinations](#generating-combinations)
- [Explicit Combinations](#explicit-combinations)
- [Concurrency Control](#concurrency-control)
- [Parameters](#parameters)
- [Parameters in Matrix.Params](#parameters-in-matrixparams-1)
- [Parameters in Matrix.Include.Params](#parameters-in-matrixincludeparams)
- [Specifying both `params` and `matrix` in a `PipelineTask`](#specifying-both-params-and-matrix-in-a-pipelinetask)
- [Context Variables](#context-variables)
- [Access Matrix Combinations Length](#access-matrix-combinations-length)
- [Access Aggregated Results Length](#access-aggregated-results-length)
- [Results](#results)
- [Specifying Results in a Matrix](#specifying-results-in-a-matrix)
- [Results in Matrix.Params](#results-in-matrixparams)
- [Results in Matrix.Include.Params](#results-in-matrixincludeparams)
- [Results from fanned out PipelineTasks](#results-from-fanned-out-pipelinetasks)
- [Retries](#retries)
- [Examples](#examples)
- [`Matrix` Combinations with `Matrix.Params` only](#-matrix--combinations-with--matrixparams--only)
- [`Matrix` Combinations with `Matrix.Params` and `Matrix.Include`](#-matrix--combinations-with--matrixparams--and--matrixinclude-)
- [`PipelineTasks` with `Tasks`](#-pipelinetasks--with--tasks-)
- [`PipelineTasks` with `Custom Tasks`](#-pipelinetasks--with--custom-tasks-)
## Overview
`Matrix` is used to fan out `Tasks` in a `Pipeline`. This doc will explain the details of `matrix` support in
Tekton.
Documentation for specifying `Matrix` in a `Pipeline`:
- [Specifying `Matrix` in `Tasks`](pipelines.md#specifying-matrix-in-pipelinetasks)
- [Specifying `Matrix` in `Finally Tasks`](pipelines.md#specifying-matrix-in-finally-tasks)
- [Specifying `Matrix` in `Custom Tasks`](pipelines.md#specifying-matrix)
> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**
> The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.
## Configuring a Matrix
A `Matrix` allows you to generate combinations and specify explicit combinations to fan out a `PipelineTask`.
### Generating Combinations
The `Matrix.Params` is used to generate combinations to fan out a `PipelineTask`.
```yaml
matrix:
params:
- name: platform
value:
- linux
- mac
- name: browser
value:
- safari
- chrome
...
```
Combinations generated
```json!
{ "platform": "linux", "browser": "safari" }
{ "platform": "linux", "browser": "chrome"}
{ "platform": "mac", "browser": "safari" }
{ "platform": "mac", "browser": "chrome"}
```
[See another example](#-matrix--combinations-with--matrixparams--only)
### Explicit Combinations
The `Matrix.Include` is used to add explicit combinations to fan out a `PipelineTask`.
```yaml
matrix:
params:
- name: platform
value:
- linux
- mac
- name: browser
value:
- safari
- chrome
include:
- name: linux-url
params:
- name: platform
value: linux
- name: url
value: some-url
- name: non-existent-browser
params:
- name: browser
value: "i-do-not-exist"
...
```
The first `Matrix.Include` clause adds `"url": "some-url"` only to the original `matrix` combinations that include `"platform": "linux"`. The second `Matrix.Include` clause cannot be added to any original `matrix` combination without overwriting `params` of those combinations, so it is added as an additional `matrix` combination:
Combinations generated
```json!
{ "platform": "linux", "browser": "safari", "url": "some-url" }
{ "platform": "linux", "browser": "chrome", "url": "some-url"}
{ "platform": "mac", "browser": "safari" }
{ "platform": "mac", "browser": "chrome"}
{ "browser": "i-do-not-exist"}
```
[See another example](#-matrix--combinations-with--matrixparams--and--matrixinclude-)
The `Matrix.Include` can also be used without `Matrix.Params` to generate explicit combinations to fan out a `PipelineTask`.
```yaml
matrix:
include:
- name: build-1
params:
- name: IMAGE
value: "image-1"
- name: DOCKERFILE
value: "path/to/Dockerfile1"
- name: build-2
params:
- name: IMAGE
value: "image-2"
- name: DOCKERFILE
value: "path/to/Dockerfile2"
- name: build-3
params:
- name: IMAGE
value: "image-3"
- name: DOCKERFILE
value: "path/to/Dockerfile3"
...
```
This configuration allows users to take advantage of `Matrix` fan-out without an auto-populated set of combinations. A `Matrix` with an `Include` section but no `Params` section creates exactly the `TaskRuns` specified in the `Include` section, each with the specified `Parameters`.
Combinations generated
```json!
{ "IMAGE": "image-1", "DOCKERFILE": "path/to/Dockerfile1" }
{ "IMAGE": "image-2", "DOCKERFILE": "path/to/Dockerfile2"}
{ "IMAGE": "image-3", "DOCKERFILE": "path/to/Dockerfile3}
```
## DisplayName
Matrix creates multiple `taskRuns` for the same `pipelineTask`. Each `taskRun` has its own unique combination of `params` based
on the `matrix` specification. These `params` can be surfaced and used to configure a unique name for each `matrix`
instance, making it easier to distinguish the instances based on their inputs.
```yaml
pipelineSpec:
tasks:
- name: platforms-and-browsers
displayName: "Platforms and Browsers: $(params.platform) and $(params.browser)"
matrix:
params:
- name: platform
value:
- linux
- mac
- windows
- name: browser
value:
- chrome
- safari
- firefox
taskRef:
name: platform-browsers
```
The `displayName` is available as part of `pipelineRun.status.childReferences` with each `taskRun`.
This allows the clients to consume `displayName` wherever needed:
```json
[
{
"apiVersion": "tekton.dev/v1",
"displayName": "Platforms and Browsers: linux and chrome",
"kind": "TaskRun",
"name": "matrixed-pr-vcx79-platforms-and-browsers-0",
"pipelineTaskName": "platforms-and-browsers"
},
{
"apiVersion": "tekton.dev/v1",
"displayName": "Platforms and Browsers: mac and safari",
"kind": "TaskRun",
"name": "matrixed-pr-vcx79-platforms-and-browsers-1",
"pipelineTaskName": "platforms-and-browsers"
}
]
```
### `matrix.include[].name`
`matrix.include[]` section allows specifying a `name` along with a list of `params`. This `name` field is available as
part of the `pipelineRun.status.childReferences[].displayName` if specified.
`displayName` and `matrix.include[].name` can co-exist, but `matrix.include[].name` takes precedence. The pipeline author can also
reference `params` in `matrix.include[].name`; these references are resolved in the `childReferences`.
```yaml
- name: platforms-and-browsers-with-include
matrix:
include:
- name: "Platform: $(params.platform)"
params:
- name: platform
value: linux111
params:
- name: browser
value: chrome
```
### Precedence Order
| specification | precedence | `childReferences[].displayName` |
|-----------------------------------------------------------|---------------------------------|---------------------------------|
| `tasks[].displayName` | `tasks[].displayName` | `tasks[].displayName` |
| `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` |
| `tasks[].displayName` and `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` |
## Concurrency Control
The default maximum count of `TaskRuns` or `Runs` from a given `Matrix` is **256**. To customize the maximum count of
`TaskRuns` or `Runs` generated from a given `Matrix`, configure the `default-max-matrix-combinations-count` in
[config defaults](/config/config-defaults.yaml). When a `Matrix` in a `PipelineTask` would generate more than the maximum number of
`TaskRuns` or `Runs`, the `Pipeline` validation fails.
Note: The matrix combination count includes combinations generated from both `Matrix.Params` and `Matrix.Include.Params`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
data:
default-service-account: "tekton"
default-timeout-minutes: "20"
default-max-matrix-combinations-count: "1024"
...
```
For more information, see [installation customizations](./additional-configs.md#customizing-basic-execution-parameters).
## Parameters
`Matrix` takes in `Parameters` in two sections:
- `Matrix.Params`: used to generate combinations to fan out the `PipelineTask`.
- `Matrix.Include.Params`: used to specify specific combinations to fan out the `PipelineTask`.
Note that:
- The names of the `Parameters` in the `Matrix` must match the names of the `Parameters` in the underlying
`Task` that they will be substituting.
- The names of the `Parameters` in the `Matrix` must be unique. Specifying the same parameter multiple times
will result in a validation error.
- A `Parameter` can be passed to either the `matrix` or `params` field, not both.
- If the `Matrix` has an empty array `Parameter`, then the `PipelineTask` will be skipped.
For further details on specifying `Parameters` in the `Pipeline` and passing them to
`PipelineTasks`, see [documentation](pipelines.md#specifying-parameters).
#### Parameters in Matrix.Params
`Matrix.Params` supports string replacements from `Parameters` of type String, Array or Object.
```yaml
tasks:
...
- name: task-4
taskRef:
name: task-4
matrix:
params:
- name: param-one
value:
- $(params.foo) # string replacement from string param
- $(params.bar[0]) # string replacement from array param
- $(params.rad.key) # string replacement from object param
- name: param-two
value: $(params.bar) # array replacement from array param
```
`Matrix.Params` supports whole array replacements from array `Parameters`.
```yaml
tasks:
...
- name: task-4
taskRef:
name: task-4
matrix:
params:
- name: param-one
value: $(params.bar[*]) # whole array replacement from array param
```
#### Parameters in Matrix.Include.Params
`Matrix.Include.Params` takes string replacements from `Parameters` of type String, Array or Object.
```yaml
tasks:
...
- name: task-4
taskRef:
name: task-4
matrix:
include:
- name: foo-bar-rad
params:
- name: foo
value: $(params.foo) # string replacement from string param
- name: bar
value: $(params.bar[0]) # string replacement from array param
- name: rad
value: $(params.rad.key) # string replacement from object param
```
### Specifying both `params` and `matrix` in a `PipelineTask`
In the example below, the *test* `Task` takes *browser* and *platform* `Parameters` of type
`"string"`. A `Pipeline` used to run the `Task` on three browsers (using `matrix`) and one
platform (using `params`) would be specified as such and execute three `TaskRuns`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: platform-browser-tests
spec:
tasks:
- name: fetch-repository
taskRef:
name: git-clone
...
- name: test
matrix:
params:
- name: browser
value:
- chrome
- safari
- firefox
params:
- name: platform
value: linux
taskRef:
name: browser-test
...
```
## Context Variables
Similarly to the `Parameters` in the `Params` field, the `Parameters` in the `Matrix` field will accept
[context variables](variables.md) that will be substituted, including:
* `PipelineRun` name, namespace and uid
* `Pipeline` name
* `PipelineTask` retries
The following `context` variables allow users to access the `matrix` runtime data. Note: the consuming `pipelineTask` must have an ordering dependency on the matrixed `pipelineTask`, created either with `runAfter` or by consuming one of its task `Results`.
#### Access Matrix Combinations Length
The pipeline authors can access the total number of instances created as part of the `matrix` using the syntax: `tasks.<pipelineTaskName>.matrix.length`.
```yaml
- name: matrixed-echo-length
runAfter:
- matrix-emitting-results
params:
- name: matrixlength
value: $(tasks.matrix-emitting-results.matrix.length)
```
#### Access Aggregated Results Length
The pipeline authors can access the length of the array of aggregated results that were
actually produced using the syntax: `tasks.<pipelineTaskName>.matrix.<resultName>.length`. This will allow users to loop over the results produced.
```yaml
- name: matrixed-echo-results-length
runAfter:
- matrix-emitting-results
params:
- name: matrixlength
value: $(tasks.matrix-emitting-results.matrix.a-result.length)
```
See the full example here: [pr-with-matrix-context-variables]
## Results
### Specifying Results in a Matrix
Consuming `Results` from previous `TaskRuns` or `Runs` in a `Matrix`, which would dynamically generate
`TaskRuns` or `Runs` from the fanned out `PipelineTask`, is supported. Producing `Results` from a
`PipelineTask` with a `Matrix` is supported with limitations - see [further details](#results-from-fanned-out-pipelinetasks).
See the end-to-end example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].
#### Results in Matrix.Params
`Matrix.Params` supports whole array replacements and string replacements from `Results` of type String, Array or Object.
```yaml
tasks:
...
- name: task-4
taskRef:
name: task-4
matrix:
params:
- name: values
        value: $(tasks.task-1.results.whole-array[*]) # whole array replacement from an array result
```
```yaml
tasks:
...
- name: task-5
taskRef:
name: task-5
matrix:
params:
- name: values
value:
- $(tasks.task-1.results.a-string-result)
- $(tasks.task-2.results.an-array-result[0])
- $(tasks.task-3.results.an-object-result.key)
```
For further information, see the example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].
#### Results in Matrix.Include.Params
`Matrix.Include.Params` supports string replacements from `Results` of type String, Array or Object.
```yaml
tasks:
...
- name: task-4
taskRef:
name: task-4
matrix:
include:
- name: foo-bar-duh
params:
- name: foo
value: $(tasks.task-1.results.foo) # string replacement from string result
- name: bar
value: $(tasks.task-2.results.bar[0]) # string replacement from array result
- name: duh
value: $(tasks.task-2.results.duh.key) # string replacement from object result
```
### Results from fanned out PipelineTasks
Emitting `Results` from fanned out `PipelineTasks` is now supported. Each fanned out
`TaskRun` that produces `Results` of type `string` has those `Results` aggregated into an `array`
of `Results` during reconciliation, and the whole `array` of `Results` can be consumed by another `pipelineTask` using the star notation `[*]`.
Note: A known limitation is not being able to consume a singular result or specific
combinations of results produced by a previous fanned out `PipelineTask`.
| Result Type in `taskRef` or `taskSpec` | Parameter Type of Consumer | Specification |
|----------------------------------------|----------------------------|-------------------------------------------------------|
| string | array | `$(tasks.<pipelineTaskName>.results.<resultName>[*])` |
| array | Not Supported | Not Supported |
| object | Not Supported | Not Supported |
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: platform-browser-tests
spec:
tasks:
- name: matrix-emitting-results
matrix:
params:
- name: platform
value:
- linux
- mac
- windows
- name: browser
value:
- chrome
- safari
- firefox
taskRef:
name: taskwithresults
kind: Task
- name: task-consuming-results
taskRef:
name: echoarrayurl
kind: Task
params:
- name: url
value: $(tasks.matrix-emitting-results.results.report-url[*])
...
```
See the full example [pr-with-matrix-emitting-results]
## Retries
The `retries` field is used to specify the number of times a `PipelineTask` should be retried when its `TaskRun` or
`Run` fails, see the [documentation][retries] for further details. When a `PipelineTask` is fanned out using `Matrix`,
a given `TaskRun` or `Run` that is executed will be retried as many times as specified in the `retries` field of the `PipelineTask`.
For example, the `PipelineTask` in this `PipelineRun` will be fanned out into three `TaskRuns` each of which will be
retried once:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: matrixed-pr-with-retries-
spec:
pipelineSpec:
tasks:
- name: matrix-and-params
matrix:
params:
- name: platform
value:
- linux
- mac
- windows
params:
- name: browser
value: chrome
retries: 1
taskSpec:
params:
- name: platform
- name: browser
steps:
- name: echo
image: alpine
script: |
echo "$(params.platform) and $(params.browser)"
exit 1
```
## Examples
### `Matrix` Combinations with `Matrix.Params` only
```yaml
matrix:
params:
- name: GOARCH
value:
- "linux/amd64"
- "linux/ppc64le"
- "linux/s390x"
- name: version
value:
- "go1.17"
- "go1.18.1"
```
This `matrix` specification will result in six `taskRuns` with the following `matrix` combinations:
```json!
{ "GOARCH": "linux/amd64", "version": "go1.17" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1" }
{ "GOARCH": "linux/s390x", "version": "go1.17" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1" }
```
Let's expand this use case to showcase slightly more complex combinations in the next example.
### `Matrix` Combinations with `Matrix.Params` and `Matrix.Include`
Now, let's introduce `include` with a couple of `Parameters`: `"package"`, `"flags"` and `"context"`:
```yaml
matrix:
params:
- name: GOARCH
value:
- "linux/amd64"
- "linux/ppc64le"
- "linux/s390x"
- name: version
value:
- "go1.17"
- "go1.18.1"
include:
- name: common-package
params:
- name: package
value: "path/to/common/package/"
- name: s390x-no-race
params:
- name: GOARCH
value: "linux/s390x"
- name: flags
value: "-cover -v"
- name: go117-context
params:
- name: version
value: "go1.17"
- name: context
value: "path/to/go117/context"
- name: non-existent-arch
params:
- name: GOARCH
value: "I-do-not-exist"
```
The first `include` clause is added to all the original `matrix` combinations without overwriting any `parameters` of
the original combinations:
```json!
{ "GOARCH": "linux/amd64", "version": "go1.17", **"package": "path/to/common/package/"** }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", **"package": "path/to/common/package/"** }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", **"package": "path/to/common/package/"** }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", **"package": "path/to/common/package/"** }
{ "GOARCH": "linux/s390x", "version": "go1.17", **"package": "path/to/common/package/"** }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", **"package": "path/to/common/package/"** }
```
The second `include` clause adds `"flags": "-cover -v"` only to the original `matrix` combinations that include
`"GOARCH": "linux/s390x"`:
```json!
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", **"flags": "-cover -v"** }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package/", **"flags": "-cover -v"** }
```
The third `include` clause adds `"context": "path/to/go117/context"` only to the original `matrix` combinations
that include `"version": "go1.17"`:
```json!
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package/", **"context": "path/to/go117/context"** }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package/", **"context": "path/to/go117/context"** }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", "flags": "-cover -v", **"context": "path/to/go117/context"** }
```
The fourth `include` clause cannot be added to any original `matrix` combination without overwriting any `params` of the
original combinations, so it is added as an additional `matrix` combination:
```json!
* { **"GOARCH": "I-do-not-exist"** }
```
The above specification will result in seven `taskRuns` with the following matrix combinations:
```json!
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", "flags": "-cover -v", "context": "path/to/go117/context" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package/", "flags": "-cover -v" }
{ "GOARCH": "I-do-not-exist" }
```
### `PipelineTasks` with `Tasks`
When a `PipelineTask` has a `Task` and a `Matrix`, the `Task` will be executed in parallel `TaskRuns` with
substitutions from combinations of `Parameters`.
In the example below, nine `TaskRuns` are created with combinations of platforms ("linux", "mac", "windows")
and browsers ("chrome", "safari", "firefox").
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: platform-browsers
annotations:
description: |
A task that does something cool with platforms and browsers
spec:
params:
- name: platform
- name: browser
steps:
- name: echo
image: alpine
script: |
echo "$(params.platform) and $(params.browser)"
---
# run platform-browsers task with:
# platforms: linux, mac, windows
# browsers: chrome, safari, firefox
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: matrixed-pr-
spec:
serviceAccountName: 'default'
pipelineSpec:
tasks:
- name: platforms-and-browsers
matrix:
params:
- name: platform
value:
- linux
- mac
- windows
- name: browser
value:
- chrome
- safari
- firefox
taskRef:
name: platform-browsers
```
When the above `PipelineRun` is executed, these are the `TaskRuns` that are created:
```shell
$ tkn taskruns list
NAME STARTED DURATION STATUS
matrixed-pr-6lvzk-platforms-and-browsers-8 11 seconds ago 7 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-6 12 seconds ago 7 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-7 12 seconds ago 9 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-4 12 seconds ago 7 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-5 12 seconds ago 6 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-3 13 seconds ago 7 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-1 13 seconds ago 8 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-2 13 seconds ago 8 seconds Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-0 13 seconds ago 8 seconds Succeeded
```
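Each of these `TaskRuns` receives exactly one `Matrix` combination as ordinary `Parameters`. As a minimal, abbreviated sketch (the mapping of a combination to a particular index is an assumption and may differ), one of the generated `TaskRuns` would look roughly like this:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: matrixed-pr-6lvzk-platforms-and-browsers-0
spec:
  serviceAccountName: default
  taskRef:
    kind: Task
    name: platform-browsers
  params:
    - name: platform
      value: linux
    - name: browser
      value: chrome
```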
When the above `PipelineRun` is executed, its status is populated with `ChildReferences` of the above `TaskRuns`. The
`PipelineRun` status tracks the status of all the fanned out `TaskRuns`. This is the `PipelineRun` after completing
successfully:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: matrixed-pr-
labels:
tekton.dev/pipeline: matrixed-pr-6lvzk
name: matrixed-pr-6lvzk
namespace: default
spec:
pipelineSpec:
tasks:
- matrix:
params:
- name: platform
value:
- linux
- mac
- windows
- name: browser
value:
- chrome
- safari
- firefox
name: platforms-and-browsers
taskRef:
kind: Task
name: platform-browsers
serviceAccountName: default
timeout: 1h0m0s
status:
pipelineSpec:
tasks:
- matrix:
params:
- name: platform
value:
- linux
- mac
- windows
- name: browser
value:
- chrome
- safari
- firefox
name: platforms-and-browsers
taskRef:
kind: Task
name: platform-browsers
startTime: "2022-06-23T23:01:11Z"
completionTime: "2022-06-23T23:01:20Z"
conditions:
- lastTransitionTime: "2022-06-23T23:01:20Z"
message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
childReferences:
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-4
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-6
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-2
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-1
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-7
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-0
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-8
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-3
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1beta1
kind: TaskRun
name: matrixed-pr-6lvzk-platforms-and-browsers-5
pipelineTaskName: platforms-and-browsers
```
To execute this example yourself, run [`PipelineRun` with `Matrix`][pr-with-matrix].
### `PipelineTasks` with `Custom Tasks`
When a `PipelineTask` has a `Custom Task` and a `Matrix`, the `Custom Task` will be executed in parallel `Runs` with
substitutions from combinations of `Parameters`.
In the example below, eight `Runs` are created with combinations of CEL expressions, using the [CEL `Custom Task`][cel].
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: matrixed-pr-
spec:
serviceAccountName: 'default'
pipelineSpec:
tasks:
- name: platforms-and-browsers
matrix:
params:
- name: type
value:
- "type(1)"
- "type(1.0)"
- name: colors
value:
- "{'blue': '0x000080', 'red': '0xFF0000'}['blue']"
- "{'blue': '0x000080', 'red': '0xFF0000'}['red']"
- name: bool
value:
- "type(1) == int"
- "{'blue': '0x000080', 'red': '0xFF0000'}['red'] == '0xFF0000'"
taskRef:
apiVersion: cel.tekton.dev/v1alpha1
kind: CEL
```
When the above `PipelineRun` is executed, these `Runs` are created:
```shell
$ kubectl get run.tekton.dev
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
matrixed-pr-4djw9-platforms-and-browsers-0 True EvaluationSuccess 10s 10s
matrixed-pr-4djw9-platforms-and-browsers-1 True EvaluationSuccess 10s 10s
matrixed-pr-4djw9-platforms-and-browsers-2 True EvaluationSuccess 10s 10s
matrixed-pr-4djw9-platforms-and-browsers-3 True EvaluationSuccess 9s 9s
matrixed-pr-4djw9-platforms-and-browsers-4 True EvaluationSuccess 9s 9s
matrixed-pr-4djw9-platforms-and-browsers-5 True EvaluationSuccess 9s 9s
matrixed-pr-4djw9-platforms-and-browsers-6 True EvaluationSuccess 9s 9s
matrixed-pr-4djw9-platforms-and-browsers-7 True EvaluationSuccess 9s 9s
```
When the above `PipelineRun` is executed, its status is populated with `ChildReferences` of the above `Runs`. The
`PipelineRun` status tracks the status of all the fanned out `Runs`. This is the `PipelineRun` after completing:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: matrixed-pr-
labels:
tekton.dev/pipeline: matrixed-pr-4djw9
name: matrixed-pr-4djw9
namespace: default
spec:
pipelineSpec:
tasks:
- matrix:
params:
- name: type
value:
- type(1)
- type(1.0)
- name: colors
value:
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''blue'']'
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red'']'
- name: bool
value:
- type(1) == int
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red''] == ''0xFF0000'''
name: platforms-and-browsers
taskRef:
apiVersion: cel.tekton.dev/v1alpha1
kind: CEL
serviceAccountName: default
timeout: 1h0m0s
status:
pipelineSpec:
tasks:
- matrix:
params:
- name: type
value:
- type(1)
- type(1.0)
- name: colors
value:
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''blue'']'
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red'']'
- name: bool
value:
- type(1) == int
- '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red''] == ''0xFF0000'''
name: platforms-and-browsers
taskRef:
apiVersion: cel.tekton.dev/v1alpha1
kind: CEL
startTime: "2022-06-28T20:49:40Z"
completionTime: "2022-06-28T20:49:41Z"
conditions:
- lastTransitionTime: "2022-06-28T20:49:41Z"
message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
reason: Succeeded
status: "True"
type: Succeeded
childReferences:
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-1
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-2
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-3
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-4
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-5
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-6
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-7
pipelineTaskName: platforms-and-browsers
- apiVersion: tekton.dev/v1alpha1
kind: Run
name: matrixed-pr-4djw9-platforms-and-browsers-0
pipelineTaskName: platforms-and-browsers
```
[cel]: https://github.com/tektoncd/experimental/tree/1609827ea81d05c8d00f8933c5c9d6150cd36989/cel
[pr-with-matrix]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix.yaml
[pr-with-matrix-and-results]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-results.yaml
[pr-with-matrix-context-variables]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-context-variables.yaml
[pr-with-matrix-emitting-results]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-emitting-results.yaml
[retries]: pipelines.md#using-the-retries-field
<!--
---
linkTitle: "Labels and Annotations"
weight: 305
---
-->
# Labels and Annotations
Tekton allows you to use custom [Kubernetes Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
to easily mark Tekton entities belonging to the same conceptual execution chain. Tekton also automatically adds select labels
to more easily identify resource relationships. This document describes the label propagation scheme and automatic labeling, and
provides usage examples.
---
- [Label propagation](#label-propagation)
- [Automatic labeling](#automatic-labeling)
- [Usage examples](#usage-examples)
---
## Label propagation
Labels propagate among Tekton entities as follows:
- For `Pipelines` instantiated using a `PipelineRun`, labels propagate
automatically from `Pipelines` to `PipelineRuns` to `TaskRuns`, and then to
the associated `Pods`. If a label is present in both `Pipeline` and
`PipelineRun`, the label in `PipelineRun` takes precedence.
- Labels from `Tasks` referenced by `TaskRuns` within a `PipelineRun` propagate to the corresponding `TaskRuns`,
and then to the associated `Pods`. As for `Pipeline` and `PipelineRun`, if a label is present in both `Task` and
`TaskRun`, the label in `TaskRun` takes precedence.
- For standalone `TaskRuns` (that is, ones not executing as part of a `Pipeline`), labels
propagate from the [referenced `Task`](taskruns.md#specifying-the-target-task), if one exists, to
the corresponding `TaskRun`, and then to the associated `Pod`. The same as above applies.
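For example, a custom label set on a `PipelineRun` is copied to every `TaskRun` and `Pod` that the `PipelineRun` creates. A minimal sketch (the label key and the referenced `Pipeline` name are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: test-pipelinerun
  labels:
    # Propagates to the TaskRuns created for this PipelineRun and to their Pods.
    release.example.com/channel: stable
spec:
  pipelineRef:
    name: test-pipeline
```

Querying by that label, for example with `kubectl get pods -l release.example.com/channel=stable`, then returns every `Pod` in the execution chain.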
## Automatic labeling
Tekton automatically adds labels to Tekton entities as described in the following table.
**Note:** `*.tekton.dev` labels are reserved for Tekton's internal use only. Do not add or remove them manually.
<table >
<tbody>
<tr>
<td><b>Label</b></td>
<td><b>Added To</b></td>
<td><b>Propagates To</b></td>
<td><b>Contains</b></td>
</tr>
<tr>
<td><code>tekton.dev/pipeline</code></td>
<td><code>PipelineRuns</code></td>
<td><code>TaskRuns, Pods</code></td>
<td>Name of the <code>Pipeline</code> that the <code>PipelineRun</code> references.</td>
</tr>
<tr>
<td><code>tekton.dev/pipelineRun</code></td>
<td><code>TaskRuns</code> that are created automatically during the execution of a <code>PipelineRun</code>.</td>
<td><code>TaskRuns, Pods</code></td>
<td>Name of the <code>PipelineRun</code> that triggered the creation of the <code>TaskRun</code>.</td>
</tr>
<tr>
<td><code>tekton.dev/task</code></td>
<td><code>TaskRuns</code> that <a href="taskruns.md#specifying-the-target-task">reference an existing <code>Task</code></a>.</td>
<td><code>Pods</code></td>
<td>Name of the <code>Task</code> that the <code>TaskRun</code> references.</td>
</tr>
<tr>
<td><code>tekton.dev/clusterTask</code></td>
<td><code>TaskRuns</code> that reference an existing <code>ClusterTask</code>.</td>
<td><code>Pods</code></td>
<td>Name of the <code>ClusterTask</code> that the <code>TaskRun</code> references.</td>
</tr>
<tr>
<td><code>tekton.dev/taskRun</code></td>
<td><code>Pods</code></td>
<td>No propagation.</td>
<td>Name of the <code>TaskRun</code> that created the <code>Pod</code>.</td>
</tr>
<tr>
<td><code>tekton.dev/memberOf</code></td>
<td><code>TaskRuns</code> that are created automatically during the execution of a <code>PipelineRun</code>.</td>
<td><code>TaskRuns, Pods</code></td>
<td><code>tasks</code> or <code>finally</code> depending on the <code>PipelineTask</code>'s membership in the <code>Pipeline</code>.</td>
</tr>
<tr>
<td><code>app.kubernetes.io/instance</code>, <code>app.kubernetes.io/component</code></td>
<td><code>Pods</code>, <code>StatefulSets</code> (Affinity Assistant)</td>
<td>No propagation.</td>
<td><code>Pod</code> affinity values for <code>TaskRuns</code>.</td>
</tr>
</tbody>
</table>
## Usage examples
Below are some examples of using labels:
The following command finds all `Pods` created by a `PipelineRun` named `test-pipelinerun`:
```shell
kubectl get pods --all-namespaces -l tekton.dev/pipelineRun=test-pipelinerun
```
The following command finds all `TaskRuns` that reference a `Task` named `test-task`:
```shell
kubectl get taskruns --all-namespaces -l tekton.dev/task=test-task
```
The following command finds all `TaskRuns` that reference a `ClusterTask` named `test-clustertask`:
```shell
kubectl get taskruns --all-namespaces -l tekton.dev/clusterTask=test-clustertask
```
## Annotations propagation
Annotations propagate among Tekton entities as follows (similar to Labels):
- For `Pipelines` instantiated using a `PipelineRun`, annotations propagate
automatically from `Pipelines` to `PipelineRuns` to `TaskRuns`, and then to
  the associated `Pods`. If an annotation is present in both `Pipeline` and
`PipelineRun`, the annotation in `PipelineRun` takes precedence.
- Annotations from `Tasks` referenced by `TaskRuns` within a `PipelineRun` propagate to the corresponding `TaskRuns`,
  and then to the associated `Pods`. As for `Pipeline` and `PipelineRun`, if an annotation is present in both `Task` and
`TaskRun`, the annotation in `TaskRun` takes precedence.
- For standalone `TaskRuns` (that is, ones not executing as part of a `Pipeline`), annotations
propagate from the [referenced `Task`](taskruns.md#specifying-the-target-task), if one exists, to
  the corresponding `TaskRun`, and then to the associated `Pod`. The same as above applies.
<!--
---
linkTitle: "HTTP Resolver"
weight: 311
---
-->
# HTTP Resolver
This resolver responds to type `http`.
## Parameters
| Param Name                 | Description                                                                                                                                                                | Example Value                                                                                      |
|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
| `url`                      | The URL to fetch from                                                                                                                                                      | <https://raw.githubusercontent.com/tektoncd-catalog/git-clone/main/task/git-clone/git-clone.yaml> |
| `http-username`            | An optional username when fetching a task with credentials (must be used in conjunction with `http-password-secret`)                                                      | `git`                                                                                              |
| `http-password-secret`     | An optional secret in the `PipelineRun` namespace with a reference to a password when fetching a task with credentials (must be used in conjunction with `http-username`) | `http-password`                                                                                    |
| `http-password-secret-key` | An optional key in the `http-password-secret` to be used when fetching a task with credentials                                                                            | Default: `password`                                                                                |
A valid URL must be provided. Only HTTP or HTTPS URLs are supported.
## Requirements
- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-http-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the
`tekton-pipelines-resolvers` namespace set to `true`.
- [Beta features](./additional-configs.md#beta-features) enabled.
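For example, with a default installation the `enable-http-resolver` flag can be switched on by setting the corresponding key in that ConfigMap. A minimal sketch (only the relevant key is shown):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  # Allow the http resolver to serve resolution requests.
  enable-http-resolver: "true"
```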
## Configuration
This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/http-resolver-config.yaml`](../config/resolvers/http-resolver-config.yaml)
for the name, namespace and defaults that the resolver ships with.
### Options
| Option Name | Description | Example Values |
|-----------------------------|------------------------------------------------------|------------------------|
| `fetch-timeout` | The maximum time any fetching of URL resolution may take. **Note**: a global maximum timeout of 1 minute is currently enforced on _all_ resolution requests. | `1m`, `2s`, `700ms` |
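For example, the fetch timeout can be tightened by overriding the option in the resolver's `ConfigMap`. A minimal sketch, assuming the `ConfigMap` name from the file referenced above and the default `tekton-pipelines-resolvers` namespace (only the relevant key is shown):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: http-resolver-config
  namespace: tekton-pipelines-resolvers
data:
  # Upper bound for fetching a single URL; the global 1 minute cap still applies.
  fetch-timeout: "30s"
```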
## Usage
### Task Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: http
params:
- name: url
value: https://raw.githubusercontent.com/tektoncd-catalog/git-clone/main/task/git-clone/git-clone.yaml
```
### Task Resolution with Basic Auth
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: remote-task-reference
spec:
taskRef:
resolver: http
params:
- name: url
value: https://raw.githubusercontent.com/owner/private-repo/main/task/task.yaml
- name: http-username
value: git
- name: http-password-secret
value: git-secret
- name: http-password-secret-key
value: git-token
```
### Pipeline Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: http-demo
spec:
pipelineRef:
resolver: http
params:
- name: url
value: https://raw.githubusercontent.com/tektoncd/catalog/main/pipeline/build-push-gke-deploy/0.1/build-push-gke-deploy.yaml
```
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "TaskRun Result Attestation"
weight: 1660
---
-->
⚠️ This is a work in progress: SPIRE support is not yet functional
TaskRun result attestation is currently an alpha experimental feature. Currently, all that is implemented is support for configuring Tekton to connect to SPIRE. See TEP-0089 for details on the overall design and feature set.
Because this is a large feature, it will be implemented in the following phases. This document will be updated as new phases are implemented.
1. Add a client for SPIRE (done).
2. Add a configMap which initializes SPIRE (in progress).
3. Modify TaskRun to sign and verify TaskRun Results using SPIRE.
4. Modify Tekton Chains to verify the TaskRun Results.
## Architecture Overview
This feature relies on a SPIRE installation. This is how it integrates into the architecture of Tekton:
```
┌─────────────┐ Register TaskRun Workload Identity ┌──────────┐
│ ├──────────────────────────────────────────────►│ │
│ Tekton │ │ SPIRE │
│ Controller │◄───────────┐ │ Server │
│ │ │ Listen on TaskRun │ │
└────────────┬┘ │ └──────────┘
▲ │ ┌───────┴───────────────────────────────┐ ▲
│ │ │ Tekton TaskRun │ │
│ │ └───────────────────────────────────────┘ │
│ Configure│ ▲ │ Attest
│ Pod & │ │ │ +
│ check │ │ │ Request
│ ready │ ┌───────────┐ │ │ SVIDs
│ └────►│ TaskRun ├────────────────────────┘ │
│ │ Pod │ │
│ └───────────┘ TaskRun Entrypointer │
│ ▲ Sign Result and update │
│ Get │ Get SVID TaskRun status with │
│ SPIRE │ signature + cert │
│ server │ │
│ Credentials │ ▼
┌┴───────────────────┴─────────────────────────────────────────────────────┐
│ │
│ SPIRE Agent ( Runs as ) │
│ + CSI Driver ( Daemonset ) │
│ │
└──────────────────────────────────────────────────────────────────────────┘
```
Initial Setup:
1. As part of the SPIRE deployment, the SPIRE server attests the agents running on each node in the cluster.
1. The Tekton Controller is configured to have workload identity entry creation permissions to the SPIRE server.
1. As part of the Tekton Controller operations, the Tekton Controller will retrieve an identity that it can use to talk to the SPIRE server to register TaskRun workloads.
When a TaskRun is created:
1. The Tekton Controller creates a TaskRun pod and its associated resources.
1. When the TaskRun pod is ready, the Tekton Controller registers an identity with the information of the pod to the SPIRE server. This will tell the SPIRE server the identity of the TaskRun to use as well as how to attest the workload/pod.
1. After the TaskRun steps complete, the entrypointer requests an SVID from the SPIFFE Workload API (via the SPIRE agent socket).
1. The SPIRE agent will attest the workload and request an SVID.
1. The entrypointer receives an x509 SVID, containing the x509 certificate and associated private key.
1. The entrypointer signs the results of the TaskRun and emits the signatures and x509 certificate to the TaskRun results for later verification.
## Enabling TaskRun result attestations
To enable TaskRun attestations:
1. Make sure `enforce-nonfalsifiability` is set to `"spire"`. See [`additional-configs.md`](./additional-configs.md#customizing-the-pipelines-controller-behavior) for details; a minimal `feature-flags` sketch is also included at the end of this document.
1. Create a SPIRE deployment containing a SPIRE server, SPIRE agents and the SPIRE CSI driver, for convenience, [this sample single cluster deployment](https://github.com/spiffe/spiffe-csi/tree/main/example/config) can be used.
1. Register the SPIRE workload entry for Tekton with the "Admin" flag, which will allow the Tekton controller to communicate with the SPIRE server to manage the TaskRun identities dynamically.
```
# This example is assuming use of the above SPIRE deployment
# Example where trust domain is "example.org" and cluster name is "example-cluster"
# Register a node alias for all nodes of which the Tekton Controller may reside
kubectl -n spire exec -it \
deployment/spire-server -- \
/opt/spire/bin/spire-server entry create \
-node \
-spiffeID spiffe://example.org/allnodes \
-selector k8s_psat:cluster:example-cluster
# Register the tekton controller workload to have access to creating entries in the SPIRE server
kubectl -n spire exec -it \
deployment/spire-server -- \
/opt/spire/bin/spire-server entry create \
-admin \
-spiffeID spiffe://example.org/tekton/controller \
-parentID spiffe://example.org/allnodes \
-selector k8s:ns:tekton-pipelines \
-selector k8s:pod-label:app:tekton-pipelines-controller \
-selector k8s:sa:tekton-pipelines-controller
```
1. Modify the controller (`config/controller.yaml`) to provide access to the SPIRE agent socket.
```yaml
# Add the following to the volumeMounts of the "tekton-pipelines-controller" container
- name: spiffe-workload-api
mountPath: /spiffe-workload-api
readOnly: true
# Add the following to the volumes of the controller pod
- name: spiffe-workload-api
csi:
driver: "csi.spiffe.io"
```
1. (Optional) Modify the configmap (`config/config-spire.yaml`) to configure non-default SPIRE options.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-spire
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
# More explanation about the fields is at the SPIRE Server Configuration file
# https://spiffe.io/docs/latest/deploying/spire_server/#server-configuration-file
# spire-trust-domain specifies the SPIRE trust domain to use.
spire-trust-domain: "example.org"
# spire-socket-path specifies the SPIRE agent socket for SPIFFE workload API.
spire-socket-path: "unix:///spiffe-workload-api/spire-agent.sock"
# spire-server-addr specifies the SPIRE server address for workload/node registration.
spire-server-addr: "spire-server.spire.svc.cluster.local:8081"
# spire-node-alias-prefix specifies the SPIRE node alias prefix to use.
spire-node-alias-prefix: "/tekton-node/"
   ```
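Once these steps are complete you can sanity-check the setup. The commands below are a minimal sketch assuming the sample SPIRE deployment and the namespaces used above; adjust names to match your environment.

```bash
# List the registered SPIRE entries and confirm that the node alias and
# Tekton controller entries created earlier are present.
kubectl -n spire exec -it deployment/spire-server -- \
    /opt/spire/bin/spire-server entry show

# Confirm that non-falsifiability enforcement is switched on.
kubectl -n tekton-pipelines get configmap feature-flags \
    -o jsonpath='{.data.enforce-nonfalsifiability}'
```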
<!--
---
linkTitle: "StepActions"
weight: 201
---
-->
# StepActions
- [Overview](#overview)
- [Configuring a StepAction](#configuring-a-stepaction)
- [Declaring Parameters](#declaring-parameters)
- [Passing Params to StepAction](#passing-params-to-stepaction)
- [Emitting Results](#emitting-results)
- [Fetching Emitted Results from StepActions](#fetching-emitted-results-from-stepactions)
- [Declaring WorkingDir](#declaring-workingdir)
- [Declaring SecurityContext](#declaring-securitycontext)
- [Declaring VolumeMounts](#declaring-volumemounts)
- [Referencing a StepAction](#referencing-a-stepaction)
- [Specifying Remote StepActions](#specifying-remote-stepactions)
- [Controlling Step Execution with when Expressions](#controlling-step-execution-with-when-expressions)
## Overview
> :seedling: **`StepActions` is a [beta](additional-configs.md#beta-features) feature.**
> The `enable-step-actions` feature flag must be set to `"true"` to specify a `StepAction` in a `Step`.
A `StepAction` is the reusable and scriptable unit of work that is performed by a `Step`.
A `Step` is not reusable; the work it performs is reusable and referenceable. `Steps` are in-lined in the `Task` definition and either perform work directly or perform a `StepAction`. A `StepAction` cannot be run stand-alone (unlike a `TaskRun` or a `PipelineRun`); it has to be referenced by a `Step`. Another way to think about this is that a `Step` is not composed of `StepActions` (unlike a `Task` being composed of `Steps` and `Sidecars`). Instead, a `Step` is an actionable component, meaning that it has the ability to refer to a `StepAction`. The author of the `Step` must be able to compose a `Step` using a `StepAction` and provide all the necessary context (or orchestration) to it.
## Configuring a `StepAction`
A `StepAction` definition supports the following fields:
- Required
- [`apiVersion`][kubernetes-overview] - Specifies the API version. For example,
`tekton.dev/v1alpha1`.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `StepAction` object.
- [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the
`StepAction` resource object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for this `StepAction` resource object.
- `image` - Specifies the image to use for the `Step`.
- The container image must abide by the [container contract](./container-contract.md).
- Optional
- `command`
- cannot be used at the same time as using `script`.
- `args`
- `script`
- cannot be used at the same time as using `command`.
- `env`
- [`params`](#declaring-parameters)
- [`results`](#emitting-results)
- [`workingDir`](#declaring-workingdir)
- [`securityContext`](#declaring-securitycontext)
- [`volumeMounts`](#declaring-volumemounts)
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
The example below demonstrates the use of most of the above-mentioned fields:
```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
name: example-stepaction-name
spec:
env:
- name: HOME
value: /home
image: ubuntu
command: ["ls"]
args: ["-lh"]
```
### Declaring Parameters
Like with `Tasks`, a `StepAction` must declare all the parameters that it uses. The same rules for `Parameter` [name](./tasks.md/#parameter-name), [type](./tasks.md/#parameter-type) (including [object](./tasks.md/#object-type), [array](./tasks.md/#array-type) and [string](./tasks.md/#string-type)) apply as when declaring them in `Tasks`. A `StepAction` can also provide [default value](./tasks.md/#default-value) to a `Parameter`.
`Parameters` are passed to the `StepAction` from its corresponding `Step` referencing it.
```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
name: stepaction-using-params
spec:
params:
- name: gitrepo
type: object
properties:
url:
type: string
commit:
type: string
- name: flags
type: array
- name: outputPath
type: string
default: "/workspace"
image: some-git-image
args: [
"-url=$(params.gitrepo.url)",
"-revision=$(params.gitrepo.commit)",
"-output=$(params.outputPath)",
"$(params.flags[*])",
]
```
> :seedling: **`params` cannot be directly used in a `script` in `StepActions`.**
> Directly substituting `params` in `scripts` makes the workload prone to shell attacks. Therefore, we do not allow direct usage of `params` in `scripts` in `StepActions`. Instead, rely on passing `params` to `env` variables and referencing them in `scripts`. We cannot do the same for `inlined-steps` because it breaks `v1 API` compatibility for existing users.
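For instance, a `StepAction` along these lines passes a param to the script through an `env` variable rather than substituting it directly (a minimal sketch; the param and variable names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
  name: stepaction-params-via-env
spec:
  params:
    - name: greeting
      type: string
      default: "hello"
  image: ubuntu
  env:
    # Expose the param as an env variable so the script never interpolates it directly.
    - name: GREETING
      value: $(params.greeting)
  script: |
    #!/usr/bin/env bash
    echo "${GREETING}"
```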
#### Passing Params to StepAction
A `StepAction` may require [params](#declaring-parameters). In this case, a `Task` needs to ensure that the `StepAction` has access to all the required `params`.
When referencing a `StepAction`, a `Step` can also provide it with `params`, just like how a `TaskRun` provides params to the underlying `Task`.
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: step-action
spec:
params:
- name: param-for-step-action
description: "this is a param that the step action needs."
steps:
- name: action-runner
ref:
name: step-action
params:
- name: step-action-param
value: $(params.param-for-step-action)
```
**Note:** If a `Step` declares `params` for an `inlined Step`, it will also lead to a validation error. This is because an `inlined Step` gets its `params` from the `TaskRun`.
### Emitting Results
A `StepAction` also declares the results that it will emit.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
name: stepaction-declaring-results
spec:
results:
- name: current-date-unix-timestamp
description: The current date in unix timestamp format
- name: current-date-human-readable
description: The current date in human readable format
image: bash:latest
script: |
#!/usr/bin/env bash
date +%s | tee $(results.current-date-unix-timestamp.path)
date | tee $(results.current-date-human-readable.path)
```
It is possible that a `StepAction` with `Results` is used multiple times in the same `Task`, or that multiple `StepActions` in the same `Task` produce `Results` with the same name. Resolving the `Result` names becomes critical; otherwise there could be unexpected outcomes. The `Task` needs to be able to resolve these `Result` name clashes by mapping them to different `Result` names. For this reason, we introduce the capability to store results on a `Step` level.
`StepActions` can also emit `Results` to `$(step.results.<resultName>.path)`.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
name: stepaction-declaring-results
spec:
results:
- name: current-date-unix-timestamp
description: The current date in unix timestamp format
- name: current-date-human-readable
description: The current date in human readable format
image: bash:latest
script: |
#!/usr/bin/env bash
date +%s | tee $(step.results.current-date-unix-timestamp.path)
date | tee $(step.results.current-date-human-readable.path)
```
`Results` from the above `StepAction` can be [fetched by the `Task`](#fetching-emitted-results-from-stepactions) or in [another `Step/StepAction`](#passing-results-between-steps) via `$(steps.<stepName>.results.<resultName>)`.
#### Fetching Emitted Results from StepActions
A `Task` can fetch `Results` produced by the `StepActions` (i.e. only `Results` emitted to `$(step.results.<resultName>.path)`, NOT `$(results.<resultName>.path)`) using variable replacement syntax. We introduce a field to [`Task Results`](./tasks.md#emitting-results) called `Value` whose value can be set to the variable `$(steps.<stepName>.results.<resultName>)`.
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: task-fetching-results
spec:
results:
- name: git-url
description: "url of git repo"
value: $(steps.git-clone.results.url)
- name: registry-url
description: "url of docker registry"
value: $(steps.kaniko.results.url)
steps:
- name: git-clone
ref:
name: clone-step-action
- name: kaniko
ref:
name: kaniko-step-action
```
`Results` emitted to `$(step.results.<resultName>.path)` are not automatically available as `TaskRun Results`. The `Task` must explicitly fetch it from the underlying `Step` referencing `StepActions`.
For example, let's assume that in the previous example, the "kaniko" `StepAction` also produced a `Result` named "digest". In that case, the `Task` should also fetch the "digest" from the "kaniko" `Step`.
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: task-fetching-results
spec:
results:
- name: git-url
description: "url of git repo"
value: $(steps.git-clone.results.url)
- name: registry-url
description: "url of docker registry"
value: $(steps.kaniko.results.url)
- name: digest
description: "digest of the image"
value: $(steps.kaniko.results.digest)
steps:
- name: git-clone
ref:
name: clone-step-action
- name: kaniko
ref:
name: kaniko-step-action
```
#### Passing Results between Steps
`StepResults` (i.e. results written to `$(step.results.<result-name>.path)`, NOT `$(results.<result-name>.path)`) can be shared with following steps via replacement variable `$(steps.<step-name>.results.<result-name>)`.
Pipeline supports two new types of results and parameters: array `[]string` and object `map[string]string`.
| Result Type | Parameter Type | Specification | `enable-api-fields` |
|-------------|----------------|--------------------------------------------------|---------------------|
| string | string | `$(steps.<step-name>.results.<result-name>)` | stable |
| array | array | `$(steps.<step-name>.results.<result-name>[*])` | alpha or beta |
| array | string | `$(steps.<step-name>.results.<result-name>[i])` | alpha or beta |
| object      | string         | `$(steps.<step-name>.results.<result-name>.key)` | alpha or beta       |
**Note:** Whole Array `Results` (using star notation) cannot be referred to in `script` and `env`.
The example below shows how you could pass `step results` from a `step` into following steps, in this case, into a `StepAction`.
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: step-action-run
spec:
  taskSpec:
steps:
- name: inline-step
results:
- name: result1
type: array
- name: result2
type: string
- name: result3
type: object
properties:
IMAGE_URL:
type: string
IMAGE_DIGEST:
type: string
image: alpine
script: |
echo -n "[\"image1\", \"image2\", \"image3\"]" | tee $(step.results.result1.path)
echo -n "foo" | tee $(step.results.result2.path)
echo -n "{\"IMAGE_URL\":\"ar.com\", \"IMAGE_DIGEST\":\"sha234\"}" | tee $(step.results.result3.path)
- name: action-runner
ref:
name: step-action
params:
- name: param1
value: $(steps.inline-step.results.result1[*])
- name: param2
value: $(steps.inline-step.results.result2)
- name: param3
value: $(steps.inline-step.results.result3[*])
```
**Note:** `Step Results` can only be referenced in a `Step's/StepAction's` `env`, `command` and `args`. Referencing in any other field will throw an error.
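For instance, building on the example above, a later inlined step could consume `result2` through an `env` variable (a minimal sketch reusing the step and result names from the previous example):

```yaml
- name: consume-step
  image: alpine
  env:
    # Step results may be referenced in env, command and args.
    - name: PREVIOUS_RESULT
      value: $(steps.inline-step.results.result2)
  script: |
    echo "inline-step produced: ${PREVIOUS_RESULT}"
```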
### Declaring WorkingDir
You can declare `workingDir` in a `StepAction`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
name: example-stepaction-name
spec:
image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
workingDir: /workspace
script: |
# clone the repo
...
```
The `Task` using the `StepAction` has more context about how the `Steps` have been orchestrated. As such, the `Task` should be able to update the `workingDir` of the `StepAction` so that the `StepAction` is executed from the correct location.
The `StepAction` can parametrize the `workingDir` and work relative to it. This way, the `Task` does not really need control over the `workingDir`; it just needs to pass the path as a parameter.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
name: example-stepaction-name
spec:
image: ubuntu
params:
- name: source
description: "The path to the source code."
workingDir: $(params.source)
```
### Declaring SecurityContext
You can declare `securityContext` in a `StepAction`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
name: example-stepaction-name
spec:
image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
securityContext:
runAsUser: 0
script: |
# clone the repo
...
```
Note that the `securityContext` from `StepAction` will overwrite the `securityContext` from [`TaskRun`](./taskruns.md/#example-of-running-step-containers-as-a-non-root-user).
### Declaring VolumeMounts
You can define `VolumeMounts` in `StepActions`. The `name` of the `VolumeMount` MUST be a single reference to a string `Parameter`. For example, `$(params.registryConfig)` is valid while `$(params.registryConfig)-foo` and `"unparametrized-name"` are invalid. This is to ensure reusability of `StepActions` such that `Task` authors have control of which `Volumes` they bind to the `VolumeMounts`.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: my-step
spec:
params:
- name: registryConfig
- name: otherConfig
volumeMounts:
- name: $(params.registryConfig)
mountPath: /registry-config
- name: $(params.otherConfig)
mountPath: /other-config
image: ...
script: ...
```
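A `Task` referencing this `StepAction` can then decide which `Volumes` back those mounts by declaring `volumes` and passing the volume names as `params`. The sketch below assumes the `StepAction` above (here named `my-step`); the volume and secret names are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: task-binding-volumes
spec:
  volumes:
    # The Task author decides which volumes back the StepAction's mounts.
    - name: registry-config
      secret:
        secretName: registry-credentials
    - name: other-config
      configMap:
        name: other-config
  steps:
    - name: action-runner
      ref:
        name: my-step
      params:
        - name: registryConfig
          value: registry-config
        - name: otherConfig
          value: other-config
```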
### Referencing a StepAction
`StepActions` can be referenced from the `Step` using the `ref` field, as follows:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: step-action-run
spec:
taskSpec:
steps:
- name: action-runner
ref:
name: step-action
```
Upon resolution and execution of the `TaskRun`, the `Status` will look something like:
```yaml
status:
completionTime: "2023-10-24T20:28:42Z"
conditions:
- lastTransitionTime: "2023-10-24T20:28:42Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
podName: step-action-run-pod
provenance:
featureFlags:
EnableStepActions: true
...
startTime: "2023-10-24T20:28:32Z"
steps:
- container: step-action-runner
imageID: docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978
name: action-runner
terminationReason: Completed
terminated:
containerID: containerd://46a836588967202c05b594696077b147a0eb0621976534765478925bb7ce57f6
exitCode: 0
finishedAt: "2023-10-24T20:28:42Z"
reason: Completed
startedAt: "2023-10-24T20:28:42Z"
taskSpec:
steps:
- computeResources: {}
image: alpine
name: action-runner
```
If a `Step` is referencing a `StepAction`, it cannot contain the fields supported by `StepActions`. This includes:
- `image`
- `command`
- `args`
- `script`
- `env`
- `volumeMounts`
Using any of the above fields and referencing a `StepAction` in the same `Step` is not allowed and will cause a validation error.
```yaml
# This is not allowed and will result in a validation error
# because the image is expected to be provided by the StepAction
# and not inlined.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: step-action-run
spec:
taskSpec:
steps:
- name: action-runner
ref:
name: step-action
image: ubuntu
```
Executing the above `TaskRun` will result in an error that looks like:
```
Error from server (BadRequest): error when creating "STDIN": admission webhook "validation.webhook.pipeline.tekton.dev" denied the request: validation failed: image cannot be used with Ref: spec.taskSpec.steps[0].image
```
When a `Step` is referencing a `StepAction`, it can contain the following fields:
- `computeResources`
- `workspaces` (Isolated workspaces)
- `volumeDevices`
- `imagePullPolicy`
- `onError`
- `stdoutConfig`
- `stderrConfig`
- `securityContext`
- `envFrom`
- `timeout`
- `ref`
- `params`
Using any of the above fields and referencing a `StepAction` is allowed and will not cause an error. For example, the `TaskRun` below will execute without any errors:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: step-action-run
spec:
taskSpec:
steps:
- name: action-runner
ref:
name: step-action
params:
- name: step-action-param
value: hello
computeResources:
requests:
memory: 1Gi
cpu: 500m
timeout: 1h
onError: continue
```
#### Specifying Remote StepActions
A `ref` field may specify a `StepAction` in a remote location such as git.
Support for specific types of remote locations will depend on the `Resolvers` your
cluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). The below example demonstrates referencing a `StepAction` in git:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-action-run-
spec:
taskSpec:
steps:
- name: action-runner
ref:
resolver: git
params:
- name: url
value: https://github.com/repo/repo.git
- name: revision
value: main
- name: pathInRepo
value: remote_step.yaml
```
The default resolver type can be configured by the `default-resolver-type` field in the `config-defaults` ConfigMap (`alpha` feature). See [additional-configs.md](./additional-configs.md) for details.
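A hedged sketch of that setting in the `config-defaults` ConfigMap (the value `git` is only an example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # Used when a remote reference does not specify a resolver explicitly.
  default-resolver-type: "git"
```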
### Controlling Step Execution with when Expressions
You can define `when` in a `step` to control its execution.
The components of `when` expressions are `input`, `operator`, `values`, `cel`:
| Component | Description | Syntax |
|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `input` | Input for the `when` expression, defaults to an empty string if not provided. | * Static values e.g. `"ubuntu"`<br/> * Variables (parameters or results) e.g. `"$(params.image)"` or `"$(tasks.task1.results.image)"` or `"$(tasks.task1.results.array-results[1])"` |
| `operator` | `operator` represents an `input`'s relationship to a set of `values`, a valid `operator` must be provided. | `in` or `notin` |
| `values`   | An array of string values, the `values` array must be provided and has to be non-empty. | * An array param e.g. `["$(params.images[*])"]`<br/> * An array result of a task `["$(tasks.task1.results.array-results[*])"]`<br/> * An array result of a step `["$(steps.step1.results.array-results[*])"]`<br/>* `values` can contain static values e.g. `"ubuntu"`<br/> * `values` can contain variables (parameters or results) or a Workspace's `bound` state e.g. `["$(params.image)"]` or `["$(steps.step1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` or `["$(steps.step1.results.array-results[1])"]` |
| `cel` | The Common Expression Language (CEL) implements common semantics for expression evaluation, enabling different applications to more easily interoperate. This is an `alpha` feature, `enable-cel-in-whenexpression` needs to be set to true to use this feature. | [cel-syntax](https://github.com/google/cel-spec/blob/master/doc/langdef.md#syntax)
The below example shows how to use when expressions to control step executions:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc-2
spec:
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
generateName: step-when-example
spec:
workspaces:
- name: custom
persistentVolumeClaim:
claimName: my-pvc-2
taskSpec:
description: |
      A simple task that shows how to use `when` expressions to determine if a step should be executed
steps:
- name: should-execute
image: bash:latest
script: |
#!/usr/bin/env bash
echo "executed..."
when:
- input: "$(workspaces.custom.bound)"
operator: in
values: [ "true" ]
- name: should-skip
image: bash:latest
script: |
#!/usr/bin/env bash
echo skipskipskip
when:
- input: "$(workspaces.custom2.bound)"
operator: in
values: [ "true" ]
- name: should-continue
image: bash:latest
script: |
#!/usr/bin/env bash
echo blabalbaba
- name: produce-step
image: alpine
results:
- name: result2
type: string
script: |
echo -n "foo" | tee $(step.results.result2.path)
- name: run-based-on-step-results
image: alpine
script: |
echo "wooooooo"
when:
- input: "$(steps.produce-step.results.result2)"
operator: in
values: [ "bar" ]
workspaces:
- name: custom
```
The `StepState` for a skipped step looks something like the below:
```yaml
{
"container": "step-run-based-on-step-results",
"imageID": "docker.io/library/alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b",
"name": "run-based-on-step-results",
"terminated": {
"containerID": "containerd://bf81162e79cf66a2bbc03e3654942d3464db06ff368c0be263a8a70f363a899b",
"exitCode": 0,
"finishedAt": "2024-03-26T03:57:47Z",
"reason": "Completed",
"startedAt": "2024-03-26T03:57:47Z"
},
"terminationReason": "Skipped"
}
```
Where `terminated.exitCode` is `0` and `terminationReason` is `Skipped` to indicate the Step exited successfully and was skipped.
<!--
---
linkTitle: "How to write a Resolver"
weight: 104
---
-->
# How to write a Resolver
This how-to will outline the steps a developer needs to take when creating
a new (very basic) Resolver. Rather than focus on support for a particular version
control system or cloud platform, this Resolver will simply respond with
some hard-coded YAML.
If you aren't yet familiar with the meaning of "resolution" when it
comes to Tekton, a short summary follows. You might also want to read a
little bit into Tekton Pipelines, particularly [the docs on specifying a
target Pipeline to
run](./pipelineruns.md#specifying-the-target-pipeline)
and, if you're feeling particularly brave or bored, the [really long
design doc describing Tekton
Resolution](https://github.com/tektoncd/community/blob/main/teps/0060-remote-resource-resolution.md).
## What's a Resolver?
A Resolver is a program that runs in a Kubernetes cluster alongside
[Tekton Pipelines](https://github.com/tektoncd/pipeline) and "resolves"
requests for `Tasks` and `Pipelines` from remote locations. More
concretely: if a user submitted a `PipelineRun` that needed a Pipeline
YAML stored in a git repo, then it would be a `Resolver` that's
responsible for fetching the YAML file from git and returning it to
Tekton Pipelines.
This pattern extends beyond just git, allowing a developer to integrate
support for other version control systems, cloud buckets, or storage systems
without having to modify Tekton Pipelines itself.
## Just want to see the working example?
If you'd prefer to look at the end result of this howto you can visit the
[`./resolver-template`](./resolver-template)
in the Tekton Pipelines repo. That template is built on the code from
this howto to get you up and running quickly.
## Pre-requisites
Before getting started with this howto you'll need to be comfortable
developing in Go and have a general understanding of how Tekton
Resolution works.
You'll also need the following:
- A computer with
[`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl) and
[`ko`](https://github.com/google/ko) installed.
- A Kubernetes cluster running at least Kubernetes 1.28. A [`kind`
cluster](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)
should work fine for following the guide on your local machine.
- An image registry that you can push images to. If you're using `kind`
make sure your `KO_DOCKER_REPO` environment variable is set to
`kind.local`.
- Tekton Pipelines and remote resolvers installed in your Kubernetes
cluster. See [the installation
guide](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution) for
instructions on installing it.
## First Steps
The first thing to do is create an initial directory structure for your
project. For this example we'll create a directory and initialize a new
go module with a few subdirectories for our code:
```bash
$ mkdir demoresolver
$ cd demoresolver
$ go mod init example.com/demoresolver
$ mkdir -p cmd/demoresolver
$ mkdir config
```
The `cmd/demoresolver` directory will contain code for the resolver and the
`config` directory will eventually contain a yaml file for deploying the
resolver to Kubernetes.
## Initializing the resolver's binary
A Resolver is ultimately just a program running in your cluster, so the
first step is to fill out the initial code for starting that program.
Our resolver here is going to be extremely simple and doesn't need any
flags or special environment variables, so we'll just initialize it with
a little bit of boilerplate.
Create `cmd/demoresolver/main.go` with the following setup code:
```go
package main
import (
"context"
"github.com/tektoncd/pipeline/pkg/remoteresolution/resolver/framework"
"knative.dev/pkg/injection/sharedmain"
)
func main() {
sharedmain.Main("controller",
framework.NewController(context.Background(), &resolver{}),
)
}
type resolver struct {}
```
The snippet above targets the newer `remoteresolution` resolver framework. If you are building against the previous `resolution/resolver/framework` package instead, the equivalent boilerplate looks like this (note the different import path):

```go
package main
import (
"context"
"github.com/tektoncd/pipeline/pkg/resolution/resolver/framework"
"knative.dev/pkg/injection/sharedmain"
)
func main() {
sharedmain.Main("controller",
framework.NewController(context.Background(), &resolver{}),
)
}
type resolver struct {}
```
This won't compile yet but you can download the dependencies by running:
```bash
# Depending on your go version you might not need the -compat flag
$ go mod tidy -compat=1.17
```
## Writing the Resolver
If you try to build the binary right now you'll receive the following
error:
```bash
$ go build -o /dev/null ./cmd/demoresolver
cmd/demoresolver/main.go:11:78: cannot use &resolver{} (type *resolver) as
type framework.Resolver in argument to framework.NewController:
*resolver does not implement framework.Resolver (missing GetName method)
```
We've already defined our own `resolver` type but in order to get the
resolver running you'll need to add the methods defined in [the
`framework.Resolver` interface](../pkg/resolution/resolver/framework/interface.go)
to your `main.go` file. Going through each method in turn:
## The `Initialize` method
This method is used to start any libraries or otherwise setup any
prerequisites your resolver needs. For this example we won't need
anything so this method can just return `nil`.
```go
// Initialize sets up any dependencies needed by the resolver. None atm.
func (r *resolver) Initialize(context.Context) error {
return nil
}
```
## The `GetName` method
This method returns a string name that will be used to refer to this
resolver. You'd see this name show up in places like logs. For this
simple example we'll return `"Demo"`:
```go
// GetName returns a string name to refer to this resolver by.
func (r *resolver) GetName(context.Context) string {
return "Demo"
}
```
## The `GetSelector` method
This method should return a map of string labels and their values that
will be used to direct requests to this resolver. For this example the
only label we're interested in matching on is the resolver type label defined
by the `common` package:
```go
// GetSelector returns a map of labels to match requests to this resolver.
func (r *resolver) GetSelector(context.Context) map[string]string {
return map[string]string{
common.LabelKeyResolverType: "demo",
}
}
```
What this does is tell the resolver framework that any
`ResolutionRequest` object with a label of
`"resolution.tekton.dev/type": "demo"` should be routed to our
example resolver.
We'll also need to add another import for this package at the top:
```go
import (
"context"
// Add this one; it defines LabelKeyResolverType we use in GetSelector
"github.com/tektoncd/pipeline/pkg/resolution/common"
"github.com/tektoncd/pipeline/pkg/remoteresolution/resolver/framework"
"knative.dev/pkg/injection/sharedmain"
pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)
```
Again, the first import block is for the `remoteresolution` framework; if you are using the previous `resolution/resolver/framework` package, the imports look like this instead:

```go
import (
"context"
// Add this one; it defines LabelKeyResolverType we use in GetSelector
"github.com/tektoncd/pipeline/pkg/resolution/common"
"github.com/tektoncd/pipeline/pkg/resolution/resolver/framework"
"knative.dev/pkg/injection/sharedmain"
pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)
```
## The `Validate` method
The `Validate` method checks that the resolution spec submitted as part of
a resolution request is valid. Our example resolver doesn't expect
any params in the spec, so we'll simply ensure that there are none.
Our example resolver also expects the `url` to be of the form `demoscheme://<path>`, so we'll validate that format.
In the previous version of the framework, this method was instead called `ValidateParams`. See below
for the differences.
```go
// Validate ensures that the resolution spec from a request is as expected.
func (r *resolver) Validate(ctx context.Context, req *v1beta1.ResolutionRequestSpec) error {
if len(req.Params) > 0 {
return errors.New("no params allowed")
}
url := req.URL
u, err := neturl.ParseRequestURI(url)
if err != nil {
return err
}
if u.Scheme != "demoscheme" {
return fmt.Errorf("Invalid Scheme. Want %s, Got %s", "demoscheme", u.Scheme)
}
if u.Path == "" {
return errors.New("Empty path.")
}
return nil
}
```
You'll also need to add the `net/url` package (imported as `neturl`), along with the `fmt` and
`errors` packages, to your list of imports at the top of the file.

For the previous framework, the equivalent `ValidateParams` method looks like this:
```go
// ValidateParams ensures that the params from a request are as expected.
func (r *resolver) ValidateParams(ctx context.Context, params []pipelinev1.Param) error {
	if len(params) > 0 {
return errors.New("no params allowed")
}
return nil
}
```
You'll also need to add the `"errors"` package to your list of imports at
the top of the file.
## The `Resolve` method
We implement the `Resolve` method to do the heavy lifting of fetching
the contents of a file and returning them. It takes in the resolution request spec as input.
For this example we're just going to return a hard-coded string of YAML. Since Tekton Pipelines
currently only supports fetching Pipeline resources via remote
resolution, that's what we'll return.
The method signature we're implementing here has a
`framework.ResolvedResource` interface as one of its return values. This
is another type we have to implement but it has a small footprint:
```go
// Resolve uses the given resolution spec to resolve the requested file or resource.
func (r *resolver) Resolve(ctx context.Context, req *v1beta1.ResolutionRequestSpec) (framework.ResolvedResource, error) {
return &myResolvedResource{}, nil
}
// our hard-coded resolved file to return
const pipeline = `
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: my-pipeline
spec:
tasks:
- name: hello-world
taskSpec:
steps:
- image: alpine:3.15.1
script: |
echo "hello world"
`
// myResolvedResource wraps the data we want to return to Pipelines
type myResolvedResource struct {}
// Data returns the bytes of our hard-coded Pipeline
func (*myResolvedResource) Data() []byte {
return []byte(pipeline)
}
// Annotations returns any metadata needed alongside the data. None atm.
func (*myResolvedResource) Annotations() map[string]string {
return nil
}
// RefSource is the source reference of the remote data that records where the remote
// file came from including the url, digest and the entrypoint. None atm.
func (*myResolvedResource) RefSource() *pipelinev1.RefSource {
return nil
}
```
For the previous framework, `Resolve` receives the request `params` directly instead of the resolution spec:

```go
// Resolve uses the given resolution spec to resolve the requested file or resource.
func (r *resolver) Resolve(ctx context.Context, params []pipelinev1.Param) (framework.ResolvedResource, error) {
return &myResolvedResource{}, nil
}
// our hard-coded resolved file to return
const pipeline = `
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: my-pipeline
spec:
tasks:
- name: hello-world
taskSpec:
steps:
- image: alpine:3.15.1
script: |
echo "hello world"
`
// myResolvedResource wraps the data we want to return to Pipelines
type myResolvedResource struct {}
// Data returns the bytes of our hard-coded Pipeline
func (*myResolvedResource) Data() []byte {
return []byte(pipeline)
}
// Annotations returns any metadata needed alongside the data. None atm.
func (*myResolvedResource) Annotations() map[string]string {
return nil
}
// RefSource is the source reference of the remote data that records where the remote
// file came from including the url, digest and the entrypoint. None atm.
func (*myResolvedResource) RefSource() *pipelinev1.RefSource {
return nil
}
```
Best practice: In order to enable Tekton Chains to record the source
information of the remote data in the SLSA provenance, the resolver should
implement the `RefSource()` method to return a correct RefSource value. See the
following example.
```go
// RefSource is the source reference of the remote data that records where the remote
// file came from including the url, digest and the entrypoint.
func (*myResolvedResource) RefSource() *pipelinev1.RefSource {
	return &pipelinev1.RefSource{
URI: "https://github.com/user/example",
Digest: map[string]string{
"sha1": "example",
},
EntryPoint: "foo/bar/task.yaml",
}
}
```
## The deployment configuration
Finally, our resolver needs some deployment configuration so that it can
run in Kubernetes.
A full description of the config is beyond the scope of a short howto
but in summary we'll tell Kubernetes to run our resolver application
along with some environment variables and other configuration that the
underlying `knative` framework expects. The deployed application is put
in the `tekton-pipelines-resolvers` namespace and uses `ko` to build its
container image. Finally the `ServiceAccount` our deployment uses is
`tekton-pipelines-resolvers`, which is the default `ServiceAccount` shared by all
resolvers in the `tekton-pipelines-resolvers` namespace.
The full configuration follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: demoresolver
namespace: tekton-pipelines-resolvers
spec:
replicas: 1
selector:
matchLabels:
app: demoresolver
template:
metadata:
labels:
app: demoresolver
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app: demoresolver
topologyKey: kubernetes.io/hostname
weight: 100
serviceAccountName: tekton-pipelines-resolvers
containers:
- name: controller
image: ko://example.com/demoresolver/cmd/demoresolver
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 1000m
memory: 1000Mi
ports:
- name: metrics
containerPort: 9090
env:
- name: SYSTEM_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: CONFIG_LOGGING_NAME
value: config-logging
- name: CONFIG_OBSERVABILITY_NAME
value: config-observability
- name: METRICS_DOMAIN
value: tekton.dev/resolution
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop:
- all
```
Phew, ok, put all that in a file at `config/demo-resolver-deployment.yaml`
and you'll be ready to deploy your application to Kubernetes and see it
work!
## Trying it out
Now that all the code is written your new resolver should be ready to
deploy to a Kubernetes cluster. We'll use `ko` to build and deploy the
application:
```bash
$ ko apply -f ./config/demo-resolver-deployment.yaml
```
Assuming the resolver deployed successfully you should be able to see it
in the output from the following command:
```bash
$ kubectl get deployments -n tekton-pipelines-resolvers
# And here's approximately what you should see when you run this command:
NAME READY UP-TO-DATE AVAILABLE AGE
controller 1/1 1 1 2d21h
demoresolver 1/1 1 1 91s
webhook        1/1     1            1           2d21h
```
To exercise your new resolver, let's submit a request for its hard-coded
pipeline. Create a file called `test-request.yaml` with the following
content:
```yaml
apiVersion: resolution.tekton.dev/v1beta1
kind: ResolutionRequest
metadata:
name: test-request
labels:
resolution.tekton.dev/type: demo
```
And submit this request with the following command:
```bash
$ kubectl apply -f ./test-request.yaml && kubectl get --watch resolutionrequests
```
You should soon see your ResolutionRequest printed to screen with a True
value in its SUCCEEDED column:
```bash
resolutionrequest.resolution.tekton.dev/test-request created
NAME SUCCEEDED REASON
test-request True
```
Press Ctrl-C to get back to the command line.
If you now take a look at the ResolutionRequest's YAML you'll see the
hard-coded pipeline yaml in its `status.data` field. It won't be totally
recognizable, though, because it's encoded as base64. Have a look with the
following command:
```bash
$ kubectl get resolutionrequest test-request -o yaml
```
You can convert that base64 data back into yaml with the following
command:
```bash
$ kubectl get resolutionrequest test-request -o jsonpath="{$.status.data}" | base64 -d
```
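If everything worked, the decoded output should be the same hard-coded `Pipeline` that our resolver returns from `Resolve()`:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-pipeline
spec:
  tasks:
  - name: hello-world
    taskSpec:
      steps:
      - image: alpine:3.15.1
        script: |
          echo "hello world"
```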
Great work, you've successfully written a Resolver from scratch!
## Next Steps
At this point you could start to expand the `Resolve()` method in your
Resolver to fetch data from your storage backend of choice.
Or if you prefer to take a look at a more fully-realized example of a
Resolver, see the [code for the `gitresolver` hosted in the Tekton
Pipeline repo](https://github.com/tektoncd/pipeline/tree/main/pkg/resolution/resolver/git/).
Finally, another direction you could take this would be to try writing a
`PipelineRun` for Tekton Pipelines that speaks to your Resolver. Can
you get a `PipelineRun` to execute successfully that uses the hard-coded
`Pipeline` your Resolver returns?
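As a starting point, here is a minimal, untested sketch of such a `PipelineRun`. It assumes the demo resolver accepts requests without parameters, just like the `ResolutionRequest` we submitted above:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-resolver-run
spec:
  pipelineRef:
    # The resolver name matches the "demo" type label our resolver selects on.
    resolver: demo
```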
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Migrating from v1alpha1.Run to v1beta1.CustomRun"
weight: 4000
---
-->
# Migrating from v1alpha1.Run to v1beta1.CustomRun
This document describes how to migrate from `v1alpha1.Run` to `v1beta1.CustomRun`.
- [Changes to fields](#changes-to-fields)
- [Changes to the specification](#changes-to-the-specification)
- [Changes to the reference](#changes-to-the-reference)
- [Support `PodTemplate` in Custom Task Spec](#support-podtemplate-in-custom-task-spec)
- [Changes to implementation instructions](#changes-to-implementation-instructions)
- [Status Reporting](#status-reporting)
- [Cancellation](#cancellation)
- [Timeout](#timeout)
- [Retries & RetriesStatus](#4-retries--retriesstatus)
- [New feature flag `custom-task-version` for migration](#new-feature-flag-custom-task-version)
## Changes to fields
Comparing `v1alpha1.Run` with `v1beta1.CustomRun`, the following fields have been changed:
| Old field | New field |
| ---------------------- | ------------------|
| `spec.spec` | [`spec.customSpec`](#changes-to-the-specification) |
| `spec.ref` | [`spec.customRef`](#changes-to-the-reference) |
| `spec.podTemplate` | removed, see [this section](#support-podtemplate-in-custom-task-spec) if you still want to support it|
### Changes to the specification
In `v1beta1.CustomRun`, the specification is renamed from `spec.spec` to `spec.customSpec`.
```yaml
# before (v1alpha1.Run)
apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
name: run-with-spec
spec:
spec:
apiVersion: example.dev/v1alpha1
kind: Example
spec:
field1: value1
field2: value2
# after (v1beta1.CustomRun)
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: customrun-with-spec
spec:
customSpec:
apiVersion: example.dev/v1beta1
kind: Example
spec:
field1: value1
field2: value2
```
### Changes to the reference
In `v1beta1.CustomRun`, the reference is renamed from `spec.ref` to `spec.customRef`.
```yaml
# before (v1alpha1.Run)
apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
name: run-with-reference
spec:
ref:
apiVersion: example.dev/v1alpha1
kind: Example
name: my-example
---
# after (v1beta1.CustomRun)
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: customrun-with-reference
spec:
customRef:
apiVersion: example.dev/v1beta1
kind: Example
name: my-customtask
```
### Support `PodTemplate` in Custom Task Spec
`spec.podTemplate` is removed in `v1beta1.CustomRun`. You can support that field in your own custom task spec if you want to. For example:
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
name: customrun-with-podtemplate
spec:
customSpec:
apiVersion: example.dev/v1beta1
kind: Example
spec:
podTemplate:
securityContext:
runAsUser: 1001
```
## Changes to implementation instructions
We've changed four implementation instructions. Note that `status reporting` is the only required instruction to follow; the others are recommendations.
### Status Reporting
You **MUST** report `v1beta1.CustomRun` as `Done` (set its `Conditions.Succeeded` status as `True` or `False`) **ONLY** when you want the `PipelineRun` controller to consider it finished.
For example, if the `CustomRun` failed but still has remaining `retries`, and you want the `PipelineRun` controller to continue watching its status changes, you **MUST NOT** mark it as `Done`. Otherwise, the `PipelineRun` controller may consider it finished.
Here are example statuses indicating that the `CustomRun` is done.
```yaml
Type: Succeeded
Status: False
Reason: TimedOut
```
```yaml
Type: Succeeded
Status: True
Reason: Succeeded
```
### Cancellation
Custom Task implementors are responsible for implementing `cancellation` to **support pipelineRun-level timeouts and cancellation**. If a Custom Task implementor does not support cancellation via `customRun.spec.status`, a `PipelineRun` cannot time out within the specified interval/duration and cannot be cancelled as expected upon request.
It is recommended to update the `CustomRun` status as follows once you notice that `customRun.Spec.Status` has been updated to `RunCancelled`:
```yaml
Type: Succeeded
Status: False
Reason: CustomRunCancelled
```
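For reference, the cancellation request itself is just an update to the `CustomRun`'s `spec.status` field, typically made by the `PipelineRun` controller or a user. A minimal sketch (the `customRef` values are illustrative):
```yaml
apiVersion: tekton.dev/v1beta1
kind: CustomRun
metadata:
  name: customrun-to-cancel
spec:
  customRef:
    apiVersion: example.dev/v1beta1
    kind: Example
    name: my-customtask
  # Setting this field signals to the Custom Task controller that the
  # CustomRun should be cancelled.
  status: RunCancelled
```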
### Timeout
We recommend that `customRun.Timeout` apply to each retry attempt rather than to all retries combined. That said, if one `CustomRun` execution fails on `Timeout` and still has remaining retries, the `CustomRun` controller SHOULD NOT set the status of that `CustomRun` to `False`. Instead, it SHOULD initiate another round of execution.
### 4. Retries & RetriesStatus
We recommend using `customRun.Spec.Retries` if you want to implement `retry` logic for your `Custom Task`, and archiving the history of `customRun.Status` in `customRun.Status.RetriesStatus`.
Say you started a `CustomRun` by setting the following condition:
```yaml
Status:
Conditions:
- Type: Succeeded
Status: Unknown
Reason: Running
```
Now say it failed for some reason but has remaining retries. Instead of setting its condition to failed, like this:
```yaml
Status:
Conditions:
- Type: Succeeded
Status: False
Reason: Failed
```
We **recommend** that you archive the failure status in `customRun.Status.retriesStatus` instead, and keep `customRun.Status` set to `Unknown`:
```yaml
Status:
Conditions:
- Type: Succeeded
Status: Unknown
Reason: "xxx"
RetriesStatus:
- Conditions:
- Type: Succeeded
Status: False
Reason: Failed
```
## New feature flag `custom-task-version` for migration
You can use `custom-task-version` to control whether `v1alpha1.Run` or `v1beta1.CustomRun` should be created when a `Custom Task` is specified in a `Pipeline`. The feature flag currently supports two values: `v1alpha1` and `v1beta1`.
We'll change its default value per the following timeline:
- v0.43.*: default to `v1alpha1`.
- v0.44.*: switch the default to `v1beta1`
- v0.47.*: remove the feature flag, stop supporting creating `v1alpha1.Run`
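For example, assuming the default `tekton-pipelines` installation namespace, you could set the flag in the `feature-flags` ConfigMap along these lines:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Create v1beta1.CustomRun objects for Custom Tasks referenced in Pipelines.
  custom-task-version: "v1beta1"
```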
[cancel-pr]: https://github.com/tektoncd/pipeline/blob/main/docs/pipelineruns.md#cancelling-a-pipelinerun
[gracefully-cancel-pr]: https://github.com/tektoncd/pipeline/blob/main/docs/pipelineruns.md#gracefully-cancelling-a-pipelinerun
<!--
---
linkTitle: "TaskRuns"
weight: 202
---
-->
# TaskRuns
<!-- toc -->
- [Overview](#overview)
- [Configuring a `TaskRun`](#configuring-a-taskrun)
- [Specifying the target `Task`](#specifying-the-target-task)
- [Tekton Bundles](#tekton-bundles)
- [Remote Tasks](#remote-tasks)
- [Specifying `Parameters`](#specifying-parameters)
- [Propagated Parameters](#propagated-parameters)
- [Propagated Object Parameters](#propagated-object-parameters)
- [Extra Parameters](#extra-parameters)
- [Specifying `Resource` limits](#specifying-resource-limits)
- [Specifying Task-level `ComputeResources`](#specifying-task-level-computeresources)
- [Specifying a `Pod` template](#specifying-a-pod-template)
- [Specifying `Workspaces`](#specifying-workspaces)
- [Propagated Workspaces](#propagated-workspaces)
- [Specifying `Sidecars`](#specifying-sidecars)
- [Configuring `Task` `Steps` and `Sidecars` in a TaskRun](#configuring-task-steps-and-sidecars-in-a-taskrun)
- [Specifying `LimitRange` values](#specifying-limitrange-values)
- [Specifying `Retries`](#specifying-retries)
- [Configuring the failure timeout](#configuring-the-failure-timeout)
- [Specifying `ServiceAccount` credentials](#specifying-serviceaccount-credentials)
- [<code>TaskRun</code> status](#taskrun-status)
- [The <code>status</code> field](#the-status-field)
- [Monitoring execution status](#monitoring-execution-status)
- [Monitoring `Steps`](#monitoring-steps)
- [Steps](#steps)
- [Monitoring `Results`](#monitoring-results)
- [Cancelling a `TaskRun`](#cancelling-a-taskrun)
- [Debugging a `TaskRun`](#debugging-a-taskrun)
- [Breakpoint on Failure](#breakpoint-on-failure)
- [Debug Environment](#debug-environment)
- [Events](events.md#taskruns)
- [Running a TaskRun Hermetically](hermetic.md)
- [Code examples](#code-examples)
- [Example `TaskRun` with a referenced `Task`](#example-taskrun-with-a-referenced-task)
- [Example `TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task)
- [Example of reusing a `Task`](#example-of-reusing-a-task)
- [Example of Using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials)
- [Example of Running Step Containers as a Non Root User](#example-of-running-step-containers-as-a-non-root-user)
<!-- /toc -->
## Overview
A `TaskRun` allows you to instantiate and execute a [`Task`](tasks.md) on-cluster. A `Task` specifies one or more
`Steps` that execute container images and each container image performs a specific piece of build work. A `TaskRun` executes the
`Steps` in the `Task` in the order they are specified until all `Steps` have executed successfully or a failure occurs.
## Configuring a `TaskRun`
A `TaskRun` definition supports the following fields:
- Required:
- [`apiVersion`][kubernetes-overview] - Specifies the API version, for example
`tekton.dev/v1beta1`.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `TaskRun` object.
- [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the
`TaskRun`, such as a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration for the `TaskRun`.
- [`taskRef` or `taskSpec`](#specifying-the-target-task) - Specifies the `Tasks` that the
`TaskRun` will execute.
- Optional:
- [`serviceAccountName`](#specifying-serviceaccount-credentials) - Specifies a `ServiceAccount`
object that provides custom credentials for executing the `TaskRun`.
- [`params`](#specifying-parameters) - Specifies the desired execution parameters for the `Task`.
- [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before the `TaskRun` fails.
- [`podTemplate`](#specifying-a-pod-template) - Specifies a [`Pod` template](podtemplates.md) to use as
the starting point for configuring the `Pods` for the `Task`.
- [`workspaces`](#specifying-workspaces) - Specifies the physical volumes to use for the
[`Workspaces`](workspaces.md#using-workspaces-in-tasks) declared by a `Task`.
- [`debug`](#debugging-a-taskrun)- Specifies any breakpoints and debugging configuration for the `Task` execution.
  - [`stepOverrides`](#configuring-task-steps-and-sidecars-in-a-taskrun) - Specifies configuration to use to override the `Task`'s `Step`s.
  - [`sidecarOverrides`](#configuring-task-steps-and-sidecars-in-a-taskrun) - Specifies configuration to use to override the `Task`'s `Sidecar`s.
[kubernetes-overview]:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields
### Specifying the target `Task`
To specify the `Task` you want to execute in your `TaskRun`, use the `taskRef` field as shown below:
```yaml
spec:
taskRef:
name: read-task
```
You can also embed the desired `Task` definition directly in the `TaskRun` using the `taskSpec` field:
```yaml
spec:
taskSpec:
workspaces:
- name: source
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v0.17.1
# specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
workingDir: $(workspaces.source.path)
env:
- name: "DOCKER_CONFIG"
value: "/tekton/home/.docker/"
command:
- /kaniko/executor
args:
- --destination=gcr.io/my-project/gohelloworld
```
#### Tekton Bundles
A `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.
You can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](./bundle-resolver.md#pipeline-resolution). The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](./additional-configs.md#beta-features).
```yaml
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog
- name: name
value: echo-task
- name: kind
value: Task
```
You may also specify a `tag` as you would with a Docker image which will give you a repeatable reference to a `Task`.
```yaml
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog:v1.0.1
- name: name
value: echo-task
- name: kind
value: Task
```
You may also specify a fixed digest instead of a tag which ensures the referenced task is constant.
```yaml
spec:
taskRef:
resolver: bundles
params:
- name: bundle
value: docker.io/myrepo/mycatalog@sha256:abc123
- name: name
value: echo-task
- name: kind
value: Task
```
A working example can be found [here](../examples/v1beta1/taskruns/no-ci/tekton-bundles.yaml).
Any of the above options will fetch the image using the `ImagePullSecrets` attached to the
`ServiceAccount` specified in the `TaskRun`. See the [Specifying `ServiceAccount` credentials](#specifying-serviceaccount-credentials)
section for details on how to configure a `ServiceAccount` on a `TaskRun`. The `TaskRun`
will then run that `Task` without registering it in the cluster allowing multiple versions
of the same named `Task` to be run at once.
`Tekton Bundles` may be constructed with any toolset that produces valid OCI image artifacts so long as
the artifact adheres to the [contract](tekton-bundle-contracts.md). Additionally, you may also use the `tkn`
cli *(coming soon)*.
#### Remote Tasks
**([beta feature](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#beta-features))**
A `taskRef` field may specify a Task in a remote location such as git.
Support for specific types of remote will depend on the Resolvers your
cluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). The below example demonstrates referencing a Task in git:
```yaml
spec:
taskRef:
resolver: git
params:
- name: url
value: https://github.com/tektoncd/catalog.git
- name: revision
value: abc123
- name: pathInRepo
value: /task/golang-build/0.3/golang-build.yaml
```
### Specifying `Parameters`
If a `Task` has [`parameters`](tasks.md#specifying-parameters), you can use the `params` field to specify their values:
```yaml
spec:
params:
- name: flags
value: -someflag
```
**Note:** If a parameter does not have an implicit default value, you must explicitly set its value.
#### Propagated Parameters
When using an inlined `taskSpec`, parameters from the parent `TaskRun` will be
available to the `Task` without needing to be explicitly defined.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: hello-
spec:
params:
- name: message
value: "hello world!"
taskSpec:
# There are no explicit params defined here.
# They are derived from the TaskRun params above.
steps:
- name: default
image: ubuntu
script: |
echo $(params.message)
```
On executing the `TaskRun`, the parameters are interpolated during resolution.
The stored specification is not mutated, so it remains the same; only the status is updated.
```yaml
kind: TaskRun
metadata:
name: hello-dlqm9
...
spec:
params:
- name: message
value: hello world!
serviceAccountName: default
taskSpec:
steps:
- image: ubuntu
name: default
script: |
echo $(params.message)
status:
conditions:
- lastTransitionTime: "2022-05-20T15:24:41Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
...
steps:
- container: step-default
...
taskSpec:
steps:
- image: ubuntu
name: default
script: |
echo "hello world!"
```
#### Propagated Object Parameters
When using an inlined `taskSpec`, object parameters from the parent `TaskRun` will be
available to the `Task` without needing to be explicitly defined.
**Note:** If an object parameter is being defined explicitly then you must define the spec of the object in `Properties`.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: object-param-result-
spec:
params:
- name: gitrepo
value:
commit: sha123
url: xyz.com
taskSpec:
steps:
- name: echo-object-params
image: bash
args:
- echo
- --url=$(params.gitrepo.url)
- --commit=$(params.gitrepo.commit)
```
On executing the `TaskRun`, the object parameters are interpolated during resolution.
The stored specification is not mutated, so it remains the same; only the status is updated.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: object-param-result-vlnmb
...
spec:
params:
- name: gitrepo
value:
commit: sha123
url: xyz.com
serviceAccountName: default
taskSpec:
steps:
- args:
- echo
- --url=$(params.gitrepo.url)
- --commit=$(params.gitrepo.commit)
image: bash
name: echo-object-params
status:
completionTime: "2022-09-08T17:09:37Z"
conditions:
- lastTransitionTime: "2022-09-08T17:09:37Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
...
steps:
- container: step-echo-object-params
...
taskSpec:
steps:
- args:
- echo
- --url=xyz.com
- --commit=sha123
image: bash
name: echo-object-params
```
#### Extra Parameters
**([alpha only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#alpha-features))**
You can pass in extra `Parameters` if needed depending on your use cases. An example use
case is when your CI system autogenerates `TaskRuns` and it has `Parameters` it wants to
provide to all `TaskRuns`. Because you can pass in extra `Parameters`, you don't have to
go through the complexity of checking each `Task` and providing only the required params.
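For illustration, a hypothetical CI system might attach a `build-id` parameter to every `TaskRun` it generates, even though the referenced `Task` does not declare it. A sketch, with illustrative names:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: ci-generated-
spec:
  taskRef:
    name: read-task
  params:
    - name: flags      # declared by the Task
      value: -someflag
    - name: build-id   # extra parameter injected by the CI system
      value: "12345"
```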
#### Parameter Enums
> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.
If a `Parameter` is guarded by `Enum` in the `Task`, you can only provide `Parameter` values in the `TaskRun` that are predefined in the `Param.Enum` in the `Task`. The `TaskRun` will fail with reason `InvalidParamValue` otherwise.
You can also specify `Enum` for [`TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task). The same param validation will be executed in this scenario.
See more details in [Param.Enum](./tasks.md#param-enum).
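For illustration, here is a sketch (names are illustrative) of a `Task` that guards a parameter with an `enum` and a `TaskRun` that supplies one of the allowed values:
```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: enum-demo
spec:
  params:
    - name: message
      type: string
      enum: ["v1", "v2"]
  steps:
    - name: echo-message
      image: bash
      script: |
        echo $(params.message)
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: enum-demo-run
spec:
  taskRef:
    name: enum-demo
  params:
    - name: message
      value: v1   # any value outside the enum fails the TaskRun with reason InvalidParamValue
```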
### Specifying `Resource` limits
Each Step in a Task can specify its resource requirements. See
[Defining `Steps`](tasks.md#defining-steps). Resource requirements defined in Steps and Sidecars
may be overridden by a TaskRun's StepSpecs and SidecarSpecs.
### Specifying Task-level `ComputeResources`
**([beta only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#beta-features))**
Task-level compute resources can be configured in `TaskRun.ComputeResources`, or `PipelineRun.TaskRunSpecs.ComputeResources`.
e.g.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: task
spec:
steps:
- name: foo
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: taskrun
spec:
taskRef:
name: task
computeResources:
requests:
cpu: 1
limits:
cpu: 2
```
Further details and examples could be found in [Compute Resources in Tekton](https://github.com/tektoncd/pipeline/blob/main/docs/compute-resources.md).
### Specifying a `Pod` template
You can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting
point for the `Pod` in which the container images specified in your `Task` will execute. This allows you to
customize the `Pod` configuration specifically for that `TaskRun`.
In the following example, the `Task` specifies a `volumeMount` (`my-cache`) object, also provided by the `TaskRun`,
using a `PersistentVolumeClaim` volume. A specific scheduler is also configured in the `SchedulerName` field.
The `Pod` executes with regular (non-root) user permissions.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: mytask
namespace: default
spec:
steps:
- name: writesomething
image: ubuntu
command: ["bash", "-c"]
args: ["echo 'foo' > /my-cache/bar"]
volumeMounts:
- name: my-cache
mountPath: /my-cache
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: mytaskrun
namespace: default
spec:
taskRef:
name: mytask
podTemplate:
schedulerName: volcano
securityContext:
runAsNonRoot: true
runAsUser: 1001
volumes:
- name: my-cache
persistentVolumeClaim:
claimName: my-volume-claim
```
### Specifying `Workspaces`
If a `Task` specifies one or more `Workspaces`, you must map those `Workspaces` to
the corresponding physical volumes in your `TaskRun` definition. For example, you
can map a `PersistentVolumeClaim` volume to a `Workspace` as follows:
```yaml
workspaces:
- name: myworkspace # must match workspace name in the Task
persistentVolumeClaim:
claimName: mypvc # this PVC must already exist
subPath: my-subdir
```
For more information, see the following topics:
- For information on mapping `Workspaces` to `Volumes`, see [Using `Workspace` variables in `TaskRuns`](workspaces.md#using-workspace-variables-in-taskruns).
- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).
- For an end-to-end example, see [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml).
#### Propagated Workspaces
When using an embedded spec, workspaces from the parent `TaskRun` will be
propagated to any inlined specs without needing to be explicitly defined. This
allows authors to simplify specs by automatically propagating top-level
workspaces down to other inlined resources.
**Workspace substitutions will only be made for `commands`, `args` and `script` fields of `steps`, `stepTemplates`, and `sidecars`.**
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: propagating-workspaces-
spec:
taskSpec:
steps:
- name: simple-step
image: ubuntu
command:
- echo
args:
- $(workspaces.tr-workspace.path)
workspaces:
- emptyDir: {}
name: tr-workspace
```
Upon execution, the workspaces will be interpolated during resolution through to the `taskSpec`.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: propagating-workspaces-ndxnc
...
spec:
...
status:
...
taskSpec:
steps:
...
workspaces:
- name: tr-workspace
```
##### Propagating Workspaces to Referenced Tasks
Workspaces can only be propagated to `embedded` task specs, not `referenced` Tasks.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: workspace-propagation
spec:
steps:
- name: simple-step
image: ubuntu
command:
- echo
args:
- $(workspaces.tr-workspace.path)
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: propagating-workspaces-
spec:
taskRef:
name: workspace-propagation
workspaces:
- emptyDir: {}
name: tr-workspace
```
Upon execution, the above `TaskRun` will fail because the `Task` is referenced and workspace is not propagated. It must be explicitly defined in the `spec` of the defined `Task`.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
...
spec:
taskRef:
kind: Task
name: workspace-propagation
workspaces:
- emptyDir: {}
name: tr-workspace
status:
conditions:
- lastTransitionTime: "2022-09-13T15:12:35Z"
message: workspace binding "tr-workspace" does not match any declared workspace
reason: TaskRunValidationFailed
status: "False"
type: Succeeded
...
```
### Specifying `Sidecars`
A `Sidecar` is a container that runs alongside the containers specified
in the `Steps` of a task to provide auxiliary support to the execution of
those `Steps`. For example, a `Sidecar` can run a logging daemon, a service
that updates files on a shared volume, or a network proxy.
Tekton supports the injection of `Sidecars` into a `Pod` belonging to
a `TaskRun` with the condition that each `Sidecar` running inside the
`Pod` is terminated as soon as all `Steps` in the `Task` complete execution.
This might result in the `Pod` including each affected `Sidecar` with a
retry count of 1 and a different container image than expected.
We are aware of the following issues affecting Tekton's implementation of `Sidecars`:
- The configured `nop` image **must not** provide the command that the
`Sidecar` is expected to run, otherwise it will not exit, resulting in the `Sidecar`
running forever and the Task eventually timing out. For more information, see the
[associated issue](https://github.com/tektoncd/pipeline/issues/1347).
- The `kubectl get pods` command returns the status of the `Pod` as "Completed" if a
`Sidecar` exits successfully and as "Error" if a `Sidecar` exits with an error,
disregarding the exit codes of the container images that actually executed the `Steps`
inside the `Pod`. Only the above command is affected. The `Pod's` description correctly
denotes a "Failed" status and the container statuses correctly denote their exit codes
and reasons.
### Configuring Task Steps and Sidecars in a TaskRun
**([beta only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#beta-features))**
A TaskRun can specify `StepSpecs` or `SidecarSpecs` to configure Step or Sidecar
specified in a Task. Only named Steps and Sidecars may be configured.
For example, given the following Task definition:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: image-build-task
spec:
steps:
- name: build
image: gcr.io/kaniko-project/executor:latest
sidecars:
- name: logging
image: my-logging-image
```
An example `v1` TaskRun definition could look like:
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: image-build-taskrun
spec:
taskRef:
name: image-build-task
stepSpecs:
- name: build
computeResources:
requests:
memory: 1Gi
sidecarSpecs:
- name: logging
computeResources:
requests:
cpu: 100m
limits:
cpu: 500m
```
The equivalent `v1beta1` definition uses `stepOverrides` and `sidecarOverrides` instead:
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: image-build-taskrun
spec:
taskRef:
name: image-build-task
stepOverrides:
- name: build
resources:
requests:
memory: 1Gi
sidecarOverrides:
- name: logging
resources:
requests:
cpu: 100m
limits:
cpu: 500m
```
`StepSpecs` and `SidecarSpecs` must include the `name` field and may include `resources`.
No other fields can be overridden.
If the overridden `Task` uses a [`StepTemplate`](./tasks.md#specifying-a-step-template), configuration on
`Step` will take precedence over configuration in `StepTemplate`, and configuration in `StepSpec` will
take precedence over both.
When merging resource requirements, different resource types are considered independently.
For example, if a `Step` configures both CPU and memory, and a `StepSpec` configures only memory,
the CPU values from the `Step` will be preserved. Requests and limits are also considered independently.
For example, if a `Step` configures a memory request and limit, and a `StepSpec` configures only a
memory request, the memory limit from the `Step` will be preserved.
### Specifying `LimitRange` values
In order to only consume the bare minimum amount of resources needed to execute one `Step` at a
time from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral
storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)
object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.
For more information, see the [`LimitRange` support in Pipeline](./compute-resources.md#limitrange-support).
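For illustration, a namespace-level `LimitRange` such as the following (a generic Kubernetes example, not specific to Tekton) is the kind of object Tekton takes into account when computing `Step` requests:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 256Mi
```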
### Specifying `Retries`
You can use the `retries` field to set how many times you want to retry on a failed TaskRun.
All TaskRun failures are retriable except for `Cancellation`.
For a retriable `TaskRun`, when an error occurs:
- The error status is archived in `status.RetriesStatus`
- The `Succeeded` condition in `status` is updated:
```
Type: Succeeded
Status: Unknown
Reason: ToBeRetried
```
- `status.StartTime`, `status.PodName` and `status.Results` are unset to trigger another retry attempt.
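For example, a minimal sketch of a `TaskRun` that is retried up to two times on failure (the referenced `Task` name is illustrative):
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: retried-taskrun
spec:
  retries: 2
  taskRef:
    name: read-task
```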
### Configuring the failure timeout
You can use the `timeout` field to set the `TaskRun`'s desired timeout value for **each retry attempt**. If you do
not specify this value, the global default timeout value applies (again, to each retry attempt). If you set the timeout to 0,
the `TaskRun` will have no timeout and will run until it completes successfully or fails from an error.
The `timeout` value is a `duration` conforming to Go's
[`ParseDuration`](https://golang.org/pkg/time/#ParseDuration) format. For example, valid
values are `1h30m`, `1h`, `1m`, `60s`, and `0`.
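For example, a minimal sketch of a `TaskRun` that fails any single attempt running longer than 90 minutes (the referenced `Task` name is illustrative):
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: timed-taskrun
spec:
  timeout: 1h30m
  taskRef:
    name: read-task
```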
If a `TaskRun` runs longer than its timeout value, the pod associated with the `TaskRun` will be deleted. This
means that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary in order to
stop `TaskRun` step containers from running.
The global default timeout is set to 60 minutes when you first install Tekton. You can set
a different global default timeout value using the `default-timeout-minutes` field in
[`config/config-defaults.yaml`](./../config/config-defaults.yaml). If you set the global timeout to 0,
all `TaskRuns` that do not have a timeout set will have no timeout and will run until they complete successfully
or fail from an error.
> :note: An internal detail of the `PipelineRun` and `TaskRun` reconcilers in the Tekton controller is that, under certain conditions, they requeue a `PipelineRun` or `TaskRun` for re-evaluation rather than waiting for the next update. The wait time for that requeueing is the timeout minus the elapsed time; if the timeout is set to '0', that calculation produces a negative number and the new reconciliation event fires immediately, which can hurt overall performance and defeats the purpose of the wait-time calculation. To avoid this, the reconcilers use the configured global timeout as the wait time whenever the associated timeout has been set to '0'.
### Specifying `ServiceAccount` credentials
You can execute the `Task` in your `TaskRun` with a specific set of credentials by
specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `TaskRun`
definition. If you do not explicitly specify this, the `TaskRun` executes with the credentials
specified in the `config-defaults` `ConfigMap`. If this default is not specified, `TaskRuns`
will execute with the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)
set for the target [`namespace`](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
For more information, see [`ServiceAccount`](auth.md).
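A minimal sketch (the `ServiceAccount` and `Task` names are placeholders, and the `ServiceAccount` must already exist); see the full example in [Example of Using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials):
```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-with-credentials   # hypothetical name
spec:
  serviceAccountName: build-bot  # placeholder; must reference an existing ServiceAccount
  taskRef:
    name: build-task             # hypothetical Task
```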
## `TaskRun` status
The `status` field defines the observed state of the `TaskRun`.
### The `status` field
- Required:
- `status` - The most relevant information about the TaskRun's state. This field includes:
<!-- wokeignore:rule=master -->
- `status.conditions`, which contains the latest observations of the `TaskRun`'s state. [See here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for information on typical status properties.
- `podName` - Name of the pod containing the containers responsible for executing this `task`'s `step`s.
- `startTime` - The time at which the `TaskRun` began executing, conforms to [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- `completionTime` - The time at which the `TaskRun` finished executing, conforms to [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- [`taskSpec`](tasks.md#configuring-a-task) - `TaskSpec` defines the desired state of the `Task` executed via the `TaskRun`.
- Optional:
- `results` - List of results written out by the `task`'s containers.
  - `provenance` - Provenance contains metadata about resources used in the `TaskRun`, such as the source from which a remote `task` definition was fetched. It carries a minimal amount of metadata in the `TaskRun` `status` so that `Tekton Chains` can use it for provenance. Its two subfields are:
- `refSource`: the source from where a remote `Task` definition was fetched.
- `featureFlags`: Identifies the feature flags used during the `TaskRun`.
- `steps` - Contains the `state` of each `step` container.
- `steps[].terminationReason` - When the step is terminated, it stores the step's final state.
  - `retriesStatus` - Contains the history of the `TaskRun`'s `status` across retries, in order to keep a record of failures. Statuses stored within `retriesStatus` omit their `date` fields, as they would be redundant.
  - [`sidecars`](tasks.md#using-a-sidecar-in-a-task) - This field is a list with one entry per `sidecar` in the manifest. Each entry records the image ID of the corresponding sidecar.
- `spanContext` - Contains tracing span context fields.
## Monitoring execution status
As your `TaskRun` executes, its `status` field accumulates information on the execution of each `Step`
as well as the `TaskRun` as a whole. This information includes start and stop times, exit codes, the
fully-qualified name of the container image, and the corresponding digest.
**Note:** If any `Pods` have been [`OOMKilled`](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/)
by Kubernetes, the `TaskRun` is marked as failed even if its exit code is 0.
The following example shows the `status` field of a `TaskRun` that has executed successfully:
```yaml
completionTime: "2019-08-12T18:22:57Z"
conditions:
- lastTransitionTime: "2019-08-12T18:22:57Z"
message: All Steps have completed executing
reason: Succeeded
status: "True"
type: Succeeded
podName: status-taskrun-pod
startTime: "2019-08-12T18:22:51Z"
steps:
- container: step-hello
imageID: docker-pullable://busybox@sha256:895ab622e92e18d6b461d671081757af7dbaa3b00e3e28e12505af7817f73649
name: hello
terminationReason: Completed
terminated:
containerID: docker://d5a54f5bbb8e7a6fd3bc7761b78410403244cf4c9c5822087fb0209bf59e3621
exitCode: 0
finishedAt: "2019-08-12T18:22:56Z"
reason: Completed
startedAt: "2019-08-12T18:22:54Z"
```
The following table shows how to read the overall status of a `TaskRun`:
| `status` | `reason` | `message` | `completionTime` is set | Description |
|:---------|:-----------------------|:------------------------------------------------------------------|:-----------------------:|--------------------------------------------------------------------------------------------------:|
| Unknown | Started | n/a | No | The TaskRun has just been picked up by the controller. |
| Unknown | Pending | n/a | No | The TaskRun is waiting on a Pod in status Pending. |
| Unknown | Running | n/a | No | The TaskRun has been validated and started to perform its work. |
| Unknown | TaskRunCancelled | n/a | No | The user requested the TaskRun to be cancelled. Cancellation has not been done yet. |
| True | Succeeded | n/a | Yes | The TaskRun completed successfully. |
| False | Failed | n/a | Yes | The TaskRun failed because one of the steps failed. |
| False | \[Error message\] | n/a | No | The TaskRun encountered a non-permanent error, and it's still running. It may ultimately succeed. |
| False | \[Error message\] | n/a | Yes | The TaskRun failed with a permanent error (usually validation). |
| False | TaskRunCancelled | n/a | Yes | The TaskRun was cancelled successfully. |
| False | TaskRunCancelled | TaskRun cancelled as the PipelineRun it belongs to has timed out. | Yes | The TaskRun was cancelled because the PipelineRun timed out. |
| False | TaskRunTimeout | n/a | Yes | The TaskRun timed out. |
| False | TaskRunImagePullFailed | n/a | Yes | The TaskRun failed due to one of its steps not being able to pull the image. |
| False | FailureIgnored | n/a | Yes | The TaskRun failed but the failure was ignored. |
When a `TaskRun` changes status, [events](events.md#taskruns) are triggered accordingly.
The name of the `Pod` owned by a `TaskRun` is uniquely associated with the owning resource.
If a `TaskRun` resource is deleted and created with the same name, the child `Pod` will be created with the same name
as before. The base format of the name is `<taskrun-name>-pod`. The name may vary according to the logic of
[`kmeta.ChildName`](https://pkg.go.dev/github.com/knative/pkg/kmeta#ChildName). In case of retries of a `TaskRun`
triggered by the `PipelineRun` controller, the base format of the name is `<taskrun-name>-pod-retry<N>` starting from
the first retry.
Some examples:
| `TaskRun` Name | `Pod` Name |
|----------------------------------------------------------------------------|-----------------------------------------------------------------|
| task-run | task-run-pod |
| task-run-0123456789-0123456789-0123456789-0123456789-0123456789-0123456789 | task-run-0123456789-01234560d38957287bb0283c59440df14069f59-pod |
### Monitoring `Steps`
If multiple `Steps` are defined in the `Task` invoked by the `TaskRun`, you can monitor their execution
status in the `status.steps` field using the following command, where `<name>` is the name of the target
`TaskRun`:
```bash
kubectl get taskrun <name> -o yaml
```
The exact Task Spec used to instantiate the TaskRun is also included in the Status for full auditability.
### Steps
The corresponding statuses appear in the `status.steps` list in the order in which the `Steps` have been
specified in the `Task` definition.
### Monitoring `Results`
If one or more `results` fields have been specified in the invoked `Task`, the `TaskRun's` execution
status will include a `Task Results` section, in which the `Results` appear verbatim, including original
line returns and whitespace. For example:
```yaml
Status:
# […]
Steps:
# […]
Task Results:
Name: current-date-human-readable
Value: Thu Jan 23 16:29:06 UTC 2020
Name: current-date-unix-timestamp
Value: 1579796946
```
## Cancelling a `TaskRun`
To cancel a `TaskRun` that's currently executing, update its status to mark it as cancelled.
When you cancel a TaskRun, the running pod associated with that `TaskRun` is deleted. This
means that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary
in order to stop `TaskRun` step containers from running.
**Note:** if `keep-pod-on-cancel` is set to `"true"` in the `feature-flags` ConfigMap, the pod associated with that `TaskRun` will not be deleted.
Example of cancelling a `TaskRun`:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: go-example-git
spec:
# […]
status: "TaskRunCancelled"
```
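If you prefer to update the status from the command line, a `kubectl patch` along these lines should work (sketch; substitute your `TaskRun` name):
```bash
kubectl patch taskrun go-example-git --type merge -p '{"spec":{"status":"TaskRunCancelled"}}'
```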
## Debugging a `TaskRun`
### Breakpoint on Failure
TaskRuns can be halted on failure for troubleshooting by adding the following to the `TaskRun` spec:
```yaml
spec:
debug:
breakpoints:
onFailure: "enabled"
```
### Breakpoint before step
If you want to set a breakpoint before the step is executed, you can add the step name to the `beforeSteps` field in the following way:
```yaml
spec:
debug:
breakpoints:
beforeSteps:
        - <step-name>  # the name of the Step to pause before
```
Upon failure of a step, the TaskRun Pod execution is halted. If this TaskRun Pod continues to run without any lifecycle
change done by the user (running the debug-continue or debug-fail-continue script), the TaskRun is subject to
[TaskRunTimeout](#configuring-the-failure-timeout).
During this time, the user/client can get remote shell access to the step container with a command such as the following.
```bash
kubectl exec -it print-date-d7tj5-pod -c step-print-date-human-readable sh
```
### Debug Environment
After the user/client has access to the container environment, they can look for any missing pieces that might have
caused their step to fail.
To control the lifecycle of the step, that is, to mark it as a success or a failure or to close the breakpoint, there are scripts
provided in the `/tekton/debug/scripts` directory in the container. The scripts, and the tasks they
perform, are:
`debug-continue`: Mark the step as a success and exit the breakpoint.
`debug-fail-continue`: Mark the step as a failure and exit the breakpoint.
`debug-beforestep-continue`: Mark the step to continue execution.
`debug-beforestep-fail-continue`: Mark the step to not continue execution.
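For example, once you have a shell in the halted step container (see the `kubectl exec` command above), resuming execution is a matter of running one of these scripts:
```bash
# mark the current step as successful and exit the breakpoint
/tekton/debug/scripts/debug-continue

# or mark it as failed and exit the breakpoint
/tekton/debug/scripts/debug-fail-continue
```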
*More information on the inner workings of debug can be found in the [Debug documentation](debug.md)*
## Code examples
To better understand `TaskRuns`, study the following code examples:
- [Example `TaskRun` with a referenced `Task`](#example-taskrun-with-a-referenced-task)
- [Example `TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task)
- [Example of reusing a `Task`](#example-of-reusing-a-task)
- [Example of Using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials)
- [Example of Running Step Containers as a Non Root User](#example-of-running-step-containers-as-a-non-root-user)
### Example `TaskRun` with a referenced `Task`
In this example, a `TaskRun` named `read-repo-run` invokes and executes an existing
`Task` named `read-task`. This `Task` reads the repository from the
"input" `workspace`.
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
name: read-task
spec:
workspaces:
- name: input
steps:
- name: readme
image: ubuntu
script: cat $(workspaces.input.path)/README.md
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: read-repo-run
spec:
taskRef:
name: read-task
workspaces:
- name: input
persistentVolumeClaim:
claimName: mypvc
subPath: my-subdir
```
### Example `TaskRun` with an embedded `Task`
In this example, a `TaskRun` named `build-push-task-run-2` directly executes
a `Task` from its definition embedded in the `TaskRun's` `taskSpec` field:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: build-push-task-run-2
spec:
workspaces:
- name: source
persistentVolumeClaim:
claimName: my-pvc
taskSpec:
workspaces:
- name: source
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v0.17.1
workingDir: $(workspaces.source.path)
# specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
env:
- name: "DOCKER_CONFIG"
value: "/tekton/home/.docker/"
command:
- /kaniko/executor
args:
- --destination=gcr.io/my-project/gohelloworld
```
### Example of Using custom `ServiceAccount` credentials
The example below illustrates how to specify a `ServiceAccount` to access a private `git` repository:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
name: test-task-with-serviceaccount-git-ssh
spec:
serviceAccountName: test-task-robot-git-ssh
workspaces:
- name: source
persistentVolumeClaim:
claimName: repo-pvc
- name: ssh-creds
secret:
secretName: test-git-ssh
params:
- name: url
value: https://github.com/tektoncd/pipeline.git
taskRef:
name: git-clone
```
In the above code snippet, `serviceAccountName: test-task-robot-git-ssh` references the following
`ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: test-task-robot-git-ssh
secrets:
- name: test-git-ssh
```
And `secretName: test-git-ssh` references the following `Secret`:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: test-git-ssh
annotations:
tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
# Generated by:
# cat id_rsa | base64 -w 0
ssh-privatekey: LS0tLS1CRUdJTiBSU0EgUFJJVk.....[example]
# Generated by:
# ssh-keyscan github.com | base64 -w 0
known_hosts: Z2l0aHViLmNvbSBzc2g.....[example]
```
### Example of Running Step Containers as a Non Root User
All steps that do not require root privileges should make use of TaskRun features to
run the container for a step as a user without root permissions. As a best practice,
running containers as non-root should be built into the container image to avoid any possibility
of the container being run as root. However, as a further measure of enforcing this practice,
TaskRun pod templates can be used to specify how containers should be run within a TaskRun pod.
The example below uses a TaskRun pod template to specify that containers running via this
TaskRun's pod should run as non-root, and should run as user 1001 if the container itself does not specify what
user to run as:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
generateName: show-non-root-steps-run-
spec:
taskRef:
name: show-non-root-steps
podTemplate:
securityContext:
runAsNonRoot: true
runAsUser: 1001
```
If a Task step specifies that it is to run as a different user than what is specified in the pod template,
the step's `securityContext` will be applied instead of what is specified at the pod level. An example of
this is available as a [TaskRun example](../examples/v1/taskruns/run-steps-as-non-root.yaml).
More information about Pod and Container Security Contexts can be found via the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Events"
weight: 302
---
-->
# Events in Tekton
Tekton's controllers emit [Kubernetes events](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#event-v1-core)
when `TaskRuns` and `PipelineRuns` execute. This allows you to monitor and react to what's happening during execution by
retrieving those events using the `kubectl describe` command. Tekton can also emit [CloudEvents](https://github.com/cloudevents/spec).
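For example, to inspect the events recorded for a specific `TaskRun` (sketch; `<name>` is the `TaskRun` name):
```bash
kubectl describe taskrun <name>
# or query the event stream directly
kubectl get events --field-selector involvedObject.kind=TaskRun,involvedObject.name=<name>
```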
**Note:** `Conditions` [do not emit events](https://github.com/tektoncd/pipeline/issues/2461)
but the underlying `TaskRuns` do.
## Events in `TaskRuns`
`TaskRuns` emit events for the following `Reasons`:
- `Started`: emitted the first time the `TaskRun` is picked by the
reconciler from its work queue, so it only happens if webhook validation was
successful. This event in itself does not indicate that a `Step` is executing;
the `Step` executes once the following conditions are satisfied:
- Validation of the `Task` and its associated resources must succeed, and
- Checks for associated `Conditions` must succeed, and
- Scheduling of the associated `Pod` must succeed.
- `Succeeded`: emitted once all steps in the `TaskRun` have executed successfully,
including post-steps injected by Tekton.
- `Failed`: emitted if the `TaskRun` finishes running unsuccessfully because a `Step` failed,
or the `TaskRun` timed out or was cancelled. A `TaskRun` also emits `Failed` events
if it cannot execute at all due to failing validation.
## Events in `PipelineRuns`
`PipelineRuns` emit events for the following `Reasons`:
- `Started`: emitted the first time the `PipelineRun` is picked by the
reconciler from its work queue, so it only happens if webhook validation was
successful. This event in itself does not indicate that a `Step` is executing;
the `Step` executes once validation for the `Pipeline` as well as all associated `Tasks`
and `Resources` is successful.
- `Running`: emitted when the `PipelineRun` passes validation and
actually begins execution.
- `Succeeded`: emitted once all `Tasks` reachable via the DAG have
executed successfully.
- `Failed`: emitted if the `PipelineRun` finishes running unsuccessfully because a `Task` failed or the
`PipelineRun` timed out or was cancelled. A `PipelineRun` also emits `Failed` events if it cannot
execute at all due to failing validation.
## Events via `CloudEvents`
When you [configure a sink](./additional-configs.md#configuring-cloudevents-notifications), Tekton emits
events as described in the table below.
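As a sketch of what sink configuration can look like (assuming the `default-cloud-events-sink` key in the `config-defaults` `ConfigMap` described in the configuration docs linked above; the URL and namespace are placeholders):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines        # adjust to your installation's namespace
data:
  default-cloud-events-sink: https://events.example.com/collect   # placeholder sink URL
```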
Tekton sends cloud events in a parallel routine to allow for retries without blocking the
reconciler. A routine is started every time the `Succeeded` condition changes - either state,
reason or message. Retries are sent using an exponential back-off strategy.
Because of retries, events are not guaranteed to be sent to the target sink in the order they happened.
Resource |Event |Event Type
:-------------|:-------:|:----------------------------------------------------------
`TaskRun` | `Started` | `dev.tekton.event.taskrun.started.v1`
`TaskRun` | `Running` | `dev.tekton.event.taskrun.running.v1`
`TaskRun` | `Condition Change while Running` | `dev.tekton.event.taskrun.unknown.v1`
`TaskRun` | `Succeed` | `dev.tekton.event.taskrun.successful.v1`
`TaskRun` | `Failed` | `dev.tekton.event.taskrun.failed.v1`
`PipelineRun` | `Started` | `dev.tekton.event.pipelinerun.started.v1`
`PipelineRun` | `Running` | `dev.tekton.event.pipelinerun.running.v1`
`PipelineRun` | `Condition Change while Running` | `dev.tekton.event.pipelinerun.unknown.v1`
`PipelineRun` | `Succeed` | `dev.tekton.event.pipelinerun.successful.v1`
`PipelineRun` | `Failed` | `dev.tekton.event.pipelinerun.failed.v1`
`Run` | `Started` | `dev.tekton.event.run.started.v1`
`Run` | `Running` | `dev.tekton.event.run.running.v1`
`Run` | `Succeed` | `dev.tekton.event.run.successful.v1`
`Run` | `Failed` | `dev.tekton.event.run.failed.v1`
`CloudEvents` for `Runs` are only sent when enabled in the [configuration](./additional-configs.md#configuring-cloudevents-notifications).
**Note**: `CloudEvents` for `Runs` rely on an ephemeral cache to avoid duplicate
events. In case of controller restart, the cache is reset and duplicate events
may be sent.
## Format of `CloudEvents`
According to the [`CloudEvents` spec](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md), HTTP headers are included to match the context fields. For example:
```
"Ce-Id": "77f78ae7-ff6d-4e39-9d05-b9a0b7850527",
"Ce-Source": "/apis/tekton.dev/v1beta1/namespaces/default/taskruns/curl-run-6gplk",
"Ce-Specversion": "1.0",
"Ce-Subject": "curl-run-6gplk",
"Ce-Time": "2021-01-29T14:47:58.157819Z",
"Ce-Type": "dev.tekton.event.taskrun.unknown.v1",
```
Other HTTP headers are:
```
"Accept-Encoding": "gzip",
"Connection": "close",
"Content-Length": "3519",
"Content-Type": "application/json",
"User-Agent": "Go-http-client/1.1"
```
The payload is JSON, a map with a single root key `taskRun` or `pipelineRun`, depending on the source
of the event. Inside the root key, the whole `spec` and `status` of the resource is included. For example:
```json
{
"taskRun": {
"metadata": {
"annotations": {
"pipeline.tekton.dev/release": "v0.20.1",
"tekton.dev/pipelines.minVersion": "0.12.1",
"tekton.dev/tags": "search"
},
"creationTimestamp": "2021-01-29T14:47:57Z",
"generateName": "curl-run-",
"generation": 1,
"labels": {
"app.kubernetes.io/managed-by": "tekton-pipelines",
"app.kubernetes.io/version": "0.1",
"tekton.dev/task": "curl"
},
"managedFields": "(...)",
"name": "curl-run-6gplk",
"namespace": "default",
"resourceVersion": "156770",
"selfLink": "/apis/tekton.dev/v1beta1/namespaces/default/taskruns/curl-run-6gplk",
"uid": "4ccb4f01-3ecc-4eb4-87e1-76f04efeee5c"
},
"spec": {
"params": [
{
"name": "url",
"value": "https://api.hub.tekton.dev/resource/96"
}
],
"resources": {},
"serviceAccountName": "default",
"taskRef": {
"kind": "Task",
"name": "curl"
},
"timeout": "1h0m0s"
},
"status": {
"conditions": [
{
"lastTransitionTime": "2021-01-29T14:47:58Z",
"message": "pod status \"Initialized\":\"False\"; message: \"containers with incomplete status: [place-tools]\"",
"reason": "Pending",
"status": "Unknown",
"type": "Succeeded"
}
],
"podName": "curl-run-6gplk-pod",
"startTime": "2021-01-29T14:47:57Z",
"steps": [
{
"container": "step-curl",
"name": "curl",
"waiting": {
"reason": "PodInitializing"
}
}
],
"taskSpec": {
"description": "This task performs curl operation to transfer data from internet.",
"params": [
{
"description": "URL to curl'ed",
"name": "url",
"type": "string"
},
{
"default": [],
"description": "options of url",
"name": "options",
"type": "array"
},
{
"default": "docker.io/curlimages/curl:7.72.0@sha256:3c3ff0c379abb1150bb586c7d55848ed4dcde4a6486b6f37d6815aed569332fe",
"description": "option of curl image",
"name": "curl-image",
"type": "string"
}
],
"steps": [
{
"args": [
"$(params.options[*])",
"$(params.url)"
],
"command": [
"curl"
],
"image": "$(params.curl-image)",
"name": "curl",
"resources": {}
}
]
}
}
}
}
```
<!--
---
linkTitle: "Variable Substitutions"
weight: 407
---
-->
# Variable Substitutions Supported by `Tasks` and `Pipelines`
This page documents the variable substitutions supported by `Tasks` and `Pipelines`.
For instructions on using variable substitutions see the relevant section of [the Tasks doc](tasks.md#using-variable-substitution).
**Note:** Tekton does not escape the contents of variables. Task authors are responsible for properly escaping a variable's value according to the shell, image or scripting language that the variable will be used in.
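For illustration, a minimal sketch of a `Task` that substitutes a parameter into a shell script; the Task and parameter names are hypothetical, and quoting the substituted value is the Task author's responsibility:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: greet                # hypothetical Task name
spec:
  params:
    - name: message
      type: string
  steps:
    - name: echo-message
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # $(params.message) is replaced verbatim before the script runs,
        # so quote it if the value may contain shell metacharacters.
        echo "$(params.message)"
```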
## Variables available in a `Pipeline`
| Variable | Description |
|----------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `params.<param name>` | The value of the parameter at runtime. |
| `params['<param name>']` | (see above) |
| `params["<param name>"]` | (see above) |
| `params.<param name>[*]` | Get the whole param array or object. |
| `params['<param name>'][*]` | (see above) |
| `params["<param name>"][*]` | (see above) |
| `params.<param name>[i]` | Get the i-th element of a param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |
| `params['<param name>'][i]` | (see above) |
| `params["<param name>"][i]` | (see above) |
| `params.<object-param-name>[*]` | Get the value of the whole object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |
| `params.<object-param-name>.<individual-key-name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |
| `tasks.<taskName>.matrix.length` | The length of the `Matrix` combination count. |
| `tasks.<taskName>.results.<resultName>` | The value of the `Task's` result. (Can alter `Task` execution order within a `Pipeline`.) |
| `tasks.<taskName>.results.<resultName>[i]` | The i-th value of the `Task's` array result. (Can alter `Task` execution order within a `Pipeline`.) |
| `tasks.<taskName>.results.<resultName>[*]` | The array value of the `Task's` result. (Can alter `Task` execution order within a `Pipeline`. Cannot be used in `script`.) |
| `tasks.<taskName>.results.<resultName>.key` | The `key` value of the `Task's` object result. (Can alter `Task` execution order within a `Pipeline`.) |
| `tasks.<taskName>.matrix.<resultName>.length` | The length of the matrixed `Task's` results. (Can alter `Task` execution order within a `Pipeline`.) |
| `workspaces.<workspaceName>.bound` | Whether a `Workspace` has been bound or not. "false" if the `Workspace` declaration has `optional: true` and the Workspace binding was omitted by the PipelineRun. |
| `context.pipelineRun.name` | The name of the `PipelineRun` that this `Pipeline` is running in. |
| `context.pipelineRun.namespace` | The namespace of the `PipelineRun` that this `Pipeline` is running in. |
| `context.pipelineRun.uid` | The uid of the `PipelineRun` that this `Pipeline` is running in. |
| `context.pipeline.name` | The name of this `Pipeline` . |
| `tasks.<pipelineTaskName>.status` | The execution status of the specified `pipelineTask`, only available in `finally` tasks. The execution status can be set to any one of the values (`Succeeded`, `Failed`, or `None`) described [here](pipelines.md#using-execution-status-of-pipelinetask). |
| `tasks.<pipelineTaskName>.reason` | The execution reason of the specified `pipelineTask`, only available in `finally` tasks. The reason can be set to any one of the values (`Failed`, `TaskRunCancelled`, `TaskRunTimeout`, `FailureIgnored`, etc ) described [here](taskruns.md#monitoring-execution-status). |
| `tasks.status` | An aggregate status of all the `pipelineTasks` under the `tasks` section (excluding the `finally` section). This variable is only available in the `finally` tasks and can have any one of the values (`Succeeded`, `Failed`, `Completed`, or `None`) described [here](pipelines.md#using-aggregate-execution-status-of-all-tasks). |
| `context.pipelineTask.retries` | The retries of this `PipelineTask`. |
| `tasks.<taskName>.outputs.<artifactName>` | The value of a specific output artifact of the `Task` |
| `tasks.<taskName>.inputs.<artifactName>` | The value of a specific input artifact of the `Task` |
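For example, the result variables above can be consumed by a later `PipelineTask`; referencing a result also orders the tasks. A minimal sketch (the Pipeline, Task and result names are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-report           # hypothetical Pipeline name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image          # hypothetical Task that declares a result named 'digest'
    - name: report
      taskRef:
        name: print-message        # hypothetical Task with a 'message' parameter
      params:
        - name: message
          # Consuming the result makes 'report' run after 'build'.
          value: "built image digest: $(tasks.build.results.digest)"
```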
## Variables available in a `Task`
| Variable | Description |
|----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
| `params.<param name>` | The value of the parameter at runtime. |
| `params['<param name>']` | (see above) |
| `params["<param name>"]` | (see above) |
| `params.<param name>[*]` | Get the whole param array or object. |
| `params['<param name>'][*]` | (see above) |
| `params["<param name>"][*]` | (see above) |
| `params.<param name>[i]` | Get the i-th element of a param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |
| `params['<param name>'][i]` | (see above) |
| `params["<param name>"][i]` | (see above) |
| `params.<object-param-name>.<individual-key-name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |
| `results.<resultName>.path` | The path to the file where the `Task` writes its results data. |
| `results['<resultName>'].path` | (see above) |
| `results["<resultName>"].path` | (see above) |
| `workspaces.<workspaceName>.path` | The path to the mounted `Workspace`. Empty string if an optional `Workspace` has not been provided by the TaskRun. |
| `workspaces.<workspaceName>.bound` | Whether a `Workspace` has been bound or not. "false" if an optional `Workspace` has not been provided by the TaskRun. |
| `workspaces.<workspaceName>.claim` | The name of the `PersistentVolumeClaim` specified as a volume source for the `Workspace`. Empty string for other volume types. |
| `workspaces.<workspaceName>.volume` | The name of the volume populating the `Workspace`. |
| `credentials.path` | The path to credentials injected from Secrets with matching annotations. |
| `context.taskRun.name` | The name of the `TaskRun` that this `Task` is running in. |
| `context.taskRun.namespace` | The namespace of the `TaskRun` that this `Task` is running in. |
| `context.taskRun.uid` | The uid of the `TaskRun` that this `Task` is running in. |
| `context.task.name` | The name of this `Task`. |
| `context.task.retry-count` | The current retry number of this `Task`. |
| `steps.step-<stepName>.exitCode.path` | The path to the file where a Step's exit code is stored. |
| `steps.step-unnamed-<stepIndex>.exitCode.path` | The path to the file where a Step's exit code is stored for a step without any name. |
| `artifacts.path` | The path to the file where the `Task` writes its artifacts data. |
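A minimal sketch of a `Task` that uses the workspace and result variables above (the Task, workspace and result names are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: checksum                   # hypothetical Task name
spec:
  workspaces:
    - name: source
  results:
    - name: sum
      description: SHA-256 of the input file
  steps:
    - name: compute
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        # $(workspaces.source.path) resolves to the mounted workspace path;
        # $(results.sum.path) is the file the result must be written to.
        sha256sum "$(workspaces.source.path)/input.txt" | cut -d' ' -f1 > "$(results.sum.path)"
```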
## Fields that accept variable substitutions
| CRD | Field |
|---------------|-----------------------------------------------------------------|
| `Task` | `spec.steps[].name` |
| `Task` | `spec.steps[].image` |
| `Task` | `spec.steps[].imagePullPolicy` |
| `Task` | `spec.steps[].command` |
| `Task` | `spec.steps[].args` |
| `Task` | `spec.steps[].script` |
| `Task` | `spec.steps[].onError` |
| `Task` | `spec.steps[].env.value` |
| `Task` | `spec.steps[].env.valueFrom.secretKeyRef.name` |
| `Task` | `spec.steps[].env.valueFrom.secretKeyRef.key` |
| `Task` | `spec.steps[].env.valueFrom.configMapKeyRef.name` |
| `Task` | `spec.steps[].env.valueFrom.configMapKeyRef.key` |
| `Task` | `spec.steps[].volumeMounts.name` |
| `Task` | `spec.steps[].volumeMounts.mountPath` |
| `Task` | `spec.steps[].volumeMounts.subPath` |
| `Task` | `spec.volumes[].name` |
| `Task` | `spec.volumes[].configMap.name` |
| `Task` | `spec.volumes[].configMap.items[].key` |
| `Task` | `spec.volumes[].configMap.items[].path` |
| `Task` | `spec.volumes[].secret.secretName` |
| `Task` | `spec.volumes[].secret.items[].key` |
| `Task` | `spec.volumes[].secret.items[].path` |
| `Task` | `spec.volumes[].persistentVolumeClaim.claimName` |
| `Task` | `spec.volumes[].projected.sources.configMap.name` |
| `Task` | `spec.volumes[].projected.sources.secret.name` |
| `Task` | `spec.volumes[].projected.sources.serviceAccountToken.audience` |
| `Task` | `spec.volumes[].csi.nodePublishSecretRef.name` |
| `Task` | `spec.volumes[].csi.volumeAttributes.* ` |
| `Task` | `spec.sidecars[].name` |
| `Task` | `spec.sidecars[].image` |
| `Task` | `spec.sidecars[].imagePullPolicy` |
| `Task` | `spec.sidecars[].env.value` |
| `Task` | `spec.sidecars[].env.valueFrom.secretKeyRef.name` |
| `Task` | `spec.sidecars[].env.valueFrom.secretKeyRef.key` |
| `Task` | `spec.sidecars[].env.valueFrom.configMapKeyRef.name` |
| `Task` | `spec.sidecars[].env.valueFrom.configMapKeyRef.key` |
| `Task` | `spec.sidecars[].volumeMounts.name` |
| `Task` | `spec.sidecars[].volumeMounts.mountPath` |
| `Task` | `spec.sidecars[].volumeMounts.subPath` |
| `Task` | `spec.sidecars[].command` |
| `Task` | `spec.sidecars[].args` |
| `Task` | `spec.sidecars[].script` |
| `Task` | `spec.workspaces[].mountPath` |
| `TaskRun` | `spec.workspaces[].subPath` |
| `TaskRun` | `spec.workspaces[].persistentVolumeClaim.claimName` |
| `TaskRun` | `spec.workspaces[].configMap.name` |
| `TaskRun` | `spec.workspaces[].configMap.items[].key` |
| `TaskRun` | `spec.workspaces[].configMap.items[].path` |
| `TaskRun` | `spec.workspaces[].secret.secretName` |
| `TaskRun` | `spec.workspaces[].secret.items[].key` |
| `TaskRun` | `spec.workspaces[].secret.items[].path` |
| `TaskRun` | `spec.workspaces[].projected.sources[].secret.name` |
| `TaskRun` | `spec.workspaces[].projected.sources[].secret.items[].key` |
| `TaskRun` | `spec.workspaces[].projected.sources[].secret.items[].path` |
| `TaskRun` | `spec.workspaces[].projected.sources[].configMap.name` |
| `TaskRun` | `spec.workspaces[].projected.sources[].configMap.items[].key` |
| `TaskRun` | `spec.workspaces[].projected.sources[].configMap.items[].path` |
| `TaskRun` | `spec.workspaces[].csi.driver` |
| `TaskRun` | `spec.workspaces[].csi.nodePublishSecretRef.name` |
| `Pipeline` | `spec.tasks[].params[].value` |
| `Pipeline` | `spec.tasks[].conditions[].params[].value` |
| `Pipeline` | `spec.results[].value` |
| `Pipeline` | `spec.tasks[].when[].input` |
| `Pipeline` | `spec.tasks[].when[].values` |
| `Pipeline` | `spec.tasks[].workspaces[].subPath` |
| `Pipeline` | `spec.tasks[].displayName` |
| `PipelineRun` | `spec.workspaces[].subPath` |
| `PipelineRun` | `spec.workspaces[].persistentVolumeClaim.claimName` |
| `PipelineRun` | `spec.workspaces[].configMap.name` |
| `PipelineRun` | `spec.workspaces[].configMap.items[].key` |
| `PipelineRun` | `spec.workspaces[].configMap.items[].path` |
| `PipelineRun` | `spec.workspaces[].secret.secretName` |
| `PipelineRun` | `spec.workspaces[].secret.items[].key` |
| `PipelineRun` | `spec.workspaces[].secret.items[].path` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.name` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.items[].key` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.items[].path` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.name` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.items[].key` |
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.items[].path` |
| `PipelineRun` | `spec.workspaces[].csi.driver` |
| `PipelineRun` | `spec.workspaces[].csi.nodePublishSecretRef.name` |
<!--
---
title: "Additional Configuration Options"
linkTitle: "Additional Configuration Options"
weight: 109
description: >
Additional configurations when installing Tekton Pipelines
---
-->
This document describes additional options to configure your Tekton Pipelines
installation.
## Table of Contents
- [Configuring built-in remote Task and Pipeline resolution](#configuring-built-in-remote-task-and-pipeline-resolution)
- [Configuring CloudEvents notifications](#configuring-cloudevents-notifications)
- [Configuring self-signed cert for private registry](#configuring-self-signed-cert-for-private-registry)
- [Configuring environment variables](#configuring-environment-variables)
- [Customizing basic execution parameters](#customizing-basic-execution-parameters)
- [Customizing the Pipelines Controller behavior](#customizing-the-pipelines-controller-behavior)
- [Alpha Features](#alpha-features)
- [Beta Features](#beta-features)
- [Enabling larger results using sidecar logs](#enabling-larger-results-using-sidecar-logs)
- [Configuring High Availability](#configuring-high-availability)
- [Configuring tekton pipeline controller performance](#configuring-tekton-pipeline-controller-performance)
- [Platform Support](#platform-support)
- [Creating a custom release of Tekton Pipelines](#creating-a-custom-release-of-tekton-pipelines)
- [Verify Tekton Pipelines Release](#verify-tekton-pipelines-release)
- [Verify signatures using `cosign`](#verify-signatures-using-cosign)
- [Verify the transparency logs using `rekor-cli`](#verify-the-transparency-logs-using-rekor-cli)
- [Verify Tekton Resources](#verify-tekton-resources)
- [PipelineRuns with Affinity Assistant](#pipelineruns-with-affinity-assistant)
- [TaskRuns with `imagePullBackOff` Timeout](#taskruns-with-imagepullbackoff-timeout)
- [Disabling Inline Spec in TaskRun and PipelineRun](#disabling-inline-spec-in-taskrun-and-pipelinerun)
- [Next steps](#next-steps)
## Configuring built-in remote Task and Pipeline resolution
Four remote resolvers are currently provided as part of the Tekton Pipelines installation.
By default, these remote resolvers are enabled. Each resolver can be disabled by setting
the appropriate feature flag in the `resolvers-feature-flags` ConfigMap in the `tekton-pipelines-resolvers`
namespace:
1. [The `bundles` resolver](./bundle-resolver.md), disabled by setting the `enable-bundles-resolver`
feature flag to `false`.
1. [The `git` resolver](./git-resolver.md), disabled by setting the `enable-git-resolver`
feature flag to `false`.
1. [The `hub` resolver](./hub-resolver.md), disabled by setting the `enable-hub-resolver`
feature flag to `false`.
1. [The `cluster` resolver](./cluster-resolver.md), disabled by setting the `enable-cluster-resolver`
feature flag to `false`.
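For example, to disable only the `bundles` resolver and leave the others enabled, the ConfigMap could look like this (a minimal sketch):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolvers-feature-flags
  namespace: tekton-pipelines-resolvers
data:
  enable-bundles-resolver: "false"
  # enable-git-resolver, enable-hub-resolver and enable-cluster-resolver
  # default to "true", so the remaining resolvers stay enabled.
```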
## Configuring CloudEvents notifications
When configured to do so, Tekton can generate `CloudEvents` for `TaskRun`,
`PipelineRun` and `CustomRun` lifecycle events. The main configuration parameter is the
URL of the sink; when it is not set, no notification is generated.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-events
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
formats: tektonv1
sink: https://my-sink-url
```
The sink used to be configured in the `config-defaults` config map.
This option is still available, but deprecated, and will be removed.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
default-cloud-events-sink: https://my-sink-url
```
Additionally, CloudEvents for `CustomRuns` must be enabled through an extra setting.
This setting exists to avoid collisions with CloudEvents that might
be sent by custom task controllers:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feature-flags
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
  send-cloudevents-for-runs: "true"
```
## Configuring self-signed cert for private registry
The `SSL_CERT_DIR` is set to `/etc/ssl/certs` as the default cert directory. If you are using a self-signed cert for private registry and the cert file is not under the default cert directory, configure your registry cert in the `config-registry-cert` `ConfigMap` with the key `cert`.
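A minimal sketch of such a ConfigMap (the certificate content is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-registry-cert
  namespace: tekton-pipelines
data:
  cert: |
    -----BEGIN CERTIFICATE-----
    ...your registry's self-signed CA certificate...
    -----END CERTIFICATE-----
```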
## Configuring environment variables
Environment variables can be configured in the following ways, mentioned in order of precedence from lowest to highest.
1. Implicit environment variables
2. `Step`/`StepTemplate` environment variables
3. Environment variables specified via a `default` `PodTemplate`.
4. Environment variables specified via a `PodTemplate`.
The environment variables specified by a `PodTemplate` supersede those specified in any other way. However, environment variables listed in the `default-forbidden-env` configuration option cannot be updated via a `PodTemplate`.
For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
namespace: tekton-pipelines
data:
default-timeout-minutes: "50"
default-service-account: "tekton"
default-forbidden-env: "TEST_TEKTON"
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: mytask
namespace: default
spec:
steps:
- name: echo-env
image: ubuntu
command: ["bash", "-c"]
args: ["echo $TEST_TEKTON "]
env:
- name: "TEST_TEKTON"
value: "true"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
name: mytaskrun
namespace: default
spec:
taskRef:
name: mytask
podTemplate:
env:
- name: "TEST_TEKTON"
value: "false"
```
_In the above example, the environment variable `TEST_TEKTON` will not be overridden by the value specified in the `podTemplate`, because the `config-defaults` option `default-forbidden-env` includes `TEST_TEKTON`._
## Configuring default resource requirements
Resource requirements of containers created by the controller can be assigned default values. This allows you to fully control the resource requirements of the Pods created for a `TaskRun`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
namespace: tekton-pipelines
data:
default-container-resource-requirements: |
place-scripts: # updates resource requirements of a 'place-scripts' container
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
prepare: # updates resource requirements of a 'prepare' container
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
working-dir-initializer: # updates resource requirements of a 'working-dir-initializer' container
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
    prefix-scripts: # updates resource requirements of containers whose names start with 'scripts-'
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
    prefix-sidecar-scripts: # updates resource requirements of containers whose names start with 'sidecar-scripts-'
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
    default: # updates resource requirements of init-containers and containers that have empty resource requirements
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
```
Any resource requirements set at the `Task` or `TaskRun` level override the defaults specified in the `config-defaults` configmap.
## Customizing basic execution parameters
You can specify your own values that replace the default service account (`ServiceAccount`), timeout (`Timeout`), resolver (`Resolver`), and Pod template (`PodTemplate`) values used by Tekton Pipelines in `TaskRun` and `PipelineRun` definitions. To do so, modify the ConfigMap `config-defaults` with your desired values.
The example below customizes the following:
- the default service account from `default` to `tekton`.
- the default timeout from 60 minutes to 20 minutes.
- the default `app.kubernetes.io/managed-by` label is applied to all Pods created to execute `TaskRuns`.
- the default Pod template to include a node selector to select the node where the Pod will be scheduled by default. A list of supported fields is available [here](./podtemplates.md#supported-fields).
For more information, see [`PodTemplate` in `TaskRuns`](./taskruns.md#specifying-a-pod-template) or [`PodTemplate` in `PipelineRuns`](./pipelineruns.md#specifying-a-pod-template).
- the default `Workspace` configuration can be set for any `Workspaces` that a Task declares but that a TaskRun does not explicitly provide.
- the default maximum combinations of `Parameters` in a `Matrix` that can be used to fan out a `PipelineTask`. For
more information, see [`Matrix`](matrix.md).
- the default resolver type to `git`.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
data:
default-service-account: "tekton"
default-timeout-minutes: "20"
default-pod-template: |
nodeSelector:
kops.k8s.io/instancegroup: build-instance-group
default-managed-by-label-value: "my-tekton-installation"
default-task-run-workspace-binding: |
emptyDir: {}
default-max-matrix-combinations-count: "1024"
default-resolver-type: "git"
```
**Note:** The `_example` key in the provided [config-defaults.yaml](./../config/config-defaults.yaml)
file lists the keys you can customize along with their default values.
### Customizing the Pipelines Controller behavior
To customize the behavior of the Pipelines Controller, modify the ConfigMap `feature-flags` via
`kubectl edit configmap feature-flags -n tekton-pipelines`.
**Note:** Changing feature flags may result in undefined behavior for TaskRuns and PipelineRuns
that are running while the change occurs.
The flags in this ConfigMap are as follows:
- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](./affinityassistants)
that is used to provide Node Affinity for `TaskRun` pods that share workspace volume.
The Affinity Assistant is incompatible with other affinity rules
configured for `TaskRun` pods.
**Note:** This feature flag is deprecated and will be removed in release `v0.60`. Consider using `coschedule` feature flag to configure Affinity Assistant behavior.
**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
which requires a substantial amount of processing and can significantly slow down scheduling in large clusters.
We do not recommend using it in clusters larger than several hundred nodes.
**Note:** Pod anti-affinity requires nodes to be consistently labelled, in other words every
node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes
are missing the specified `topologyKey` label, it can lead to unintended behavior.
- `coschedule`: this flag determines how `PipelineRun` Pods are scheduled with the [Affinity Assistant](./affinityassistants).
Acceptable values are "workspaces" (default), "pipelineruns", "isolate-pipelinerun", or "disabled".
Setting it to "workspaces" will schedule all the taskruns sharing the same PVC-based workspace in a pipelinerun to the same node.
Setting it to "pipelineruns" will schedule all the taskruns in a pipelinerun to the same node.
Setting it to "isolate-pipelinerun" will schedule all the taskruns in a pipelinerun to the same node,
and only allows one pipelinerun to run on a node at a time. Setting it to "disabled" will not apply any coschedule policy.
- `await-sidecar-readiness`: set this flag to `"false"` to allow the Tekton controller to start a
TaskRun's first step immediately without waiting for sidecar containers to be running first. Using
this option should decrease the time it takes for a TaskRun to start running, and will allow TaskRun
pods to be scheduled in environments that don't support [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/)
volumes (e.g. some virtual kubelet implementations). However, this may lead to unexpected behaviour
with Tasks that use sidecars, or in clusters that use injected sidecars (e.g. Istio). Setting this flag
to `"false"` will mean the `running-in-environment-with-injected-sidecars` flag has no effect.
- `running-in-environment-with-injected-sidecars`: set this flag to `"false"` to allow the
Tekton controller to start a TaskRun's first step immediately if it has no Sidecars specified.
Using this option should decrease the time it takes for a TaskRun to start running.
However, for clusters that use injected sidecars (e.g. Istio) this can lead to unexpected behavior.
- `require-git-ssh-secret-known-hosts`: set this flag to `"true"` to require that
Git SSH Secrets include a `known_hosts` field. This ensures that a git remote server's
key is validated before data is accepted from it when authenticating over SSH. Secrets
that don't include a `known_hosts` will result in the TaskRun failing validation and
not running.
- `enable-tekton-oci-bundles`: set this flag to `"true"` to enable the
tekton OCI bundle usage (see [the tekton bundle
contract](./tekton-bundle-contracts.md)). Enabling this option
allows the use of `bundle` field in `taskRef` and `pipelineRef` for
`Pipeline`, `PipelineRun` and `TaskRun`. By default, this option is
disabled (`"false"`), which means it is disallowed to use the
`bundle` field.
- `disable-creds-init` - set this flag to `"true"` to [disable Tekton's built-in credential initialization](auth.md#disabling-tektons-built-in-auth)
and use Workspaces to mount credentials from Secrets instead.
The default is `false`. For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/3399).
- `enable-api-fields`: When using v1beta1 APIs, setting this field to "stable" or "beta"
enables [beta features](#beta-features). When using v1 APIs, setting this field to "stable"
allows only stable features, and setting it to "beta" allows only beta features.
Set this field to "alpha" to allow [alpha features](#alpha-features) to be used.
- `enable-kubernetes-sidecar`: Set this flag to `"true"` to enable native kubernetes sidecar support. This will allow Tekton sidecars to run as Kubernetes sidecars. Must be using Kubernetes v1.29 or greater.
For example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feature-flags
data:
enable-api-fields: "alpha" # Allow alpha fields to be used in Tasks and Pipelines.
```
- `trusted-resources-verification-no-match-policy`: Setting this flag to `fail` will fail the taskrun/pipelinerun if no matching policies found. Setting to `warn` will skip verification and log a warning if no matching policies are found, but not fail the taskrun/pipelinerun. Setting to `ignore` will skip verification if no matching policies found.
Defaults to "ignore".
- `results-from`: set this flag to "termination-message" to use the container's termination message to fetch results from. This is the default method of extracting results. Set it to "sidecar-logs" to extract results from the logs of a results sidecar instead of from the termination message.
- `enable-provenance-in-status`: Set this flag to `"true"` to enable populating
the `provenance` field in `TaskRun` and `PipelineRun` status. The `provenance`
field contains metadata about resources used in the TaskRun/PipelineRun such as the
source from where a remote Task/Pipeline definition was fetched. By default, this is set to `true`.
To disable populating this field, set this flag to `"false"`.
- `set-security-context`: Set this flag to `true` to set a security context for containers injected by Tekton that will allow TaskRun pods
to run in namespaces with `restricted` pod security admission. By default, this is set to `false`.
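For example, to opt the installation into this behavior, the same patch pattern used elsewhere in this document can be applied (a sketch):

```shell
kubectl patch cm feature-flags -n tekton-pipelines \
  -p '{"data":{"set-security-context":"true"}}'
```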
### Alpha Features
Alpha features in the following table are still in development and their syntax is subject to change.
- To enable the features ***without*** an individual flag:
set the `enable-api-fields` feature flag to `"alpha"` in the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment via `kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"enable-api-fields":"alpha"}}'`.
- To enable the features ***with*** an individual flag:
set the individual flag accordingly in the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment. Example: `kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"<FLAG-NAME>":"<FLAG-VALUE>"}}'`.
Features currently in "alpha" are:
| Feature | Proposal | Release | Individual Flag |
|:-------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------|:-------------------------------------------------|
| [Bundles ](./pipelineruns.md#tekton-bundles) | [TEP-0005](https://github.com/tektoncd/community/blob/main/teps/0005-tekton-oci-bundles.md) | [v0.18.0](https://github.com/tektoncd/pipeline/releases/tag/v0.18.0) | `enable-tekton-oci-bundles` |
| [Hermetic Execution Mode](./hermetic.md) | [TEP-0025](https://github.com/tektoncd/community/blob/main/teps/0025-hermekton.md) | [v0.25.0](https://github.com/tektoncd/pipeline/releases/tag/v0.25.0) | |
| [Windows Scripts](./tasks.md#windows-scripts) | [TEP-0057](https://github.com/tektoncd/community/blob/main/teps/0057-windows-support.md) | [v0.28.0](https://github.com/tektoncd/pipeline/releases/tag/v0.28.0) | |
| [Debug](./debug.md) | [TEP-0042](https://github.com/tektoncd/community/blob/main/teps/0042-taskrun-breakpoint-on-failure.md) | [v0.26.0](https://github.com/tektoncd/pipeline/releases/tag/v0.26.0) | |
| [StdoutConfig and StderrConfig](./tasks#redirecting-step-output-streams-with-stdoutConfig-and-stderrConfig) | [TEP-0011](https://github.com/tektoncd/community/blob/main/teps/0011-redirecting-step-output-streams.md) | [v0.38.0](https://github.com/tektoncd/pipeline/releases/tag/v0.38.0) | |
| [Trusted Resources](./trusted-resources.md) | [TEP-0091](https://github.com/tektoncd/community/blob/main/teps/0091-trusted-resources.md) | [v0.49.0](https://github.com/tektoncd/pipeline/releases/tag/v0.49.0) | `trusted-resources-verification-no-match-policy` |
| [Configure Default Resolver](./resolution.md#configuring-built-in-resolvers) | [TEP-0133](https://github.com/tektoncd/community/blob/main/teps/0133-configure-default-resolver.md) | [v0.46.0](https://github.com/tektoncd/pipeline/releases/tag/v0.46.0) | |
| [Coschedule](./affinityassistants.md) | [TEP-0135](https://github.com/tektoncd/community/blob/main/teps/0135-coscheduling-pipelinerun-pods.md) | [v0.51.0](https://github.com/tektoncd/pipeline/releases/tag/v0.51.0) | `coschedule` |
| [keep pod on cancel](./taskruns.md#cancelling-a-taskrun) | N/A | [v0.52.0](https://github.com/tektoncd/pipeline/releases/tag/v0.52.0) | `keep-pod-on-cancel` |
| [CEL in WhenExpression](./pipelines.md#use-cel-expression-in-whenexpression) | [TEP-0145](https://github.com/tektoncd/community/blob/main/teps/0145-cel-in-whenexpression.md) | [v0.53.0](https://github.com/tektoncd/pipeline/releases/tag/v0.53.0) | `enable-cel-in-whenexpression` |
| [Param Enum](./taskruns.md#parameter-enums) | [TEP-0144](https://github.com/tektoncd/community/blob/main/teps/0144-param-enum.md) | [v0.54.0](https://github.com/tektoncd/pipeline/releases/tag/v0.54.0) | `enable-param-enum` |
### Beta Features
Beta features are fields of stable CRDs that follow our "beta" [compatibility policy](../api_compatibility_policy.md).
To enable these features, set the `enable-api-fields` feature flag to `"beta"` in
the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment via
`kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"enable-api-fields":"beta"}}'`.
Features currently in "beta" are:
| Feature | Proposal | Alpha Release | Beta Release | Individual Flag |
|:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------|:---------------------------------------------------------------------|:------------------------------|
| [Remote Tasks](./taskruns.md#remote-tasks) and [Remote Pipelines](./pipelineruns.md#remote-pipelines) | [TEP-0060](https://github.com/tektoncd/community/blob/main/teps/0060-remote-resolution.md) | | [v0.41.0](https://github.com/tektoncd/pipeline/releases/tag/v0.41.0) | |
| [`Provenance` field in Status](pipeline-api.md#provenance) | [issue#5550](https://github.com/tektoncd/pipeline/issues/5550) | [v0.41.0](https://github.com/tektoncd/pipeline/releases/tag/v0.41.0) | [v0.48.0](https://github.com/tektoncd/pipeline/releases/tag/v0.48.0) | `enable-provenance-in-status` |
| [Isolated `Step` & `Sidecar` `Workspaces`](./workspaces.md#isolated-workspaces) | [TEP-0029](https://github.com/tektoncd/community/blob/main/teps/0029-step-workspaces.md) | [v0.24.0](https://github.com/tektoncd/pipeline/releases/tag/v0.24.0) | [v0.50.0](https://github.com/tektoncd/pipeline/releases/tag/v0.50.0) | |
| [Matrix](./matrix.md) | [TEP-0090](https://github.com/tektoncd/community/blob/main/teps/0090-matrix.md) | [v0.38.0](https://github.com/tektoncd/pipeline/releases/tag/v0.38.0) | [v0.53.0](https://github.com/tektoncd/pipeline/releases/tag/v0.53.0) | |
| [Task-level Resource Requirements](compute-resources.md#task-level-compute-resources-configuration) | [TEP-0104](https://github.com/tektoncd/community/blob/main/teps/0104-tasklevel-resource-requirements.md) | [v0.39.0](https://github.com/tektoncd/pipeline/releases/tag/v0.39.0) | [v0.53.0](https://github.com/tektoncd/pipeline/releases/tag/v0.53.0) | |
| [Reusable Steps via StepActions](./stepactions.md) | [TEP-0142](https://github.com/tektoncd/community/blob/main/teps/0142-enable-step-reusability.md) | [v0.54.0](https://github.com/tektoncd/pipeline/releases/tag/v0.54.0) | | `enable-step-actions` |
| [Larger Results via Sidecar Logs](#enabling-larger-results-using-sidecar-logs) | [TEP-0127](https://github.com/tektoncd/community/blob/main/teps/0127-larger-results-via-sidecar-logs.md) | [v0.43.0](https://github.com/tektoncd/pipeline/releases/tag/v0.43.0) | [v0.61.0](https://github.com/tektoncd/pipeline/releases/tag/v0.61.0) | `results-from` |
| [Step and Sidecar Overrides](./taskruns.md#overriding-task-steps-and-sidecars) | [TEP-0094](https://github.com/tektoncd/community/blob/main/teps/0094-specifying-resource-requirements-at-runtime.md) | [v0.34.0](https://github.com/tektoncd/pipeline/releases/tag/v0.34.0) | [v0.61.0](https://github.com/tektoncd/pipeline/releases/tag/v0.61.0) | |
| [Ignore Task Failure](./pipelines.md#using-the-onerror-field) | [TEP-0050](https://github.com/tektoncd/community/blob/main/teps/0050-ignore-task-failures.md) | [v0.55.0](https://github.com/tektoncd/pipeline/releases/tag/v0.55.0) | [v0.62.0](https://github.com/tektoncd/pipeline/releases/tag/v0.62.0) | N/A |
## Enabling larger results using sidecar logs
**Note**: The maximum size of a Task's results is limited by the container termination message feature of Kubernetes,
as results are passed back to the controller via this mechanism. At present, the limit per task is 4096 bytes. All
results produced by the task share this upper limit.
To exceed this limit of 4096 bytes, you can enable larger results using sidecar logs. By enabling this feature, you will
have a configurable limit (with a default of 4096 bytes) per result with no restriction on the number of results. The
results are still stored in the taskRun CRD, so they should not exceed the 1.5MB CRD size limit.
**Note**: to enable this feature, you need to grant `get` access to all `pods/log` to the `tekton-pipelines-controller`.
This means that the tekton pipeline controller has the ability to access the pod logs.
1. Create a cluster role and rolebinding by applying the following spec to provide log access to `tekton-pipelines-controller` (a sketch of the access these manifests grant follows these steps).
```
kubectl apply -f optional_config/enable-log-access-to-controller/
```
2. Set the `results-from` feature flag to use sidecar logs by setting `results-from: sidecar-logs` in the
[configMap](#customizing-the-pipelines-controller-behavior).
```
kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"results-from":"sidecar-logs"}}'
```
3. If you want the size per result to be something other than 4096 bytes, you can set the `max-result-size` feature flag
in bytes, for example `max-result-size: 8192`. **Note:** The value you set here cannot exceed
the CRD size limit of 1.5 MB.
```
kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"max-result-size":"<VALUE-IN-BYTES>"}}'
```
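The cluster role and rolebinding applied in step 1 grant roughly the following access (a minimal sketch; the authoritative manifests live in `optional_config/enable-log-access-to-controller/`, and the resource names below are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-pipelines-controller-pod-log-access   # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-pipelines-controller-pod-log-access   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-pipelines-controller-pod-log-access
subjects:
  - kind: ServiceAccount
    name: tekton-pipelines-controller
    namespace: tekton-pipelines
```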
## Configuring High Availability
If you want to run Tekton Pipelines so that webhooks are resilient against failures and support
high-concurrency scenarios, you need to run a [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) in
your Kubernetes cluster. This is required by the [Horizontal Pod Autoscalers](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
to compute the replica count.
See [HA Support for Tekton Pipeline Controllers](./enabling-ha.md) for instructions on configuring
High Availability in the Tekton Pipelines Controller.
The default configuration is defined in [webhook-hpa.yaml](./../config/webhook-hpa.yaml) which can be customized
to better fit specific use cases.
## Configuring tekton pipeline controller performance
Out of the box, the Tekton Pipelines Controller is configured for relatively small-scale deployments, but several options for tuning its performance are available. See the [Performance Configuration](tekton-controller-performance-configuration.md) document, which describes how to change the default ThreadsPerController, QPS and Burst settings to meet your requirements.
## Running TaskRuns and PipelineRuns with restricted pod security standards
To allow TaskRuns and PipelineRuns to run in namespaces with [restricted pod security standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/),
set the "set-security-context" feature flag to "true" in the [feature-flags configMap](#customizing-the-pipelines-controller-behavior). This configuration option applies a [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
to any containers injected into TaskRuns by the Pipelines controller. If the [Affinity Assistants](affinityassistants.md) feature is enabled, the SecurityContext is also applied to those containers.
This SecurityContext may not be supported in all Kubernetes implementations (for example, OpenShift).
**Note**: running TaskRuns and PipelineRuns in the "tekton-pipelines" namespace is discouraged.
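For example, a namespace that enforces the restricted profile via [Pod Security admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) could be labelled like this (a sketch; combine it with the `set-security-context` flag described above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ci                                        # hypothetical namespace for TaskRuns
  labels:
    pod-security.kubernetes.io/enforce: restricted
```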
## Platform Support
The Tekton project provides support for running on x86 Linux Kubernetes nodes.
The project produces images capable of running on other architectures and operating systems, but may not be able to help debug issues specific to those platforms as readily as those that affect Linux on x86.
The controller and webhook components are currently built for:
- linux/amd64
- linux/arm64
- linux/arm (Arm v7)
- linux/ppc64le (PowerPC)
- linux/s390x (IBM Z)
The entrypoint component is also built for Windows, which enables TaskRun workloads to execute on Windows nodes.
See [Windows documentation](windows.md) for more information.
## Creating a custom release of Tekton Pipelines
You can create a custom release of Tekton Pipelines by following and customizing the steps in [Creating an official release](https://github.com/tektoncd/pipeline/blob/main/tekton/README.md#create-an-official-release). For example, you might want to customize the container images built and used by Tekton Pipelines.
## Verify Tekton Pipelines Release
> We will refine this process over time to be more streamlined. For now, please follow the steps listed in this section
to verify Tekton pipeline release.
Tekton Pipelines' images have been signed by [Tekton Chains](https://github.com/tektoncd/chains) since [0.27.1](https://github.com/tektoncd/pipeline/releases/tag/v0.27.1). You can verify the images with
`cosign` using the [Tekton's public key](https://raw.githubusercontent.com/tektoncd/chains/main/tekton.pub).
### Verify signatures using `cosign`
With Go 1.16+, you can install `cosign` by running:
```shell
go install github.com/sigstore/cosign/cmd/cosign@latest
```
You can verify Tekton Pipelines official images using the Tekton public key:
```shell
cosign verify -key https://raw.githubusercontent.com/tektoncd/chains/main/tekton.pub gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1
```
which results in:
```shell
Verification for gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1 --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key
- Any certificates were verified against the Fulcio roots.
{
"Critical": {
"Identity": {
"docker-reference": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller"
},
"Image": {
"Docker-manifest-digest": "sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8"
},
"Type": "Tekton container signature"
},
"Optional": {}
}
```
The verification shows a list of checks performed and returns the digest in `Critical.Image.Docker-manifest-digest`
which can be used to retrieve the provenance from the transparency logs for that image using `rekor-cli`.
### Verify the transparency logs using `rekor-cli`
Install the `rekor-cli` by running:
```shell
go install -v github.com/sigstore/rekor/cmd/rekor-cli@latest
```
Now, use the digest collected from the previous [section](#verify-signatures-using-cosign) in
`Critical.Image.Docker-manifest-digest`, for example,
`sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8`.
Search the transparency log with the digest just collected:
```shell
rekor-cli search --sha sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
```
which results in:
```shell
Found matching entries (listed by UUID):
68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226
```
Tekton Chains generates provenance based on the custom [format](https://github.com/tektoncd/chains/blob/main/PROVENANCE_SPEC.md)
in which the `subject` holds the list of artifacts which were built as part of the release. For the Pipeline release,
`subject` includes a list of images including pipeline controller, pipeline webhook, etc. Use the `UUID` to get the provenance:
```shell
rekor-cli get --uuid 68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226 --format json | jq -r .Attestation | base64 --decode | jq
```
which results in:
```shell
{
"_type": "https://in-toto.io/Statement/v0.1",
"predicateType": "https://tekton.dev/chains/provenance",
"subject": [
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller",
"digest": {
"sha256": "0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint",
"digest": {
"sha256": "2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init",
"digest": {
"sha256": "83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter",
"digest": {
"sha256": "e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop",
"digest": {
"sha256": "59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init",
"digest": {
"sha256": "4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb"
}
},
{
"name": "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook",
"digest": {
"sha256": "bf0ef565b301a1981cb2e0d11eb6961c694f6d2401928dccebe7d1e9d8c914de"
}
}
],
...
```
Now, verify the digest in the `release.yaml` by matching it with the provenance, for example, the digest for the release `v0.28.1`:
```shell
curl -s https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.28.1/release.yaml | grep github.com/tektoncd/pipeline/cmd/controller:v0.28.1 | awk -F"github.com/tektoncd/pipeline/cmd/controller:v0.28.1@" '{print $2}'
```
which results in:
```shell
sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
```
Now, you can verify that the deployment specifications in the `release.yaml` match each of these images and their digests.
The `tekton-pipelines-controller` deployment specification has a container named `tekton-pipelines-controller` and a
list of image references with their digests as part of the `args`:
```yaml
containers:
- name: tekton-pipelines-controller
image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1@sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
args: [
# These images are built on-demand by `ko resolve` and are replaced
# by image references by digest.
"-git-image",
"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.28.1@sha256:83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7",
"-entrypoint-image",
"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.28.1@sha256:2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721",
"-nop-image",
"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop:v0.28.1@sha256:59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8",
"-imagedigest-exporter-image",
"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.28.1@sha256:e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0",
"-pr-image",
"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init:v0.28.1@sha256:4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb",
```
Similarly, you can verify the rest of the images which were published as part of the Tekton Pipelines release:
```shell
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/nop
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/pullrequest-init
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook
```
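For example, here is a small sketch (using the same release manifest URL as above) that extracts the pinned digest for each of these images from `release.yaml`, so you can compare each one against the corresponding `subject` entry in the provenance:
```shell
for img in git-init entrypoint nop imagedigestexporter pullrequest-init webhook; do
  echo -n "${img}: "
  # Grab the first image reference for this component and print only its digest.
  curl -s https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.28.1/release.yaml \
    | grep -o "github.com/tektoncd/pipeline/cmd/${img}:v0.28.1@sha256:[a-f0-9]*" \
    | head -n 1 \
    | awk -F"@" '{print $2}'
done
```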
## Verify Tekton Resources
Trusted Resources is a feature to verify Tekton Tasks and Pipelines. The current
version of the feature supports `v1beta1` `Task` and `Pipeline`. For more details,
please take a look at [Trusted Resources](./trusted-resources.md).
## Pipelineruns with Affinity Assistant
Cluster operators can review the [guidelines](developers/affinity-assistant.md) for `cordon`ing a node in the cluster
when the Tekton controller is running with the affinity assistant enabled.
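For example (the node name is whatever your cluster reports), you can locate the node running the affinity assistant pod and then cordon it:
```shell
# Find the node the affinity assistant pod is scheduled on, then mark it unschedulable.
kubectl get pods -l app.kubernetes.io/component=affinity-assistant -o wide
kubectl cordon <node-name>
```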
## TaskRuns with `imagePullBackOff` Timeout
Tekton Pipelines has adopted a fail-fast strategy: a taskRun fails with `TaskRunImagePullFailed` in case of an
`imagePullBackOff`. This can be limiting in some cases, and whether it is desirable generally depends on the infrastructure. To allow
cluster operators to decide whether to wait in case of an `imagePullBackOff`, a setting is available to configure
the wait time such that the controller waits for the specified duration before declaring a failure.
For example, with the following `config-defaults`, the controller does not mark the taskRun as failed for 5 minutes after
the pod is scheduled if the image pull fails with `imagePullBackOff`. The `default-imagepullbackoff-timeout` is
of type `time.Duration` and can be set to a duration such as "1m", "5m", "10s", "1h", etc.
See issue https://github.com/tektoncd/pipeline/issues/5987 for more details.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: config-defaults
namespace: tekton-pipelines
data:
default-imagepullbackoff-timeout: "5m"
```
## Disabling Inline Spec in Pipeline, TaskRun and PipelineRun
Tekton users may embed the specification of a `Task` (via `taskSpec`) or a `Pipeline` (via `pipelineSpec`) as an alternative to referring to an external resource via `taskRef` and `pipelineRef` respectively. This behaviour can be selectively disabled for three Tekton resources: `TaskRun`, `PipelineRun` and `Pipeline`.
In certain clusters and scenarios, an admin might want to disable the customisation of `Tasks` and `Pipelines` and only allow users to run pre-defined resources. To achieve that, the admin should disable embedded specifications via the `disable-inline-spec` flag, and disable remote resolvers as well.
To disable inline specification, set the `disable-inline-spec` flag to `"pipeline,pipelinerun,taskrun"`
in the `feature-flags` configmap.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: feature-flags
namespace: tekton-pipelines
labels:
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
data:
disable-inline-spec: "pipeline,pipelinerun,taskrun"
```
Inline specifications can also be disabled for specific resources only. To achieve that, set the `disable-inline-spec` flag to a comma-separated list of the desired resources. Valid values are `pipeline`, `pipelinerun` and `taskrun`.
The default value of `disable-inline-spec` is `""`, which means inline specifications are enabled in all cases.
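For example, to disable inline specifications only for `PipelineRun` while leaving them enabled for `Pipeline` and `TaskRun`:
```shell
kubectl patch cm feature-flags -n tekton-pipelines -p '{"data":{"disable-inline-spec":"pipelinerun"}}'
```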
## Next steps
To get started with Tekton check the [Introductory tutorials][quickstarts],
the [how-to guides][howtos], and the [examples folder][examples].
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License][cca4], and code samples are licensed
under the [Apache 2.0 License][apache2l].
[quickstarts]: https://tekton.dev/docs/getting-started/
[howtos]: https://tekton.dev/docs/how-to-guides/
[examples]: https://github.com/tektoncd/pipeline/tree/main/examples/
[cca4]: https://creativecommons.org/licenses/by/4.0/
[apache2l]: https://www.apache.org/licenses/LICENSE-2.0
# Resolver Template
This directory contains a working Resolver based on the instructions
from the [developer howto in the docs](../how-to-write-a-resolver.md).
## Resolver Type
This Resolver responds to type `demo`.
## Parameters
| Name   | Description                  | Example Value               |
|--------|------------------------------|-----------------------------|
| `url` | The repository url. | `https://example.com/repo/` |
## Using the template to start a new Resolver
You can use this as a template to quickly get a new Resolver up and
running with your own preferred storage backend.
To reuse the template, simply copy this entire subdirectory to a new
directory.
The entire program for the `latest` framework is defined in
[`./cmd/resolver/main.go`](./cmd/resolver/main.go) and provides stub
implementations of all the methods defined by the [`framework.Resolver`
interface](../../pkg/remoteresolution/resolver/framework/interface.go).
If you choose to use the previous (deprecated) framework, the program is defined in
[`./cmd/demoresolver/main.go`](./cmd/demoresolver/main.go) and provides stub
implementations of all the methods defined by the [`framework.Resolver`
interface](../../pkg/resolution/resolver/framework/interface.go).
Once copied, you'll need to run `go mod init` and `go mod tidy` at the root
of your project. We don't need this in `tektoncd/resolution` because this
submodule relies on the `go.mod` and `go.sum` defined at the root of the repo.
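For example, from the root of your copied project (the module path below is only a placeholder):
```bash
go mod init example.com/my-resolver
go mod tidy
```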
After your go module is initialized and dependencies tidied, update
`config/demo-resolver-deployment.yaml`. The `image` field of the container
will need to point to your new go module's name, with a `ko://` prefix.
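As a sketch, assuming the placeholder module path above, the container would then reference an image such as `ko://example.com/my-resolver/cmd/resolver`. You can check that `ko` is able to build and resolve that reference before deploying:
```bash
ko resolve -f config/demo-resolver-deployment.yaml > /dev/null && echo "image reference resolves"
```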
## Deploying the Resolver
### Requirements
- A computer with
[`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl) and
[`ko`](https://github.com/google/ko) installed.
- The `tekton-pipelines` namespace and `ResolutionRequest`
controller installed. See [the getting started
guide](./getting-started.md#step-3-install-tekton-resolution) for
instructions.
### Install
1. Install the `"demo"` Resolver:
```bash
$ ko apply -f ./config/demo-resolver-deployment.yaml
```
### Testing
Try creating a `ResolutionRequest` targeting `"demo"` with no parameters:
```bash
$ cat <<EOF > rrtest.yaml
apiVersion: resolution.tekton.dev/v1beta1
kind: ResolutionRequest
metadata:
name: test-resolver-template
labels:
resolution.tekton.dev/type: demo
EOF
$ kubectl apply -f ./rrtest.yaml
$ kubectl get resolutionrequest -w test-resolver-template
```
You should shortly see the `ResolutionRequest` succeed and the content of
a hello-world `Pipeline` base64-encoded in the object's `status.data`
field.
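For example, you can inspect and decode the resolved content as follows:
```bash
kubectl get resolutionrequest test-resolver-template -o jsonpath='{.status.data}' | base64 --decode
```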
### Example PipelineRun
Here's an example PipelineRun that uses the hard-coded demo Pipeline:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: resolver-demo
spec:
pipelineRef:
resolver: demo
```
## What's Supported?
- Just one hard-coded `Pipeline` for demonstration purposes.
---
Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
# Affinity Assistant
[Specifying `Workspaces` in a `Pipeline`](../workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) explains
how an affinity assistant is created when a `persistentVolumeClaim` is used as a volume source for a `workspace` in a `pipelineRun`.
Please refer to the same section for more details on the affinity assistant.
This section gives an overview of how the affinity assistant is resilient to a cluster maintenance without losing
the running `pipelineRun`. (Please refer to the issue https://github.com/tektoncd/pipeline/issues/6586 for more details.)
When a list of `tasks` shares a single workspace, the affinity assistant pod gets created on a `node` along with all
`taskRun` pods. It is very common for a `pipeline` author to design long-running tasks with a single workspace.
With these long-running tasks, a `node` on which these pods are scheduled can be cordoned while the `pipelineRun` is
still running. The Tekton controller migrates the affinity assistant pod to any available `node` in the cluster along with
the rest of the `taskRun` pods sharing the same workspace.
Let's understand this with a sample `pipelineRun`:
```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: pipeline-run-
spec:
workspaces:
- name: source
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
pipelineSpec:
workspaces:
- name: source
tasks:
- name: first-task
taskSpec:
workspaces:
- name: source
steps:
- image: alpine
script: |
echo $(workspaces.source.path)
sleep 60
workspaces:
- name: source
- name: last-task
taskSpec:
workspaces:
- name: source
steps:
- image: alpine
script: |
echo $(workspaces.source.path)
sleep 60
runAfter: ["first-task"]
workspaces:
- name: source
```
This `pipelineRun` has two long-running tasks, `first-task` and `last-task`. Both of these tasks are sharing a single
volume with the access mode set to `ReadWriteOnce`, which means the volume can be mounted to a single `node` at any
given point in time.
Create a `pipelineRun` and determine on which `node` the affinity assistant pod is scheduled:
```shell
kubectl get pods -l app.kubernetes.io/component=affinity-assistant -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
affinity-assistant-c7b485007a-0 0/1 Pending 0 0s <none> <none> <none> <none>
affinity-assistant-c7b485007a-0 0/1 Pending 0 0s <none> kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 ContainerCreating 0 0s <none> kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 ContainerCreating 0 1s <none> kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 1/1 Running 0 5s 10.244.1.144 kind-multinode-worker1 <none> <none>
```
Now, `cordon` that node to mark it unschedulable for any new pods:
```shell
kubectl cordon kind-multinode-worker1
node/kind-multinode-worker1 cordoned
```
The node is cordoned:
```shell
kubectl get node
NAME STATUS ROLES AGE VERSION
kind-multinode-control-plane Ready control-plane 13d v1.26.3
kind-multinode-worker1 Ready,SchedulingDisabled <none> 13d v1.26.3
kind-multinode-worker2 Ready <none> 13d v1.26.3
```
Now, watch the affinity assistant pod get transferred onto the other available node, `kind-multinode-worker2`:
```shell
kubectl get pods -l app.kubernetes.io/component=affinity-assistant -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
affinity-assistant-c7b485007a-0 1/1 Running 0 49s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 1/1 Terminating 0 70s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 1/1 Terminating 0 70s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 Terminating 0 70s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 Terminating 0 70s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 Terminating 0 70s 10.244.1.144 kind-multinode-worker1 <none> <none>
affinity-assistant-c7b485007a-0 0/1 Pending 0 0s <none> <none> <none> <none>
affinity-assistant-c7b485007a-0 0/1 Pending 0 1s <none> kind-multinode-worker2 <none> <none>
affinity-assistant-c7b485007a-0 0/1 ContainerCreating 0 1s <none> kind-multinode-worker2 <none> <none>
affinity-assistant-c7b485007a-0 0/1 ContainerCreating 0 2s <none> kind-multinode-worker2 <none> <none>
affinity-assistant-c7b485007a-0 1/1 Running 0 4s 10.244.2.144 kind-multinode-worker2 <none> <none>
```
And the `pipelineRun` runs to completion:
```shell
kubectl get pr
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
pipeline-run-r2c7k True Succeeded 4m22s 2m1s
kubectl get tr
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
pipeline-run-r2c7k-first-task True Succeeded 5m16s 4m7s
pipeline-run-r2c7k-last-task True Succeeded 4m6s 2m56s
```
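Once the node maintenance is complete, you can make the node schedulable again (a standard Kubernetes step, not specific to Tekton):
```shell
kubectl uncordon kind-multinode-worker1
node/kind-multinode-worker1 uncordoned
```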
c7b485007a 0 0 1 Terminating 0 70s 10 244 1 144 kind multinode worker1 none none affinity assistant c7b485007a 0 0 1 Terminating 0 70s 10 244 1 144 kind multinode worker1 none none affinity assistant c7b485007a 0 0 1 Terminating 0 70s 10 244 1 144 kind multinode worker1 none none affinity assistant c7b485007a 0 0 1 Pending 0 0s none none none none affinity assistant c7b485007a 0 0 1 Pending 0 1s none kind multinode worker2 none none affinity assistant c7b485007a 0 0 1 ContainerCreating 0 1s none kind multinode worker2 none none affinity assistant c7b485007a 0 0 1 ContainerCreating 0 2s none kind multinode worker2 none none affinity assistant c7b485007a 0 1 1 Running 0 4s 10 244 2 144 kind multinode worker2 none none And the pipelineRun finishes to completion shell kubectl get pr NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME pipeline run r2c7k True Succeeded 4m22s 2m1s kubectl get tr NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME pipeline run r2c7k first task True Succeeded 5m16s 4m7s pipeline run r2c7k last task True Succeeded 4m6s 2m56s |
# Local Setup
This section provides guidelines for running Tekton on your local workstation via the following methods:
- [Docker Desktop](#using-docker-desktop)
- [Minikube](#using-minikube)
- [kind and a local docker registry](#using-kind-and-local-docker-registry)
## Using Docker Desktop
### Prerequisites
Complete these prerequisites to run Tekton locally using Docker Desktop:
- Install the [required tools](https://github.com/tektoncd/pipeline/blob/main/DEVELOPMENT.md#requirements).
- Install [Docker Desktop](https://www.docker.com/products/docker-desktop).
- Configure Docker Desktop ([Mac](https://docs.docker.com/docker-for-mac/#resources), [Windows](https://docs.docker.com/docker-for-windows/#resources)) to use six CPUs, 10 GB of RAM, and 2 GB of swap space.
- Set `host.docker.internal:5000` as an insecure registry with Docker Desktop. See the [Docker insecure registry documentation](https://docs.docker.com/registry/insecure/)
  for details.
- Pass `--insecure` as an argument to your Kaniko tasks so that you can push to an insecure registry.
- Run a local (insecure) Docker registry as follows:
`docker run -d -p 5000:5000 --name registry-srv -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2`
- (Optional) Install a Docker registry viewer to verify the images have been pushed:
`docker run -it -p 8080:8080 --name registry-web --link registry-srv -e REGISTRY_URL=http://registry-srv:5000/v2 -e REGISTRY_NAME=localhost:5000 hyper/docker-registry-web`
- Verify that you can push to `host.docker.internal:5000/myregistry/<image_name>`.
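For example, one quick way to confirm that pushes work is to tag and push a small image; the `myregistry/alpine` repository name below is only an illustration:

```bash
# Illustrative check: push a small image to the local registry exposed by Docker Desktop
docker pull alpine:3.19
docker tag alpine:3.19 host.docker.internal:5000/myregistry/alpine:verify
docker push host.docker.internal:5000/myregistry/alpine:verify
```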
### Reconfigure logging
- You can keep your logs in memory only without sending them to a logging service
such as [Stackdriver](https://cloud.google.com/logging/).
- You can deploy Elasticsearch, Beats, or Kibana locally to view logs. You can find an
example configuration at <https://github.com/mgreau/tekton-pipelines-elastic-tutorials>.
- To learn more about obtaining logs, see [Logs](logs.md).
## Using Minikube
### Prerequisites
Complete these prerequisites to run Tekton locally using Minikube:
- Install the [required tools](https://github.com/tektoncd/pipeline/blob/main/DEVELOPMENT.md#requirements).
- Install [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) and start a session as follows:
```bash
minikube start --memory 6144 --cpus 2
```
- Point your shell to minikube's docker-daemon by running `eval $(minikube -p minikube docker-env)`
<!-- wokeignore:rule=master -->
- Set up a [registry on minikube](https://github.com/kubernetes/minikube/tree/master/deploy/addons/registry-aliases) by running `minikube addons enable registry` and `minikube addons enable registry-aliases`
### Reconfigure logging
See the information in the "Using Docker Desktop" section.
## Using kind and local docker registry
### Prerequisites
Complete these prerequisites to run Tekton locally using Kind:
- Install the [required tools](https://github.com/tektoncd/pipeline/blob/main/DEVELOPMENT.md#requirements).
- Install [Docker](https://www.docker.com/get-started).
- Install [kind](https://kind.sigs.k8s.io/).
### Use local registry without authentication
See [Using KinD](https://github.com/tektoncd/pipeline/blob/main/DEVELOPMENT.md#using-kind).
### Use local private registry
1. Create a password file with basic auth.
```bash
export TEST_USER=testuser
export TEST_PASS=testpassword
if [ ! -f auth ]; then
mkdir auth
fi
docker run \
--entrypoint htpasswd \
httpd:2 -Bbn $TEST_USER $TEST_PASS > auth/htpasswd
```
2. Start kind cluster and local private registry
Execute the script.
```shell
#!/bin/sh
set -o errexit
# create registry container unless it already exists
reg_name='kind-registry'
reg_port='5000'
running="$(docker inspect -f '' "${reg_name}" 2>/dev/null || true)"
if [ "${running}" != 'true' ]; then
docker run \
-d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
-v "$(pwd)"/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
registry:2
fi
# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
endpoint = ["http://${reg_name}:5000"]
EOF
# connect the registry to the cluster network
# (the network may already be connected)
docker network connect "kind" "${reg_name}" || true
# Document the local registry
# <!-- wokeignore:rule=master -->
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: local-registry-hosting
namespace: kube-public
data:
localRegistryHosting.v1: |
host: "localhost:${reg_port}"
help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
```
3. Install Tekton [Pipeline](https://github.com/tektoncd/pipeline/blob/main/docs/install.md) and create the secret in the cluster.
```bash
kubectl create secret docker-registry secret-tekton \
--docker-username=$TEST_USER \
--docker-password=$TEST_PASS \
--docker-server=localhost:5000 \
--namespace=tekton-pipelines
```
4. Configure [ko](https://github.com/google/ko#install) and add the secret to the service accounts.
```bash
export KO_DOCKER_REPO='localhost:5000'
```
`200-serviceaccount.yaml`
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: tekton-pipelines-controller
namespace: tekton-pipelines
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
- name: secret-tekton
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: tekton-pipelines-webhook
namespace: tekton-pipelines
labels:
app.kubernetes.io/component: webhook
app.kubernetes.io/instance: default
app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
- name: secret-tekton
```
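As a closing sketch (not part of the upstream steps), one way to tie this together is to deploy Pipelines with `ko` from a checkout of the pipeline repository, so the edited `200-serviceaccount.yaml` under `config/` is applied along with everything else; the login step reuses the credentials created earlier:

```bash
# Sketch under the assumptions above: log in so ko can push to the authenticated local registry,
# then build and deploy Tekton Pipelines from a pipeline repository checkout
docker login localhost:5000 -u "$TEST_USER" -p "$TEST_PASS"
export KO_DOCKER_REPO='localhost:5000'
ko apply -R -f config/
```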
# Feature Versioning
The stability levels of features (feature versioning) are independent of CRD [API versioning](./api-versioning.md).
You may follow the existing Tekton feature flags demo for detailed reference:
- [Tekton Per-feature Flag Demo Slides](https://docs.google.com/presentation/d/1MAwBTKYUN40SZcd6om6LMw217TtNSppbmA8MatMjyjk/edit?usp=sharing&resourcekey=0-JY7-QhCrWJrzFgsFbJGROg)
- [Tekton Per-feature Flag Demo Recording](https://drive.google.com/file/d/1myFHtqps3gt2I6wBkvGIghDaJElOYOq1/view?usp=sharing)
## Adding feature gates for API-driven features
API-driven features are features that are accessed via a specific field in pipeline API. They comply with the [feature gates](../../api_compatibility_policy.md#feature-gates) and the [feature graduation process](../../api_compatibility_policy.md#feature-graduation-process) specified in the [API compatibility policy](../../api_compatibility_policy.md). For example, [remote tasks](https://github.com/tektoncd/pipeline/blob/454bfd340d102f16f4f2838cf4487198537e3cfa/docs/taskruns.md#remote-tasks) is an API-driven feature.
## Adding feature gated API fields for API-driven features
### Per-feature flag
All new features added after [v0.53.0](https://github.com/tektoncd/pipeline/releases/tag/v0.53.0) will be enabled by their dedicated feature flags. To introduce a new per-feature flag, we will proceed as follows:
- Add default values to the new per-feature flag for the new API-driven feature following the `PerFeatureFlag` struct in [feature_flags.go](./../../pkg/apis/config/feature_flags.go).
- Write unit tests to verify the new feature flag and update all test cases that require the configMap setting, such as those related to provenance propagation.
- To add integration tests:
- First, add the tests to `pull-tekton-pipeline-alpha-integration-test` by enabling the newly-introduced per-feature flag at [alpha test Prow environment](./../../test/e2e-tests-kind-prow-alpha.env).
- When the flag is promoted to `beta` stability level, change the test to use [beta Prow environment setup](./../../test/e2e-tests-kind-prow-beta.env).
- To add additional CI tests for combinations of feature flags, add tests for all alpha feature flags being turned on, with one alpha feature turned off at a time.
- Add the tested new per-feature flag key value to the [the pipeline configMap](./../../config/config-feature-flags.yaml).
- Update the documentation for the new alpha feature at [alpha-stability-level](./../additional-configs.md#alpha-features).
#### Example of adding a new Per-feature flag
1. Add the default value following the Per-Feature flag struct
```golang
const enableExampleNewFeatureKey = "example-new-feature"

var DefaultExampleNewFeature = PerFeatureFlag{
  Name:      enableExampleNewFeatureKey,
  Stability: AlphaAPIFields,
  Enabled:   DefaultAlphaFeatureEnabled,
}
```
2. Add unit tests with the newly-introduced yamls `feature-flags-example-new-feature` and `feature-flags-invalid-example-new-feature` according to the existing testing framework.
3. For integration tests, add `example-new-feature: true` to [alpha test Prow environment](./../../test/e2e-tests-kind-prow-alpha.env).
4. Add `example-new-feature: false` to [the pipeline configMap](./../../config/config-feature-flags.yaml) with a release note.
5. Update the documentation for the new alpha feature at [alpha-stability-level](./../additional-configs.md#alpha-features).
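For quick local experiments, the flag can also be flipped in the deployed ConfigMap; a minimal sketch, assuming the default `tekton-pipelines` namespace and the hypothetical `example-new-feature` flag from the steps above:

```bash
# Enable the (hypothetical) example-new-feature flag in the running feature-flags ConfigMap
kubectl patch configmap feature-flags -n tekton-pipelines \
  --type merge -p '{"data":{"example-new-feature":"true"}}'
```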
### `enable-api-fields`
Prior to [v0.53.0](https://github.com/tektoncd/pipeline/tree/release-v0.53.x), we had the global feature flag `enable-api-fields` in
[config-feature-flags.yaml file](../../config/config-feature-flags.yaml)
deployed as part of our releases.
_Note that the `enable-api-fields` flag has been deprecated since [v0.53.0](https://github.com/tektoncd/pipeline/tree/release-v0.53.x) and we are transitioning to [Per-feature flags](#per-feature-flag) instead._
This field can be configured either to be `alpha`, `beta`, or `stable`. This field is
documented as part of our
[install docs](../install.md#customizing-the-pipelines-controller-behavior).
For developers adding new features to Pipelines' CRDs we've got a couple of
helpful tools to make gating those features simpler and to provide a consistent
testing experience.
### Guarding Features with Feature Gates
Writing new features is made trickier when you need to support both the existing
stable behaviour as well as your new alpha behaviour.
In reconciler code you can guard your new features with an `if` statement such
as the following:
```go
alphaAPIEnabled := config.FromContextOrDefaults(ctx).FeatureFlags.EnableAPIFields == "alpha"
if alphaAPIEnabled {
// new feature code goes here
} else {
// existing stable code goes here
}
```
Notice that you'll need a context object to be passed into your function for
this to work. When writing new features keep in mind that you might need to
include this in your new function signatures.
### Guarding Validations with Feature Gates
Just because your application code might be correctly observing the feature gate
flag doesn't mean you're done yet! When a user submits a Tekton resource it's
validated by Pipelines' webhook. Here too you'll need to ensure your new
features aren't accidentally accepted when the feature gate suggests they
shouldn't be. We've got a helper function,
[`ValidateEnabledAPIFields`](../../pkg/apis/version/version_validation.go),
to make validating the current feature gate easier. Use it like this:
```go
requiredVersion := config.AlphaAPIFields
// errs is an instance of *apis.FieldError, a common type in our validation code
errs = errs.Also(ValidateEnabledAPIFields(ctx, "your feature name", requiredVersion))
```
If the user's cluster isn't configured with the required feature gate it'll
return an error like this:
```
<your feature> requires "enable-api-fields" feature gate to be "alpha" but it is "stable"
```
### Unit Testing with Feature Gates
Any new code you write that uses the `ctx` context variable is trivially unit
tested with different feature gate settings. You should make sure to unit test
your code both with and without a feature gate enabled to make sure it's
properly guarded. See the following for an example of a unit test that sets the
feature gate to test behaviour:
```go
// EnableAlphaAPIFields enables alpha features in an existing context (for use in testing)
func EnableAlphaAPIFields(ctx context.Context) context.Context {
return setEnableAPIFields(ctx, config.AlphaAPIFields)
}
func setEnableAPIFields(ctx context.Context, want string) context.Context {
featureFlags, _ := config.NewFeatureFlagsFromMap(map[string]string{
"enable-api-fields": want,
})
cfg := &config.Config{
Defaults: &config.Defaults{
DefaultTimeoutMinutes: 60,
},
FeatureFlags: featureFlags,
}
return config.ToContext(ctx, cfg)
}
```
### Example YAMLs
Writing new YAML examples that require a feature gate to be set is easy. New
YAML example files typically go in a directory called something like
`examples/v1/taskruns` in the root of the repo. To create a YAML that
should only be exercised when the `enable-api-fields` flag is `alpha` just put
it in an `alpha` subdirectory so the structure looks like:
```
examples/v1/taskruns/alpha/your-example.yaml
```
This should work for both taskruns and pipelineruns.
**Note**: To execute alpha examples with the integration test runner you must
manually set the `enable-api-fields` feature flag to `alpha` in your testing
cluster before kicking off the tests.
When you set this flag to `stable` in your cluster it will prevent `alpha`
examples from being created by the test runner. When you set the flag to `alpha`
all examples are run, since we want to exercise backwards-compatibility of the
examples under alpha conditions.
### Integration Tests
For integration tests we provide the
[`requireAnyGate` function](../../test/gate.go) which should be passed to the
`setup` function used by tests:
```go
c, namespace := setup(ctx, t, requireAnyGate(map[string]string{"enable-api-fields": "alpha"}))
```
This will skip your integration test if the feature gate is not set to `alpha`
with a clear message explaining why it was skipped.
**Note**: As with running example YAMLs you have to manually set the
`enable-api-fields` flag to `alpha` in your test cluster to see your alpha
integration tests run. When the flag in your cluster is `alpha` _all_
integration tests are executed, both `stable` and `alpha`. Setting the feature
flag to `stable` will exclude `alpha` tests.
This doc describes how TaskRuns are implemented using pods.
## Entrypoint rewriting and step ordering
Tekton releases include a binary called the "entrypoint", which wraps the
user-provided binary for each `Step` container to manage the execution order of the containers.
The `entrypoint` binary has the following arguments:
- `wait_file` - If specified, file to wait for
- `wait_file_content` - If specified, wait until the file has non-zero size
- `post_file` - If specified, file to write upon completion
- `entrypoint` - The command to run in the image being wrapped
As part of the PodSpec created for a `TaskRun`, the entrypoint for each `Task` step
is changed to the entrypoint binary with the arguments listed above, and a volume
with the binary and file(s) is mounted.
If the image comes from a private registry, the service account should include an
[ImagePullSecret](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
For more details, see [entrypoint/README.md](../../cmd/entrypoint/README.md)
or the talk ["Russian Doll: Extending Containers with Nested Processes"](https://www.youtube.com/watch?v=iz9_omZ0ctk).
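To see this rewriting on a live cluster, you can inspect the pod created for a TaskRun; a minimal sketch, where `<taskrun-pod-name>` is a placeholder:

```bash
# Print the command and args of the first step container; the command should be the
# injected /tekton/bin/entrypoint binary rather than the image's original entrypoint
kubectl get pod <taskrun-pod-name> \
  -o jsonpath='{.spec.containers[0].command}{"\n"}{.spec.containers[0].args}{"\n"}'
```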
## How to access the exit code of a step from any subsequent step in a task
The entrypoint allows a step to exit with an error (a non-zero exit code) while
the rest of the steps in the task continue running. This makes it possible to
design a task with a step that takes an action depending on the exit code of any
prior step. The user can access the exit code of a step by reading the file
pointed to by the path variable
`$(steps.step-<step-name>.exitCode.path)` or
`$(steps.step-unnamed-<step-index>.exitCode.path)`. For example:
- `$(steps.step-my-awesome-step.exitCode.path)` where the step name is
`my-awesome-step`.
- `$(steps.step-unnamed-0.exitCode.path)` where the first step in a task has no
name.
The exit code of a step is stored in a file named `exitCode` under a directory
`/tekton/steps/step-<step-name>/` or `/tekton/steps/step-unnamed-<step-index>/`
which is reserved for any other step specific information in the future.
If you would like to use the Tekton-internal path, you can access the exit code
by reading the file directly (not recommended, since the path might change
in the future):
```shell
cat /tekton/steps/step-<step-name>/exitCode
```
And, access the step exit code without a step name:
```shell
cat /tekton/steps/step-unnamed-<step-index>/exitCode
```
Or, you can access the step metadata directory via symlink, for example, use
`cat /tekton/steps/0/exitCode` for the first step in a task.
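As a sketch of how this is typically consumed, a later step's `script` can branch on a prior step's exit code; the step name `my-awesome-step` is the illustrative name used above, and Tekton substitutes the `$(steps...)` variable before the shell runs:

```bash
# Runs inside a later step; $(steps.step-my-awesome-step.exitCode.path) is replaced
# by Tekton with the real file path before this script executes
prior="$(cat $(steps.step-my-awesome-step.exitCode.path))"
if [ "$prior" != "0" ]; then
  echo "step my-awesome-step exited with code $prior"
fi
```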
## TaskRun Use of Pod Termination Messages
Tekton Pipelines uses a `Pod's`
[termination message](https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/)
to pass data from a Step's container to the Pipelines controller. Examples of
this data include: the time that execution of the user's step began, contents of
task results, contents of pipeline resource results.
The contents and format of the termination message can change. At time of
writing the message takes the form of a serialized JSON blob. Some of the data
from the message is internal to Tekton Pipelines, used for book-keeping, and
some is distributed across a number of fields of the `TaskRun's` `status`. For
example, a `TaskRun's` `status.taskResults` is populated from the termination
message.
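To look at the raw message for a step, one hedged option is to read the terminated state of the corresponding container; `<taskrun-pod-name>` below is a placeholder:

```bash
# Print the termination message recorded for the first step container of a TaskRun pod
kubectl get pod <taskrun-pod-name> \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}{"\n"}'
```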
## Reserved directories
### /workspace
- `/workspace` - This directory is where volumes for [resources](#resources) and
[workspaces](#workspaces) are mounted.
### /tekton
The `/tekton/` directory is reserved on containers for internal usage.
Here is an example of a directory layout for a simple Task with 2 script steps:
```
/tekton
|-- bin
`-- entrypoint
|-- creds
|-- downward
| |-- ..2021_09_16_18_31_06.270542700
| | `-- ready
| |-- ..data -> ..2021_09_16_18_31_06.270542700
| `-- ready -> ..data/ready
|-- home
|-- results
|-- run
`-- 0
`-- out
`-- status
`-- exitCode
|-- scripts
| |-- script-0-t4jd8
| `-- script-1-4pjwp
|-- steps
| |-- 0 -> /tekton/run/0/status
| |-- 1 -> /tekton/run/1/status
| |-- step-foo -> /tekton/run/1/status
| `-- step-unnamed-0 -> /tekton/run/0/status
`-- termination
```
| Path | Description |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| /tekton | Directory used for Tekton specific functionality |
| /tekton/bin | Tekton provided binaries / tools |
| /tekton/creds | Location of Tekton mounted secrets. See [Authentication at Run Time](../auth.md) for more details. |
| /tekton/debug | Contains [Debug scripts](https://github.com/tektoncd/pipeline/blob/main/docs/debug.md#debug-scripts) used to manage the step lifecycle during debugging at a breakpoint, and the [Debug Info](https://github.com/tektoncd/pipeline/blob/main/docs/debug.md#mounts) mount used to assist with the same. |
| /tekton/downward | Location of data mounted via the [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api). |
| /tekton/home | (deprecated - see https://github.com/tektoncd/pipeline/issues/2013) Default home directory for user containers. |
| /tekton/results | Where [results](#results) are written to (path available to `Task` authors via [`$(results.name.path)`](../variables.md)) |
| /tekton/run | Runtime variable data. [Used for coordinating step ordering](#entrypoint-rewriting-and-step-ordering). |
| /tekton/scripts | Contains user provided scripts specified in the TaskSpec. |
| /tekton/steps | Where the `step` exitCodes are written to (path available to `Task` authors via [`$(steps.<stepName>.exitCode.path)`](../variables.md#variables-available-in-a-task)) |
| /tekton/termination | Where the eventual [termination log message](https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#writing-and-reading-a-termination-message) is written. See [Sequencing step containers](#entrypoint-rewriting-and-step-ordering). |
The following directories are covered by the
[Tekton API Compatibility policy](../../api_compatibility_policy.md), and can be
relied on for stability:
- `/tekton/results`
All other files/directories are internal implementation details of Tekton -
**users should not rely on specific paths or behaviors as it may change in the
future**.
## What and Why of `/tekton/run`
`/tekton/run` is a collection of implicit volumes mounted on a pod and created
for storing the step specific information/metadata. Steps can only write
metadata to their own `/tekton/run` directory - all other step volumes are mounted as
`readonly`. The `/tekton/run` directories are considered internal implementation details
of Tekton and are not bound by the API compatibility policy - the contents and
structure can be safely changed so long as user behavior remains the same.
### `/tekton/steps`
Under `/tekton/steps`, special subdirectories are created for each step in a task -
each directory is actually a symlink to a directory in the Step's corresponding
`/tekton/run` volume. This is done to ensure that step directories can only be
modified by their own Step. To ensure that these symlinks are not modified, the
entire `/tekton/steps` volume is initially populated by an initContainer, and
mounted `readonly` on all user steps.
These symlinks are created as a part of the `step-init` entrypoint subcommand
initContainer on each Task Pod.
### Entrypoint configuration
The entrypoint is modified to include an additional flag representing the step
specific directory where step metadata should be written:
```
step_metadata_dir - the dir specified in this flag is created to hold step-specific metadata
```
`step_metadata_dir` is set to `/tekton/run/<step #>/status` for the entrypoint
of each step.
### Example
Let's take an example of a task with two steps, each exiting with non-zero exit
code:
```yaml
kind: TaskRun
apiVersion: tekton.dev/v1beta1
metadata:
generateName: test-taskrun-
spec:
taskSpec:
steps:
- image: alpine
name: step0
onError: continue
script: |
exit 1
- image: alpine
onError: continue
script: |
exit 2
```
During `step-step0`, the first container is actively running so none of the
output files are populated yet. The `/tekton/steps` directories are symlinked to
locations that do not yet exist, but will be populated during execution.
```
/tekton
|-- run
| |-- 0
| `-- 1
|-- steps
|-- 0 -> /tekton/run/0/status
|-- 1 -> /tekton/run/1/status
|-- step-step0 -> /tekton/run/0/status
`-- step-unnamed1 -> /tekton/run/1/status
```
During `step-unnamed1`, the first container has now finished. The output files
for the first step are now populated, and the folder pointed to by
`/tekton/steps/0` now exists, and is populated with a file named `exitCode`
which contains the exit code of the first step.
```
/tekton
|-- run
| |-- 0
| | |-- out
| | `-- status
| | `-- exitCode
| `-- 1
|-- steps
|-- 0 -> /tekton/run/0/status
|-- 1 -> /tekton/run/1/status
|-- step-step0 -> /tekton/run/0/status
`-- step-unnamed1 -> /tekton/run/1/status
```
Notice that there are multiple symlinks showing under `/tekton/steps/` pointing
to the same `/tekton/run` location. These symbolic links are created to provide
simplified access to the step metadata directories i.e., instead of referring to
a directory with the step name, access it via the step index. The step index
becomes complex and hard to keep track of in a task with a long list of steps,
for example, a task with 20 steps. Creating the step metadata directory using a
step name and creating a symbolic link using the step index gives the user
flexibility, and an option to choose whatever works best for them.
## Handling of injected sidecars
Tekton has to take some special steps to support sidecars that are injected into
TaskRun Pods. Without intervention sidecars will typically run for the entire
lifetime of a Pod but in Tekton's case it's desirable for the sidecars to run
only as long as Steps take to complete. There's also a need for Tekton to
schedule the sidecars to start before a Task's Steps begin, just in case the
Steps rely on a sidecar's behavior, for example to join an Istio service mesh. To
handle all of this, Tekton Pipelines implements the following lifecycle for
sidecar containers:
First, the
[Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api)
is used to project an annotation on the TaskRun's Pod into the `entrypoint`
container as a file. The annotation starts as an empty string, so the file
projected by the downward API has zero length. The entrypointer spins, waiting
for that file to have non-zero size.
The sidecar containers start up. Once they're all in a ready state, the
annotation is populated with the string "READY", which in turn populates the
Downward API projected file. The entrypoint binary recognizes that the projected
file has a non-zero size and allows the Task's steps to begin.
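You can observe this handshake on a running TaskRun pod; a minimal sketch, assuming the annotation key is `tekton.dev/ready` and using a placeholder pod name:

```bash
# Empty output while sidecars are still starting; "READY" once steps are allowed to begin
kubectl get pod <taskrun-pod-name> \
  -o jsonpath="{.metadata.annotations['tekton\.dev/ready']}"
```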
On completion of all steps in a Task the TaskRun reconciler stops any sidecar
containers. The `Image` field of any sidecar containers is swapped to the nop
image. Kubernetes observes the change and relaunches the container with updated
container image. The nop container image exits immediately _because it does not
provide the command that the sidecar is configured to run_. The container is
considered `Terminated` by Kubernetes and the TaskRun's Pod stops.
There are known issues with the existing implementation of sidecars:
- When the `nop` image does provide the sidecar's command, the sidecar will
continue to run even after `nop` has been swapped into the sidecar container's
image field. See
[the issue tracking this bug](https://github.com/tektoncd/pipeline/issues/1347).
Until this issue is resolved, the best way to avoid it is to avoid overriding
the `nop` image when deploying the Tekton controller, or to ensure that the
overridden `nop` image contains as few commands as possible.
- `kubectl get pods` will show a Completed pod when a sidecar exits successfully
but an Error when the sidecar exits with an error. This is only apparent when
using `kubectl` to get the pods of a TaskRun, not when describing the Pod
using `kubectl describe pod ...` nor when looking at the TaskRun, but can be
quite confusing.
## Breakpoint on Failure
Halting a TaskRun execution on Failure of a step.
### Failure of a Step
The entrypoint binary is used to manage the lifecycle of a step. Steps are ordered beforehand by the TaskRun controller
so that each step runs in a particular order. This is done using the `-wait_file` and `-post_file` flags. The former
lets the entrypoint binary know that it has to wait on the creation of a particular file before starting execution of the step.
The latter provides the step number and signals the next step on completion of the step.
On success of a step, the `-post_file` is written as is, signalling the next step (which has the same argument given
for `-wait_file`) to resume the entrypoint process and move ahead with the step.
On failure of a step, the `-post_file` is written with `.err` appended, denoting that the previous step has failed with
an error. The subsequent steps are skipped in this case, marking the TaskRun as a failure.
### Halting a Step on failure
The failed step writes `<step-no>.err` to `/tekton/run` and stops running completely. To be able to debug a step we would
need it to keep running (not exit), not skip the next steps, and signal the health of the step. By disabling step skipping,
stopping the write of the `<step-no>.err` file, and waiting on a signal from the user to disable the halt, we would be simulating a
"breakpoint".
In this breakpoint, which is essentially a limbo state the TaskRun finds itself in, the user can interact with the step
environment using a CLI or an IDE.
### Exiting onfailure breakpoint
To exit a step which has been paused upon failure, the step would wait on a file similar to `<step-no>.breakpointexit` which
would unpause and exit the step container. For example: Step 0 fails and is paused. Writing `0.breakpointexit` in `/tekton/run`
would unpause and exit the step container.
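Following the description above, a sketch of releasing the breakpoint for step 0 from a shell inside the halted container (in practice the debug scripts under `/tekton/debug` wrap this):

```bash
# Create the breakpoint-exit file for step 0, as described above
touch /tekton/run/0.breakpointexit
```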
## Breakpoint before step
The TaskRun will wait for user debugging before the step executes.
### Halting a Step before execution
The step program is executed after all the `-wait_file` monitoring ends. If the user should enter debugging before the step is executed,
the `debug_before_step` parameter needs to be passed to the `entrypoint`,
and `entrypoint` will pause after the `wait_file` monitoring ends,
waiting for the `/tekton/run/0/out.beforestepexit` file.
### Exiting before step breakpoint
`entrypoint` listens for `/tekton/run/<step-no>/out.beforestepexit` or `/tekton/run/<step-no>/out.beforestepexit.err` to
decide whether to proceed with this step: `out.beforestepexit` means continue with the step,
`out.beforestepexit.err` means do not continue with the step.