### Sample Payload

```json
{
  "data": {
    "type": "tasks",
    "attributes": {
      "name": "example",
      "url": "http://example.com",
      "description": "Simple description",
      "hmac_key": "secret",
      "enabled": "true",
      "category": "task",
      "global-configuration": {
        "enabled": true,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    },
    "relationships": {
      "agent-pool": {
        "data": {
          "id": "apool-yoGUFz5zcRMMz53i",
          "type": "agent-pools"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/tasks
```

### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "my-run-task",
      "url": "http://example.com",
      "description": "Simple description",
      "enabled": "true",
      "hmac-key": null,
      "global-configuration": {
        "enabled": true,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    },
    "relationships": {
      "organization": {
        "data": { "id": "hashicorp", "type": "organizations" }
      },
      "tasks": { "data": [] },
      "agent-pool": {
        "data": { "id": "apool-yoGUFz5zcRMMz53i", "type": "agent-pools" }
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## List Run Tasks

`GET /organizations/:organization_name/tasks`

| Parameter | Description |
| -------------------- | ----------------------------------- |
| `:organization_name` | The organization to list tasks for. |

| Status | Response | Reason |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters).
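Most HTTP client libraries percent-encode the bracketed page parameters for you. A minimal sketch in Python (organization name and page values are illustrative):

```python
from urllib.parse import urlencode

# Build a paginated "List Run Tasks" URL; urlencode percent-encodes the
# brackets in page[number] and page[size] automatically.
base = "https://app.terraform.io/api/v2/organizations/my-organization/tasks"
params = {"page[number]": 2, "page[size]": 50}
url = f"{base}?{urlencode(params)}"
print(url)  # the query string contains %5B and %5D instead of [ and ]
```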
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter | Description |
| -------------- | ----------- |
| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 run tasks per page. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/my-organization/tasks
```

### Sample Response

```json
{
  "data": [
    {
      "id": "task-7oD7doVTQdAFnMLV",
      "type": "tasks",
      "attributes": {
        "category": "task",
        "name": "my-task",
        "url": "http://example.com",
        "description": "Simple description",
        "enabled": "true",
        "hmac-key": null,
        "global-configuration": {
          "enabled": true,
          "stages": ["pre_plan"],
          "enforcement-level": "mandatory"
        }
      },
      "relationships": {
        "organization": {
          "data": { "id": "hashicorp", "type": "organizations" }
        },
        "tasks": { "data": [] }
      },
      "links": {
        "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 1
    }
  }
}
```

## Show a Run Task

`GET /tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the task to show. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |

| Status | Response | Reason |
| ------- | --------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Run task not found or user unauthorized to perform action |

| Parameter | Description |
| --------- | ----------- |
| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |

### Sample Request

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```
### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "my-task",
      "url": "http://example.com",
      "description": "Simple description",
      "enabled": "true",
      "hmac-key": null
    },
    "relationships": {
      "organization": {
        "data": { "id": "hashicorp", "type": "organizations" }
      },
      "tasks": {
        "data": [
          { "id": "task-xjKZw9KaeXda61az", "type": "tasks" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## Update a Run Task

`PATCH /tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the task to update. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |

| Status | Response | Reason |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required unless otherwise specified.

| Key path | Type | Default | Description |
| -------- | ---- | ------- | ----------- |
| `data.type` | string | | Must be `"tasks"`. |
| `data.attributes.name` | string | (previous value) | The name of the run task. Can include letters, numbers, `-`, and `_`. |
| `data.attributes.url` | string | (previous value) | URL to send a run task payload. |
| `data.attributes.description` | string | | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters. |
| `data.attributes.category` | string | (previous value) | Must be `"task"`. |
| `data.attributes.hmac-key` | string | (previous value) | (Optional) HMAC key to verify run task. |
| `data.attributes.enabled` | bool | (previous value) | (Optional) Whether the task will be run. |
| `data.attributes.global-configuration.enabled` | bool | (previous value) | (Optional) Whether the task will be associated on all workspaces. |
| `data.attributes.global-configuration.stages` | array | (previous value) | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.global-configuration.enforcement-level` | string | (previous value) | (Optional) The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |

### Sample Payload

```json
{
  "data": {
    "type": "tasks",
    "attributes": {
      "name": "new-example",
      "url": "http://new-example.com",
      "description": "New description",
      "hmac_key": "new-secret",
      "enabled": "false",
      "category": "task",
      "global-configuration": {
        "enabled": false
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "new-example",
      "url": "http://new-example.com",
      "description": "New description",
      "enabled": "false",
      "hmac-key": null,
      "global-configuration": {
        "enabled": false,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    },
    "relationships": {
      "organization": {
        "data": { "id": "hashicorp", "type": "organizations" }
      },
      "tasks": {
        "data": [
          { "id": "wstask-xjKZw9KaeXda61az", "type": "workspace-tasks" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## Delete a Run Task

`DELETE /tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the run task to delete. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |
| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------- |
| [204][] | No Content | Successfully deleted the run task |
| [404][] | [JSON API error object][] | Run task not found, or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

## Associate a Run Task to a Workspace

`POST /workspaces/:workspace_id/tasks`

| Parameter | Description |
| --------------- | ------------------------ |
| `:workspace_id` | The ID of the workspace. |

This endpoint associates an existing run task to a specific workspace. This involves setting the run task enforcement level, which determines whether the run task blocks runs from completing.

- Advisory run tasks cannot block a run from completing. If the task fails, the run proceeds with a warning.
- Mandatory run tasks block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run stops with an error.

You may also configure the run task to begin during specific [run stages](/terraform/cloud-docs/workspaces/run/states). Run tasks use the [Post-Plan Stage](/terraform/cloud-docs/workspaces/run/states#the-post-plan-stage) by default.
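The enforcement level and stages described above map directly onto the request body. A sketch of building that payload in Python (the helper name and validation are ours, not part of any SDK):

```python
import json

def workspace_task_payload(task_id, enforcement_level="advisory", stages=("post_plan",)):
    # Illustrative helper: builds the JSON:API payload for associating a
    # run task with a workspace. "mandatory" tasks block runs on failure;
    # "advisory" tasks only warn.
    if enforcement_level not in ("advisory", "mandatory"):
        raise ValueError("enforcement_level must be 'advisory' or 'mandatory'")
    allowed = {"pre_plan", "post_plan", "pre_apply", "post_apply"}
    if not set(stages) <= allowed:
        raise ValueError(f"stages must be a subset of {sorted(allowed)}")
    return {
        "data": {
            "type": "workspace-tasks",
            "attributes": {
                "enforcement-level": enforcement_level,
                "stages": list(stages),
            },
            "relationships": {
                "task": {"data": {"id": task_id, "type": "tasks"}},
            },
        }
    }

payload = workspace_task_payload("task-7oD7doVTQdAFnMLV", "mandatory", ["pre_apply"])
print(json.dumps(payload, indent=2))
```

The resulting JSON can be written to `payload.json` and sent with the curl command shown in the sample request below.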
| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------------------- |
| [204][] | No Content | The request was successful |
| [404][] | [JSON API error object][] | Workspace or run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path | Type | Default | Description |
| -------- | ---- | ------- | ----------- |
| `data.type` | string | | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string | | The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage` | string | `"post_plan"` | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.stages` | array | `["post_plan"]` | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.relationships.task.data.id` | string | | The ID of the run task. |
| `data.relationships.task.data.type` | string | | Must be `"tasks"`. |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-7oD7doVTQdAFnMLV",
          "type": "tasks"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": { "id": "task-7oD7doVTQdAFnMLV", "type": "tasks" }
      },
      "workspace": {
        "data": { "id": "ws-PphL7ix3yGasYGrq", "type": "workspaces" }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## List Workspace Run Tasks

`GET /workspaces/:workspace_id/tasks`

| Parameter | Description |
| --------------- | -------------------------------- |
| `:workspace_id` | The workspace to list tasks for. |
| Status | Response | Reason |
| ------- | --------------------------------------- | ----------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter | Description |
| -------------- | --------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 run tasks per page. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks
```

### Sample Response

```json
{
  "data": [
    {
      "id": "wstask-tBXYu8GVAFBpcmPm",
      "type": "workspace-tasks",
      "attributes": {
        "enforcement-level": "advisory",
        "stage": "post_plan",
        "stages": ["post_plan"]
      },
      "relationships": {
        "task": {
          "data": { "id": "task-hu74ST39g566Q4m5", "type": "tasks" }
        },
        "workspace": {
          "data": { "id": "ws-kRsDRPtTmtcEme4t", "type": "workspaces" }
        }
      },
      "links": {
        "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 1
    }
  }
}
```

## Show Workspace Run Task

`GET /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the workspace task to show. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |

| Status | Response | Reason |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace run task not found or user unauthorized to perform action |

### Sample Request

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": { "id": "task-hu74ST39g566Q4m5", "type": "tasks" }
      },
      "workspace": {
        "data": { "id": "ws-kRsDRPtTmtcEme4t", "type": "workspaces" }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Update Workspace Run Task

`PATCH /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the task to update. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |

| Status | Response | Reason |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful |
| [404][] | [JSON API error object][] | Workspace run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path | Type | Default | Description |
| -------- | ---- | ------- | ----------- |
| `data.type` | string | (previous value) | Must be `"workspace-tasks"`. |
| `data.attributes.enforcement-level` | string | (previous value) | The enforcement level of the workspace run task. Must be `"advisory"` or `"mandatory"`. |
| `data.attributes.stage` | string | (previous value) | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"` or `"post_plan"`. |
| `data.attributes.stages` | array | (previous value) | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stages": ["post_plan"]
    }
  }
}
```

#### Deprecated Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": { "id": "task-hu74ST39g566Q4m5", "type": "tasks" }
      },
      "workspace": {
        "data": { "id": "ws-kRsDRPtTmtcEme4t", "type": "workspaces" }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Delete Workspace Run Task

`DELETE /workspaces/:workspace_id/tasks/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id` | The ID of the Workspace run task to delete. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |

| Status | Response | Reason |
| ------- | ------------------------- | -------------------------------------------------------------------- |
| [204][] | No Content | Successfully deleted the workspace run task |
| [404][] | [JSON API error object][] | Workspace run task not found, or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```
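On the receiving side, a run task endpoint configured with an `hmac-key` can authenticate incoming payloads. A minimal verification sketch, assuming the request body is signed with HMAC-SHA512 and the hex digest arrives in the `X-TFC-Task-Signature` header (confirm the header name and algorithm against the run task integration reference before relying on this):

```python
import hashlib
import hmac

def verify_run_task_signature(hmac_key: str, body: bytes, signature: str) -> bool:
    # Recompute the HMAC-SHA512 hex digest of the raw request body with the
    # configured key and compare it to the received signature in constant time.
    expected = hmac.new(hmac_key.encode(), body, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A run task endpoint would call this with the raw (unparsed) request body before trusting any of the payload's contents.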
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

# Key version configuration API reference

@include 'tfc-package-callouts/hyok.mdx'

A hold your own key (HYOK) version represents the specific version of your key management service (KMS) key. A HYOK configuration is associated with a specific key, but that key can have multiple versions. Having multiple key versions lets you rotate and manage the keys in your key management service without creating a new HYOK configuration for that key.

To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## List HYOK customer key versions for an HYOK configuration

`GET /api/v2/hyok-configurations/:hyok_configuration_id/hyok-customer-key-versions`

| Parameter | Description |
| ------------------------- | ---------------------------- |
| `hyok_configuration_id` | The ID of the HYOK configuration to which the HYOK customer key versions belong. |

| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------------------------- |
| [200][] | [JSON API document][] | Successfully fetched HYOK customer key version. |
| [404][] | [JSON API error object][] | HYOK customer key version not found, or user unauthorized to perform action. |

### Query parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter | Description |
| -------------- | --------------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 HYOK configurations per page. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/hyok-configurations/:hyok_configuration_id/hyok-customer-key-versions
```

### Sample response

```json
{
  "data": {
    "id": "keyv-emT4uwmo1B8B81Jt",
    "type": "hyok-customer-key-versions",
    "attributes": {
      "key-version": "3",
      "created-at": "2025-04-30T16:09:52.399Z",
      "status": "available",
      "error": null,
      "workspaces-secured": 0
    },
    "relationships": {
      "hyok-configuration": {
        "data": { "id": "hyokc-B4ae7Tzsy52QzMTA", "type": "hyok-configurations" }
      }
    }
  }
}
```

## Show HYOK customer key versions

`GET /api/v2/hyok-customer-key-versions/:id`

| Parameter | Description |
| --------- | ---------------------------------------- |
| `id` | The ID of the HYOK customer key version. |

| Status | Response | Reason |
| ------- | ------------------------- | ---------------------------------------------------------------------------- |
| [200][] | [JSON API document][] | Successfully fetched HYOK customer key version. |
| [404][] | [JSON API error object][] | HYOK customer key version not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/hyok-customer-key-versions/:id
```

## Response body

The endpoint will return a JSON object with the following properties.

| Key path | Type | Description |
| -------- | ---- | ----------- |
| `data.attributes.key-version` | string | The key version |
| `data.attributes.created-at` | string | Creation timestamp |
| `data.attributes.status` | string | Relevant status for this HYOK customer key version. Possible values: `available`, `revoking`, `revoked`, `revocation_failed` |
| `data.attributes.error` | string | A supplementary error message that provides additional context when the `status` is `revocation_failed`. |
| `data.attributes.workspaces-secured` | integer | The number of HCP Terraform workspaces that currently have Data Encryption Keys (DEKs) secured with this HYOK customer key version |
| `data.relationships.hyok-configuration.data.id` | string | The ID of the HYOK configuration that this HYOK customer key version belongs to. |

### Sample response

```json
{
  "data": {
    "id": "keyv-emT4uwmo1B8B81Jt",
    "type": "hyok-customer-key-versions",
    "attributes": {
      "key-version": "3",
      "created-at": "2025-04-30T16:09:52.399Z",
      "status": "available",
      "error": null,
      "workspaces-secured": 0
    },
    "relationships": {
      "hyok-configuration": {
        "data": { "id": "hyokc-B4ae7Tzsy52QzMTA", "type": "hyok-configurations" }
      }
    }
  }
}
```
```json { "data": { "id": "keyv-emT4uwmo1B8B81Jt", "type": "hyok-customer-key-versions", "attributes": { "key-version": "3", "created-at": "2025-04-30T16:09:52.399Z", "status": "available", "error": null, "workspaces-secured": 0 }, "relationships": { "hyok-configuration": { "data": { "id": "hyokc-B4ae7Tzsy52QzMTA", "type": "hyok-configurations" } } } } } ``` ## Revoke HYOK customer key versions `POST /api/v2/hyok-customer-key-versions/:hyok\_customer\_key\_version\_id/actions/revoke` | Parameter | Description | | ------------------- | ---------------------------- | | `hyok\_customer\_key\_version\_id` | The ID of the HYOK customer key version to revoke. | | Status | Response | Reason | |---------|---------------------------|-------------------------------------------------------------------| | [202][] | | Successfully triggered the revocation of the HYOK customer key version. | | [404][] | [JSON API error object][] | HYOK customer key version not found, or user unauthorized to perform action. | | [422][] | [JSON API error object][] | No agent running, malformed request body (missing attributes, wrong types, etc.). | ### Sample request ```shell curl \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/vnd.api+json" \ --request POST \ --data @payload.json \ https://app.terraform.io/api/v2/hyok-customer-key-versions/:hyok\_configuration\_id/actions/revoke ``` ### Sample payload ```json {} ``` ### Sample response ```text HTTP/1.1 202 Accepted Content-Length: 0 ``` ## Check for New HYOK customer key versions `GET /api/v2/hyok-configurations/:hyok\_configuration\_id/key-versions?refresh` | Parameter | Description | | ------------------- | ---------------------------- | | `hyok\_configuration\_id` | The ID of the HYOK configuration. 
### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  "https://app.terraform.io/api/v2/hyok-configurations/:hyok_configuration_id/key-versions?refresh"
```

### Sample response

```json
{ "data": [ { "id": "keyv-bUW9Za7LfRStunTp", "type": "hyok-customer-key-versions", "attributes": { "key-version": "10", "created-at": "2025-05-01T14:28:18.595Z", "status": "available", "error": null, "workspaces-secured": 0 }, "relationships": { "hyok-configuration": { "data": { "id": "hyokc-tPLXkKbURnrVvMFs", "type": "hyok-configurations" } } } } ], "links": { "self": "https://tfcdev-f3054ebe.ngrok.app/api/v2/hyok-configurations/hyokc-tPLXkKbURnrVvMFs/hyok-customer-key-versions?id=hyokc-tPLXkKbURnrVvMFs\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\u0026refresh=true", "first": "https://tfcdev-f3054ebe.ngrok.app/api/v2/hyok-configurations/hyokc-tPLXkKbURnrVvMFs/hyok-customer-key-versions?id=hyokc-tPLXkKbURnrVvMFs\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\u0026refresh=true", "prev": null, "next": null, "last": "https://tfcdev-f3054ebe.ngrok.app/api/v2/hyok-configurations/hyokc-tPLXkKbURnrVvMFs/hyok-customer-key-versions?id=hyokc-tPLXkKbURnrVvMFs\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\u0026refresh=true" }, "meta": { "pagination": { "current-page": 1, "page-size": 20, "prev-page": null, "next-page": null, "total-pages": 1, "total-count": 1 } } }
```
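The pagination links in the response above percent-encode the bracket characters in `page[number]` and `page[size]` (`[` becomes `%5B`, `]` becomes `%5D`). A minimal sketch of assembling such a URL by hand when your tooling does not encode for you (the organization name here is hypothetical):

```shell
# Brackets in JSON:API query parameters must be percent-encoded:
# [ -> %5B, ] -> %5D.
base="https://app.terraform.io/api/v2/organizations/my-hyok-org/hyok-configurations"
query="page%5Bnumber%5D=1&page%5Bsize%5D=20"
echo "${base}?${query}"
```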
[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

# Hold your own key configuration API reference

@include 'tfc-package-callouts/hyok.mdx'

Hold your own key (HYOK) lets you authenticate a key management system with HCP Terraform to encrypt workspace state and plan data with a key that you provide and control.

HCP Terraform lets you configure multiple HYOK configurations for an organization. Each HYOK configuration corresponds to a single key in a KMS. Additional HYOK configurations can point to different keys in the same KMS, or to keys in other KMSs.

A HYOK configuration specifies the following in HCP Terraform:

- How to authenticate to your KMS using OIDC
- Which key from your KMS to use
- What name to use to identify this configuration within HCP Terraform

To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).
## Attributes

| Attribute | Description |
| --------- | ----------- |
| `name` | Label for the HYOK configuration to be used within HCP Terraform. |
| `kek-id` | Refers to the name of your key encryption key stored in your key management service. |
| `kms-options` | Optional object used to specify additional fields for some key management services. Fields include `key_region`, `key_location`, and `key_ring_id`. See the table below for more details. |
| `primary` | Boolean flag that indicates whether this HYOK configuration should be used for HYOK encryption of Terraform artifacts. |
| `status` | Indicates a HYOK configuration's status. This status can be `active`, `testing`, `test_failed`, `available`, `revoking`, or `revoked`. See the table below for more details. |
| `error` | A supplementary error message that provides additional context when the `status` is `errored` or `test_failed`. |

### HYOK configuration status

The HYOK configuration status is found in `data.attributes.status`. The following list describes the possible states of a HYOK configuration.

| State | Description |
| ----- | ----------- |
| `available` | Indicates that the HYOK configuration has been tested and is ready to be used for encryption. |
| `testing` | Indicates that the HYOK configuration is currently testing whether your key encryption key can be used to encrypt and decrypt data. |
| `test_failed` | Indicates that HCP Terraform could not fetch your key encryption key and use it to encrypt and decrypt data. |
| `revoked` | Indicates that the HYOK configuration was successfully revoked. If you wish to stop using a key, you may revoke it within HCP Terraform. |
| `revoking` | Indicates that the HYOK configuration is in the process of being revoked. |

### KMS options

KMS options are additional options found in `data.attributes.kms-options`. The following list describes the possible KMS options.

| Option | Description |
| ------ | ----------- |
| `key_region` | The AWS region where your key is located. |
| `key_location` | The Google Cloud location where your key ring exists. |
| `key_ring_id` | The ID of your key ring. A key ring is the root resource for Google Cloud KMS keys and key versions. Each key ring exists within a given location. |
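As an illustration of the table above, an AWS-hosted key would set only `key_region`, while a Google Cloud key would set `key_location` and `key_ring_id`. The values in this sketch are hypothetical:

```json
{
  "kms-options": {
    "key_location": "us-central1",
    "key_ring_id": "my-key-ring"
  }
}
```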
## Create a HYOK configuration

`POST /organizations/:organization_name/hyok-configurations`

| Parameter | Description |
| --------- | ----------- |
| `organization_name` | The name of the organization. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [201][] | [JSON API document][] | Successfully created a HYOK configuration. |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [412][] | [JSON API error object][] | Precondition failed; check the error object for more information. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path | Type | Default | Description |
| -------- | ---- | ------- | ----------- |
| `data.type` | string | | Must be `"hyok-configurations"`. |
| `data.attributes.name` | string | | The name for your HYOK configuration. |
| `data.attributes.primary` | boolean | | Whether this configuration is the primary configuration to be used for HYOK encryption. Note: only one HYOK configuration can be the primary. |
| `data.attributes.kek-id` | string | | The name or identifier of your key stored in your key management service. |
| `data.relationships.organization.data.id` | string | (nothing) | The ID of the organization to associate with the HYOK configuration. |
| `data.relationships.agent-pool.data.id` | string | (nothing) | The ID of the agent pool to associate with the HYOK configuration. |
| `data.relationships.oidc-configuration.data.id` | string | (nothing) | The ID of the OIDC configuration to associate with the HYOK configuration. |

### Sample payload

```json
{ "data": { "attributes": { "name": "my-key-name", "primary": false, "kek-id": "key1", "status": null, "error": null, "kms-options": {} }, "relationships": { "organization": { "data": { "type": "organizations", "id": "my-hyok-org" } }, "agent-pool": { "data": { "type": "agent-pools", "id": "apool-MFtsuFxHkC9pCRgB" } }, "oidc-configuration": { "data": { "type": "vault-oidc-configurations", "id": "voidc-dMuFy6AKrawNVBJY" } } }, "type": "hyok-configurations" } }
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-hyok-org/hyok-configurations
```

### Sample response

```json
{ "data": { "id": "hyokc-L4CxAJEEn8vEUEkj", "type": "hyok-configurations", "attributes": { "kek-id": "key1", "kms-options": {}, "name": "my-key-name", "primary": false, "status": "untested", "error": null }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } }, "oidc-configuration": { "data": { "id": "voidc-dMuFy6AKrawNVBJY", "type": "vault-oidc-configurations" } }, "agent-pool": { "data": { "id": "apool-MFtsuFxHkC9pCRgB", "type": "agent-pools" } }, "hyok-customer-key-versions": { "data": [] } } } }
```

## List HYOK configurations for an organization

`GET /api/v2/organizations/:organization_name/hyok-configurations`
| Parameter | Description |
| --------- | ----------- |
| `organization_name` | The name of the organization. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [200][] | [JSON API document][] | Successfully fetched HYOK configurations for the specified organization. |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [412][] | [JSON API error object][] | Precondition failed; check the error object for more information. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |
### Query parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter | Description |
| --------- | ----------- |
| `filter[organization][name]` | **Required.** The name of the organization that owns the desired workspace. |
| `page[number]` | **Optional.** If omitted, the endpoint returns the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint returns 20 HYOK configurations per page. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/organizations/:organization_name/hyok-configurations
```

### Sample response

```json
{ "data": [ { "id": "hyokc-B4ae7Tzsy52QzMTA", "type": "hyok-configurations", "attributes": { "kek-id": "tf-rocket-hyok-oasis", "kms-options": {}, "name": "tf-rocket-hyok-oasis", "primary": true, "status": "test_failed", "error": "Error forwarding request:: Unexpected response from broker: Status code: 424, Message: Failed Dependency, Organization ID: org-y1xsQ1LBbqpDHiMv, Agent Pool ID: apool-MFtsuFxHkC9pCRgB, Organization ID: org-y1xsQ1LBbqpDHiMv, Agent Pool ID: apool-MFtsuFxHkC9pCRgB" }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } }, "oidc-configuration": { "data": { "id": "voidc-dMDwrVdzxbWZcjPf", "type": "vault-oidc-configurations" } }, "agent-pool": { "data": { "id": "apool-MFtsuFxHkC9pCRgB", "type": "agent-pools" } }, "hyok-customer-key-versions": { "data": [ { "id": "keyv-PJJ5biGx2xGmk1ko", "type": "hyok-customer-key-versions" }, { "id":
"keyv-mdcFMe2pFHxqTSBq", "type": "hyok-customer-key-versions" }, { "id": "keyv-emT4uwmo1B8B81Jt", "type": "hyok-customer-key-versions" }, { "id": "keyv-UuRchWcL9fqadq2F", "type": "hyok-customer-key-versions" }, { "id": "keyv-tpY5A8Vm883sp9jZ", "type": "hyok-customer-key-versions" }, { "id": "keyv-TeELFNmCqpaQ5dNa", "type": "hyok-customer-key-versions" }, { "id": "keyv-BeMwZesoPGs3tpNq", "type": "hyok-customer-key-versions" }, { "id": "keyv-dLG62jSsAq6o4f8V", "type": "hyok-customer-key-versions" }, { "id": "keyv-DjDqHppkyyQ4fgcd", "type": "hyok-customer-key-versions" }, { "id": "keyv-pc9SqwDWFkuaYath", "type": "hyok-customer-key-versions" } ] } } }, { "id": "hyokc-L4CxAJEEn8vEUEkj", "type": "hyok-configurations", "attributes": { "kek-id": "key1", "kms-options": {}, "name": "my-key-name", "primary": false, "status": "test_failed", "error": "No Agents are available for request forwarding: Agent Pool ID: apool-MFtsuFxHkC9pCRgB, Organization ID: org-y1xsQ1LBbqpDHiMv," }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } }, "oidc-configuration": { "data": { "id": "voidc-dMuFy6AKrawNVBJY", "type": "vault-oidc-configurations" } }, "agent-pool": { "data": { "id": "apool-MFtsuFxHkC9pCRgB", "type": "agent-pools" } }, "hyok-customer-key-versions": { "data": [] } } }, { "id": "hyokc-tPLXkKbURnrVvMFs", "type": "hyok-configurations", "attributes": { "kek-id": "tf-rocket-hyok-oasis", "kms-options": {}, "name": "whatever-testing2", "primary": false, "status": "test_failed", "error": "Error on agent: error creating kms client: Put \"https://my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud:8200/v1/auth/jwt/login\": dial tcp: lookup my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud on 127.0.0.11:53: no such host" }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } }, "oidc-configuration": { "data": { "id": "voidc-Tje2BewrPfi5gthY", "type":
"vault-oidc-configurations" } }, "agent-pool": { "data": { "id": "apool-MFtsuFxHkC9pCRgB", "type": "agent-pools" } }, "hyok-customer-key-versions": { "data": [ { "id": "keyv-bUW9Za7LfRStunTp", "type": "hyok-customer-key-versions" } ] } } } ], "links": { "self": "https://tfcdev-f3054ebe.ngrok.app/api/v2/organizations/my-hyok-org/hyok-configurations?organization_name=my-hyok-org\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20", "first": "https://tfcdev-f3054ebe.ngrok.app/api/v2/organizations/my-hyok-org/hyok-configurations?organization_name=my-hyok-org\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20", "prev": null, "next": null, "last": "https://tfcdev-f3054ebe.ngrok.app/api/v2/organizations/my-hyok-org/hyok-configurations?organization_name=my-hyok-org\u0026page%5Bnumber%5D=1\u0026page%5Bsize%5D=20" }, "meta": { "pagination": { "current-page": 1, "page-size": 20, "prev-page": null, "next-page": null, "total-pages": 1, "total-count": 15 } } }
```

## Show HYOK configuration

`GET /api/v2/hyok-configurations/:id`

| Parameter | Description |
| --------- | ----------- |
| `id` | The external ID of the HYOK configuration. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [200][] | [JSON API document][] | Successfully fetched HYOK configuration. |
| [404][] | [JSON API error object][] | HYOK configuration not found, or user unauthorized to perform action. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |
### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/hyok-configurations/:id
```

### Sample response

```json
{ "data": { "id": "hyokc-WvptL89g24ixsaVk", "type": "hyok-configurations", "attributes": { "kek-id": "tf-rocket-hyok-oasis", "kms-options": {}, "name": "testingbrother", "primary": false, "status": "untested", "error": null }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" }
}, "oidc-configuration": { "data": { "id": "voidc-QPuDM2pjRRr3yYkh", "type": "vault-oidc-configurations" } }, "agent-pool": { "data": { "id": "apool-MFtsuFxHkC9pCRgB", "type": "agent-pools" } }, "hyok-customer-key-versions": { "data": [] } } } }
```

## Revoke HYOK configuration

`POST /api/v2/hyok-customer-key-versions/:id/actions/revoke`

| Parameter | Description |
| --------- | ----------- |
| `id` | The external ID of the hyok-customer-key-version. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [202][] | [No content][] | Successfully revoked a HYOK configuration. |
| [404][] | [JSON API error object][] | HYOK configuration not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [422][] | [JSON API error object][] | No agent running, or malformed request body (missing attributes, wrong types, etc.). |
### Sample payload

```json
{}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/hyok-customer-key-versions/:id/actions/revoke
```

### Sample response

```text
HTTP/1.1 202 Accepted
Content-Length: 0
```

## Delete HYOK configuration

`DELETE /api/v2/hyok-configurations/:id`

| Parameter | Description |
| --------- | ----------- |
| `id` | The external ID of the HYOK configuration. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [204][] | [No content][] | Successfully deleted a HYOK configuration. |
| [404][] | [JSON API error object][] | HYOK configuration not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/hyok-configurations/:id
```

### Sample response

```text
HTTP/1.1 204 No Content
Content-Length: 0
```

## Test persisted HYOK configuration

`POST /api/v2/hyok-configurations/:id/actions/test`

| Parameter | Description |
| --------- | ----------- |
| `id` | The external ID of the HYOK configuration. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [204][] | [No content][] | Successfully tested a HYOK configuration. |
| [404][] | [JSON API error object][] | HYOK configuration not found, or user unauthorized to perform action. |
| [422][] | [JSON API error object][] | No agent running, or malformed request body (missing attributes, wrong types, etc.). |
### Sample payload

```json
{}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/hyok-configurations/:id/actions/test
```

### Sample response

```text
HTTP/1.1 204 No Content
Content-Length: 0
```

## Test unpersisted HYOK configuration

`POST /api/v2/organizations/:organization_name/hyok-configurations/test`

| Parameter | Description |
| --------- | ----------- |
| `organization_name` | The name of the organization. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [204][] | [No content][] | Successfully tested a HYOK configuration. |
| [404][] | [JSON API error object][] | HYOK configuration not found, or user unauthorized to perform action. |
| [422][] | [JSON API error object][] | No agent running, or malformed request body (missing attributes, wrong types, etc.). |

### Sample payload

```json
{ "hyok-configuration": { "data": { "attributes": { "name": "hyok-config-name", "primary": false, "kek-id": "keyname1", "status": null, "error": null, "kms-options": {} }, "relationships": { "organization": { "data": { "type": "organizations", "id": "my-hyok-org" } }, "agent-pool": { "data": { "type": "agent-pools", "id": "apool-MFtsuFxHkC9pCRgB" } } }, "type": "hyok-configurations" } }, "oidc-configuration": { "data": { "attributes": { "address": "https://my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud:8200", "role": "tf-rocket-hyok-role-oasis", "namespace": "admin", "auth-path": "jwt", "encoded-cacert": null }, "relationships": { "organization": {
"data": { "type": "organizations", "id": "my-hyok-org" } } }, "type": "vault-oidc-configurations" } } }
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/:organization_name/hyok-configurations/test
```

### Sample response

```text
HTTP/1.1 204 No Content
Content-Length: 0
```
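In practice the two-part payload for the unpersisted test endpoint is written to `payload.json` before the POST. A minimal sketch with placeholder names and a hypothetical Vault address (not the values from the sample above):

```shell
# Write the combined test payload: a hyok-configuration object plus the
# oidc-configuration it should be tested with. All values are placeholders.
cat > payload.json <<'EOF'
{
  "hyok-configuration": { "data": { "type": "hyok-configurations", "attributes": { "name": "hyok-config-name", "primary": false, "kek-id": "keyname1", "kms-options": {} } } },
  "oidc-configuration": { "data": { "type": "vault-oidc-configurations", "attributes": { "address": "https://vault.example.com:8200", "role": "my-role", "auth-path": "jwt" } } }
}
EOF
# Both top-level objects must be present; count the lines carrying a "data" key.
grep -c '"data"' payload.json
```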
# Encrypted data key API reference

@include 'tfc-package-callouts/hyok.mdx'

A hold your own key (HYOK) encrypted data key is the key that your key management service (KMS) encrypts. HCP Terraform uses this key to encrypt your Terraform artifacts, such as state and plan files. Your KMS key remains under your control and outside of HCP Terraform networks.

To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## Show HYOK encrypted data key

`GET /api/v2/hyok-encrypted-data-keys/:id`

| Parameter | Description |
| --------- | ----------- |
| `id` | The ID of the HYOK encrypted data key. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [200][] | [JSON API document][] | Successfully fetched HYOK encrypted data key. |
| [404][] | [JSON API error object][] | HYOK customer key version not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/hyok-encrypted-data-keys/:id
```

## Response body

The endpoint returns a JSON object with the following properties.

| Key path | Type | Description |
| -------- | ---- | ----------- |
| `data.attributes.key-version` | string | The key version of your key management service (KMS) key. |
| `data.attributes.created-at` | string | Creation timestamp. |
| `data.attributes.encrypted-dek` | string | The encrypted data encryption key (DEK) is the key used to HYOK-encrypt your HCP Terraform artifacts. This DEK is encrypted using your KMS key encryption key. |
| `data.attributes.customer-key-name` | string | Refers to the HYOK configuration name. |
| `data.relationships.hyok-customer-key-version.data.id` | string | The ID of the HYOK customer key version that this HYOK encrypted data key belongs to. |

### Sample response

```json
{ "data": { "id": "dek-M8KCQM8pjAZKmpmW", "type": "hyok-encrypted-data-keys", "attributes": { "encrypted-dek": "dmF1bHQ6djE6NWpSdGhpRmUwRzFGRDhzZnlUeGcyaVBoVW0rVXJXMDBJblJVNjQ3aEpzeU5KMXF1RkV2T3FWYmJTTDF0SFRJbGdySFk4WkJ3dzJKcjVHNXQ=", "created-at": "2025-04-28T15:39:32.157Z", "customer-key-name": "tf-rocket-hyok-oasis" }, "relationships": { "hyok-customer-key-version": { "data": { "id": "keyv-PJJ5biGx2xGmk1ko", "type": "hyok-customer-key-versions" } } } } }
```
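The `encrypted-dek` value is base64-encoded ciphertext. Decoding the sample value above reveals a Vault-style envelope prefix (`vault:v1:`), since this particular sample was produced by a Vault-backed KMS; other key management services wrap the DEK differently. The string below is the sample from the response, not a real key:

```shell
# Decode the sample encrypted-dek and show the leading envelope marker.
dek="dmF1bHQ6djE6NWpSdGhpRmUwRzFGRDhzZnlUeGcyaVBoVW0rVXJXMDBJblJVNjQ3aEpzeU5KMXF1RkV2T3FWYmJTTDF0SFRJbGdySFk4WkJ3dzJKcjVHNXQ="
echo "$dek" | base64 -d | cut -c1-9
```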
# GCP OIDC configuration API reference

@include 'tfc-package-callouts/hyok.mdx'

A GCP OIDC configuration is the model that lets you configure how hold your own key (HYOK) in HCP Terraform connects to your Google Cloud Platform (GCP) keys. To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## Create OIDC configuration

`POST /api/v2/organizations/:organization_id/oidc-configurations`

| Parameter | Description |
| --------- | ----------- |
| `:organization_id` | The ID of your organization. |

| Status | Response | Reason |
| ------- | -------- | ------ |
| [201][] | [JSON API document][] | Successfully created OIDC configuration. |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |
| ### Request body This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required. | Key path | Type | Default | Description | | -------------------------------- | ------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `data.type` | string | | Must be `"gcp-oidc-configurations"`. | | `data.attributes.service-account-email` | string | | The email of your GCP service account, with permissions to encrypt and decrypt using a Cloud KMS key. | | `data.attributes.workload-provider-name` | string | | The fully qualified workload provider path. This should be in the format `projects/{project\_number}/locations/global/workloadIdentityPools/{workload\_identity\_pool\_id}/providers/{workload\_identity\_pool\_provider\_id}`| | `data.attributes.project-number` | string | | The GCP Project containing the workload provider and service account. 
| ### Sample payload ```json { "data": { "attributes": { "service-account-email": "myemail@gmail.com", "workload-provider-name": "projects/1/locations/global/workloadIdentityPools/1/providers/1", "project-number": "11111111" }, "type": "gcp-oidc-configurations" } } ``` ### Sample request ```shell curl \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/vnd.api+json" \ --data @payload.json \ --request POST \ https://app.terraform.io/api/v2/organizations/:organization\_id/oidc-configurations ``` ### Sample response ```json { "data": { "id": "gcpoidc-9yys1NaZSJXshnVf", "type": "gcp-oidc-configurations", "attributes": { "type": "GcpOidcConfiguration", "service-account-email": "myemail@gmail.com", "workload-provider-name": "projects/1/locations/global/workloadIdentityPools/1/providers/1", "project-number": "11111111" }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } } }, "links": { "self": "/api/v2/oidc-configurations/gcpoidc-9yys1NaZSJXshnVf" } } } ``` ## Show OIDC configuration `GET /api/v2/oidc-configurations/:id` | Parameter | Description | | ------------------- | ---------------------------- | | `id` | The ID of the OIDC configuration. | | Status | Response | Reason | |---------|---------------------------|-------------------------------------------------------------------| | [200][] | [JSON API document][] | Successfully fetched OIDC configuration. | | [404][] | [JSON API error object][] | OIDC configuration not found, or user unauthorized to perform action. 
| ### Sample request ```shell curl \ --header "Authorization: Bearer $TOKEN" \ --header "Content-Type: application/vnd.api+json" \ --request GET \ https://app.terraform.io/api/v2/oidc-configurations/:id ``` ### Sample response ```json { "data": { "id": "gcpoidc-9yys1NaZSJXshnVf", "type": "gcp-oidc-configurations", "attributes": { "type": "GcpOidcConfiguration", "service-account-email": "myemail@gmail.com", "workload-provider-name": "projects/1/locations/global/workloadIdentityPools/1/providers/1", "project-number": "11111111" }, "relationships": { "organization": { "data": { "id": "my-hyok-org", "type": "organizations" } } }, "links": { "self": "/api/v2/oidc-configurations/gcpoidc-9yys1NaZSJXshnVf" } } } ``` | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/api-docs/hold-your-own-key/oidc-configurations/gcp.mdx | main | terraform | [
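The `workload-provider-name` attribute must follow the fully qualified format described in the request body table. As an illustrative sketch (not part of the API), you can sanity-check the value client-side before POSTing to avoid a validation-error round trip; the `WORKLOAD_PROVIDER` value below is a placeholder:

```shell
# Placeholder value; substitute your own project number, pool ID, and provider ID.
WORKLOAD_PROVIDER="projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider"

# Check the value against the documented format:
# projects/{project_number}/locations/global/workloadIdentityPools/{pool_id}/providers/{provider_id}
if printf '%s' "$WORKLOAD_PROVIDER" | \
  grep -Eq '^projects/[0-9]+/locations/global/workloadIdentityPools/[^/]+/providers/[^/]+$'; then
  echo "workload-provider-name format OK"
else
  echo "workload-provider-name is malformed" >&2
  exit 1
fi
```

Note that this only checks the path shape; the API still verifies that the referenced workload identity pool and provider actually exist.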
# Vault OIDC configuration API reference

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/api-docs/hold-your-own-key/oidc-configurations/vault.mdx

@include 'tfc-package-callouts/hyok.mdx'

A Vault OIDC configuration defines how hold your own key (HYOK) in HCP Terraform connects to your HashiCorp Vault keys. To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## Create OIDC configuration

`POST /api/v2/organizations/:organization_id/oidc-configurations`

| Parameter          | Description                  |
| ------------------ | ---------------------------- |
| `:organization_id` | The ID of your organization. |

| Status  | Response                  | Reason                                                          |
| ------- | ------------------------- | --------------------------------------------------------------- |
| [201][] | [JSON API document][]     | Successfully created OIDC configuration.                        |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path                          | Type   | Default | Description |
| --------------------------------- | ------ | ------- | ----------- |
| `data.type`                       | string |         | Must be `"vault-oidc-configurations"`. |
| `data.attributes.address`         | string |         | The full address of your Vault instance. |
| `data.attributes.role`            | string |         | The name of a role in your Vault JWT auth method, with permission to encrypt and decrypt with a Transit secrets engine key. |
| `data.attributes.namespace`       | string |         | The namespace your JWT auth method is mounted in. |
| `data.attributes.auth-path`       | string | `"jwt"` | The mount path of your JWT auth method. Defaults to `"jwt"`. |
| `data.attributes.encoded-cacert`  | string |         | (Optional) A base64-encoded CA certificate used to verify your Vault instance's TLS certificate. Only needed for self-hosted Vault Enterprise instances with a self-signed certificate. |

### Sample payload

```json
{
  "data": {
    "attributes": {
      "address": "https://my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud:8200",
      "role": "vault-role-name",
      "namespace": "admin",
      "auth-path": "jwt-auth-path",
      "encoded-cacert": null
    },
    "type": "vault-oidc-configurations"
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --data @payload.json \
  --request POST \
  https://app.terraform.io/api/v2/organizations/:organization_id/oidc-configurations
```

### Sample response

```json
{
  "data": {
    "id": "voidc-VFmgsjV7WQHqZ8XC",
    "type": "vault-oidc-configurations",
    "attributes": {
      "type": "VaultOidcConfiguration",
      "address": "https://my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud:8200",
      "role": "vault-role-name",
      "namespace": "admin",
      "auth-path": "jwt-auth-path",
      "encoded-cacert": null
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/voidc-VFmgsjV7WQHqZ8XC"
    }
  }
}
```

## Show OIDC configuration

`GET /api/v2/oidc-configurations/:id`

| Parameter | Description                       |
| --------- | --------------------------------- |
| `id`      | The ID of the OIDC configuration. |

| Status  | Response                  | Reason                                                                |
| ------- | ------------------------- | --------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Successfully fetched OIDC configuration.                              |
| [404][] | [JSON API error object][] | OIDC configuration not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/oidc-configurations/:id
```

### Sample response

```json
{
  "data": {
    "id": "voidc-8AVxvZyDs3BqysmB",
    "type": "vault-oidc-configurations",
    "attributes": {
      "type": "VaultOidcConfiguration",
      "address": "https://my-vault-cluster-public-vault-659decf3.b8298d98.z1.hashicorp.cloud:8200",
      "role": "tf-rocket-hyok-role-oasis",
      "namespace": "admin",
      "auth-path": "jwt-path",
      "encoded-cacert": null
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/voidc-8AVxvZyDs3BqysmB"
    }
  }
}
```
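The `encoded-cacert` attribute takes the certificate contents as a base64 string. A minimal sketch of producing that value from a PEM file, assuming a single-line base64 value is expected (the file name and certificate body below are placeholders, not a real certificate):

```shell
# Placeholder certificate for illustration; substitute your Vault CA's real PEM file.
cat > vault-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBplaceholder
-----END CERTIFICATE-----
EOF

# Base64-encode the PEM and strip newlines so the value is a single line
# that can be pasted into the "encoded-cacert" attribute of the payload.
ENCODED_CACERT="$(base64 < vault-ca.pem | tr -d '\n')"
echo "$ENCODED_CACERT"
```

The `tr -d '\n'` keeps the command portable, since some `base64` implementations wrap output at 76 columns by default.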
# AWS OIDC configuration API reference

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/api-docs/hold-your-own-key/oidc-configurations/aws.mdx

@include 'tfc-package-callouts/hyok.mdx'

An AWS OIDC configuration is the model that lets you configure how hold your own key (HYOK) in HCP Terraform connects to your AWS Key Management Service keys. Your AWS OIDC configuration includes the AWS role ARN that HCP Terraform assumes to encrypt a data encryption key. To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## Create OIDC configuration

`POST /api/v2/organizations/:organization_id/oidc-configurations`

| Parameter          | Description                  |
| ------------------ | ---------------------------- |
| `:organization_id` | The ID of your organization. |

| Status  | Response                  | Reason                                                          |
| ------- | ------------------------- | --------------------------------------------------------------- |
| [201][] | [JSON API document][]     | Successfully created OIDC configuration.                        |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path                   | Type   | Default | Description |
| -------------------------- | ------ | ------- | ----------- |
| `data.type`                | string |         | Must be `"aws-oidc-configurations"`. |
| `data.attributes.role-arn` | string |         | The ARN of your AWS role. |

### Sample payload

```json
{
  "data": {
    "attributes": {
      "role-arn": "arn:aws:iam::533267421525:role/hyok-staging"
    },
    "type": "aws-oidc-configurations"
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --data @payload.json \
  --request POST \
  https://app.terraform.io/api/v2/organizations/:organization_id/oidc-configurations
```

### Sample response

```json
{
  "data": {
    "id": "awsoidc-EHHrzudV58S9STr5",
    "type": "aws-oidc-configurations",
    "attributes": {
      "type": "AwsOidcConfiguration",
      "role-arn": "arn:aws:iam::533267421525:role/hyok-staging"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/awsoidc-EHHrzudV58S9STr5"
    }
  }
}
```

## Show OIDC configuration

`GET /api/v2/oidc-configurations/:id`

| Parameter | Description                       |
| --------- | --------------------------------- |
| `id`      | The ID of the OIDC configuration. |

| Status  | Response                  | Reason                                                                |
| ------- | ------------------------- | --------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Successfully fetched OIDC configuration.                              |
| [404][] | [JSON API error object][] | OIDC configuration not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/oidc-configurations/:id
```

### Sample response

```json
{
  "data": {
    "id": "awsoidc-EHHrzudV58S9STr5",
    "type": "aws-oidc-configurations",
    "attributes": {
      "type": "AwsOidcConfiguration",
      "role-arn": "arn:aws:iam::533267421525:role/hyok-staging"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/awsoidc-EHHrzudV58S9STr5"
    }
  }
}
```
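An IAM role ARN has the form `arn:aws:iam::<account-id>:role/<role-name>`, with a 12-digit account ID. As an illustrative sketch (not part of the API), you can sanity-check the `role-arn` value before creating the configuration; the ARN below is the one from the sample payload:

```shell
ROLE_ARN="arn:aws:iam::533267421525:role/hyok-staging"

# IAM role ARNs use a 12-digit account ID followed by the role path/name.
if printf '%s' "$ROLE_ARN" | grep -Eq '^arn:aws:iam::[0-9]{12}:role/.+$'; then
  echo "role-arn format OK"
else
  echo "role-arn is malformed" >&2
  exit 1
fi
```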
# Azure OIDC configuration API reference

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/api-docs/hold-your-own-key/oidc-configurations/azure.mdx

@include 'tfc-package-callouts/hyok.mdx'

An Azure OIDC configuration defines how hold your own key (HYOK) in HCP Terraform connects to your Microsoft Azure keys. To learn more about hold your own key, refer to the [Overview](/terraform/cloud-docs/hold-your-own-key).

## Create OIDC configuration

`POST /api/v2/organizations/:organization_id/oidc-configurations`

| Parameter          | Description                  |
| ------------------ | ---------------------------- |
| `:organization_id` | The ID of your organization. |

| Status  | Response                  | Reason                                                          |
| ------- | ------------------------- | --------------------------------------------------------------- |
| [201][] | [JSON API document][]     | Successfully created OIDC configuration.                        |
| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action. |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

| Key path                          | Type   | Default | Description |
| --------------------------------- | ------ | ------- | ----------- |
| `data.type`                       | string |         | Must be `"azure-oidc-configurations"`. |
| `data.attributes.client-id`       | string |         | The Client (or Application) ID of your Entra ID application. |
| `data.attributes.subscription-id` | string |         | The ID of your Azure subscription. |
| `data.attributes.tenant-id`       | string |         | The Tenant (or Directory) ID of your Entra ID application. |

### Sample payload

```json
{
  "data": {
    "attributes": {
      "client-id": "application-id1",
      "subscription-id": "subscription-id1",
      "tenant-id": "tenant-id1"
    },
    "type": "azure-oidc-configurations"
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --data @payload.json \
  --request POST \
  https://app.terraform.io/api/v2/organizations/:organization_id/oidc-configurations
```

### Sample response

```json
{
  "data": {
    "id": "azoidc-iWNz3taW7aRYiRfF",
    "type": "azure-oidc-configurations",
    "attributes": {
      "type": "AzureOidcConfiguration",
      "client-id": "application-id1",
      "subscription-id": "subscription-id1",
      "tenant-id": "tenant-id1"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/azoidc-iWNz3taW7aRYiRfF"
    }
  }
}
```

## Show OIDC configuration

`GET /api/v2/oidc-configurations/:id`

| Parameter | Description                       |
| --------- | --------------------------------- |
| `id`      | The ID of the OIDC configuration. |

| Status  | Response                  | Reason                                                                |
| ------- | ------------------------- | --------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Successfully fetched OIDC configuration.                              |
| [404][] | [JSON API error object][] | OIDC configuration not found, or user unauthorized to perform action. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/oidc-configurations/:id
```

### Sample response

```json
{
  "data": {
    "id": "azoidc-iWNz3taW7aRYiRfF",
    "type": "azure-oidc-configurations",
    "attributes": {
      "type": "AzureOidcConfiguration",
      "client-id": "application-id1",
      "subscription-id": "subscription-id1",
      "tenant-id": "tenant-id1"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-hyok-org", "type": "organizations" }
      }
    },
    "links": {
      "self": "/api/v2/oidc-configurations/azoidc-iWNz3taW7aRYiRfF"
    }
  }
}
```
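As an illustrative sketch, you can generate the `payload.json` used by the sample request from environment variables; the three IDs below are the placeholder values from the sample payload, not real Azure identifiers:

```shell
# Placeholder identifiers; substitute your Entra ID application and subscription values.
CLIENT_ID="application-id1"
SUBSCRIPTION_ID="subscription-id1"
TENANT_ID="tenant-id1"

# Write the JSON:API request body that the Create endpoint expects.
cat > payload.json <<EOF
{
  "data": {
    "attributes": {
      "client-id": "$CLIENT_ID",
      "subscription-id": "$SUBSCRIPTION_ID",
      "tenant-id": "$TENANT_ID"
    },
    "type": "azure-oidc-configurations"
  }
}
EOF

echo "wrote payload.json"
```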
# Pay-as-you-go

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/overview/activate-payg.mdx

Pay-as-you-go accounts are billed for the resources consumed. You can use pay-as-you-go for the **Essentials**, **Standard**, and **Premium** editions. Pay-as-you-go offers dynamic billing based on monthly consumption. Larger organizations can access lower rates through Flex contracts. Review the [billing model documentation](/hcp/docs/hcp/admin/billing#billing-models) to determine which model is right for your organization.

HCP Europe organizations do not support pay-as-you-go billing.

@include 'eu/billing.mdx'

## Activate pay-as-you-go

To enable pay-as-you-go and change your organization's plan, sign in to [HCP Terraform](https://app.terraform.io/) and select your organization. Then choose **Settings** from the sidebar, then **Plan & Billing**.

To upgrade to the **Essentials**, **Standard**, or **Premium** edition and use pay-as-you-go billing, you must sign in to your HashiCorp Cloud Platform account and [link](/terraform/cloud-docs/users-teams-organizations/users#linking-hcp-and-hcp-terraform-accounts) your HCP Terraform account to your [HashiCorp Cloud Platform billing account](/hcp/docs/hcp/admin/billing). Click **Edit plan** to authenticate with HCP and link your accounts.

Under **How would you like to activate HCP Terraform**, select **On a HashiCorp Trial or Pay-as-you-go account**. Select the HCP organization to bill your usage to, then click **Next**.

Select the plan to use for your organization, then click **Next**.

Review your plan summary, then click **Activate**. Once activated, the new feature set is available and usage billing starts immediately. Refer to [Estimate HCP Terraform cost](/terraform/cloud-docs/overview/estimate-hcp-terraform-cost) for more information on how to review your organization's consumption.

## Manage billing

Billing is managed on HashiCorp Cloud Platform, where you can:

- Review your organization's [usage](/hcp/docs/hcp/admin/billing#usage).
- Change the [credit card](/hcp/docs/hcp/admin/billing/pay-as-you-go#change-credit-card) used for payment.
- Set up a [Flex billing contract](/hcp/docs/hcp/admin/billing/flex-multiyear).
[cli]: /terraform/cli
[speculative plans]: /terraform/cloud-docs/workspaces/run/remote-operations#speculative-plans
[remote_state]: /terraform/language/state/remote-state-data
[outputs]: /terraform/language/values/outputs
[modules]: /terraform/language/modules/develop
[terraform enterprise]: /terraform/enterprise

# HCP Terraform Plans and Features

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/overview/index.mdx

HCP Terraform is a platform that performs Terraform runs to provision infrastructure, either on demand or in response to various events. Unlike a general-purpose continuous integration (CI) system, it is deeply integrated with Terraform's workflows and data, which allows it to make Terraform significantly more convenient and powerful.

> **Hands On:** Try our [What is HCP Terraform - Intro and Sign Up](/terraform/tutorials/cloud-get-started/cloud-sign-up) tutorial.

## Free and paid plans

HCP Europe organizations currently support specific contract billing plans.

@include 'eu/billing.mdx'

[HCP Terraform](https://app.terraform.io/) is a commercial SaaS product developed by HashiCorp. Many of its features are free for small teams, including remote state storage, remote runs, and VCS connections. We also offer paid plans for larger teams that include additional collaboration and governance features.

HCP Terraform manages plans and billing at the [organization level](/terraform/cloud-docs/users-teams-organizations/organizations). Each HCP Terraform user can belong to multiple organizations, which might subscribe to different billing plans. The set of features available depends on which organization you are currently working in. Refer to [Terraform pricing](https://www.hashicorp.com/products/terraform/pricing) for details about available plans and their features.

### Free organizations

Small teams can use most of HCP Terraform's features for free, including remote Terraform execution, VCS integration, the private module registry, single sign-on, policy enforcement, run tasks, and more. Free organizations are limited to 500 managed resources. Refer to [What is a managed resource](/terraform/cloud-docs/overview/estimate-hcp-terraform-cost#what-is-a-managed-resource) for more details.

### Paid features

Some of HCP Terraform's features are limited to particular paid upgrade plans. Each higher paid plan is a strict superset of any lower plans; for example, the **Standard** edition includes all of the features of the **Essentials** edition, and the **Premium** edition includes all of the features of the **Standard** and **Essentials** editions. Paid feature callouts in the documentation indicate the _lowest_ edition at which the feature is available, but any higher plans also include that feature. To learn more about HCP Terraform's paid plans and features, refer to the [pricing page](https://www.hashicorp.com/products/terraform/pricing).

Terraform Enterprise generally includes all of HCP Terraform's paid features, plus additional features geared toward large enterprises. However, some features are implemented differently due to the differences between self-hosted and SaaS environments, and some features might be absent due to being impractical or irrelevant in the types of organizations that need Terraform Enterprise. Cloud-only or Enterprise-only features are clearly indicated in documentation.

### Changing Your Payment Plan

[Organization owners](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) can manage an organization's billing plan. The plan and billing settings include an integrated storefront, and you can subscribe to paid plans with a credit card. To change an organization's plan:

1. Sign in to [HCP Terraform](https://app.terraform.io/).
1. Choose **Settings** from the sidebar.
1. Click **Plan and billing**. The **Plan and Billing** page appears showing your current plan and any available invoices.
1. Click **Change plan**.
1. Select a plan, enter your billing information, and click **Update plan**.

## Terraform Workflow

HCP Terraform runs the [Terraform CLI][cli] to provision infrastructure. In its default state, the Terraform CLI uses a local workflow, performing operations on the workstation where it is invoked and storing state in a local directory. In HCP Terraform, there are two main ways of organizing your infrastructure:

- Workspaces are ideal for managing a self-contained infrastructure of one Terraform root module.
- Stacks are ideal for managing multiple infrastructure modules and repeating that infrastructure at scale.

To learn if a workspace or a Stack works best for your use case, refer to [Choose workspaces or Stacks](/terraform/cloud-docs/stack-workspace).
module. - Stacks are ideal for managing multiple infrastructure modules and repeating that infrastructure at scale. To learn if a workspace or a Stack works best for your use case, refer to [Choose workspaces or Stacks](/terraform/cloud-docs/stack-workspace). Since teams must share responsibilities and awareness to avoid single points of failure, working with Terraform in a team requires a remote workflow. At minimum, state must be shared; ideally, Terraform should execute in a consistent remote environment. HCP Terraform offers a team-oriented remote Terraform workflow, designed to be comfortable for existing Terraform users and easily learned by new users. The foundations of this workflow are remote Terraform execution, a workspace or Stacks-based organizational model, version control integration, command-line integration, remote state management, data sharing across workspaces or Stacks, and a private Terraform module registry. ### Remote Terraform Execution HCP Terraform runs Terraform on disposable virtual machines in its own cloud infrastructure by default. You can leverage [HCP Terraform agents](/terraform/cloud-docs/agents) to run Terraform on your own isolated, private, or on-premises infrastructure. Remote Terraform execution is sometimes referred to as "remote operations." Remote execution helps provide consistency and visibility for critical provisioning operations. To learn more about workspace runs, refer to [Runs and Remote Operations](/terraform/cloud-docs/workspaces/run/remote-operations). To learn more about Stacks runs, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs). #### Workspace support for local execution [execution\_mode]: /terraform/cloud-docs/workspaces/settings#execution-mode Remote execution can be disabled on specific workspaces with the ["Execution Mode" setting][execution\_mode]. 
The workspace will still host remote state, and Terraform CLI can use that state for local runs via the [HCP Terraform CLI integration](/terraform/cli/cloud). ## Organize infrastructure with projects Terraform's local workflow manages a collection of infrastructure with a persistent working directory, which contains configuration, state data, and variables. You can use separate directories to organize infrastructure resources into meaningful groups, and Terraform will use the configuration in the directory you invoke Terraform commands from. HCP Terraform organizes infrastructure into projects that contain workspaces and Stacks. Each workspace contains everything necessary to manage a single Terraform configuration. [Stacks](/terraform/cloud-docs/stacks) use a component architecture based on modules to repeatedly deploy infrastructure, letting Terraform manage your infrastructure at scale. You can use projects to organize your workspaces and Stacks into groups. Organizations with HCP Terraform [Essentials](https://www.hashicorp.com/products/terraform/pricing) edition can assign teams permissions for specific projects. Projects let you grant access to collections of workspaces and Stacks, instead of using workspace or organization-wide permissions, making it easier to limit access to only the resources required for a team member's job function. Refer to [Projects](/terraform/cloud-docs/projects) for more details. ### Remote State Management and data storage HCP Terraform acts as a remote backend for your Terraform state. In workspaces, [state storage](/terraform/cloud-docs/workspaces/state) is tied to that workspace, helping keep state associated with the configuration that created it. In Stacks, [each deployment stores and updates](/terraform/cloud-docs/stacks/state) of its own isolated remote state. Workspaces can share information between each other with root-level [outputs][]. 
Separate groups of infrastructure resources often need to share a small amount of information, and workspace outputs are an ideal interface for these dependencies.

Workspaces that use remote operations can use [`terraform_remote_state` data sources][remote_state] to access other workspaces' outputs, subject to per-workspace access controls. And because new information from one workspace might change the desired infrastructure state in another, you can create workspace-to-workspace run triggers to ensure downstream workspaces react when their dependencies change.

Stacks can also share information directly with other Stacks in the same project, letting you manage each Stack's infrastructure independently. Refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data) for more details.
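The cross-workspace data flow described above can be sketched in HCL. This is a minimal example, not the canonical configuration; the organization, workspace, and output names are hypothetical:

```hcl
# Hypothetical downstream configuration that reads an upstream
# workspace's root-level outputs from HCP Terraform.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "my-organization" # hypothetical organization name

    workspaces = {
      name = "network-production" # hypothetical upstream workspace
    }
  }
}

resource "aws_instance" "app" {
  # Consume the upstream workspace's "subnet_id" output
  # (assumes the upstream configuration defines that output).
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
  ami           = "ami-0123456789abcdef0" # hypothetical AMI
  instance_type = "t3.micro"
}
```

Access to another workspace's state through this data source is still subject to the per-workspace access controls mentioned above.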
### Version Control Integration

Like other kinds of code, infrastructure-as-code belongs in version control, so HCP Terraform is designed to work directly with your version control system (VCS) provider.

Each workspace can be linked to a VCS repository that contains its configuration, optionally specifying a branch and subdirectory. You can also link Stacks to a VCS repository. By default, HCP Terraform automatically retrieves configuration content from linked repositories and watches the repository for changes:

- When new commits are merged, linked workspaces and Stacks automatically run Terraform plans with the new code.
- When pull requests are opened, linked workspaces and Stacks run speculative plans with the proposed code changes and post the results as a pull request check; reviewers can see at a glance whether the plan was successful, and can click through to view the proposed changes in detail.

VCS integration is powerful, but optional; if you use an unsupported VCS or want to preserve an existing validation and deployment pipeline, you can use the API or Terraform CLI to upload new configuration versions. You still get the benefits of remote execution and HCP Terraform's other features.

- More info: [VCS-driven workspace runs](/terraform/cloud-docs/workspaces/run/ui) and [Stack runs](/terraform/cloud-docs/stacks/runs)
- More info: [Supported VCS Providers](/terraform/cloud-docs/vcs#supported-vcs-providers)

### Command Line Integration

Remote execution offers major benefits to a team, but local execution offers major benefits to individual developers; for example, most Terraform users run `terraform plan` to interactively check their work while editing configurations.
-> **Note:** When used with HCP Terraform, the `terraform plan` command runs [speculative plans][] for your workspace, which preview changes without modifying real infrastructure. You can also use `terraform apply` to perform full remote runs, but only with workspaces that are _not_ connected to a VCS repository. This helps ensure that your VCS remains the source of record for all real infrastructure changes.

HCP Terraform offers the best of both worlds, allowing you to run remote plans from your local command line. Configure the [HCP Terraform CLI integration](/terraform/cloud-docs/workspaces/run/cli), and the `terraform plan` command starts a remote run in the configured HCP Terraform workspace. The output of the run streams directly to your terminal, and you can also share a link to the remote run with your teammates.

Remote CLI-driven runs use the current working directory's Terraform configuration and the remote workspace's variables, so you don't need to obtain production cloud credentials just to preview a configuration change. The HCP Terraform CLI integration also supports state manipulation commands such as `terraform import` and `terraform taint`.

If you are working with a Stack, you can use the `terraform stacks` commands to manage your Stack and its configuration, deployments, and more from the command line. Refer to [the `terraform stacks` commands](/terraform/cli/stacks) for more information.

### Private Registry

Even small teams can benefit greatly from codifying commonly used infrastructure patterns into reusable [modules][].

Terraform can fetch providers and modules from many sources. HCP Terraform makes it easier to find providers and modules to use with a private registry. Users throughout your organization can browse a directory of internal providers and modules, and can specify flexible version constraints for the modules they use in their configurations.
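A private registry module reference with a flexible version constraint might look like the following sketch; the organization and module names are hypothetical, but the `<HOSTNAME>/<ORGANIZATION>/<NAME>/<PROVIDER>` source format is the one the private registry uses:

```hcl
# Hypothetical module sourced from an organization's private registry.
module "vpc" {
  source  = "app.terraform.io/my-organization/vpc/aws" # <HOSTNAME>/<ORG>/<NAME>/<PROVIDER>
  version = "~> 1.2" # flexible constraint: accept any 1.2.x release

  # Module input variables would go here, for example:
  cidr_block = "10.0.0.0/16"
}
```

Because `version` is a constraint rather than a pin, downstream configurations automatically pick up new patch releases tagged in the module's repository.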
Easy versioning lets downstream teams use private modules with confidence, and frees upstream teams to iterate faster.

The private registry uses your VCS as the source of truth, relying on Git tags to manage module versions. Tell HCP Terraform which repositories contain modules, and the registry handles the rest.
- More info: [Private Registry](/terraform/cloud-docs/registry)

## Integrations

In addition to providing powerful extensions to the core Terraform workflow, HCP Terraform makes it simple to integrate workspace infrastructure provisioning with your business's other systems.

### Full API

Nearly all of HCP Terraform's features are available in [its API](/terraform/cloud-docs/api-docs), which means other services can create or configure workspaces or Stacks, upload configurations, start Terraform runs, and more. There's even [a Terraform provider based on the API](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs), so you can manage your HCP Terraform teams and workspaces as a Terraform configuration.

- More info: [API](/terraform/cloud-docs/api-docs)

### Notifications

HCP Terraform can send notifications about workspace runs to other systems, including Slack and any other service that accepts webhooks. Notifications can be configured per workspace.

- More info: [Notifications](/terraform/cloud-docs/workspaces/settings/notifications)

### Run Tasks

Run Tasks let workspaces execute tasks in external systems at specific points in the HCP Terraform run lifecycle. There are several [partner integrations](https://www.hashicorp.com/integrations) already available for workspaces, or you can create your own based on the [API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks).

- More info: [Run Tasks](/terraform/cloud-docs/workspaces/settings/run-tasks)

## Access Control and Governance

Larger organizations are more complex, and tend to use access controls and explicit policies to help manage that complexity.
HCP Terraform's paid upgrade plans provide extra features to help meet the control and governance needs of large organizations.

- More info: [Free and Paid Plans](/terraform/cloud-docs/overview)

### Team-Based Permissions System

With HCP Terraform's team management, you can define groups of users that match your organization's real-world teams and assign them only the permissions they need. When combined with the access controls your VCS provider already offers for code, workspace, project, and Stack permissions are an effective way to follow the principle of least privilege.

- More info: [Users, Teams, and Organizations](/terraform/cloud-docs/users-teams-organizations/permissions)

### Policy Enforcement

@include 'tfc-package-callouts/policies.mdx'

Policy-as-code lets you define and enforce granular policies for workspaces to control how your organization provisions infrastructure. You can limit the size of compute VMs, confine major updates to defined maintenance windows, and much more. You can use the Sentinel and Open Policy Agent (OPA) policy-as-code frameworks to define policies. Depending on the settings, policies can act as advisory warnings, firm requirements that prevent Terraform from provisioning infrastructure, or soft requirements that your compliance team can bypass when appropriate. Refer to [Policy Enforcement](/terraform/cloud-docs/policy-enforcement) for details.

### Cost Estimation

Before making changes to infrastructure in the major cloud providers, workspaces can display an estimate of their total cost, as well as any change in cost caused by the proposed updates. Cost estimates can also be used in Sentinel policies to provide warnings for major price shifts.

- More info: [Cost Estimation](/terraform/cloud-docs/cost-estimation)
# Estimate HCP Terraform cost

HCP Terraform offers [multiple plans](https://www.hashicorp.com/products/terraform/pricing) with increasing benefits to match your business scale. This page describes the flat rate per managed resource per hour for Essentials Edition users on pay-as-you-go billing. Additional options, such as greater discounts at larger scale, are available with contracted plans. [Contact Sales](https://www.hashicorp.com/contact-sales) for additional information.

HashiCorp Cloud Platform (HCP) Europe organizations separate workspaces, resources, usage, and billing from your HCP Terraform resources in other regions. To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

## What is a Managed Resource?

A "Managed Resource" or "Resources Under Management (RUM)" is a resource in an HCP-Terraform-managed state file where `mode = "managed"`. HCP Terraform counts a resource as part of this count starting from the first `terraform plan` or `terraform apply` operation on the resource. HCP Terraform does not include resources defined as a `null_resource` or `terraform_data` resource in the total managed resource count. HCP Terraform combines workspace and Stack resources under management when calculating costs.

Examples of managed resources include resources provisioned by a Terraform provider (for example, an AWS VPC), resources created with the [count](/terraform/language/meta-arguments/count) and [for_each](/terraform/language/meta-arguments/for_each) meta-arguments, and resources provisioned by [modules](/terraform/language/modules) and [no-code ready modules](/terraform/cloud-docs/workspaces/no-code-provisioning/module-design).

## Essentials Edition cost on pay-as-you-go billing

HashiCorp charges for each managed resource on a per-hour basis, from the time a managed resource is provisioned until it is destroyed. Each partial hour is billed as a full hour. The peak number of managed resources in a given hour determines the cost.
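The counting rules above can be illustrated with a short HCL sketch; the resource names are hypothetical. Each instance created by a meta-argument is a separate managed resource, while `terraform_data` (and `null_resource`) instances are excluded from the count:

```hcl
# Each instance produced by count or for_each is a separate managed resource,
# so this block adds 3 resources to the RUM total.
resource "aws_s3_bucket" "logs" {
  count  = 3
  bucket = "example-logs-${count.index}" # hypothetical bucket names
}

# terraform_data (like null_resource) is excluded from the
# managed resource count, even though its mode is "managed".
resource "terraform_data" "bootstrap" {
  input = "not billed as a managed resource"
}
```

In this sketch, applying the configuration would add three billable managed resources, not four.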
Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and choose **Usage** from the sidebar to navigate to a report of your organization's total number of managed resources. You can use the total number of managed resources to determine your number of billable managed resources in use at any given time. You can find your total cost in the [HashiCorp Cloud Platform portal](/hcp/docs/hcp/admin/billing#usage), which is refreshed hourly.

The Usage report gives an at-a-glance view of general usage limits in the organization:

- A summary of projects, workspaces, Stacks, and applies
- General limits for managed resources, concurrency, and agents
- Compliance feature set limits for run tasks and policies

An example on HCP Terraform **Essentials Edition** that assumes 24x7 usage for a full month for 1000 Managed Resources:

- Per Hour Price: 1000 Managed Resources x $0.0001359 = **$0.14**
- Per Month Price: (Per Hour Price x 24 x 30) = **$97.85**

Another example calculation factors in billable managed resource counts, partial hour billing, and peak managed resources over a 3-hour period:

- In the first hour, 1000 resources are created, calculated to cost $0.14 per hour.
- Over the next hour, 1000 new resources are added, 500 are changed, and then 500 are destroyed, for a peak managed resource count of 2000, billing that previous hour for $0.27 and a total consumption of $0.41.
- There are no changes in the third hour, for a peak managed resource count of 1500, billed for $0.20 and a total consumption of $0.61.

## Manage your plan

HCP Europe organizations do not support manually changing HCP Terraform plans.

@include 'eu/billing.mdx'

To edit your organization's plan, complete the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io).
1. Select the organization that you want to manage.
1. Navigate to the **Plan & Billing** page.
1. Click **Edit Plan**.
If you cannot edit your organization's plan due to an active contract, contact your HashiCorp account team. Pay-as-you-go and HashiCorp Flex customers can self-activate their HCP Terraform organizations. Refer to [Changing Your HCP Terraform Plan](/terraform/cloud-docs/overview/change-plan) for details.
## Downgrade your plan

Pay-as-you-go customers can downgrade to HCP Terraform Free Edition. To downgrade your organization's plan, navigate to the **Plan & Billing** page and click **Downgrade to the free plan**. HCP Terraform displays a list of the features that you will lose access to when you downgrade to the free edition. Click **Downgrade to free plan** to confirm.

## Optimize cost

To optimize your costs, contact sales to discuss contract options, such as committing to a certain amount of spend or usage. In our contract plans, the larger your usage, the greater your discount compared to the flat-rate, pay-as-you-go model. [Contact HashiCorp Sales](https://www.hashicorp.com/contact-sales) to learn more.
# Activate HashiCorp Flex

Follow these steps to activate HashiCorp Flex for an **Essentials**, **Standard**, or **Premium** edition HCP Terraform organization.

HCP Europe organizations do not support HashiCorp Flex.

@include 'eu/billing.mdx'

## Prerequisites

To activate HashiCorp Flex, you must:

1. [Create an HCP organization](/hcp/docs/hcp/admin/orgs#create-an-organization).
1. Be a member of your HCP Terraform organization's "owners" team.
1. Have the "admin" or "owner" role in your HCP organization.

## Step 1: Create the required user accounts

There are two ways to activate HashiCorp Flex. If you already use both HCP Terraform and HCP, we recommend that you create a linked HCP user account. Otherwise, you can create separate HCP Terraform and HCP accounts.

### Create a linked HCP user account

1. Go to [HCP Terraform](https://app.terraform.io) and click **Continue with HCP account**.
1. Create a free HCP account or sign in using your HCP email.

If you have an existing HCP Terraform account, HCP prompts you to [link it](/terraform/cloud-docs/users-teams-organizations/users#linking-hcp-and-hcp-terraform-accounts). If you do not have an HCP Terraform account, HCP creates one for you and links it to the HCP account.

### Create separate HCP Terraform and HCP user accounts

1. Go to [HCP Terraform](https://app.terraform.io) and create a free account or sign in to your existing one.
1. Go to [HashiCorp Cloud Platform](https://cloud.hashicorp.com) and create a free HCP account or sign in to your existing one.

## Step 2: Provision the required HCP Terraform team and HCP roles

Organization members may be owners or admins in either the HCP Terraform or HCP organization, but not both. To safeguard your Flex balance, determine whether to invite HCP Terraform organization owners to the HCP organization, or HCP admins to HCP Terraform.
Consider the following to decide which approach is right for your organization:

- The HCP admin offers HCP Terraform as a multi-tenant managed service. They manage the billing relationship with HashiCorp Cloud Platform and serve customers of HCP Terraform. The HCP admin should be invited to the HCP Terraform customer organizations to activate Flex.
- The HCP admin manages the billing relationship with HashiCorp Cloud Platform and owns all the HCP Terraform organizations. They can use either approach and invite HCP admins to HCP Terraform organizations or make HCP Terraform organization owners into HCP admins.

After you create your user accounts:

1. **Become an HCP Terraform organization owner**: An HCP Terraform organization owner must invite you to the "owners" team. You must be an owner to see the **Plan & Billing** page in the HCP Terraform organization settings.
1. **Become an HCP admin**: An HCP admin or organization owner must invite you to the HCP organization that HashiCorp Flex is allocated to and grant you the [admin](/hcp/docs/hcp/iam/access-management#organization) role.

## Step 3: Activate your Flex plan

To use a Flex plan for an HCP organization, you must self-activate a Flex contract. Follow the [Flex contract activation steps](/hcp/docs/hcp/admin/billing/flex-multiyear#activate-your-contract) to activate the Flex plan for your HCP organization.

## Step 4: Verify HashiCorp Flex balance allocation

Navigate to **HCP Org > Billing**. If HashiCorp sales has allocated Flex, the page displays a **Contract Summary** with an available balance. If it only shows your credit card information or a $500 free trial credit, your HashiCorp Flex account may be scheduled to be allocated at a future date. Please contact your **HashiCorp Sales Account Manager** for more information.

~> **Note**: All new HashiCorp organizations are automatically allocated $500 in trial credits.
If your HashiCorp Flex account is not yet allocated, you can use the trial credits to temporarily activate **Essentials Edition** for your HCP Terraform organization and immediately access paid features such as team management or more concurrency.
## Step 5: Edit the HCP Terraform organization plan

Navigate to your HCP Terraform organization settings' **Plan & Billing** page. If your HCP Terraform user account is not managed by HCP, you will be prompted to log in to an HCP account that has the `admin` role in the HCP organization with HashiCorp Flex allocated. You must have admin privileges in the HCP organization to manage the HCP Terraform organization's billing plan.

~> **Note**: If you have a linked HCP account, you do not need to log in; HCP Terraform automatically identifies the HCP organizations that your linked HCP user account is a member of.

Click **Edit Plan** to go to the Plan Activation page. If your HashiCorp user account has the `owner` or `admin` role in multiple HCP organizations, you will need to select a billing account.

After selecting a plan, confirm your selection and review the **Cost Estimate**. If you meet all the necessary [prerequisites](#prerequisites), the **Plan Selection** page will show **HashiCorp Flex** at the bottom left. If you see **Pay-as-you-go**, please review steps 1-3 or [contact HashiCorp support](https://support.hashicorp.com/hc/en-us/requests/new) for more assistance.

Finally, confirm your selections and verify that the **Monthly estimate** meets your expectations. If you have any questions, contact your HashiCorp Sales Account Manager.
# Data security

HCP Terraform takes the security of the data it manages seriously. This table lists which parts of the HCP Terraform and Terraform Enterprise app can contain sensitive data, what storage is used, and what encryption is used.

### HCP Terraform and Terraform Enterprise

| Object                               | Storage      | Encrypted                |
| :----------------------------------- | :----------- | :----------------------- |
| Terraform Configuration (VCS data)   | Blob Storage | Vault Transit Encryption |
| Private Registry Modules             | Blob Storage | Vault Transit Encryption |
| Sentinel Policies                    | PostgreSQL   | Vault Transit Encryption |
| Terraform/Environment Variables      | PostgreSQL   | Vault Transit Encryption |
| Terraform/Provider Credentials       | PostgreSQL   | Vault Transit Encryption |
| Terraform State File                 | Blob Storage | Vault Transit Encryption |
| Terraform Plan Result                | Blob Storage | Vault Transit Encryption |
| Terraform Audit Trails               | PostgreSQL   | No                       |
| Organization/Workspace/Team Settings | PostgreSQL   | No                       |

### HCP Terraform and Terraform Enterprise Secrets

| Object                        | Storage    | Encrypted                |
| :---------------------------- | :--------- | :----------------------- |
| Account Password              | PostgreSQL | bcrypt                   |
| 2FA Recovery Codes            | PostgreSQL | Vault Transit Encryption |
| SSH Keys                      | PostgreSQL | Vault Transit Encryption |
| User/Team/Organization Tokens | PostgreSQL | HMAC SHA512              |
| OAuth Client ID + Secret      | PostgreSQL | Vault Transit Encryption |
| OAuth User Tokens             | PostgreSQL | Vault Transit Encryption |

### HCP Terraform Specific

| Object                       | Storage   | Encrypted |
| :--------------------------- | :-------- | :-------- |
| Cloud Data Backups           | Amazon S3 | SSE-S3    |
| Cloud Copy of Backups for DR | Amazon S3 | SSE-S3    |

Please see HashiCorp Cloud [subprocessors](https://www.hashicorp.com/trust/privacy/subprocessors) for third parties engaged by HashiCorp to deliver cloud services.
### Terraform Enterprise Specific

| Object                       | Storage    | Encrypted                |
| :--------------------------- | :--------- | :----------------------- |
| Twilio Account Configuration | PostgreSQL | Vault Transit Encryption |
| SMTP Configuration           | PostgreSQL | Vault Transit Encryption |
| SAML Configuration           | PostgreSQL | Vault Transit Encryption |
| Vault Unseal Key             | PostgreSQL | ChaCha20+Poly1305        |

## Vault Transit Encryption

The [Vault Transit Secret Engine](/vault/docs/secrets/transit) handles encryption for data in transit and is used when encrypting data from the application to persistent storage.

## Blob Storage Encryption

All objects persisted to blob storage are symmetrically encrypted prior to being written. Each object is encrypted with a unique encryption key. Objects are encrypted using 128-bit AES in CTR mode. The key material is processed through the [Vault transit secret engine](/vault/docs/secrets/transit), which uses the default transit encryption cipher (AES-GCM with a 256-bit AES key and a 96-bit nonce), and stored alongside the object. This pattern is called envelope encryption.

The Vault transit secret engine's [datakey generation](/vault/api-docs/secret/transit#generate-data-key) creates the encryption key material using bit material from the kernel's cryptographically secure pseudo-random number generator (CSPRNG) as the `context` value.

Blob storage encryption generates a unique key for each object and relies on envelope encryption, so Vault does not rotate the encryption key material for individual objects. The root encryption keys within the envelope encryption scheme are rotated automatically by HCP Terraform every 365 days. These keys are not automatically rotated within TFE.
# Security model

Learn about the essential concepts of HCP Terraform and how each piece affects its security model.

## HCP Europe

In HashiCorp Cloud Platform (HCP) Europe organizations, your resources are hosted, managed, and billed separately to meet [European data residency requirements](https://www.hashicorp.com/en/trust/privacy/hcp-data-privacy). To learn more, refer to [HCP EU region and data governance](https://www.hashicorp.com/en/trust/eu). To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

## Organize infrastructure

HCP Terraform organizes infrastructure using either workspaces or Stacks. The choice between these two models depends on the complexity, scale, and organizational structure of your infrastructure. To learn more about the differences between workspaces and Stacks, refer to [Compare Stacks and workspaces](/terraform/cloud-docs/stack-workspace).

Workspaces represent a logical security boundary within the organization. Variables, state, SSH keys, and log output are local to a workspace. You can grant teams [read, plan, write, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions) within a workspace.

Stacks are a newer container for infrastructure, built to solve the scaling challenges of multi-workspace setups. Although a separate construct, Stack deployments offer a logical security boundary similar to workspaces. Each deployment runs in its own isolated environment with its own state, variables, and identity tokens. You can use projects to assign [read, write, maintain, or admin permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) to all Stacks within a project.
Projects let you group related workspaces or Stacks in your organization. You can use projects to assign [read, write, maintain, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project) to a particular team, which grants those permissions to all workspaces in the project.

## Terraform runs

HCP Terraform is designed as an execution platform for Terraform, and can perform Terraform runs on its own disposable virtual machines. HCP Terraform's virtual machines provide a consistent and reliable run environment. HCP Terraform provisions infrastructure according to your Terraform configuration, which you can upload through the VCS-driven, API-driven, or CLI-driven workflows.

### Workspace runs

Workspaces in HCP Terraform have several different run modes and ways of starting runs. To learn more, refer to [workspace runs](/terraform/cloud-docs/run/remote-operations#starting-runs). It's important to note that HCP Terraform performs all Terraform operations within the same privilege context: both the plan and apply operations have access to the full workspace variables, state versions, and Terraform configuration.

### Stack deployment runs

Each Stack in HCP Terraform maintains its own configuration versions, and each version has a queue of deployment runs. HCP Terraform executes deployment runs in the order that you approve them. To learn more about runs and their execution environment, refer to [Stack runs](/terraform/cloud-docs/stacks/runs).

## Terraform state file

When developing locally, Terraform stores its state in a plaintext file on your machine. HCP Terraform adds additional security by storing your state file remotely, encrypting it at rest, and protecting your state with TLS in transit.

### Workspace state

HCP Terraform retains the current and all historical [state](/terraform/language/state) versions for each workspace.
Depending on the resources that are used in your Terraform configuration, these state versions may contain sensitive data, such as database passwords and resource IDs.

### Stack state

Each Stack deployment stores a state file in HCP Terraform. Stack deployment state files are not directly exposed to users. Instead, a sanitized version is available from the UI and API. The state file for each deployment stores the state of all the components in that deployment. To modify the state of a Stack, you change a Stack's configuration file, creating a new version and setting off new deployment runs. To learn more, refer to [State in Stacks](/terraform/cloud-docs/stacks/state).

## Personas
HCP Terraform can accommodate the different levels of access and responsibilities of various personas through assigned roles. Understanding these roles is essential for enforcing the principle of least privilege and maintaining a secure environment.

### Organization owners

If you are using an HCP Europe organization, there is no organization owners team because you manage users with HCP groups. To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

Members of the [owners team](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) have administrator-level privileges within an organization. Members of this team have access to workspaces, Stacks, projects, and settings within the organization. This role is intended for users who perform administrative tasks in your organization.

### Team members and permissions

@include 'eu/group.mdx'

The HCP Terraform permissions model is split into three levels. You can set permissions at the following scopes:

- [Organization permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization)
- [Project permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project)
- [Workspace permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace)

Permissions are additive: a user is granted the highest level of permissions possible, regardless of the scope at which a permission was set. A team's **effective permissions** are the sum of the permissions that team has at every permission level.
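As a sketch of how workspace-level access is granted in practice, the request below uses the Team Access API. The team and workspace IDs are placeholders, and it assumes a token that is allowed to manage team access:

```shell
# Sketch: grant a team read access to one workspace via the Team Access API.
# ws-XXXXXXXX and team-XXXXXXXX are placeholder IDs.
cat <<'EOF' >team-access-payload.json
{
  "data": {
    "type": "team-workspaces",
    "attributes": { "access": "read" },
    "relationships": {
      "workspace": { "data": { "type": "workspaces", "id": "ws-XXXXXXXX" } },
      "team": { "data": { "type": "teams", "id": "team-XXXXXXXX" } }
    }
  }
}
EOF
# Then, with a suitable $TOKEN:
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request POST \
#      --data @team-access-payload.json \
#      https://app.terraform.io/api/v2/team-workspaces
```

Because permissions are additive, a member of this team may still hold a higher effective permission on the workspace through an organization-level or project-level grant.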
To learn how to assign permissions at different scopes, refer to [Set permissions](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions).

### Contributors to connected VCS repositories

HCP Terraform executes Terraform configuration from connected VCS repositories. Depending on the configuration, HCP Terraform may automatically trigger Terraform operations when the connected repositories receive new contributions.

## Authorization model

If you are using an HCP Europe organization, there is no organization owners team because you manage users with HCP groups. To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

This section covers a useful subset of HCP Terraform's authorization model, but is not comprehensive. For more information, refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions).

HCP Terraform organizations contain an **owners** team, which grants admin-level access to the organization and all of its projects, workspaces, and Stacks, along with implicit access to:

- **Manage Projects Permission**
- **Manage Workspace Permission**
- **Manage Team Permission and Membership**
- **Manage Organization Access**
- **Manage Policy Permission and Overrides**, including modifying the policy enforcement mode and overriding soft-mandatory policy checks
- **Manage Run Task Permission**
- **Manage VCS Permission**, including modifying organization-wide VCS settings and SSH keys
- **Manage Agent Pools Permission**
- **Manage Private and Public Registry Permission**

Alternatively, you can create your own teams and customize member access using organization, project, or workspace permissions.

~> **Note:** Teams are not available to free-tier users on HCP Terraform. Free-tier organizations only have an owners team.

### Project authorization

All workspaces and Stacks in an organization belong to a project.
You can grant teams [read, write, maintain, admin, or a customized set of permissions for the project](/terraform/cloud-docs/users-teams-organizations/permissions/project), which grants specific permissions on all workspaces and Stacks within the project.

### Workspace authorization

Workspaces provide a logical security boundary within the organization. Environment variables and Terraform configurations are isolated within a workspace, and access to a workspace is granted on a per-team basis.
All workspaces in an organization belong to a project. You can grant teams permissions [for a project](/terraform/cloud-docs/users-teams-organizations/permissions/project), which grants specific permissions on all workspaces within the project. You can also grant teams [read, plan, write, admin, or a customized set of permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) for a specific workspace.

It’s important to note that, from a security perspective, the plan permission is equivalent to the write permission. The plan permission protects against accidental Terraform runs but is not intended to stop malicious actors from accessing sensitive data within a workspace or Stack. Terraform `plan` and `apply` operations can execute arbitrary code within the ephemeral build environment. Both operations happen in the same security context with access to the full set of workspace variables, Terraform configuration, and Terraform state.

By default, teams with read privileges within a workspace can view the workspace's state. You can remove this access by using [customized workspace permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace). However, customized workspace permissions only apply to state file access through the API or UI. Terraform must access the state file in order to perform plan and apply operations, so any user with the ability to upload Terraform configurations and initiate runs transitively has access to the workspace's state. State may be shared across workspaces via the [remote state access workspace setting](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces).

### Stack authorization

Stacks provide a logical security boundary within the organization. Each Stack deployment runs in an isolated environment with its own state, variables, and identity tokens. Access to a Stack is granted on a per-team basis.
You can use projects to assign [permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project) to all Stacks within a project.

You cannot directly access the state of your Stack deployments. A sanitized version is available from the UI and API to users with at least read permissions in the containing project. However, Terraform must access the state file in order to perform plan and apply operations, so any user who can upload Terraform configurations and initiate runs can gain access to your Stack's state.

### VCS authorization

Terraform configuration files in connected VCS repositories are inherently trusted. Commits to connected repositories automatically queue a plan within the corresponding workspace or Stack. Pull requests to connected repositories initiate a speculative plan, though you can disable this behavior via the [workspace speculative plan setting](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans) or the [Stack speculative plan setting](/terraform/cloud-docs/stacks/configure) on the respective settings page. HCP Terraform has no knowledge of your VCS's authorization controls and does not associate HCP Terraform user accounts with VCS user accounts; treat the two as separate identities.

### Agent pool authorization

HCP Terraform agents let HCP Terraform communicate with isolated, private, or on-premises infrastructure. You can configure agent pool settings from **Settings** > **Security** > **Agents** to limit access to specific projects, workspaces, and Stacks. Workspaces that share an agent pool can read each other's data through the agent, including state files, sensitive data, request-forwarding payloads containing VCS data, and agent API tokens.

## Threat model

HCP Terraform is designed to execute Terraform operations and manage the state file so that infrastructure is reliably created, updated, and destroyed by multiple users of an organization.
The following are part of the HCP Terraform threat model:

### Confidentiality and integrity of communication between Terraform clients and HCP Terraform

All communication between clients and HCP Terraform is encrypted end-to-end using TLS. HCP Terraform currently supports TLS version 1.2. HCP Terraform communicates with linked VCS repositories using the OAuth 2.0 authorization protocol. HCP Terraform can also be configured to fetch Terraform modules from private repositories using the SSH protocol with a customer-provided private key.
### Confidentiality of state versions, Terraform configurations, and stored variables

As a user, you entrust HCP Terraform with information that is highly sensitive to your organization, such as API tokens, your Terraform configurations, and your Terraform state file. HCP Terraform is designed to ensure the confidentiality of this information; it relies on [Vault Transit](/vault/docs/secrets/transit) for encrypting variables, and Terraform configurations and state are encrypted at rest with uniquely derived encryption keys backed by Vault. You can review how all customer data is encrypted and stored on our [data security page](/terraform/cloud-docs/architectural-details/data-security).

### Enforcement of authentication and authorization policies for data access and actions taken through the UI or API

HCP Terraform enforces authorization checks for all actions taken within the API or through the UI. Learn more about [permissions](/terraform/cloud-docs/users-teams-organizations/permissions).

### Isolation of Terraform executions

Each Terraform operation (plan and apply) happens in an ephemeral environment that is created immediately before the run and destroyed after the run completes. The build environment is designed to provide isolation between Terraform executions and between HCP Terraform tenants.

### Reliability and availability of HCP Terraform

HCP Terraform is spread across multiple availability zones for reliability. We perform regular backups of our production data stores and have a process for recovering in case of a major outage.

## What isn’t part of the threat model

The following are not covered by the HCP Terraform security threat model.
### Malicious contributions to Terraform configuration in VCS repositories

Commits and pull requests to connected VCS repositories trigger a plan operation within the associated workspace or Stack. HCP Terraform does not perform any authentication or authorization checks against commits in linked VCS repositories, and cannot prevent malicious Terraform configuration from exfiltrating sensitive data during plan operations. For this reason, it is important to restrict access to connected VCS repositories. Speculative plans for pull requests may be disabled on the [workspace settings page](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans) or the [Stack settings page](/terraform/cloud-docs/stacks/configure).

-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.

### Malicious Terraform providers or modules

Terraform providers and modules used in your Terraform configuration have full access to the variables and Terraform state within a workspace or Stack. HCP Terraform cannot prevent malicious providers and modules from exfiltrating this sensitive data. We recommend only using trusted modules and providers within your Terraform configuration.

### Malicious bypasses of Terraform policies

The policy-as-code frameworks used by the Terraform [Policy Enforcement](/terraform/cloud-docs/policy-enforcement) feature are embedded within HCP Terraform and can be used to ensure that infrastructure provisioned with Terraform complies with defined organizational policies. The goal of this feature is to enforce compliance with organizational policies and best practices when provisioning infrastructure using Terraform. It is important to note that the policy-as-code integration in HCP Terraform should be viewed as a guide or set of guardrails, not a security boundary. It is not designed to prevent malicious actors from executing malicious Terraform configurations or modifying infrastructure.
### Malicious or insecure third-party run tasks

Terraform [Run Tasks](/terraform/cloud-docs/integrations/run-tasks) are provided with access to all Terraform configuration and plan data.
HCP Terraform cannot prevent malicious run tasks from exfiltrating sensitive data that may be present in the Terraform configuration or plan. To minimize potential security risks, only use trusted technology partners for run tasks within your Terraform organization, and limit the number of users who have been assigned the [Manage Run Tasks](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#manage-workspace-run-tasks) permission.

### Access to sensitive variables or state from Terraform operations

Marking a variable as “sensitive” prevents it from being displayed in the UI, but does not prevent it from being read by Terraform during plan or apply operations. Similarly, customized workspace permissions allow you to restrict access to workspace state via the UI and API, but do not prevent the state from being read during Terraform operations.

### Redaction of sensitive variables in Terraform logs

The logs from a Terraform plan or apply operation are visible to any user with at least read-level access in the associated workspace or Stack. While Terraform tries to avoid writing sensitive information to logs, redactions are best-effort. Treat this feature as a mechanism for mitigating accidental exposure, not as a security boundary. Additionally, HCP Terraform cannot protect against malicious users who attempt to use Terraform logs to exfiltrate sensitive data.

### Redact ephemeral values from Terraform logs

The logs from a Terraform plan or apply operation are visible to any of a workspace's or Stack's users with **Read** permissions. Terraform attempts to avoid writing [ephemeral values](/terraform/language/resources/ephemeral) to logs, but cannot guarantee that all providers avoid logging ephemeral values.
You can reduce the risk of malicious providers logging ephemeral values by only [using trusted modules and providers within your Terraform configuration](#malicious-terraform-providers-or-modules).

### Redact ephemeral values in memory

Terraform does not persist ephemeral values to plan or state files. However, Terraform does not protect ephemeral values from a memory analysis of Terraform while it is running.

## Recommendations for securely using HCP Terraform

### Enforce strong authentication

HCP Terraform supports [two-factor authentication](/terraform/cloud-docs/users-teams-organizations/2fa) via SMS or TOTP. Organizations can configure mandatory 2FA for all members in the [organization settings](/terraform/cloud-docs/users-teams-organizations/organizations#authentication). Organizations may also choose to configure [SSO for their organization](/terraform/cloud-docs/users-teams-organizations/single-sign-on).

### Minimize the number of users in the owners team

If you are using an HCP Europe organization, there is no organization owners team because you manage users with HCP groups. To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

Members of the [owners team](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) have full access to all workspaces within the organization. If SSO is enabled, members of the owners team can still authenticate with their username and password. This group should be reserved for a small number of administrators, and its membership should be audited periodically.

### Apply the principle of least privilege to project and workspace membership

@include 'eu/group.mdx'

[Teams](/terraform/cloud-docs/users-teams-organizations/teams) allow you to group users and assign them various privileges within projects and workspaces.
We recommend applying the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) when creating teams and assigning permissions so that each user within your organization has the minimum required privileges.

### Protect API keys

HCP Terraform allows you to create [user, team, and organization API tokens](/terraform/cloud-docs/api-docs#authentication). Take care to store these tokens securely, and rotate them periodically.
Vault users can leverage the [Terraform Cloud secret backend](/vault/docs/secrets/terraform), which allows you to generate ephemeral tokens.

### Control access to source code

By default, commits and pull requests to connected VCS repositories automatically trigger a plan operation in an HCP Terraform workspace or Stack. HCP Terraform cannot protect against malicious code in linked repositories, so you should only grant trusted operators access to these repositories. Workspaces may be configured to [enable or disable speculative plans for pull requests](/terraform/cloud-docs/workspaces/settings/vcs#automatic-speculative-plans) to linked repositories. Disable this setting if you allow untrusted users to open pull requests in connected VCS repositories.

-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.

### Restrict access to workspace state

Workspaces may be configured to share their state with other workspaces within the organization, or globally with the entire organization, via the [remote state setting](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces). Because workspace state may contain sensitive information, we recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.

### Restrict access to Stack outputs

Stacks within the same project have access to the published outputs of other Stacks in that project. To restrict access to these published outputs, we recommend using separate projects to isolate your Stacks. To learn more, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).
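As a sketch of restricting workspace state sharing through the API (assuming the Workspaces API's `global-remote-state` attribute and a placeholder workspace ID), you can turn off organization-wide sharing so that only explicitly listed consumer workspaces can read a workspace's state:

```shell
# Sketch: disable organization-wide remote state sharing for one workspace.
# $WORKSPACE_ID and $TOKEN are placeholders.
cat <<'EOF' >restrict-state.json
{
  "data": {
    "type": "workspaces",
    "attributes": { "global-remote-state": false }
  }
}
EOF
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request PATCH \
#      --data @restrict-state.json \
#      https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID
```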
### Use separate agent pools for sensitive workspaces and Stacks

You can share [HCP Terraform agents](/terraform/cloud-docs/agents) across all workspaces or Stacks within an organization, or scope them to specific [workspaces or Stacks](/terraform/cloud-docs/agents#scope-an-agent-pool). If multiple workspaces or Stacks share an agent pool, a malicious actor in one could exfiltrate the agent’s API token, access private resources from the perspective of the agent, or modify the agent’s environment, potentially impacting other workspaces or Stacks. For this reason, we recommend creating separate agent pools for sensitive workspaces or Stacks and using the agent scoping setting to restrict which workspaces and Stacks can target each agent pool.

### Treat Archivist URLs as secrets

HCP Terraform uses a blob storage service called Archivist to store various pieces of customer data. Archivist URLs have the origin `https://archivist.terraform.io` and are returned by various HCP Terraform APIs, such as the [state versions API](/terraform/cloud-docs/api-docs/state-versions#fetch-the-current-state-version-for-a-workspace). You do not need to submit a bearer token with each request to the Archivist API. Instead, each Archivist URL contains a short-lived signed authorization token that performs authorization checks. The expiry time depends on the API endpoint you used to generate the Archivist link. As a result, you must treat Archivist URLs as secrets and avoid logging or sharing them.

### Use dynamic credentials

Storing static credentials in HCP Terraform increases the risk of a malicious user, or a compromised plan or apply operation, exposing your credentials. Because static credentials are usually long-lived and exposed in many locations, they are troublesome to revoke and replace. Using [dynamic provider credentials](/terraform/cloud-docs/dynamic-provider-credentials/) eliminates the need to store static credentials in HCP Terraform, reducing the risk of exposure.
Dynamic provider credentials generate new temporary credentials for each operation and expire after that operation completes.
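As a sketch of how this is configured for AWS (assuming the documented `TFC_AWS_*` environment variable names and a placeholder workspace ID), you enable dynamic credentials by setting environment variables on the workspace, for example through the variables API:

```shell
# Sketch: enable AWS dynamic provider credentials on a workspace by creating
# the TFC_AWS_PROVIDER_AUTH environment variable via the variables API.
# $WORKSPACE_ID and $TOKEN are placeholders.
cat <<'EOF' >dynamic-creds-var.json
{
  "data": {
    "type": "vars",
    "attributes": {
      "key": "TFC_AWS_PROVIDER_AUTH",
      "value": "true",
      "category": "env",
      "sensitive": false
    }
  }
}
EOF
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request POST \
#      --data @dynamic-creds-var.json \
#      https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/vars
# A second environment variable, TFC_AWS_RUN_ROLE_ARN, names the IAM role
# that each run should assume.
```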
# IP ranges

HCP Terraform uses static IP ranges for certain features, such as notifications and VCS connections, and you can retrieve these ranges through the [IP ranges API](/terraform/cloud-docs/api-docs/ip-ranges). The IP ranges API does not publish the ranges for workspaces or Stacks performing Terraform operations in [remote execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode). If you want to limit access to specific CIDRs when connecting to your infrastructure, configure your HCP Terraform [workspace](/terraform/cloud-docs/workspaces/settings#execution-mode) or [Stack](/terraform/cloud-docs/stacks/configure) to run Terraform operations in **Agent** execution mode using an [HCP Terraform agent](/terraform/cloud-docs/agents).

-> **Note:** The IP ranges for each feature returned by the IP Ranges API may overlap. Additionally, these published ranges do not currently allow for execution of Terraform runs against local resources, such as for CLI-driven runs.

Since HCP Terraform is a shared service, carefully consider the impact on your security posture before using these IP ranges to permit access to restricted resources. Additionally, these IP ranges may change. While changes are unlikely to be frequent, we strongly recommend checking the IP Ranges API every 24 hours for the most up-to-date information if you do choose to make use of these ranges.

-> **Note:** Under normal circumstances, HashiCorp will publish any expected changes to HCP Terraform's IP ranges at least 24 hours before implementing them, which should allow sufficient time for users to update any connected systems. In the event of an emergency outage or failover operation, it may not be possible to pre-publish these changes.
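A sketch of consuming the API follows. The endpoint path and the per-feature response keys follow the IP Ranges API documentation; the CIDR values below are RFC 5737 placeholder documentation ranges, not real HCP Terraform addresses:

```shell
# The real call requires network access but no authentication:
# curl -s https://app.terraform.io/api/meta/ip-ranges
# Abbreviated sample response shape, with placeholder CIDRs:
cat <<'EOF' >ip-ranges-sample.json
{
  "api": ["192.0.2.0/24"],
  "notifications": ["198.51.100.0/24", "192.0.2.0/24"],
  "sentinel": ["198.51.100.0/24"],
  "vcs": ["203.0.113.0/24"]
}
EOF
# Feature ranges may overlap, so deduplicate before building an allowlist.
python3 - <<'EOF'
import json
data = json.load(open("ip-ranges-sample.json"))
allowlist = sorted({cidr for ranges in data.values() for cidr in ranges})
print("\n".join(allowlist))
EOF
```

Deduplicating first keeps your firewall allowlist minimal even when several features publish the same range.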
---
title: "Documentation"
linkTitle: "Documentation"
menu:
  main:
    weight: 10
---

{{% pageinfo %}}
This is a placeholder page that shows you how to use this template site.
{{% /pageinfo %}}

This section is where the user documentation for your project lives: all the information your users need to understand and successfully use your project.

For large documentation sets we recommend adding content under the headings in this section, though if some or all of them don’t apply to your project feel free to remove them or add your own. You can see an example of a smaller Docsy documentation site in the [Docsy User Guide](https://docsy.dev/docs/), which lives in the [Docsy theme repo](https://github.com/google/docsy/tree/master/userguide) if you'd like to copy its docs section.

Other content such as marketing material, case studies, and community updates should live in the [About](/about/) and [Community](/community/) pages.

Find out how to use the Docsy theme in the [Docsy User Guide](https://docsy.dev/docs/). You can learn more about how to organize your documentation (and how we organized this site) in [Organizing Your Content](https://docsy.dev/docs/best-practices/organizing-content/).
[kustomization reference]: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/

This page will help you get started with this amazing tool called kustomize! You will start off with a simple nginx deployment manifest and then use it to explore kustomize basics.

### Create resource manifests and Kustomization

Let's start off by creating your nginx deployment and service manifests in a dedicated folder:

```bash
mkdir kustomize-example
cd kustomize-example
cat <<'EOF' >deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
EOF
cat <<'EOF' >service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
EOF
```

Now that you have your `deployment.yaml` and `service.yaml` files created, let's create your Kustomization. You can think of the Kustomization as the set of instructions that tell kustomize what it needs to do, and it is defined in a file named `kustomization.yaml`:

```bash
cat <<'EOF' >kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
EOF
```

In this kustomization file, you are telling kustomize to include `deployment.yaml` and `service.yaml` as its resources. If you now run `kustomize build .` from your current working directory, kustomize will generate a manifest containing the contents of your `deployment.yaml` and `service.yaml` files with no additional changes.

```yaml
$ kustomize build .
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```

### Customize resources

So far kustomize has not been used to make any modifications, so let's see how you can do that. Kustomize comes with a considerable number of transformers that apply changes to your manifests, and in this section you will have a look at the `namePrefix` transformer, which adds a prefix to the deployment and service names. Modify the `kustomization.yaml` file as follows:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: example- ### add this line
resources:
- deployment.yaml
- service.yaml
```

After re-building, you can see your modified manifest, which now has the prefixed deployment and service names:

```yaml
$ kustomize build .
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: example-nginx ### service name changed here
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: example-nginx ### deployment name changed here
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```

### Create variants using overlays

Now let's assume that you need to deploy the nginx manifests from the previous section to two environments called `Staging` and `Production`. The manifests for these two environments will be mostly identical, with only a few minor changes between them. These two mostly identical manifests are called "variants".
Traditionally, to create variants you could duplicate the manifests and apply the changes manually or rely on some templating engine. With kustomize you can avoid templating and duplication of your manifests and apply the
different changes you need using overlays. With this approach, the `base` contains what your environments have in common, and the `overlays` contain your environment-specific changes.

Create the `kustomization.yaml` files for your two overlays and move the files you have so far into `base`:

```bash
mkdir -p base overlays/staging overlays/production
mv deployment.yaml kustomization.yaml service.yaml base

cat <<'EOF' >overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
EOF

cat <<'EOF' >overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
EOF
```

The kustomization files for the overlays include just the `base` folder, so if you were to run `kustomize build` on the overlay folders at this point, you would get the same output as building `base` itself. It is important to note that bases are included in the `resources` field in the same way that your other deployment and service resource files were included.
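If you want to confirm that an overlay build is currently identical to the base build, you can compare the two outputs. A quick sketch, assuming `kustomize` is on your `PATH` and you run it from the `kustomize-example` folder:

```shell
# Render the base and the staging overlay, then compare the results.
# diff exits 0 and prints nothing when the two manifests are identical.
kustomize build base > /tmp/base.yaml
kustomize build overlays/staging > /tmp/staging.yaml
diff /tmp/base.yaml /tmp/staging.yaml && echo "overlay matches base"
```

Once you start customizing the overlays in the next section, this diff will show exactly which fields each overlay changes.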
The directory structure you created so far should look like this:

```
kustomize-example
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   └── kustomization.yaml
    └── staging
        └── kustomization.yaml
```

### Customizing overlays

For the purposes of this example, let's define some requirements for how your deployment should look in the two environments:

| Requirement | Production                    | Staging                    |
|-------------|-------------------------------|----------------------------|
| Name        | env1-example-nginx-production | env2-example-nginx-staging |
| Namespace   | production                    | staging                    |
| Replicas    | 3                             | 2                          |

You can achieve the required names by making use of `namePrefix` and `nameSuffix` as follows:

_kustomize-example/overlays/production/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env1-
nameSuffix: -production
resources:
- ../../base
```

_kustomize-example/overlays/staging/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env2-
nameSuffix: -staging
resources:
- ../../base
```

The build output for your `Production` overlay would now be:

```yaml
$ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: env1-example-nginx-production ### service name changed here
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: env1-example-nginx-production ### deployment name changed here
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```

It is important to note here that the names of _both_ the `deployment` and the `service` were updated with the `namePrefix` and `nameSuffix` defined.
If you had additional Kubernetes objects (like an `ingress`), their names would be updated as well.

Moving on to the next requirements, you can set the namespace and the number of replicas you want by using `namespace` and `replicas` respectively:

_kustomize-example/overlays/production/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env1-
nameSuffix: -production
namespace: production
replicas:
- name: example-nginx
  count: 3
resources:
- ../../base
```

_kustomize-example/overlays/staging/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env2-
nameSuffix: -staging
namespace: staging
replicas:
- name: example-nginx
  count: 2
resources:
- ../../base
```

Note that the deployment name referenced in `replicas` is the modified name that is output by `base`. Looking at the output of `kustomize build`, you can see that all the requirements that were set have been met:

_Production overlay build_:

```yaml
$ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: env1-example-nginx-production
  namespace: production ### namespace has been set to production
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: env1-example-nginx-production
  namespace: production ### namespace has been set to production
spec:
  replicas: 3 ### replicas have been updated from 1 to 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```
_Staging overlay build_:

```yaml
$ kustomize build overlays/staging/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: env2-example-nginx-staging
  namespace: staging ### namespace has been set to staging
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: env2-example-nginx-staging
  namespace: staging ### namespace has been set to staging
spec:
  replicas: 2 ### replicas have been updated from 1 to 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```

### Further customizations

Now that you have seen how kustomize works, let's add a few more requirements:

| Requirement | Production       | Staging         |
|-------------|------------------|-----------------|
| Image       | nginx:1.20.2     | nginx:latest    |
| Label       | variant=var1     | variant=var2    |
| Env Var     | ENVIRONMENT=prod | ENVIRONMENT=stg |

To keep the example brief, only the changes for the `Production` overlay will be shown; the updated overlay files and builds for both overlays are presented at the end.

The specific image tag can be set by making use of the `images` field. Add the following to the kustomization files in your overlays:

```yaml
images:
- name: nginx
  newTag: 1.20.2 ## For the Staging overlay set this to 'latest'
```

For setting the label, you can use the `labels` field.
Add the following to the kustomization files in your overlays:

```yaml
labels:
- pairs:
    variant: var1 ## For the Staging overlay set this to 'var2'
  includeSelectors: false # Setting this to false so that the label is not added to the selectors
  includeTemplates: true # Setting this to true makes the label available on the pod template as well, not just the deployment
```

At this point, the kustomization file for your `Production` overlay should be as follows:

_kustomize-example/overlays/production/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env1-
nameSuffix: -production
namespace: production
replicas:
- name: example-nginx
  count: 3
images:
- name: nginx
  newTag: 1.20.2
labels:
- pairs:
    variant: var1
  includeSelectors: false
  includeTemplates: true
resources:
- ../../base
```

Rebuilding the `Production` overlay gives the following:

```yaml
$ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    variant: var1 ### label has been set here
  name: env1-example-nginx-production
  namespace: production
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    variant: var1 ### label has been set here
  name: env1-example-nginx-production
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        variant: var1 ### label has been set here
    spec:
      containers:
      - image: nginx:1.20.2 ### image tag has been set to 1.20.2
        name: nginx
        ports:
        - containerPort: 80
```

The last requirement to meet is the environment variable, and to set it you will create a patch.
To do this, create the following file for the `Production` overlay:

```bash
cat <<'EOF' >overlays/production/patch-env-vars.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: ENVIRONMENT
          value: prod
EOF
```

Next, add a reference to that patch file in `kustomization.yaml`:

```yaml
patches:
- path: patch-env-vars.yaml
```

One important thing to note here is that the deployment name used in the patch is the name coming from the base, not the deployment name with the prefix and suffix added.
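As an aside, kustomize's `patches` field also accepts JSON6902 patches. A sketch of how the same environment variable could be added that way (this file is not part of the tutorial; the `target` selects the deployment by its base name, and the first container is addressed by index):

```yaml
patches:
- target:
    kind: Deployment
    name: example-nginx
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env
      value:
      - name: ENVIRONMENT
        value: prod
```

A strategic merge patch (as used above) merges by field and container name, while a JSON6902 patch addresses exact paths; the former is usually easier to read, the latter gives finer control (for example, removing fields).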
Rebuilding the overlay shows that the environment variable has been added to your container:

```yaml
$ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    variant: var1
  name: env1-example-nginx-production
  namespace: production
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    variant: var1
  name: env1-example-nginx-production
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        variant: var1
    spec:
      containers:
      - env:
        - name: ENVIRONMENT ### Environment variable has been added here
          value: prod
        image: nginx:1.20.2
        name: nginx
        ports:
        - containerPort: 80
```

Looking at the output of `kustomize build`, you can see that the additional requirements that were set have now been met.
Below are the files as they should be at this point in your overlays, together with the `kustomize build` output:

_kustomize-example/overlays/production/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env1-
nameSuffix: -production
namespace: production
replicas:
- name: example-nginx
  count: 3
images:
- name: nginx
  newTag: 1.20.2
labels:
- pairs:
    variant: var1
  includeSelectors: false
  includeTemplates: true
resources:
- ../../base
patches:
- path: patch-env-vars.yaml
```

_kustomize-example/overlays/production/patch-env-vars.yaml_:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: ENVIRONMENT
          value: prod
```

_kustomize-example/overlays/staging/kustomization.yaml_:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: env2-
nameSuffix: -staging
namespace: staging
replicas:
- name: example-nginx
  count: 2
images:
- name: nginx
  newTag: latest
labels:
- pairs:
    variant: var2
  includeSelectors: false
  includeTemplates: true
resources:
- ../../base
patches:
- path: patch-env-vars.yaml
```

_kustomize-example/overlays/staging/patch-env-vars.yaml_:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-nginx
spec:
  template:
    spec:
      containers:
      - name: nginx
        env:
        - name: ENVIRONMENT
          value: stg
```

_Production overlay build_:

```yaml
$ kustomize build overlays/production/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    variant: var1
  name: env1-example-nginx-production
  namespace: production
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    variant: var1
  name: env1-example-nginx-production
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        variant: var1
    spec:
      containers:
      - env:
        - name: ENVIRONMENT
          value: prod
        image: nginx:1.20.2
        name: nginx
        ports:
        - containerPort: 80
```

_Staging overlay build_:

```yaml
$ kustomize build overlays/staging/
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    variant: var2
  name: env2-example-nginx-staging
  namespace: staging
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    variant: var2
  name: env2-example-nginx-staging
  namespace: staging
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        variant: var2
    spec:
      containers:
      - env:
        - name: ENVIRONMENT
          value: stg
        image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
```

### Next steps

Congratulations on making it to the end of this tutorial. To summarize, these are the customizations that were presented:

- Add a name prefix and a name suffix
- Set the namespace for your resources
- Set the number of replicas for your deployment
- Set the image to use
- Add a label to your resources
- Add an environment variable to a container by using a patch

These are just a few of the things kustomize can do. If you are interested in learning more, the [kustomization reference] is your next step. You will see how you can use components to define base resources and add them to specific overlays where needed, use generators to create configMaps from files, and much more!
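To actually deploy a variant, you can pipe the build output to `kubectl`. A sketch, assuming a configured cluster and that the `production` and `staging` namespaces already exist (the overlays set the namespace on the resources but do not create it):

```shell
# Render a variant and apply it to the cluster.
kustomize build overlays/production | kubectl apply -f -
kustomize build overlays/staging | kubectl apply -f -

# kubectl also has kustomize built in, via the -k flag:
kubectl apply -k overlays/production
```

Note that the kustomize version bundled into `kubectl -k` may lag behind the standalone binary, so newer fields may only work with the standalone `kustomize build` form.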
## Binaries

Binaries at various versions for Linux, macOS and Windows are published on the [releases page].

The following [script] detects your OS and downloads the appropriate kustomize binary to your current working directory.

```bash
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
```

## Homebrew / MacPorts

For [Homebrew] users:

```bash
brew install kustomize
```

For [MacPorts] users:

```bash
sudo port install kustomize
```

## Chocolatey

```bash
choco install kustomize
```

For support on the Chocolatey package and prior releases, see:

- [Choco Package]
- [Package Source]

## Docker Images

Starting with Kustomize v3.8.7, docker images are available to run Kustomize. The image artifacts are hosted on Google Container Registry (GCR). See the [GCR page] for available images.

The following commands show how to pull and run the kustomize {{< example-version >}} docker image.

```bash
docker pull registry.k8s.io/kustomize/kustomize:{{< example-version >}}
docker run registry.k8s.io/kustomize/kustomize:{{< example-version >}} version
```

## Go Source

Requires [Go] to be installed.

### Install the kustomize CLI from source without cloning the repo

```bash
go install sigs.k8s.io/kustomize/kustomize/{{< example-major-version >}}
```

### Install the kustomize CLI from local source

```bash
# Clone the repo
git clone git@github.com:kubernetes-sigs/kustomize.git
# Get into the repo root
cd kustomize
# Optionally checkout a particular tag if you don't want to build at head
git checkout kustomize/{{< example-version >}}
# Build the binary
(cd kustomize; go install .)
# Run it - this assumes your Go bin (generally GOBIN or GOPATH/bin) is on your PATH
# See the Go documentation for more details: https://go.dev/doc/code
kustomize version
```

[Go]: https://golang.org
[releases page]: https://github.com/kubernetes-sigs/kustomize/releases
[script]: https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh
[GCR page]: https://us.gcr.io/k8s-artifacts-prod/kustomize/kustomize
[Homebrew]: https://brew.sh
[MacPorts]: https://www.macports.org
[Choco Package]: https://chocolatey.org/packages/kustomize
[Package Source]: https://github.com/kenmaglio/choco-kustomize
A common set of labels can be applied to all Resources in a project by adding a [`labels`] or [`commonLabels`] entry to the `kustomization.yaml` file. Similarly, a common set of annotations can be applied to Resources with the [`commonAnnotations`] field.

## Working with Labels

### Add Labels

[`labels`] can be used to add labels to the `metadata` field of all Resources in a project. This will override values for label keys that already exist. Here is an example of how to add labels to the `metadata` field.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   labels:
   - pairs:
       someName: someValue
       owner: alice
       app: bingo
   resources:
   - deploy.yaml
   - service.yaml
   ```

2. Create Deployment and Service manifests.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: example
   ```

3. Add labels with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the `labels` field is used to add labels to the `metadata` field of the Service and Deployment Resources.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
```

### Add Template Labels

[`labels.includeTemplates`] can be used to add labels to the template field of all applicable Resources in a project. Here is an example of how to add labels to the template field of a Deployment.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   labels:
   - pairs:
       someName: someValue
       owner: alice
       app: bingo
     includeTemplates: true
   resources:
   - deploy.yaml
   - service.yaml
   ```

2. Create Deployment and Service manifests.
   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: example
   ```

3. Add labels with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that labels are added to the `metadata` field and the `labels.includeTemplates` field is used to add labels to the template field of the Deployment. However, the [Service] Resource does not have a template field, and Kustomize does not add this field.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
spec:
  template:
    metadata:
      labels:
        app: bingo
        owner: alice
        someName: someValue
```

### Add Selector Labels

[`labels.includeSelectors`] can be used to add labels to the selector field of applicable Resources in a project. Note that this also adds labels to the template field for applicable Resources. Labels added to the selector field should not be changed after Workload and Service Resources have been created in a cluster. Here is an example of how to add labels to the selector field.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   labels:
   - pairs:
       someName: someValue
       owner: alice
       app: bingo
     includeSelectors: true
   resources:
   - deploy.yaml
   - service.yaml
   ```

2. Create Deployment and Service manifests.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: example
   ```

3. Add labels with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that labels are added to the `metadata` field and the `labels.includeSelectors` field is used to add labels to the selector and template fields for applicable Resources.
However, the [Service] Resource does not have a template field, and Kustomize does not add this field.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
spec:
  selector:
    app: bingo
    owner: alice
    someName: someValue
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bingo
    owner: alice
    someName: someValue
  name: example
spec:
  selector:
    matchLabels:
      app: bingo
      owner: alice
      someName: someValue
  template:
    metadata:
      labels:
        app: bingo
        owner: alice
        someName: someValue
```

The following example produces the same result. The [`commonLabels`] field is equivalent to using [`labels.includeSelectors`].

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  someName: someValue
  owner: alice
  app: bingo
resources:
- deploy.yaml
- service.yaml
```

## Working with Annotations

### Add Annotations

[`commonAnnotations`] can be used to add annotations to all Resources in a project. This will override values for annotation keys that already exist. Annotations are propagated to the Deployment Pod template. Here is an example of how to add annotations to a Deployment.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   commonAnnotations:
     oncallPager: 800-867-5309
   resources:
   - deploy.yaml
   ```

2. Create a Deployment manifest.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

3. Add annotations with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the `commonAnnotations` field is used to add annotations to a Deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  annotations:
    oncallPager: 800-867-5309
spec:
  template:
    metadata:
      annotations:
        oncallPager: 800-867-5309
```

[`labels`]: /docs/reference/api/kustomization-file/labels/
[`labels.includeTemplates`]: /docs/reference/api/kustomization-file/labels/
[`labels.includeSelectors`]: /docs/reference/api/kustomization-file/labels/
[`commonLabels`]: /docs/reference/api/kustomization-file/commonlabels/
[`commonAnnotations`]: /docs/reference/api/kustomization-file/commonannotations/
[Service]: https://kubernetes.io/docs/reference/kubernetes-api/service-resources/service-v1/
The Namespace can be set for all Resources in a project by adding the [`namespace`] entry to the `kustomization.yaml` file. Consistent naming conventions can be applied to Resource names in a project with the [`namePrefix`] and [`nameSuffix`] fields.

## Working with Namespaces

[`namespace`] sets the Namespace for all namespaced Resources in a project. This sets the Namespace for both generated Resources (e.g. ConfigMaps and Secrets) and non-generated Resources. This will override Namespace values that already exist.

### Add Namespace

Here is an example of how to set the Namespace of a Deployment and a generated ConfigMap. The ConfigMap is generated with [`configMapGenerator`].

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: my-namespace
   configMapGenerator:
   - name: my-config
     literals:
     - FOO=BAR
   resources:
   - deploy.yaml
   ```

2. Create a Deployment manifest.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

3. Add the Namespace with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the `namespace` field is used to set the Namespace of the Deployment and the generated ConfigMap.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  namespace: my-namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-m2mg5mb749
  namespace: my-namespace
data:
  FOO: BAR
```

## Working with Names

A prefix or suffix can be set for all Resources in a project with the [`namePrefix`] and [`nameSuffix`] fields. This sets a name prefix and suffix for both generated Resources (e.g. ConfigMaps and Secrets) and non-generated Resources.

Resources such as Deployments and StatefulSets may reference other Resources such as ConfigMaps and Secrets in the Pod Spec. The name prefix and suffix will also propagate to Resource references in a project.
Typical use cases include a Service reference from a StatefulSet, a ConfigMap reference from a Pod Spec, and a Secret reference from a Pod Spec.

### Add Name Prefix

[`namePrefix`] can be used to add a prefix to the name of all Resources in a project. Here is an example of how to add a prefix to a Deployment.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namePrefix: foo-
   resources:
   - deploy.yaml
   ```

2. Create a Deployment manifest.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

3. Add the name prefix with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the `namePrefix` field is used to add a prefix to the name of the Deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-example
```

### Add Name Suffix

[`nameSuffix`] can be used to add a suffix to the name of all Resources in a project. Here is an example of how to add a suffix to the name of a Deployment and a generated ConfigMap. The ConfigMap is generated with [`configMapGenerator`].

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   nameSuffix: -bar
   configMapGenerator:
   - name: my-config
     literals:
     - FOO=BAR
   resources:
   - deploy.yaml
   ```

2. Create a Deployment manifest.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: example
   ```

3. Add the name suffix with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the `nameSuffix` field is used to add a suffix to the name of the Deployment and the generated ConfigMap.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-config-bar-m2mg5mb749
data:
  FOO: BAR
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-bar
```

### Propagate Name Prefix to Resource Reference

[`namePrefix`] and [`nameSuffix`] propagate Resource name changes to Resource references in a project.
Here is an example of how the name prefix of a generated ConfigMap is propagated to the Pod Spec of a Deployment that references the ConfigMap to set a container environment variable.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namePrefix: foo-
   configMapGenerator:
   - name: special-config
     literals:
     - special.how=very
   resources:
   - deploy.yaml
   ```

2. Create a Deployment manifest. This Deployment is configured to set an environment variable in the `busybox` container using data from the generated ConfigMap.

   ```yaml
   # deploy.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: example
     name: example
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: example
     template:
       metadata:
         labels:
           app: example
       spec:
         containers:
         - image: registry.k8s.io/busybox
           name: busybox
           command: [ "/bin/sh", "-c", "env" ]
           env:
           - name: SPECIAL_LEVEL_KEY
             valueFrom:
               configMapKeyRef:
                 name: special-config
                 key: special.how
   ```

3. Add the name prefix with `kustomize build`.

   ```bash
   kustomize build .
   ```

The output shows that the name prefix is propagated to the ConfigMap name reference in the Deployment Pod Spec.
```yaml kind: ConfigMap apiVersion: v1 metadata: name: foo-special-config-9k6fhm8659 data: special.how: very --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: example name: foo-example spec: replicas: 1 selector: matchLabels: app: example template: metadata: labels: app: example spec: containers: - command: - /bin/sh - -c - env env: - name: SPECIAL\_LEVEL\_KEY valueFrom: configMapKeyRef: key: special.how name: foo-special-config-9k6fhm8659 image: registry.k8s.io/busybox name: busybox ``` [`namespace`]: /docs/reference/api/kustomization-file/namespace/ [`namePrefix`]: /docs/reference/api/kustomization-file/nameprefix/ [`nameSuffix`]: /docs/reference/api/kustomization-file/namesuffix/ [`configMapGenerator`]: /docs/reference/api/kustomization-file/configmapgenerator/ | https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Tasks/namespaces_and_names.md | master | kustomize | [
Secret objects can be generated by adding a [`secretGenerator`] entry to the `kustomization.yaml` file. This is similar to the [`configMapGenerator`]. Secret Resources may be generated from files and literals. It is important to note that the secret values are base64 encoded, not encrypted.

## Create Secret from a file

To generate a Secret Resource from a file, add an entry to [`secretGenerator`] with the filename. The Secret will have data values populated from the file contents. The contents of each file will appear as a single data item in the Secret keyed by the filename.

The following example generates a Secret with a data item containing the contents of a file.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   secretGenerator:
   - name: db-user-pass
     files:
     - credentials.txt
   ```

2. Create a `credentials.txt` file.

   ```
   # credentials.txt
   username=admin
   password=S!B\*d$zDsb=
   ```

3. Create the Secret using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The Secret manifest is generated.

   ```yaml
   apiVersion: v1
   kind: Secret
   type: Opaque
   metadata:
     name: db-user-pass-gf9bgh225c
   data:
     credentials.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9UyFCXCpkJHpEc2I9Cg==
   ```

   The credentials key value is base64 encoded.

   ```bash
   echo "dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9UyFCXCpkJHpEc2I9Cg==" | base64 -d
   username=admin
   password=S!B\*d$zDsb=
   ```

## Create Secret from literals

To generate a Secret Resource from literal key-value pairs, add an entry to [`secretGenerator`] with a list of `literals`.

{{< alert color="success" title="Literal Syntax" >}}
- The key/value are separated by a `=` sign (left side is the key).
- The value of each literal will appear as a data item in the Secret keyed by its key.
{{< /alert >}}

The following example generates a Secret with two data items generated from literals.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   secretGenerator:
   - name: db-user-pass
     literals:
     - username=admin
     - password=S!B\*d$zDsb=
   ```

2. Create the Secret using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The Secret manifest is generated.

   ```yaml
   apiVersion: v1
   kind: Secret
   type: Opaque
   metadata:
     name: db-user-pass-t8d2d65755
   data:
     password: UyFCXCpkJHpEc2I9
     username: YWRtaW4=
   ```

   The credential key values are base64 encoded.

   ```bash
   echo "UyFCXCpkJHpEc2I9" | base64 -d
   S!B\*d$zDsb=
   echo "YWRtaW4=" | base64 -d
   admin
   ```

## Create a TLS Secret

The following example generates a TLS Secret with certificate and private key data files.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   secretGenerator:
   - name: app-tls
     files:
     - "tls.crt"
     - "tls.key"
     type: "kubernetes.io/tls"
   ```

2. Create a certificate file.

   ```
   # tls.crt
   LS0tLS1CRUd...tCg==
   ```

3. Create a private key file.

   ```
   # tls.key
   LS0tLS1CRUd...0tLQo=
   ```

4. Create the Secret using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The Secret manifest is generated. The data key values are base64 encoded.

   ```yaml
   apiVersion: v1
   kind: Secret
   type: kubernetes.io/tls
   metadata:
     name: app-tls-c888dfbhf8
   data:
     tls.crt: TFMwdExTMUNSVWQuLi50Q2c9PQ==
     tls.key: TFMwdExTMUNSVWQuLi4wdExRbz0=
   ```

[`secretGenerator`]: /docs/reference/api/kustomization-file/secretgenerator/
[`configMapGenerator`]: /docs/reference/api/kustomization-file/configmapgenerator/
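Because base64 is an encoding rather than encryption, the round trip the generator performs on file contents can be reproduced directly in a shell. This sketch uses the same credentials as the file example above:

```shell
# Encode the example credentials the way secretGenerator stores them
# in the Secret's `data` field (base64 over the raw file bytes).
encoded=$(printf 'username=admin\npassword=S!B\\*d$zDsb=\n' | base64 | tr -d '\n')
echo "$encoded"
# → dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9UyFCXCpkJHpEc2I9Cg==

# The encoding is reversible: anyone with the manifest can recover the plaintext.
echo "$encoded" | base64 -d
```

This is why generated Secret manifests should be treated as sensitive and not committed to source control in plain form.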
ConfigMap objects can be generated by adding a [`configMapGenerator`] entry to the `kustomization.yaml` file.

## Create ConfigMap from a file

ConfigMap Resources may be generated from files - such as a java `.properties` file. To generate a ConfigMap Resource for a file, add an entry to [`configMapGenerator`] with the filename. The ConfigMap will have data values populated from the file contents. The contents of each file will appear as a single data item in the ConfigMap keyed by the filename.

The following example generates a ConfigMap with a data item containing the contents of a file.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: my-application-properties
     files:
     - application.properties
   ```

2. Create a `.properties` file.

   ```
   # application.properties
   FOO=Bar
   ```

3. Create the ConfigMap using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The output is similar to:

   ```yaml
   apiVersion: v1
   data:
     application.properties: |-
       FOO=Bar
   kind: ConfigMap
   metadata:
     name: my-application-properties-f7mm6mhf59
   ```

It is also possible to [define a key](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-the-key-to-use-when-creating-a-configmap-from-a-file) to use a name different from the filename. The example below creates a ConfigMap with the data key `myFileName.ini`, while the _actual_ filename from which the ConfigMap is created is `whatever.ini`.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: app-whatever
  files:
  - myFileName.ini=whatever.ini
```

## Create ConfigMap from literals

ConfigMap Resources may be generated from literal key-value pairs - such as `JAVA_HOME=/opt/java/jdk`. To generate a ConfigMap Resource from literal key-value pairs, add an entry to [`configMapGenerator`] with a list of `literals`.

{{< alert color="success" title="Literal Syntax" >}}
- The key/value are separated by a `=` sign (left side is the key).
- The value of each literal will appear as a data item in the ConfigMap keyed by its key.
{{< /alert >}}

The following example generates a ConfigMap with two data items generated from literals.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: my-java-server-env-vars
     literals:
     - JAVA_HOME=/opt/java/jdk
     - JAVA_TOOL_OPTIONS=-agentlib:hprof
   ```

2. Create the ConfigMap using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The output is similar to:

   ```yaml
   apiVersion: v1
   data:
     JAVA_HOME: /opt/java/jdk
     JAVA_TOOL_OPTIONS: -agentlib:hprof
   kind: ConfigMap
   metadata:
     name: my-java-server-env-vars-44k658k8gk
   ```

## Create ConfigMap from env file

ConfigMap Resources may be generated from key-value pairs much the same as using the literals option, but taking the key-value pairs from an environment file. These generally end in `.env`. To generate a ConfigMap Resource from an environment file, add an entry to [`configMapGenerator`] with a single `envs` entry, e.g. `envs: [ 'config.env' ]`.

{{< alert color="success" title="Environment File Syntax" >}}
- The key/value pairs inside of the environment file are separated by a `=` sign (left side is the key).
- The value of each line will appear as a data item in the ConfigMap keyed by its key.
- Pairs may span a single line only.
{{< /alert >}}

The following example generates a ConfigMap with three data items generated from an environment file.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: tracing-options
     envs:
     - tracing.env
   ```

2. Create an environment file.

   ```bash
   # tracing.env
   ENABLE_TRACING=true
   SAMPLER_TYPE=probabilistic
   SAMPLER_PARAMETERS=0.1
   ```

3. Create the ConfigMap using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The output is similar to:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     # The name has had a suffix applied
     name: tracing-options-6bh8gkdf7k
   # The data has been populated from each literal pair
   data:
     ENABLE_TRACING: "true"
     SAMPLER_TYPE: "probabilistic"
     SAMPLER_PARAMETERS: "0.1"
   ```
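The env-file rules above can be illustrated with a plain shell loop. This is a sketch only - not Kustomize's actual parser - showing how each `KEY=VALUE` line maps to a ConfigMap data entry:

```shell
# Illustrative only: split each env-file line on the first '=' sign.
# Anything after the first '=' (including further '=' signs) belongs
# to the value, which matches the "left side is the key" rule above.
printf 'ENABLE_TRACING=true\nSAMPLER_TYPE=probabilistic\nSAMPLER_PARAMETERS=0.1\n' |
while IFS='=' read -r key value; do
  echo "$key: \"$value\""
done
```

The printed lines mirror the `data` section of the generated ConfigMap shown above.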
## Create ConfigMap with options

The labels and annotations of a generated ConfigMap can be set with the `options` field. The name suffix hash can also be disabled. Labels and annotations added with `options` will not be overwritten by values defined in the `generatorOptions` field. Note that `disableNameSuffixHash: true` defined in `generatorOptions` will override the locally defined `options`. This is a result of boolean behavior.

The following example generates a ConfigMap with labels and annotations, and does not add a suffix hash to the name.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   generatorOptions:
     labels:
       fruit: apple
   configMapGenerator:
   - name: my-java-server-env-vars
     literals:
     - JAVA_HOME=/opt/java/jdk
     - JAVA_TOOL_OPTIONS=-agentlib:hprof
     options:
       disableNameSuffixHash: true
       labels:
         pet: dog
       annotations:
         dashboard: "1"
   ```

2. Create the ConfigMap using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The ConfigMap manifest is created with labels and annotations.

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: my-java-server-env-vars
     labels:
       fruit: apple
       pet: dog
     annotations:
       dashboard: "1"
   data:
     JAVA_HOME: /opt/java/jdk
     JAVA_TOOL_OPTIONS: -agentlib:hprof
   ```

## Override base ConfigMap values

ConfigMap values from bases may be overridden by adding another generator for the ConfigMap in the overlay and specifying the `behavior` field. `behavior` may be one of:

* `create` (default value): used to create a new ConfigMap. A name conflict error will be thrown if a ConfigMap with the same name and namespace already exists.
* `replace`: replace an existing ConfigMap from the base.
* `merge`: add or update the values in an existing ConfigMap from the base.
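As a hedged sketch (the names here are illustrative, not from the docs), an overlay that adds or updates keys in a base ConfigMap with `behavior: merge` might look like:

```yaml
# overlay/kustomization.yaml (illustrative names)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
configMapGenerator:
- name: existing-name   # must match the ConfigMap declared in the base
  behavior: merge       # add or update keys; other base keys are kept
  literals:
  - LOG_LEVEL=debug
```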
When updating an existing ConfigMap with the `merge` or `replace` strategies, you must ensure that both the name and namespace match the ConfigMap you are targeting. For example, if the namespace is unspecified in the base, you should not specify it in the overlay. Conversely, if it is specified in the base, you must specify it in the overlay as well. This is true even if the overlay Kustomization includes a namespace, because `configMapGenerator` runs before the namespace transformer.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-new-namespace
resources:
- ../base
configMapGenerator:
- name: existing-name
  namespace: existing-ns # needs to match target ConfigMap from base
  behavior: replace
  literals:
  - ENV=dev
```

{{< alert color="warning" title="Name suffixing with overlay configMapGenerator" >}}
When using `configMapGenerator` to override values of an existing ConfigMap, the overlay `configMapGenerator` does not cause suffixing of the existing ConfigMap's name to occur. To take advantage of name suffixing, use `configMapGenerator` in the base, and the overlay generator will correctly update the suffix based on the new content.
{{< /alert >}}

## Propagate ConfigMap Name Suffix

Workloads that reference the ConfigMap or Secret will need to know the name of the generated Resource, including the suffix. Kustomize takes care of this automatically by identifying references to generated ConfigMaps and Secrets, and updating them.

In the following example, the generated ConfigMap name will be `my-java-server-env-vars` with a suffix unique to its contents. Changes to the contents will change the name suffix, resulting in the creation of a new ConfigMap, which Kustomize will transform Workloads to point to.

The PodTemplate volume references the ConfigMap by the name specified in the generator (excluding the suffix). Kustomize will update the name to include the suffix applied to the ConfigMap name.
The following example generates a ConfigMap and propagates the ConfigMap name, including the suffix, to a Deployment that mounts the ConfigMap.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: my-java-server-env-vars
     literals:
     - JAVA_HOME=/opt/java/jdk
     - JAVA_TOOL_OPTIONS=-agentlib:hprof
   resources:
   - deployment.yaml
   ```

2. Create a Deployment manifest.

   ```yaml
   # deployment.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: test-deployment
     labels:
       app: test
   spec:
     selector:
       matchLabels:
         app: test
     template:
       metadata:
         labels:
           app: test
       spec:
         containers:
         - name: container
           image: registry.k8s.io/busybox
           command: [ "/bin/sh", "-c", "ls /etc/config/" ]
           volumeMounts:
           - name: config-volume
             mountPath: /etc/config
         volumes:
         - name: config-volume
           configMap:
             name: my-java-server-env-vars
   ```

3. Create the ConfigMap using `kustomize build`.

   ```bash
   kustomize build .
   ```

   The output is similar to:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     # The name has been updated to include the suffix
     name: my-java-server-env-vars-k44mhd6h5f
   data:
     JAVA_HOME: /opt/java/jdk
     JAVA_TOOL_OPTIONS: -agentlib:hprof
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: test
     name: test-deployment
   spec:
     selector:
       matchLabels:
         app: test
     template:
       metadata:
         labels:
           app: test
       spec:
         containers:
         - command:
           - /bin/sh
           - -c
           - ls /etc/config/
           image: registry.k8s.io/busybox
           name: container
           volumeMounts:
           - mountPath: /etc/config
             name: config-volume
         volumes:
         - configMap:
             # The name has been updated to include the
             # suffix matching the ConfigMap
             name: my-java-server-env-vars-k44mhd6h5f
           name: config-volume
   ```

[`configMapGenerator`]: /docs/reference/api/kustomization-file/configmapgenerator/
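The suffix is a hash of the generated content, so any content change yields a new name, and Kustomize repoints referencing workloads at it. Kustomize computes its own hash (the real suffixes like `k44mhd6h5f` are not sha256 digests); this sketch only illustrates the idea of a content-derived name:

```shell
# Conceptual sketch only: a content-derived suffix changes whenever
# the data changes, so a changed ConfigMap gets a brand-new name.
suffix() { printf '%s' "$1" | sha256sum | cut -c1-10; }

old=$(suffix 'JAVA_HOME=/opt/java/jdk')
new=$(suffix 'JAVA_HOME=/opt/java/jdk11')

echo "my-java-server-env-vars-$old"
echo "my-java-server-env-vars-$new"   # different content, different name
```

Because a new name is a new object, rollouts happen naturally: Pods referencing the old ConfigMap keep it until they are replaced by Pods referencing the new one.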
Kustomize build information can be added to resource labels or annotations with the [`buildMetadata`] field.

## Add Managed By Label

Specify the `managedByLabel` option in the `buildMetadata` field to mark the resource as having been managed by Kustomize.

The following example adds the `app.kubernetes.io/managed-by` label to a resource.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
   - service.yaml
   buildMetadata:
   - managedByLabel
   ```

2. Create a Service manifest.

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
   spec:
     ports:
     - port: 7002
   ```

3. Add the label with `kustomize build`.

   The output shows that the `managedByLabel` option adds the `app.kubernetes.io/managed-by` label with Kustomize build information.

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
     labels:
       app.kubernetes.io/managed-by: kustomize-v5.2.1
   spec:
     ports:
     - port: 7002
   ```

## Add Origin Annotation with Local Resource

Specify the `originAnnotations` option in the `buildMetadata` field to annotate resources with information about their origin. The possible fields of these annotations are:

- `path`: The path to the resource file itself.
- `ref`: If from a remote file or generator, the git reference of the repository URL.
- `repo`: If from a remote file or generator, the repository source.
- `configuredIn`: The path to the generator configuration for a generated resource. This would point to the Kustomization file itself if a generator is invoked via a field.
- `configuredBy`: The ObjectReference of the generator configuration for a generated resource.

If the resource is from the `resources` field, this annotation contains data about the file it originated from. All local file paths are relative to the top-level Kustomization, i.e. the Kustomization file in the directory upon which `kustomize build` was invoked. For example, if someone were to run `kustomize build foo`, all file paths in the annotation output would be relative to `foo/kustomization.yaml`. All remote file paths are relative to the root of the remote repository. Any fields that are not applicable are omitted from the final output.

The following example adds the `config.kubernetes.io/origin` annotation to a non-generated resource defined in a local file.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
   - service.yaml
   buildMetadata:
   - originAnnotations
   ```

2. Create a Service manifest.

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
   spec:
     ports:
     - port: 7002
   ```

3. Add the origin annotation with `kustomize build`.

   The output shows that the `originAnnotations` option adds the `config.kubernetes.io/origin` annotation with Kustomize build information.

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
     annotations:
       config.kubernetes.io/origin: |
         path: service.yaml
   spec:
     ports:
     - port: 7002
   ```

## Add Origin Annotation with Local Generator

Generated resources will receive an annotation containing data about the generator that produced them with the `originAnnotations` option.

The following example adds the `config.kubernetes.io/origin` annotation to a generated resource.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: my-java-server-env-vars
     literals:
     - JAVA_HOME=/opt/java/jdk
     - JAVA_TOOL_OPTIONS=-agentlib:hprof
   buildMetadata:
   - originAnnotations
   ```

2. Generate a ConfigMap that includes an origin annotation with `kustomize build`.

   The output shows that the `originAnnotations` option adds the `config.kubernetes.io/origin` annotation with information about the local ConfigMapGenerator that generated the ConfigMap.

   ```yaml
   kind: ConfigMap
   apiVersion: v1
   metadata:
     name: my-java-server-env-vars-c68g99m4hf
     annotations:
       config.kubernetes.io/origin: |
         configuredIn: kustomization.yaml
         configuredBy:
           kind: ConfigMapGenerator
           apiVersion: builtin
   data:
     JAVA_HOME: /opt/java/jdk
     JAVA_TOOL_OPTIONS: -agentlib:hprof
   ```

## Add Origin Annotation with Remote Generator
A remote file or generator will receive an annotation containing the repository URL and git reference with the `originAnnotations` option.

The following example adds the `config.kubernetes.io/origin` annotation to a resource generated with a remote generator.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
   - github.com/examplerepo/?ref=v1.0.6
   buildMetadata:
   - originAnnotations
   ```

2. This example uses a remote base with the following Kustomization.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   configMapGenerator:
   - name: my-java-server-env-vars
     literals:
     - JAVA_HOME=/opt/java/jdk
     - JAVA_TOOL_OPTIONS=-agentlib:hprof
   ```

3. Generate a ConfigMap that includes an origin annotation with `kustomize build`.

   The output shows that the `originAnnotations` option adds the `config.kubernetes.io/origin` annotation with build information about the remote ConfigMapGenerator that generated the ConfigMap.

   ```yaml
   kind: ConfigMap
   apiVersion: v1
   metadata:
     name: my-java-server-env-vars-44k658k8gk
     annotations:
       config.kubernetes.io/origin: |
         ref: v1.0.6
         repo: github.com/examplerepo
         configuredIn: kustomization.yaml
         configuredBy:
           kind: ConfigMapGenerator
           apiVersion: builtin
   data:
     JAVA_HOME: /opt/java/jdk
     JAVA_TOOL_OPTIONS: -agentlib:hprof
   ```

## Add Annotation with Local Transformer

**FEATURE STATE**: [alpha]

While this field is in alpha, it will receive the `alpha` prefix, so you will see the annotation key `alpha.config.kubernetes.io/transformations` instead. We are not guaranteeing that the annotation content will be stable during alpha, and reserve the right to make changes as we evolve the feature.

Add the `transformerAnnotations` option to the `buildMetadata` field to annotate resources with information about the transformers that have acted on them.

When the `transformerAnnotations` option is set, Kustomize will add annotations with information about what transformers have acted on each resource. Transformers can be invoked either through various fields in the Kustomization file (e.g. the `replacements` field will invoke the ReplacementTransformer), or through the `transformers` field.

The annotation key for transformer annotations will be `alpha.config.kubernetes.io/transformations`, which will contain a list of transformer data. The possible fields in each item in this list are identical to the possible fields in `config.kubernetes.io/origin`, except that the transformer annotation does not have a `path` field:

- `ref`: If from a remote file or generator, the git reference of the repository URL.
- `repo`: If from a remote file or generator, the repository source.
- `configuredIn`: The path to the transformer configuration. This would point to the Kustomization file itself if a transformer is invoked via a field.
- `configuredBy`: The ObjectReference of the transformer configuration.

All local file paths are relative to the top-level Kustomization. This behavior is similar to how the `originAnnotations` option works.

The following example adds the `alpha.config.kubernetes.io/transformations` annotation to a resource updated with the NamespaceTransformer.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: app
   resources:
   - service.yaml
   buildMetadata:
   - transformerAnnotations
   ```

2. Create a Service manifest.

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
   spec:
     ports:
     - port: 7002
   ```

3. Add the transformer annotation with `kustomize build`.

   The output shows that the `transformerAnnotations` option adds the `alpha.config.kubernetes.io/transformations` annotation with build information about the transformer that updated the resource.

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
     namespace: app
     annotations:
       alpha.config.kubernetes.io/transformations: |
         - configuredIn: kustomization.yaml
           configuredBy:
             apiVersion: builtin
             kind: NamespaceTransformer
   spec:
     ports:
     - port: 7002
   ```

## Add Annotation with Local and Remote Transformer

The following example adds the `alpha.config.kubernetes.io/transformations` annotation to a resource updated by a local and a remote transformer.

1. Create a Kustomization file.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   namespace: app
   resources:
   - github.com/examplerepo/?ref=v1.0.6
   buildMetadata:
   - transformerAnnotations
   ```
2. This example uses a remote base with the following Kustomization.

   ```yaml
   # kustomization.yaml
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   resources:
   - service.yaml
   namePrefix: pre-
   ```

   The `service.yaml` contains the following:

   ```yaml
   # service.yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: myService
   spec:
     ports:
     - port: 7002
   ```

3. Run `kustomize build`.

   The output shows that the `transformerAnnotations` option adds the `alpha.config.kubernetes.io/transformations` annotation with build information about the transformers that updated the resource.

   ```yaml
   apiVersion: v1
   kind: Service
   metadata:
     name: pre-myService
     namespace: app
     annotations:
       alpha.config.kubernetes.io/transformations: |
         - ref: v1.0.6
           repo: github.com/examplerepo
           configuredIn: kustomization.yaml
           configuredBy:
             kind: PrefixSuffixTransformer
             apiVersion: builtin
         - configuredIn: kustomization.yaml
           configuredBy:
             kind: NamespaceTransformer
             apiVersion: builtin
   spec:
     ports:
     - port: 7002
   ```

[`buildMetadata`]: /docs/reference/api/kustomization-file/buildmetadata/
---
title: "Tutorials"
linkTitle: "Tutorials"
weight: 6
date: 2017-01-04
description: >
  Complete worked examples that guide you through realistic scenarios.
---

Tutorials are **complete worked examples** made up of **multiple tasks** that guide the user through a relatively simple but realistic scenario: building an application that uses some of your project's features, for example. If you have already created some Examples for your project you can base Tutorials on them.

This section is **optional**. However, remember that although you may not need this section at first, having tutorials can be useful to help your users engage with your example code, especially if there are aspects that need more explanation than you can easily provide in code comments.
{{< alert color="success" title="TL;DR" >}} - Kustomize helps customizing config files in a template free way. - Kustomize provides a number of handy methods like generators to make customization easier. - Kustomize uses patches to introduce environment specific changes on an already existing standard config file without disturbing it. {{< /alert >}} Kustomize provides a solution for customizing Kubernetes resource configuration free from templates and DSLs. Kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is. Kustomize targets kubernetes; it understands and can patch `kubernetes style` API objects. It's like [make](https://www.gnu.org/software/make), in that what it does is declared in a file, and it's like [sed](https://www.gnu.org/software/sed), in that it emits edited text. ## Usage ### 1) Make a `kustomization` file In some directory containing your YAML `resource` files (deployments, services, configmaps, etc.), create a `kustomization` file. This file should declare those resources, and any customization to apply to them, e.g. \_add a common label\_. File structure: ``` ~/someApp ├── deployment.yaml ├── kustomization.yaml └── service.yaml ``` The resources in this directory could be a fork of someone else's configuration. If so, you can easily rebase from the source material to capture improvements, because you don't modify the resources directly. Generate customized YAML with: ``` kustomize build ~/someApp ``` The YAML can be directly `applied` to a cluster: ``` kustomize build ~/someApp | kubectl apply -f - ``` ### 2) Create `variants` using `overlays` Manage traditional `variants` of a configuration - like \_development\_, \_staging\_ and \_production\_ - using `overlays` that modify a common `base`. 
File structure:

```
~/someApp
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml
```

Take the work from step (1) above, move it into a `someApp` subdirectory called `base`, then place overlays in a sibling directory.

An overlay is just another kustomization, referring to the base, and referring to patches to apply to that base.

This arrangement makes it easy to manage your configuration with `git`. The base could have files from an upstream repository managed by someone else. The overlays could be in a repository you own. Arranging the repo clones as siblings on disk avoids the need for git submodules (though that works fine, if you are a submodule fan).

Generate YAML with:

```sh
kustomize build ~/someApp/overlays/production
```

The YAML can be directly `applied` to a cluster:

```sh
kustomize build ~/someApp/overlays/production | kubectl apply -f -
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Overview/_index.md
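As an illustration of what an overlay contains, a production overlay's `kustomization.yaml` might look like the following sketch (the patch file names match the tree above; their contents are hypothetical):

```yaml
# overlays/production/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Refer back to the shared base from step (1).
resources:
- ../../base
# Apply production-specific patches on top of the base.
patches:
- path: replica_count.yaml
- path: cpu_count.yaml
```

Running `kustomize build ~/someApp/overlays/production` would then emit the base resources with the production patches applied, while the base itself stays untouched.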
This overview covers `kustomize` syntax, describes the command operations, and provides common examples.

## Syntax

Use the following syntax to run `kustomize` commands from your terminal window:

```bash
kustomize [command]
```

The `command` specifies the operation that you want to perform, for example `create`, `build`, `cfg`.

If you need help, run `kustomize help` from the terminal window.

## Operations

The following table includes short descriptions and the general syntax for all the `kustomize` operations.

Operation | Syntax | Description
--- | --- | ---
build | `kustomize build DIR [flags]` | Build a kustomization target from a directory or URL.
cfg | `kustomize cfg [command]` | Commands for reading and writing configuration.
completion | `kustomize completion [bash\|zsh\|fish\|powershell]` | Generate shell completion script.
create | `kustomize create [flags]` | Create a new kustomization in the current directory.
edit | `kustomize edit [command]` | Edit a kustomization file.
fn | `kustomize fn [command]` | Commands for running functions against configuration.
localize | `kustomize localize [target [destination]] [flags]` | [Alpha] Create a localized copy of the target kustomization root at the destination.
version | `kustomize version [flags]` | Print the kustomize version.

## Examples: Common Operations

Use the following set of examples to help you familiarize yourself with running the commonly used `kustomize` operations:

`kustomize build` - Build a kustomization target from a directory or URL.

```bash
# Build the current working directory
kustomize build

# Build some shared configuration directory
kustomize build /home/config/production

# Build from github
kustomize build https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
```

`kustomize create` - Create a new kustomization in the current directory.

```bash
# Create an empty kustomization.yaml file
kustomize create

# Create a new overlay from the base '../base'
kustomize create --resources ../base

# Create a new kustomization detecting resources in the current directory
kustomize create --autodetect

# Create a new kustomization with multiple resources and fields set
kustomize create --resources deployment.yaml,service.yaml,../base --namespace staging --nameprefix acme-
```

`kustomize edit` - Edit a kustomization file.

```bash
# Add a configmap to the kustomization file
kustomize edit add configmap NAME --from-literal=k=v

# Set the nameprefix field
kustomize edit set nameprefix

# Set the namesuffix field
kustomize edit set namesuffix
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/CLI/_index.md
Images modify the name, tags and/or digest for images without creating patches. One can change the `image` in the following ways (refer to the following example to see exactly how this is done):

- `postgres:8` to `my-registry/my-postgres:v1`
- nginx tag `1.7.9` to `1.8.0`
- image name `my-demo-app` to `my-app`
- alpine's tag `3.7` to a digest value

It is possible to set image tags for container images through the `kustomization.yaml` using the `images` field. When `images` are specified, Apply will override the images whose image name matches `name` with a new tag.

| Field | Description | Example Field | Example Result |
|-----------|-----------|----------|-----|
| `name` | Match images with this image name | `name: nginx` | |
| `newTag` | Override the image **tag** or **digest** for images whose image name matches `name` | `newTag: new` | `nginx:old` -> `nginx:new` |
| `newName` | Override the image **name** for images whose image name matches `name` | `newName: nginx-special` | `nginx:old` -> `nginx-special:old` |

## Example

### File Input

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  template:
    spec:
      containers:
      - name: mypostgresdb
        image: postgres:8
      - name: nginxapp
        image: nginx:1.7.9
      - name: myapp
        image: my-demo-app:latest
      - name: alpine-app
        image: alpine:3.7
```

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: postgres
  newName: my-registry/my-postgres
  newTag: v1
- name: nginx
  newTag: 1.8.0
- name: my-demo-app
  newName: my-app
- name: alpine
  digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
resources:
- deployment.yaml
```

### Build Output

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  template:
    spec:
      containers:
      - image: my-registry/my-postgres:v1
        name: mypostgresdb
      - image: nginx:1.8.0
        name: nginxapp
      - image: my-app:latest
        name: myapp
      - image: alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
        name: alpine-app
```

## Setting a Name

The name for an image may be set by specifying `newName` and the name of the old container image.

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: mycontainerregistry/myimage
  newName: differentregistry/myimage
```

## Setting a Tag

The tag for an image may be set by specifying `newTag` and the name of the container image.

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: mycontainerregistry/myimage
  newTag: v1
```

## Setting a Digest

The digest for an image may be set by specifying `digest` and the name of the container image.

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: alpine
  digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
```

## Setting a Tag from the latest commit SHA

A common CI/CD pattern is to tag container images with the git commit SHA of source code. e.g. if the image name is `foo` and an image was built for the source code at commit `1bb359ccce344ca5d263cd257958ea035c978fd3` then the container image would be `foo:1bb359ccce344ca5d263cd257958ea035c978fd3`.

A simple way to push an image that was just built without manually updating the image tags is to [download the kustomize standalone](/docs/getting-started/installation/) tool and run the `kustomize edit set image` command to update the tags for you.

**Example:** Set the latest git commit SHA as the image tag for `foo` images.

```bash
kustomize edit set image foo:$(git log -n 1 --pretty=format:"%H")
kubectl apply -f .
```

## Setting a Tag from an Environment Variable

It is also possible to set a tag from an environment variable using the same technique as setting it from a commit SHA.
**Example:** Set the tag for the `foo` image to the value in the environment variable `FOO_IMAGE_TAG`.

```bash
kustomize edit set image foo:$FOO_IMAGE_TAG
kubectl apply -f .
```

{{< alert color="success" title="Committing Image Tag Updates" >}}
The `kustomization.yaml` changes *may* be committed back to git so that they can be audited. When committing the image tag updates that have already been pushed by a CI/CD system, be careful not to trigger new builds + deployments for these changes.
{{< /alert >}}

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/images.md
The `sortOptions` field is used to sort the resources kustomize outputs. It is available in kustomize v5.0.0+.

IMPORTANT:
- Currently, this field is respected only in the top-level Kustomization (that is, the immediate target of `kustomize build`). Any instances of the field in Kustomizations further down the build chain (for example, in bases included through the `resources` field) will be ignored.
- This field is the endorsed way to sort resources. It should be used instead of the `--reorder` CLI flag, which is deprecated.

Currently, we support the following sort options:
- `legacy`
- `fifo`

```yaml
kind: Kustomization
sortOptions:
  order: legacy | fifo # "legacy" is the default
```

## FIFO Sorting

In `fifo` order, kustomize does not change the order of resources. They appear in the order they are loaded in `resources`.

### Example 1: FIFO Sorting

```yaml
kind: Kustomization
sortOptions:
  order: fifo
```

## Legacy Sorting

The `legacy` sort is the default order, and is used when the sortOrder field is unspecified. In `legacy` order, kustomize sorts resources by using two priority lists:

- An `orderFirst` list for resources which should be first in the output.
- An `orderLast` list for resources which should be last in the output.
- Resources not on the lists will appear in between, sorted using their apiVersion and kind fields.

### Example 2: Legacy Sorting with orderFirst / orderLast lists

In this example, we use the `legacy` sort order to output `Namespace` objects first and `Deployment` objects last.

```yaml
kind: Kustomization
sortOptions:
  order: legacy
  legacySortOptions:
    orderFirst:
    - Namespace
    orderLast:
    - Deployment
```

### Example 3: Default Legacy Sorting

If you specify `legacy` sort order without any arguments for the lists, kustomize will fall back to the lists we were using before introducing this feature. Since legacy sort is the default, this is also equivalent to not specifying the field at all.

These two configs are equivalent:

```yaml
kind: Kustomization
sortOptions:
  order: legacy
```

is equivalent to:

```yaml
kind: Kustomization
sortOptions:
  order: legacy
  legacySortOptions:
    orderFirst:
    - Namespace
    - ResourceQuota
    - StorageClass
    - CustomResourceDefinition
    - ServiceAccount
    - PodSecurityPolicy
    - Role
    - ClusterRole
    - RoleBinding
    - ClusterRoleBinding
    - ConfigMap
    - Secret
    - Endpoints
    - Service
    - LimitRange
    - PriorityClass
    - PersistentVolume
    - PersistentVolumeClaim
    - Deployment
    - StatefulSet
    - CronJob
    - PodDisruptionBudget
    orderLast:
    - MutatingWebhookConfiguration
    - ValidatingWebhookConfiguration
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/sortOptions.md
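To make the difference concrete, combining `fifo` order with an explicit `resources` list pins the build output to the load order (the file names in this sketch are hypothetical):

```yaml
kind: Kustomization
sortOptions:
  order: fifo
# With fifo ordering, build output follows this exact sequence,
# whereas legacy ordering would emit the Namespace first.
resources:
- deployment.yaml
- namespace.yaml
- service.yaml
```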
Each entry in this list must be a path to a _file_, or a path (or URL) referring to another kustomization _directory_, e.g.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- myNamespace.yaml
- sub-dir/some-deployment.yaml
- ../../commonbase
- github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6
- deployment.yaml
- github.com/kubernetes-sigs/kustomize/examples/helloWorld?ref=test-branch
```

Resources will be read and processed in depth-first order.

Files should contain k8s resources in YAML form. A file may contain multiple resources separated by the document marker `---`. File paths should be specified _relative_ to the directory holding the kustomization file containing the `resources` field.

Directory specification can be relative, absolute, or part of a URL. URL specifications should follow the [hashicorp URL] format. The directory must contain a `kustomization.yaml` file.

[hashicorp URL]: https://github.com/hashicorp/go-getter#url-format

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/resources.md
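To illustrate the `---` document marker mentioned above, a single file listed in `resources` can carry several objects (the names in this sketch are hypothetical):

```yaml
# app.yaml - one file, two resources, loaded by a single `resources` entry
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  ports:
  - port: 80
```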
Each entry in this list should be a relative path to a file for a custom resource definition (CRD).

The presence of this field allows kustomize to be aware of CRDs and apply proper transformations to any objects of those types.

Typical use case: a CRD object refers to a ConfigMap object. In a kustomization, the ConfigMap object name may change by adding namePrefix, nameSuffix, or hashing. The name reference for this ConfigMap object in the CRD object needs to be updated with namePrefix, nameSuffix, or hashing in the same way.

The annotations that can be put into openAPI definitions are:

- "x-kubernetes-annotation": ""
- "x-kubernetes-label-selector": ""
- "x-kubernetes-identity": ""
- "x-kubernetes-object-ref-api-version": "v1"
- "x-kubernetes-object-ref-kind": "Secret"
- "x-kubernetes-object-ref-name-key": "name"

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
crds:
- crds/typeA.yaml
- crds/typeB.yaml
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/crds.md
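As a rough sketch of how the object-ref annotations above might appear inside a CRD's schema — the kind `MyKind`, the `secretRef` property, and the exact placement are hypothetical and should be checked against the kustomize docs — an object reference to a Secret could be annotated like this:

```yaml
# crds/typeA.yaml (hypothetical fragment)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mykinds.example.com
spec:
  group: example.com
  names:
    kind: MyKind
    plural: mykinds
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              secretRef:
                type: object
                # These annotations tell kustomize this field is a
                # name reference to a v1 Secret, keyed by "name",
                # so name-prefix/suffix/hash transforms propagate here.
                x-kubernetes-object-ref-api-version: v1
                x-kubernetes-object-ref-kind: Secret
                x-kubernetes-object-ref-name-key: name
```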
[kustomize builtins]: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_
[Helm support long term plan]: https://github.com/kubernetes-sigs/kustomize/issues/4401

## Helm Chart Inflation Generator

Kustomize has limited support for helm chart inflation through the `helmCharts` field. You can read a detailed description of this field in the docs about [kustomize builtins].

To enable the helm chart inflation generator, you have to specify the `enable-helm` flag as follows:

```sh
kustomize build --enable-helm
```

## Long term support

The helm chart inflation generator in kustomize is intended to be a limited subset of helm features to help with getting started with kustomize, and we cannot support the entire helm feature set.

### The current builtin

For enhancements to the helm chart inflation generator feature, we will only support the following changes:

- bug fixes
- critical security issues
- additional fields that are analogous to flags passed to `helm template`, except for flags such as `post-renderer` that allow arbitrary commands to be executed

We will not add support for:

- private repository or registry authentication
- OCI registries
- other large features that increase the complexity of the feature and/or have significant security implications

### Future support

The next iteration of the helm inflation generator will take the form of a KRM function, which will have no such restrictions on what types of features we can add and support. You can see more details in the [Helm support long term plan].

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/helmCharts.md
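A minimal sketch of the `helmCharts` field follows; the chart name, repo URL, and values file below are hypothetical, and the full field list lives in the [kustomize builtins] reference:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: my-chart                     # chart to inflate (hypothetical)
  repo: https://charts.example.com   # chart repository (hypothetical)
  version: 1.2.3
  releaseName: my-release
  namespace: my-namespace
  valuesFile: my-values.yaml         # overrides the chart's default values
```

Inflation only happens when building with `kustomize build --enable-helm`; without the flag, the build fails rather than silently skipping the chart.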
[strategic merge]: /docs/reference/api/kustomization-file/patchesstrategicmerge/
[JSON6902]: /docs/reference/api/kustomization-file/patchesjson6902/

Patches (also called overlays) add or override fields on resources. They are provided using the `patches` Kustomization field.

The `patches` field contains a list of patches to be applied in the order they are specified.

Each patch may:

- be either a [strategic merge] patch, or a [JSON6902] patch
- be either a file, or an inline string
- target a single resource or multiple resources

The patch target selects resources by `group`, `version`, `kind`, `name`, `namespace`, `labelSelector` and `annotationSelector`. Any resource which matches all the **specified** fields has the patch applied to it (`name` and `namespace` may be regular expressions).

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
- path: patch.yaml
  target:
    group: apps
    version: v1
    kind: Deployment
    name: deploy.*
    labelSelector: "env=dev"
    annotationSelector: "zone=west"
- patch: |-
    - op: replace
      path: /some/existing/path
      value: new value
  target:
    kind: MyKind
    labelSelector: "env=dev"
```

The `name` and `namespace` fields of the patch target selector are automatically anchored regular expressions. This means that the value `myapp` is equivalent to `^myapp$`.

## Name and kind changes

With `patches` it is possible to override the kind or name of the resource it is editing with the options `allowNameChange` and `allowKindChange`. For example:

```yaml
resources:
- deployment.yaml
patches:
- path: patch.yaml
  target:
    kind: Deployment
  options:
    allowNameChange: true
    allowKindChange: true
```

By default, these fields are false and the patch will leave the kind and name of the resource untouched.

## Name references

A patch can refer to a resource by any of its previous names or kinds. For example, if a resource has gone through name-prefix transformations, it can refer to the resource by its current name, original name, or any intermediate name that it had.

## Patching custom resources

[Strategic merge] patches may require additional configuration via the [openapi](../openapi) field to work as expected with custom resources. For example, if a resource uses a merge key other than `name` or needs a list to be merged rather than replaced, kustomize needs openapi information informing it about this. [JSON6902] patch usage is the same for built-in and custom resources.

## Examples

Consider the following `deployment.yaml`, common to all examples:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app
  labels:
    app.kubernetes.io/name: nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - name: http
          containerPort: 80
```

### Intents

- Make the container image point to a specific version and not to the latest container in the registry.
- Add a standard label containing the deployed version.

There are multiple possible strategies that all achieve the same results.

### Patch using Inline Strategic Merge

```yaml
# kustomization.yaml
resources:
- deployment.yaml
patches:
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummy-app
      labels:
        app.kubernetes.io/version: 1.21.0
- patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: not-used
    spec:
      template:
        spec:
          containers:
          - name: nginx
            image: nginx:1.21.0
  target:
    labelSelector: "app.kubernetes.io/name=nginx"
```

If a `target` is specified, the `name` contained in the metadata is required but not used.
### Patch using Inline JSON6902

```yaml
# kustomization.yaml
resources:
- deployment.yaml
patches:
- patch: |-
    - op: add
      path: /metadata/labels/app.kubernetes.io~1version
      value: 1.21.0
  target:
    group: apps
    version: v1
    kind: Deployment
- patch: |-
    - op: replace
      path: /spec/template/spec/containers/0/image
      value: nginx:1.21.0
  target:
    labelSelector: "app.kubernetes.io/name=nginx"
```

The `target` field is always required for JSON6902 patches. The special replacement character `~1` is used to escape `/` in the label name.

### Patch using Path Strategic Merge

```yaml
# kustomization.yaml
resources:
- deployment.yaml
patches:
- path: add-label.patch.yaml
- path: fix-version.patch.yaml
  target:
    labelSelector: "app.kubernetes.io/name=nginx"
```

As with the Inline Strategic Merge, the `target` field can be omitted. In that case, the target resource is matched using the `apiVersion`, `kind` and `name` from the patch.

```yaml
# add-label.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app
  labels:
    app.kubernetes.io/version: 1.21.0
```

```yaml
# fix-version.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-used
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.0
```

As with the Inline Strategic Merge, the `name` field in the patch is not used when a `target` is specified.

### Patch using Path JSON6902

```yaml
# kustomization.yaml
resources:
- deployment.yaml
patches:
- path: add-label.patch.json
  target:
    group: apps
    version: v1
    kind: Deployment
- path: fix-version.patch.yaml
  target:
    labelSelector: "app.kubernetes.io/name=nginx"
```

As with Inline JSON6902, the `target` field is mandatory.

```yaml
# add-label.patch.json
[
  {"op": "add", "path": "/metadata/labels/app.kubernetes.io~1version", "value": "1.21.0"}
]
```

```yaml
# fix-version.patch.yaml
- op: replace
  path: /spec/template/spec/containers/0/image
  value: nginx:1.21.0
```

External patch files can be written in either YAML or JSON. The content must follow the JSON6902 standard.

### Build Output

All four patch strategies lead to the exact same output:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/version: 1.21.0
  name: dummy-app
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
    spec:
      containers:
      - image: nginx:1.21.0
        name: nginx
        ports:
        - containerPort: 80
          name: http
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/patches.md
`apiVersion: kustomize.config.k8s.io/v1beta1`

### Kustomization

---

* **apiVersion**: kustomize.config.k8s.io/v1beta1
* **kind**: Kustomization
* **openAPI** (map[string]string): [OpenAPI]({{< ref "openapi.md" >}}) contains information about what kubernetes schema to use.
* **namePrefix** (string): [NamePrefix]({{< ref "namePrefix.md" >}}) will prefix the names of all resources mentioned in the kustomization file, including generated configmaps and secrets.
* **nameSuffix** (string): [NameSuffix]({{< ref "nameSuffix.md" >}}) will suffix the names of all resources mentioned in the kustomization file, including generated configmaps and secrets.
* **namespace** (string): [Namespace]({{< ref "namespace.md" >}}) to add to all objects.
* **commonLabels** (map[string]string): [CommonLabels]({{< ref "commonLabels.md" >}}) to add to all objects and selectors.
* **labels** ([][Label]({{< ref "labels.md" >}})): Labels to add to all objects but not selectors.
* **commonAnnotations** (map[string]string): [CommonAnnotations]({{< ref "commonAnnotations.md" >}}) to add to all objects.
* **patchesStrategicMerge** ([][PatchStrategicMerge]({{< ref "patchesStrategicMerge.md" >}})): Deprecated: use the Patches field instead, which provides a superset of the functionality of PatchesStrategicMerge.
* **patchesJson6902** ([][Patch]({{< ref "patches.md" >}})): Deprecated: use the Patches field instead, which provides a superset of the functionality of JSONPatches. [JSONPatches]({{< ref "patchesjson6902.md" >}}) is a list of JSONPatch for applying JSON patch.
* **patches** ([][Patch]({{< ref "patches.md" >}})): Patches is a list of patches, where each one can be either a Strategic Merge Patch or a JSON patch. Each patch can be applied to multiple target objects.
* **images** ([][Image]({{< ref "images.md" >}})): Images is a list of (image name, new name, new tag or digest) entries for changing image names, tags or digests. This can also be achieved with a patch, but this operator is simpler to specify.
* **imageTags** ([][Image]({{< ref "images.md" >}})): Deprecated: use the Images field instead.
* **replacements** ([][ReplacementField]({{< ref "replacements.md" >}})): Replacements is a list of replacements, which will copy nodes from a specified source to N specified targets.
* **replicas** ([][Replica]({{< ref "replicas.md" >}})): Replicas is a list of {resourcename, count} entries that allows for simpler replica specification. This can also be done with a patch.
* **vars** ([][Var]({{< ref "vars.md" >}})): Deprecated: Vars will be removed in a future release. Migrate to Replacements instead. Vars allow things modified by kustomize to be injected into a kubernetes object specification.
* **sortOptions** ([sortOptions]({{< ref "sortOptions.md" >}})): SortOptions changes the order in which kustomize outputs resources.
* **resources** ([]string): [Resources]({{< ref "resources.md" >}}) specifies relative paths to files holding YAML representations of kubernetes API objects, or specifications of other kustomizations via relative paths, absolute paths, or URLs.
* **components** ([]string): [Components]({{< ref "components.md" >}}) specifies relative paths to specifications of other Components via relative paths, absolute paths, or URLs.
* **crds** ([]string): [Crds]({{< ref "crds.md" >}}) specifies relative paths to Custom Resource Definition files. This allows custom resources to be recognized as operands, making it possible to add them to the Resources list. CRDs themselves are not modified.
* **bases** ([]string): Deprecated: anything that would have been specified here should be specified in the Resources field instead. [Bases]({{< ref "bases.md" >}}) specifies relative paths to files holding YAML representations of Kubernetes API objects.
* **configMapGenerator** ([][ConfigMapArgs]({{< ref "configMapGenerator.md#configmapargs" >}})): [ConfigMapGenerator]({{< ref "configMapGenerator.md" >}}) is a list of configmaps to generate from local data (one configMap per list item). The resulting resource is a normal operand, subject to name prefixing, patching, etc. By default, the name of the map will have a suffix hash generated from its contents.
* **secretGenerator** ([][SecretArgs]({{< ref "secretGenerator.md#secretargs" >}})): [SecretGenerator]({{< ref "secretGenerator.md" >}}) is a list of secrets to generate from local data (one secret per list item). The resulting resource is a normal operand, subject to name prefixing, patching, etc. By default, the name of the map will have a suffix hash generated from its contents.
* **helmGlobals** (HelmGlobals): HelmGlobals contains helm configuration that isn't chart specific.
* **helmCharts** ([][HelmChart]({{< ref "helmCharts.md" >}})): HelmCharts is a list of helm chart configuration instances.
* **helmChartInflationGenerator** ([]HelmChartArgs): Deprecated: auto-converted to HelmGlobals and [HelmCharts]({{< ref "helmCharts.md" >}}). HelmChartInflationGenerator is a list of helm chart configurations.
* **generatorOptions** ([GeneratorOptions]({{< ref "generatorOptions.md" >}})): GeneratorOptions modify the behavior of all ConfigMap and Secret generators.
* **configurations** ([]string): Configurations is a list of transformer configuration files.
* **generators** ([]string): Generators is a list of files containing custom generators.
* **transformers** ([]string): Transformers is a list of files containing transformers.
* **validators** ([]string): Validators is a list of files containing validators.
* **buildMetadata** ([]string): [BuildMetadata]({{< ref "buildMetadata.md" >}}) is a list of strings used to toggle different build options.

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/kustomization.md
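Tying a few of the fields from the reference above together, a small `kustomization.yaml` could look like the following sketch (the resource file names, namespace, and literals are hypothetical):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace      # set on all objects
namePrefix: dev-             # prefixed to every resource name
commonAnnotations:
  team: platform             # added to all objects
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: app-config           # emitted with the prefix and a content hash suffix
  literals:
  - LOG_LEVEL=debug
images:
- name: nginx
  newTag: 1.21.0
```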
`apiVersion: kustomize.config.k8s.io/v1beta1`

{{% pageinfo color="warning" %}}
The `bases` field was deprecated in v2.1.0. This field will never be removed from the kustomize.config.k8s.io/v1beta1 Kustomization API, but it will not be included in the kustomize.config.k8s.io/v1 Kustomization API. When Kustomization v1 is available, we will announce the deprecation of the v1beta1 version. There will be at least two releases between deprecation and removal of Kustomization v1beta1 support from the kustomize CLI, and removal itself will happen in a future major version bump.

You can run `kustomize edit fix` to automatically convert `bases` to `resources`.
{{% /pageinfo %}}

### bases

A base is a kustomization referred to by some other kustomization. Move entries into the [resources] field.

---

* **bases** ([]string)

  List of relative paths to kustomization specifications.

[resources]: /docs/reference/api/kustomization-file/resources

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/bases.md
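The migration is a mechanical rename; a sketch of the before/after (the `../base` path is illustrative):

```yaml
# Deprecated form
# kustomization.yaml
bases:
- ../base
```

```yaml
# Equivalent supported form, as produced by `kustomize edit fix`
# kustomization.yaml
resources:
- ../base
```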
As of `v3.7.0` Kustomize supports a special type of kustomization that allows one to define reusable pieces of configuration logic that can be included from multiple overlays. Components come in handy when dealing with applications that support multiple optional features and you wish to enable only a subset of them in different overlays, i.e., different features for different environments or audiences.

For more details regarding this feature you can read the [Kustomize Components KEP](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/1802-kustomize-components) and the [components concept](/docs/concepts/components/) page.

## Use case

Suppose you've written a very simple Web application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: example:1.0
```

You want to deploy a **community** edition of this application as SaaS, so you add support for persistence (e.g. an external database), and bot detection (e.g. Google reCAPTCHA).

You've now attracted **enterprise** customers who want to deploy it on-premises, so you add LDAP support, and disable Google reCAPTCHA. At the same time, the **devs** need to be able to test parts of the application, so they want to deploy it with some features enabled and others not.

Here's a matrix with the deployments of this application and the features enabled for each one:

|            | External DB | LDAP | reCAPTCHA |
|------------|:-----------:|:----:|:---------:|
| Community  | ✔️          |      | ✔️        |
| Enterprise | ✔️          | ✔️   |           |
| Dev        | ✅          | ✅   | ✅        |

(✔️: enabled, ✅: optional)

So, you want to make it easy to deploy your application in any of the above three environments. Here's how you can do this with Kustomize components: each opt-in feature gets packaged as a component, so that it can be referred to from multiple higher-level overlays.

First, define a place to work:

```shell
DEMO_HOME=$(mktemp -d)
```

Define a common **base** that has a `Deployment` and a simple `ConfigMap`, that is mounted on the application's container.

```bash
BASE=$DEMO_HOME/base
mkdir $BASE
```

```bash
# $BASE/kustomization.yaml
resources:
- deployment.yaml

configMapGenerator:
- name: conf
  literals:
  - main.conf=|
      color=cornflower_blue
      log_level=info
```

```bash
# $BASE/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: example:1.0
        volumeMounts:
        - name: conf
          mountPath: /etc/config
      volumes:
      - name: conf
        configMap:
          name: conf
```

Define an `external_db` component, using `kind: Component`, that creates a `Secret` for the DB password and a new entry in the `ConfigMap`:

```shell
EXT_DB=$DEMO_HOME/components/external_db
mkdir -p $EXT_DB
```

```bash
# $EXT_DB/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1  # <-- Component notation
kind: Component

secretGenerator:
- name: dbpass
  files:
  - dbpass.txt

patchesStrategicMerge:
- configmap.yaml

patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
```

```bash
# $EXT_DB/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: dbpass
    secret:
      secretName: dbpass
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/db/
    name: dbpass
```

```bash
# $EXT_DB/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  db.conf: |
    endpoint=127.0.0.1:1234
    name=app
    user=admin
    pass=/var/run/secrets/db/dbpass.txt
```

Define an `ldap` component, that creates a `Secret` for the LDAP password and a new entry in the `ConfigMap`:

```shell
LDAP=$DEMO_HOME/components/ldap
mkdir -p $LDAP
```

```bash
# $LDAP/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

secretGenerator:
- name: ldappass
  files:
  - ldappass.txt

patchesStrategicMerge:
- configmap.yaml

patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
```

```bash
# $LDAP/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: ldappass
    secret:
      secretName: ldappass
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/ldap/
    name: ldappass
```

```bash
# $LDAP/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  ldap.conf: |
    endpoint=ldap://ldap.example.com
    bindDN=cn=admin,dc=example,dc=com
    pass=/var/run/secrets/ldap/ldappass.txt
```

Define a `recaptcha` component, that creates a `Secret` for the reCAPTCHA site/secret keys and a new entry in the `ConfigMap`:

```shell
RECAPTCHA=$DEMO_HOME/components/recaptcha
mkdir -p $RECAPTCHA
```
```bash
# $RECAPTCHA/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

secretGenerator:
- name: recaptcha
  files:
  - site_key.txt
  - secret_key.txt

# Updating the ConfigMap works with generators as well.
configMapGenerator:
- name: conf
  behavior: merge
  literals:
  - recaptcha.conf=|
      enabled=true
      site_key=/var/run/secrets/recaptcha/site_key.txt
      secret_key=/var/run/secrets/recaptcha/secret_key.txt

patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: example
  path: deployment.yaml
```

```bash
# $RECAPTCHA/deployment.yaml
- op: add
  path: /spec/template/spec/volumes/0
  value:
    name: recaptcha
    secret:
      secretName: recaptcha
- op: add
  path: /spec/template/spec/containers/0/volumeMounts/0
  value:
    mountPath: /var/run/secrets/recaptcha/
    name: recaptcha
```

Define a `community` variant, that bundles the external DB and reCAPTCHA components:

```shell
COMMUNITY=$DEMO_HOME/overlays/community
mkdir -p $COMMUNITY
```

```bash
# $COMMUNITY/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

components:
- ../../components/external_db
- ../../components/recaptcha
```

Define an `enterprise` overlay, that bundles the external DB and LDAP components:

```shell
ENTERPRISE=$DEMO_HOME/overlays/enterprise
mkdir -p $ENTERPRISE
```

```bash
# $ENTERPRISE/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

components:
- ../../components/external_db
- ../../components/ldap
```

Define a `dev` overlay, that points to all the components and has LDAP disabled:

```shell
DEV=$DEMO_HOME/overlays/dev
mkdir -p $DEV
```

```bash
# $DEV/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

components:
- ../../components/external_db
#- ../../components/ldap
- ../../components/recaptcha
```

Now, the workspace has the following directories:

```shell
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
├── components
│   ├── external_db
│   │   ├── configmap.yaml
│   │   ├── dbpass.txt
│   │   ├── deployment.yaml
│   │   └── kustomization.yaml
│   ├── ldap
│   │   ├── configmap.yaml
│   │   ├── deployment.yaml
│   │   ├── kustomization.yaml
│   │   └── ldappass.txt
│   └── recaptcha
│       ├── deployment.yaml
│       ├── kustomization.yaml
│       ├── secret_key.txt
│       └── site_key.txt
└── overlays
    ├── community
    │   └── kustomization.yaml
    ├── dev
    │   └── kustomization.yaml
    └── enterprise
        └── kustomization.yaml
```

With this structure, you can generate the YAML manifests for each deployment using `kustomize build`:

```shell
kustomize build overlays/community
kustomize build overlays/enterprise
kustomize build overlays/dev
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/components.md
Given this kubernetes Deployment fragment:

```yaml
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 3
```

one can change the number of replicas to 5 by adding the following to your kustomization:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

replicas:
- name: deployment-name
  count: 5
```

This field accepts a list, so many resources can be modified at the same time. As this declaration does not take in a `kind:` nor a `group:`, it will match any `group` and `kind` that has a matching name and that is one of:

- `Deployment`
- `ReplicationController`
- `ReplicaSet`
- `StatefulSet`

For more complex use cases, revert to using a patch.

## Example

### Input File

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 5
  template:
    containers:
    - name: the-container
      image: registry/container:latest
```

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

replicas:
- name: the-deployment
  count: 10

resources:
- deployment.yaml
```

### Output

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 10
  template:
    containers:
    - name: the-container
      image: registry/container:latest
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/replicas.md
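For kinds outside that built-in list, "revert to using a patch" can look like the following sketch, assuming a hypothetical custom resource `MyKind` that exposes a `spec/replicas` field:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- mykind.yaml

patches:
- target:
    kind: MyKind        # hypothetical kind, not covered by `replicas:`
    name: the-instance  # illustrative name
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 10
```

The `patches` field matches on any group/kind, so it covers resources the `replicas` shortcut cannot.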
[replacements]: /docs/reference/api/kustomization-file/replacements/

{{% pageinfo color="warning" %}}
The `vars` field was deprecated in v5.0.0. This field will never be removed from the kustomize.config.k8s.io/v1beta1 Kustomization API, but it will not be included in the kustomize.config.k8s.io/v1 Kustomization API. When Kustomization v1 is available, we will announce the deprecation of the v1beta1 version. There will be at least two releases between deprecation and removal of Kustomization v1beta1 support from the kustomize CLI, and removal itself will happen in a future major version bump.

Please try to migrate to the [replacements](/docs/reference/api/kustomization-file/replacements) field. If you are unable to restructure your configuration to use replacements instead of vars, please ask for help in slack or file an issue for guidance.

We are experimentally attempting to automatically convert `vars` to `replacements` with `kustomize edit fix --vars`. However, converting vars to replacements in this way will potentially overwrite many resource files and the resulting files may not produce the same output when `kustomize build` is run. We recommend doing this in a clean git repository where the change is easy to undo.
{{% /pageinfo %}}

Vars are used to capture text from one resource's field and insert that text elsewhere - a reflection feature.

For example, suppose one specifies the name of a k8s Service object in a container's command line, and the name of a k8s Secret object in a container's environment variable, so that the following would work:

```yaml
containers:
- image: myimage
  command: ["start", "--host", "$(MY_SERVICE_NAME)"]
  env:
  - name: SECRET_TOKEN
    value: $(SOME_SECRET_NAME)
```

To do so, add an entry to `vars:` as follows:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

vars:
- name: SOME_SECRET_NAME
  objref:
    kind: Secret
    name: my-secret
    apiVersion: v1
- name: MY_SERVICE_NAME
  objref:
    kind: Service
    name: my-service
    apiVersion: v1
  fieldref:
    fieldpath: metadata.name
- name: ANOTHER_DEPLOYMENTS_POD_RESTART_POLICY
  objref:
    kind: Deployment
    name: my-deployment
    apiVersion: apps/v1
  fieldref:
    fieldpath: spec.template.spec.restartPolicy
```

A var is a tuple of variable name, object reference and field reference within that object. That's where the text is found.

The field reference is optional; it defaults to `metadata.name`, a normal default, since kustomize is used to generate or modify the names of resources.

At time of writing, only string type fields are supported. No ints, bools, arrays etc. It's not possible to, say, extract the name of the image in container number 2 of some pod template.

A variable reference, i.e. the string '$(FOO)', can only be placed in particular fields of particular objects as specified by kustomize's configuration data. The default config data for vars is at [/api/konfig/builtinpluginconsts/varreference.go](https://github.com/kubernetes-sigs/kustomize/blob/master/api/konfig/builtinpluginconsts/varreference.go). Long story short, the default targets are all container command args and env value fields.

Vars should _not_ be used for inserting names in places where kustomize is already handling that job. E.g., a Deployment may reference a ConfigMap by name, and if kustomize changes the name of a ConfigMap, it knows to change the name reference in the Deployment.

### Convert vars to replacements

There are plans to deprecate vars, so we recommend migration to [replacements] as early as possible.

#### Simple migration example

Let's first take a simple example of how to manually do this conversion. Suppose we have a container referencing a secret (similar to the above example):

`pod.yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: myimage
    name: hello
    env:
    - name: SECRET_TOKEN
      value: $(SOME_SECRET_NAME)
```

and we are using vars as follows:

`kustomization.yaml`

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- pod.yaml
- secret.yaml

vars:
- name: SOME_SECRET_NAME
  objref:
    kind: Secret
    name: my-secret
    apiVersion: v1
```

In order to convert `vars` to `replacements`, we have to:

1. Replace every instance of $(SOME_SECRET_NAME) with any arbitrary placeholder value.
2. Convert the vars `objref` field to a [replacements] `source` field.
3. Replace the vars `name` field with a [replacements] `targets` field that points to every instance of the placeholder value in step 1.

In our simple example here, this would look like the following:

`pod.yaml`

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: myimage
    name: hello
    env:
    - name: SECRET_TOKEN
      value: SOME_PLACEHOLDER_VALUE
```

`kustomization.yaml`

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- pod.yaml
- secret.yaml

replacements:
- source:
    kind: Secret
    name: my-secret
    version: v1
  targets:
  - select:
      kind: Pod
      name: my-pod
    fieldPaths:
    - spec.containers.[name=hello].env.[name=SECRET_TOKEN].value
```

#### More complex migration example

Let's take a more complex usage of vars and convert it to [replacements]. We are going to convert the vars in the [wordpress example](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/wordpress) to replacements. The wordpress example has the following directory structure:

```
.
├── README.md
├── kustomization.yaml
├── mysql
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── secret.yaml
│   └── service.yaml
├── patch.yaml
└── wordpress
    ├── deployment.yaml
    ├── kustomization.yaml
    └── service.yaml
```

where `patch.yaml` has the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  template:
    spec:
      initContainers:
      - name: init-command
        image: debian
        command: ["/bin/sh"]
        args: ["-c", "echo $(WORDPRESS_SERVICE); echo $(MYSQL_SERVICE)"]
      containers:
      - name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: $(MYSQL_SERVICE)
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
```

and the top level `kustomization.yaml` has the following contents:

```
resources:
- wordpress
- mysql

patchesStrategicMerge:
- patch.yaml

namePrefix: demo-

vars:
- name: WORDPRESS_SERVICE
  objref:
    kind: Service
    name: wordpress
    apiVersion: v1
- name: MYSQL_SERVICE
  objref:
    kind: Service
    name: mysql
    apiVersion: v1
```

In this example, the patch is used to:

- Add an initial container to show the mysql service name
- Add environment variables that allow wordpress to find the mysql database

We can convert vars to replacements in this more complex case too, by taking the same steps as the previous example. To do this, we can change the contents of `patch.yaml` to:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  template:
    spec:
      initContainers:
      - name: init-command
        image: debian
        command: ["/bin/sh"]
        args: ["-c", "echo", "WORDPRESS_SERVICE", ";", "echo", "MYSQL_SERVICE"]
      containers:
      - name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: MYSQL_SERVICE
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
```

Then, in our kustomization, we can have our replacements:

`kustomization.yaml`

```yaml
resources:
- wordpress
- mysql

patchesStrategicMerge:
- patch.yaml

namePrefix: demo-

replacements:
- source:
    name: demo-wordpress
    kind: Service
    version: v1
  targets:
  - select:
      kind: Deployment
      name: demo-wordpress
    fieldPaths:
    - spec.template.spec.initContainers.[name=init-command].args.2
- source:
    name: demo-mysql
    kind: Service
    version: v1
  targets:
  - select:
      kind: Deployment
      name: demo-wordpress
    fieldPaths:
    - spec.template.spec.initContainers.[name=init-command].args.5
    - spec.template.spec.containers.[name=wordpress].env.[name=WORDPRESS_DB_HOST].value
```

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/vars.md
{{% pageinfo color="warning" %}}
The `patchesStrategicMerge` field was deprecated in v5.0.0. This field will never be removed from the kustomize.config.k8s.io/v1beta1 Kustomization API, but it will not be included in the kustomize.config.k8s.io/v1 Kustomization API. When Kustomization v1 is available, we will announce the deprecation of the v1beta1 version. There will be at least two releases between deprecation and removal of Kustomization v1beta1 support from the kustomize CLI, and removal itself will happen in a future major version bump.

Please move your `patchesStrategicMerge` into the [patches](/docs/reference/api/kustomization-file/patches) field. This field supports patchesStrategicMerge, but with slightly different syntax. You can run `kustomize edit fix` to automatically convert `patchesStrategicMerge` to `patches`.
{{% /pageinfo %}}

Each entry in this list should be either a relative file path or inline content resolving to a partial or complete resource definition. The names in these (possibly partial) resource files must match names already loaded via the `resources` field. These entries are used to _patch_ (modify) the known resources.

Small patches that do one thing are best, e.g. modify a memory request/limit, change an env var in a ConfigMap, etc. Small patches are easy to review and easy to mix together in overlays.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patchesStrategicMerge:
- service_port_8888.yaml
- deployment_increase_replicas.yaml
- deployment_increase_memory.yaml
```

The patch content can be an inline string as well.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patchesStrategicMerge:
- |-
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
  spec:
    template:
      spec:
        containers:
        - name: nginx
          image: nginx:latest
```

Note that kustomize does not support more than one patch for the same object that contains a _delete_ directive. To remove several fields / slice elements from an object, create a single patch that performs all the needed deletions.

A patch can refer to a resource by any of its previous names or kinds. For example, if a resource has gone through name-prefix transformations, it can refer to the resource by its current name, original name, or any intermediate name that it had.

## Patching custom resources

Strategic merge patches may require additional configuration via the [openapi](../openapi) field to work as expected with custom resources. For example, if a resource uses a merge key other than `name` or needs a list to be merged rather than replaced, Kustomize needs openapi information informing it about this.

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/patchesStrategicMerge.md
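Wiring in such a custom schema can be sketched as follows; the resource and schema file names are illustrative assumptions, not from this page:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- my_crd_instance.yaml  # illustrative custom resource

# Point kustomize at an OpenAPI document describing the custom resource,
# so strategic merge patches know its merge keys and list semantics.
openapi:
  path: my_schema.json  # illustrative schema file

patchesStrategicMerge:
- patch.yaml
```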
{{% pageinfo color="warning" %}}
The `patchesJson6902` field was deprecated in v5.0.0. This field will never be removed from the kustomize.config.k8s.io/v1beta1 Kustomization API, but it will not be included in the kustomize.config.k8s.io/v1 Kustomization API. When Kustomization v1 is available, we will announce the deprecation of the v1beta1 version. There will be at least two releases between deprecation and removal of Kustomization v1beta1 support from the kustomize CLI, and removal itself will happen in a future major version bump.

Please move your `patchesJson6902` into the [patches](/docs/reference/api/kustomization-file/patches) field. This field supports patchesJson6902, but with slightly different syntax. You can run `kustomize edit fix` to automatically convert `patchesJson6902` to `patches`.
{{% /pageinfo %}}

Each entry in this list should resolve to a kubernetes object and a JSON patch that will be applied to the object. The JSON patch format is documented in [RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902).

The `target` field points to a kubernetes object within the same kustomization by the object's group, version, kind, name and namespace. The `path` field is a relative file path of a JSON patch file.

The content in this patch file can be either in JSON format as

```json
[
  {"op": "add", "path": "/some/new/path", "value": "value"},
  {"op": "replace", "path": "/some/existing/path", "value": "new value"},
  {"op": "copy", "from": "/some/existing/path", "path": "/some/path"},
  {"op": "move", "from": "/some/existing/path", "path": "/some/existing/destination/path"},
  {"op": "remove", "path": "/some/existing/path"},
  {"op": "test", "path": "/some/path", "value": "my-node-value"}
]
```

or in YAML format as

```yaml
# add: creates a new entry with a given value
- op: add
  path: /some/new/path
  value: value
# replace: replaces the value of the node with the new specified value
- op: replace
  path: /some/existing/path
  value: new value
# copy: copies the value specified in from to the destination path
- op: copy
  from: /some/existing/path
  path: /some/path
# move: moves the node specified in from to the destination path
- op: move
  from: /some/existing/path
  path: /some/existing/destination/path
# remove: deletes the node (and its subtree)
- op: remove
  path: /some/path
# test: checks if the specified node has the specified value; if the value differs it will throw an error
- op: test
  path: /some/path
  value: "my-node-value"
```

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patchesJson6902:
- target:
    version: v1
    kind: Deployment
    name: my-deployment
  path: add_init_container.yaml
- target:
    version: v1
    kind: Service
    name: my-service
  path: add_service_annotation.yaml
```

The patch content can be an inline string as well:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patchesJson6902:
- target:
    version: v1
    kind: Deployment
    name: my-deployment
  patch: |-
    - op: add
      path: /some/new/path
      value: value
    - op: replace
      path: /some/existing/path
      value: "new value"
```

A patch can refer to a resource by any of its previous names or kinds. For example, if a resource has gone through name-prefix transformations, it can refer to the resource by its current name, original name, or any intermediate name that it had.

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/patchesjson6902.md
`apiVersion: kustomize.config.k8s.io/v1beta1`

See the [Tasks section] for examples of how to use `labels`.

### labels

Adds labels and optionally selectors to all resources.

* **labels** ([]Label)

  List of labels and label selector options. _Label holds labels to add to resources and options for customizing how those labels are applied, potentially using selectors and template metadata._

  * **pairs** (map[string]string)

    Map of labels that the transformer will add to resources.

  * **includeSelectors** (bool), optional

    IncludeSelectors indicates whether the transformer should include the fieldSpecs for selectors. Custom fieldSpecs specified by `fields` will be merged with builtin fieldSpecs if this is true. Defaults to false.

  * **includeTemplates** (bool), optional

    IncludeTemplates indicates whether the transformer should include the `spec/template/metadata` fieldSpec. Custom fieldSpecs specified by `fields` will be merged with the `spec/template/metadata` fieldSpec if this is true. If IncludeSelectors is true, IncludeTemplates is not needed. Defaults to false.

  * **fields** ([][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}})), optional

    Fields specifies the field on each resource that LabelTransformer should add the label to. It essentially allows the user to re-define the field path of the Kubernetes labels field from `metadata/labels` for different resources.

[Tasks section]: /docs/tasks/labels_and_annotations/
[Labels and Selectors]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/labels.md
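Putting the options above together, a minimal sketch (the label key/value is illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    app.kubernetes.io/name: my-app
  # Also write the label into selectors and pod template metadata,
  # so Services and Deployments keep matching after the change.
  includeSelectors: true
```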
`apiVersion: kustomize.config.k8s.io/v1beta1`

### generatorOptions

GeneratorOptions modifies resource generation behavior.

---

* **labels** (map[string]string), optional

  Labels to add to all generated resources.

* **annotations** (map[string]string), optional

  Annotations to add to all generated resources.

* **disableNameSuffixHash** (bool), optional

  DisableNameSuffixHash, if true, disables the default behavior of adding a suffix to the names of generated resources that is a hash of the resource contents.

* **immutable** (bool), optional

  Immutable, if true, marks all generated resources as immutable.

Source: https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/generatorOptions.md
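All four options can be set together; a minimal sketch (names and values are illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
- name: app-conf
  literals:
  - LOG_LEVEL=info

generatorOptions:
  labels:
    generated-by: kustomize
  annotations:
    note: generated
  # Keep a stable resource name instead of appending a content hash.
  disableNameSuffixHash: true
```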
Replacements are used to copy fields from one source into any number of specified targets.

The `replacements` field can support a path to a replacement:

`kustomization.yaml`
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
replacements:
  - path: replacement.yaml
```

`replacement.yaml`
```yaml
source:
  kind: Deployment
  fieldPath: metadata.name
targets:
  - select:
      name: my-resource
```

Alternatively, `replacements` supports inline replacements:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
replacements:
  - source:
      kind: Deployment
      fieldPath: metadata.name
    targets:
      - select:
          name: my-resource
```

### Syntax

The full schema of `replacements` is as follows:

```yaml
replacements:
  - source:
      group: string
      version: string
      kind: string
      name: string
      namespace: string
      fieldPath: string
      options:
        delimiter: string
        index: int
        create: bool
    targets:
      - select:
          group: string
          version: string
          kind: string
          name: string
          namespace: string
        reject:
          - group: string
            version: string
            kind: string
            name: string
            namespace: string
        fieldPaths:
          - string
        options:
          delimiter: string
          index: int
          create: bool
```

### Field Descriptions

| Field | Required | Description | Default |
| -----------: | :----: | ----------- | ------- |
| `source` | ✔️ | The source of the value | |
| `targets` | ✔️ | The N fields to write the value to | |
| `group` | | The group of the referent | |
| `version` | | The version of the referent | |
| `kind` | | The kind of the referent | |
| `name` | | The name of the referent | |
| `namespace` | | The namespace of the referent | |
| `select` | ✔️ | Include objects that match this | |
| `reject` | | Exclude objects that match this | |
| `fieldPath` | | The structured path to the source value | `metadata.name` |
| `fieldPaths` | | The structured path(s) to the target nodes | `metadata.name` |
| `options` | | Options used to refine interpretation of the field | |
| `delimiter` | | Used to split/join the field | |
| `index` | | Which position in the split to consider | `0` |
| `create` | | If the target field is missing, add it | `false` |

#### Source

The source field is a selector that determines the source of the value by finding a match to the specified GVKNN. All the subfields of `source` are optional, but the source selection must resolve to a single resource.

#### Targets

Replacements will be applied to all targets that are matched by the `select` field and are NOT matched by the `reject` field, and will be applied to all listed `fieldPaths`.

##### Select

You can use any of the following fields to select the targets to replace: `group`, `version`, `kind`, `name`, `namespace`.

For example, the following will select all Deployments as targets of the replacement:

```yaml
select:
  kind: Deployment
```

You can also combine multiple fields to select only the resources that match all the conditions. For example, the following will select only the Deployments that are named my-deploy:

```yaml
select:
  - kind: Deployment
    name: my-deploy
```

Moreover, when the selected target is going to be transformed during the kustomization process, you can use either the original or the transformed resource id to select it. For example, the name of the target could be changed by the `namePrefix` field, as below:

```yaml
namePrefix: my-
```

In this case, the following is enough if we want to select all the targets that were originally named deploy:

```yaml
select:
  - name: deploy
```

Alternatively, using the transformed name with the prefix produces the same behaviour. So the following will select all the resources that *will be* named my-deploy, along with all the resources that *were* originally named my-deploy:

```yaml
select:
  - name: my-deploy
```

##### Reject

The reject field is a selector that drops targets selected by select, overruling their selection.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/replacements.md | master | kustomize |
For example, if we wanted to reject all Deployments named my-deploy:

```yaml
reject:
  - kind: Deployment
    name: my-deploy
```

This is distinct from the following:

```yaml
reject:
  - kind: Deployment
  - name: my-deploy
```

The first case would only reject resources that are both of kind Deployment and named my-deploy. The second case would reject all Deployments, and all resources named my-deploy. We can also reject more than one kind, name, etc. For example:

```yaml
reject:
  - kind: Deployment
  - kind: StatefulSet
```

Moreover, when the selected target is going to be transformed during the kustomization process, you can use either the original or the transformed resource id to reject it. For example, the name of the target could be changed by the `nameSuffix` field, as below:

```yaml
nameSuffix: -dev
```

You can use the original target name to prevent it from going through any replacement:

```yaml
reject:
  - name: my-deploy
```

Alternatively, using the transformed name with the suffix produces the same behaviour:

```yaml
reject:
  - name: my-deploy-dev
```

#### Delimiter

This field is intended to be used in conjunction with the `index` field for partial string replacement. For example, say we have a value: `path: my/path/VALUE`. In our replacement target, we can specify something like:

```yaml
options:
  delimiter: '/'
  index: 2
```

and it would replace VALUE, e.g. `path: my/path/NEW_VALUE`.

#### Index

This field is intended to be used in conjunction with the `delimiter` field described above for partial string replacement. The default value is 0. If the index is out of bounds, behavior depends on whether it is in a source or a target. In a source, an index out of bounds will throw an error. For a target, a value less than 0 will cause the target to be prefixed, and a value beyond the length of the split will cause the target to be suffixed.

If the fields `index` and `delimiter` are specified on sources or targets that are not scalar values (e.g. mapping or list values), kustomize will throw an error.

#### Field Path format

The fieldPath and fieldPaths fields support a format of a '.'-separated path to a value. For example, the default: `metadata.name`.

You can escape the '.' in one of two ways. For example, say we have the following resource:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    config.kubernetes.io/local-config: true # this is what we want to target
```

We can express our path:

1. With a '\': `metadata.annotations.config\.kubernetes\.io/local-config`
2. With '[]': `metadata.annotations.[config.kubernetes.io/local-config]`

Strings are used for mapping nodes. For sequence nodes, we support three options:

1. Index by number: `spec.template.spec.containers.1.image`
2. Index by key-value pair: `spec.template.spec.containers.[name=nginx].image`. If the key-value pair matches multiple elements in the sequence node, all matching elements will be targeted.
3. Index with a wildcard match: `spec.template.spec.containers.*.env.[name=TARGET_ENV].value`. This will target every element in the list.
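As a sketch tying together the `delimiter` and `index` options described above, the following replacement copies a Deployment's name into only one segment of a '/'-delimited target value. The resource names and the `data.path` field are illustrative, not from the original document:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
replacements:
  - source:
      kind: Deployment
      name: my-deploy        # illustrative source resource
      fieldPath: metadata.name
    targets:
      - select:
          kind: ConfigMap
          name: my-config    # illustrative target resource
        fieldPaths:
          - data.path        # e.g. holds a value like my/path/VALUE
        options:
          delimiter: '/'
          index: 2           # replace only the third '/'-separated segment
```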
### Example

For example, suppose one specifies the name of a k8s Secret object in a container's environment variable as follows:

`job.yaml`
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: myimage
          name: hello
          env:
            - name: SECRET_TOKEN
              value: SOME_SECRET_NAME
```
Suppose you have the following resources:

`resources.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - image: busybox
      name: myapp-container
  restartPolicy: OnFailure
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
```

To (1) replace the value of SOME_SECRET_NAME with the name of my-secret, and (2) add a restartPolicy copied from my-pod, you can do the following:

`kustomization.yaml`
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources.yaml
  - job.yaml
replacements:
  - path: my-replacement.yaml
  - source:
      kind: Secret
      name: my-secret
    targets:
      - select:
          name: hello
          kind: Job
        fieldPaths:
          - spec.template.spec.containers.[name=hello].env.[name=SECRET_TOKEN].value
```

`my-replacement.yaml`
```yaml
source:
  kind: Pod
  name: my-pod
  fieldPath: spec.restartPolicy
targets:
  - select:
      name: hello
      kind: Job
    fieldPaths:
      - spec.template.spec.restartPolicy
    options:
      create: true
```

The output of `kustomize build` will be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
---
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - env:
            - name: SECRET_TOKEN
              value: my-secret # this value is copied from my-secret
          image: myimage
          name: hello
      restartPolicy: OnFailure # this value is copied from my-pod
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - image: busybox
      name: myapp-container
  restartPolicy: OnFailure
```
`apiVersion: kustomize.config.k8s.io/v1beta1`

See the [Tasks section] for examples of how to use the `buildMetadata` field.

### buildMetadata

BuildMetadata specifies options for adding kustomize build information to resource labels or annotations.

---

* **buildMetadata** ([]string)

  List of strings used to toggle different build options. The strings can be one of three builtin options that add metadata to each resource about how the resource was built. It is possible to set one or all of these options in the kustomization file. These options are:

  - `managedByLabel`
  - `originAnnotations`
  - `transformerAnnotations`

[Tasks section]: /docs/tasks/build_metadata/

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/buildMetadata.md | master | kustomize |
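As a minimal sketch, a kustomization enabling two of these options could look like the following (the resource file name is illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml        # illustrative resource
buildMetadata:
  - originAnnotations      # annotate each resource with its origin
  - managedByLabel         # label each resource as managed by kustomize
```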
Kustomize uses kubernetes OpenAPI data to get merge key and patch strategy information about resource types. Kustomize has an OpenAPI schema builtin, but this schema only has information about builtin kubernetes types. If you need to provide merge key and patch strategy information about custom resource types, you will have to provide your own OpenAPI schema to do so.

In your kustomization file, you can specify where kustomize should get its OpenAPI schema via an `openapi` field. For example:

```yaml
resources:
  - my_resource.yaml
openapi:
  path: my_schema.json
```

The `openapi` field of a kustomization file can either be a path to a custom schema file, as in the example above, or it can be used to explicitly tell kustomize to use a builtin kubernetes OpenAPI schema:

```yaml
resources:
  - my_resource.yaml
openapi:
  version: v1.20.4
```

You can see which builtin kubernetes OpenAPI schemas are available with the command `kustomize openapi info`.

Here is an example of a custom resource we might want to edit with a custom OpenAPI schema file. It looks like this:

```yaml
apiVersion: example.com/v1alpha1
kind: MyResource
metadata:
  name: service
spec:
  template:
    spec:
      containers:
        - name: server
          image: server
          command: example
          ports:
            - name: grpc
              protocol: TCP
              containerPort: 8080
```

This resource has an image field. Let's change its value from `server` to `nginx` with a patch. You can get an OpenAPI document like this from your local cluster with the command `kustomize openapi fetch`. Kustomize will use the OpenAPI extensions `x-kubernetes-patch-merge-key` and `x-kubernetes-patch-strategy` to perform a strategic merge. `x-kubernetes-patch-strategy` should be set to "merge", and you can set your merge key to whatever you like. Below, our custom resource inherits merge keys from PodTemplateSpec. In the definition of "io.k8s.api.core.v1.Container", the `ports` field has its merge key set to "containerPort":

```json
{
  "definitions": {
    "v1alpha1.MyResource": {
      "properties": {
        "apiVersion": {"type": "string"},
        "kind": {"type": "string"},
        "metadata": {"type": "object"},
        "spec": {
          "properties": {
            "template": {"$ref": "#/definitions/io.k8s.api.core.v1.PodTemplateSpec"}
          },
          "type": "object"
        },
        "status": {
          "properties": {
            "success": {"type": "boolean"}
          },
          "type": "object"
        }
      },
      "type": "object",
      "x-kubernetes-group-version-kind": [
        {
          "group": "example.com",
          "kind": "MyResource",
          "version": "v1alpha1"
        }
      ]
    },
    "io.k8s.api.core.v1.PodTemplateSpec": {
      "properties": {
        "metadata": {"$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"},
        "spec": {"$ref": "#/definitions/io.k8s.api.core.v1.PodSpec"}
      },
      "type": "object"
    },
    "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta": {
      "properties": {
        "name": {"type": "string"}
      },
      "type": "object"
    },
    "io.k8s.api.core.v1.PodSpec": {
      "properties": {
        "containers": {
          "items": {"$ref": "#/definitions/io.k8s.api.core.v1.Container"},
          "type": "array",
          "x-kubernetes-patch-merge-key": "name",
          "x-kubernetes-patch-strategy": "merge"
        }
      },
      "type": "object"
    },
    "io.k8s.api.core.v1.Container": {
      "properties": {
        "command": {
          "items": {"type": "string"},
          "type": "array"
        },
        "image": {"type": "string"},
        "name": {"type": "string"},
        "ports": {
          "items": {"$ref": "#/definitions/io.k8s.api.core.v1.ContainerPort"},
          "type": "array",
          "x-kubernetes-list-map-keys": ["containerPort", "protocol"],
          "x-kubernetes-list-type": "map",
          "x-kubernetes-patch-merge-key": "containerPort",
          "x-kubernetes-patch-strategy": "merge"
        }
      },
      "type": "object"
    },
    "io.k8s.api.core.v1.ContainerPort": {
      "properties": {
        "containerPort": {"format": "int32", "type": "integer"},
        "name": {"type": "string"},
        "protocol": {"type": "string"}
      },
      "type": "object"
    }
  }
}
```

Then, our kustomization file to do the patch can be as follows:

```yaml
resources:
  - my_resource.yaml
openapi:
  path: my_schema.json
patchesStrategicMerge:
  - |-
    apiVersion: example.com/v1alpha1
    kind: MyResource
    metadata:
      name: service
    spec:
      template:
        spec:
          containers:
            - name: server
              image: nginx
```

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Kustomization File/openapi.md | master | kustomize |
See [Transformers]({{< relref "../Transformers" >}}) for common required fields.

* **apiVersion**: builtin
* **kind**: PrefixTransformer

* **metadata** ([ObjectMeta](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta))

  Standard object's metadata.

* **prefix** (string)

  Prefix is the value that PrefixTransformer will prepend to the names of resources. If not specified, PrefixTransformer leaves the names of resources unchanged.

* **fieldSpecs** (\[\][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}}))

  fieldSpecs specifies the field on each resource that PrefixTransformer should add the prefix to. It essentially allows the user to re-define the field path of the Kubernetes name field from `metadata/name` for different resources. If not specified, PrefixTransformer applies the prefix to the `metadata/name` field of all resources.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Transformers/PrefixTransformer.md | master | kustomize |
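A minimal sketch of a PrefixTransformer configuration using these fields (the metadata name and prefix value are illustrative):

```yaml
apiVersion: builtin
kind: PrefixTransformer
metadata:
  name: customPrefixer   # illustrative name
prefix: dev-             # prepended to resource names
fieldSpecs:
  - path: metadata/name  # the default name field path
```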
See [Transformers]({{< relref "../Transformers" >}}) for common required fields.

* **apiVersion**: builtin
* **kind**: LabelTransformer

* **metadata** ([ObjectMeta](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta))

  Standard object's metadata.

* **labels** (map[string]string)

  Map of labels that LabelTransformer will add to resources. If not specified, LabelTransformer leaves the resources unchanged.

* **fieldSpecs** (\[\][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}}))

  fieldSpecs specifies the field on each resource that LabelTransformer should add the labels to. It essentially allows the user to re-define the field path of the Kubernetes labels field from `metadata/labels` for different resources. If not specified, LabelTransformer applies the labels to the `metadata/labels` field of all resources.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Transformers/LabelTransformer.md | master | kustomize |
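A minimal sketch of a LabelTransformer configuration using these fields (the metadata name and label values are illustrative):

```yaml
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: customLabeler      # illustrative name
labels:
  app.kubernetes.io/env: dev  # illustrative label
fieldSpecs:
  - path: metadata/labels
    create: true           # add the labels field if it is absent
```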
See [Transformers]({{< relref "../Transformers" >}}) for common required fields.

* **apiVersion**: builtin
* **kind**: AnnotationsTransformer

* **metadata** ([ObjectMeta](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta))

  Standard object's metadata.

* **annotations** (map[string]string)

  Map of annotations that AnnotationsTransformer will add to resources. If not specified, AnnotationsTransformer leaves the resources unchanged.

* **fieldSpecs** (\[\][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}}))

  fieldSpecs specifies the field on each resource that AnnotationsTransformer should add the annotations to. It essentially allows the user to re-define the field path of the Kubernetes annotations field from `metadata/annotations` for different resources. If not specified, AnnotationsTransformer applies the annotations to the `metadata/annotations` field of all resources.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Transformers/AnnotationsTransformer.md | master | kustomize |
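A minimal sketch of an AnnotationsTransformer configuration using these fields (the metadata name and annotation values are illustrative):

```yaml
apiVersion: builtin
kind: AnnotationsTransformer
metadata:
  name: customAnnotator    # illustrative name
annotations:
  owner: platform-team     # illustrative annotation
fieldSpecs:
  - path: metadata/annotations
    create: true           # add the annotations field if it is absent
```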
See [Transformers]({{< relref "../Transformers" >}}) for common required fields.

* **apiVersion**: builtin
* **kind**: SuffixTransformer

* **metadata** ([ObjectMeta](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta))

  Standard object's metadata.

* **suffix** (string)

  Suffix is the value that SuffixTransformer will append to the names of resources. If not specified, SuffixTransformer leaves the names of resources unchanged.

* **fieldSpecs** (\[\][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}}))

  fieldSpecs specifies the field on each resource that SuffixTransformer should add the suffix to. It essentially allows the user to re-define the field path of the Kubernetes name field from `metadata/name` for different resources. If not specified, SuffixTransformer applies the suffix to the `metadata/name` field of all resources.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Transformers/SuffixTransformer.md | master | kustomize |
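A minimal sketch of a SuffixTransformer configuration using these fields (the metadata name and suffix value are illustrative):

```yaml
apiVersion: builtin
kind: SuffixTransformer
metadata:
  name: customSuffixer   # illustrative name
suffix: -dev             # appended to resource names
fieldSpecs:
  - path: metadata/name  # the default name field path
```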
See [Transformers]({{< relref "../Transformers" >}}) for common required fields.

* **apiVersion**: builtin
* **kind**: NamespaceTransformer

* **metadata** ([ObjectMeta](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta))

  Standard object's metadata.

* **fieldSpecs** (\[\][FieldSpec]({{< relref "../Common%20Definitions/FieldSpec.md" >}})), optional

  fieldSpecs allows the user to re-define the field path of the Kubernetes Namespace field from `metadata/namespace` for different resources. If not specified, NamespaceTransformer applies the namespace to the `metadata/namespace` field of all resources.

* **unsetOnly** (bool), optional

  UnsetOnly indicates whether the NamespaceTransformer will only set namespace fields that are currently unset. Defaults to false.

* **setRoleBindingSubjects** (RoleBindingSubjectMode), optional

  SetRoleBindingSubjects determines which subject fields in RoleBinding and ClusterRoleBinding objects will have their namespace fields set. Overrides field specs provided for these types.

  _RoleBindingSubjectMode specifies which subjects will be set. It can be one of three possible values:_

  - `defaultOnly` (default): namespace will be set only on subjects named "default".
  - `allServiceAccounts`: Namespace will be set on all subjects with `kind: ServiceAccount`.
  - `none`: All subjects will be skipped.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Transformers/NamespaceTransformer.md | master | kustomize |
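A hedged sketch of a NamespaceTransformer configuration combining these fields. The metadata name is illustrative, and the assumption here is that the namespace to apply is taken from the transformer's own `metadata/namespace` (this chunk does not state where the value comes from):

```yaml
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: customNamespacer   # illustrative name
  namespace: my-app        # assumed: namespace value applied to resources
unsetOnly: true            # only fill in namespaces that are currently unset
setRoleBindingSubjects: allServiceAccounts
fieldSpecs:
  - path: metadata/namespace
    create: true
```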
* **name** (string), optional

  Name of the generated resource. A hash suffix will be added to the Name by default.

* **namespace** (string), optional

  Namespace of the generated resource.

* **behavior** (string), optional

  Behavior of the generated resource; must be one of:

  * **create**: Create a new resource
  * **replace**: Replace the existing resource
  * **merge**: Merge with the existing resource

* **literals** ([]string), optional

  List of string literal pair sources. Each literal source should be a key and literal value, e.g. `key=value`.

* **files** ([]string), optional

  List of file paths to use in creating a list of key value pairs. A source should be in the form `[{key}=]{path}`. If the `key=` argument is not provided, the key is the path's basename. If the `key=` argument is provided, it becomes the key. The value is the file contents. Specifying a directory will iterate each named file in the directory whose basename is a valid resource key.

* **envs** ([]string), optional

  List of file paths. The contents of each file should be one key=value pair per line. Additionally, npm `.env` and `.ini` files are supported.

* **env** (string), optional

  Env is the singular form of `envs`. This is merged with `envs` on edits with `kustomize fix` for consistency with `literals` and `files`.

* **options** ([GeneratorOptions]({{< ref "../kustomization%20file/generatorOptions.md" >}}))

  Options override the global [generatorOptions]({{< ref "../kustomization%20file/generatorOptions.md" >}}) field.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/included/generatorargs.md | master | kustomize |
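A hedged sketch of these arguments as they might appear on a `configMapGenerator` entry in a kustomization file (the names and paths are illustrative):

```yaml
configMapGenerator:
  - name: app-config          # illustrative; a hash suffix is appended by default
    behavior: create
    literals:
      - LOG_LEVEL=debug       # key=value literal source
    files:
      - settings=config/settings.json  # key=path form; path alone would use the basename as key
    envs:
      - config/.env           # one key=value pair per line
```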
* **group** (string)

  Kubernetes group that this FieldSpec applies to. If empty, this FieldSpec applies to all groups. Currently, there is no way to specify only the core group, which is also represented by the empty string.

* **version** (string)

  Kubernetes version that this FieldSpec applies to. If empty, this FieldSpec applies to all versions.

* **kind** (string)

  Kubernetes kind that this FieldSpec applies to. If empty, this FieldSpec applies to all kinds.

* **path** (string)

  Path to target field. Fields in path are delimited by forward slashes "/".

* **create** (bool)

  If true, creates fields in **path** not already present.

| https://github.com/kubernetes-sigs/kustomize/blob/master//site/content/en/docs/Reference/API/Common Definitions/FieldSpec.md | master | kustomize |
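A minimal sketch of a FieldSpec using these fields, targeting a field on all Deployments in the `apps` group (the path chosen is illustrative):

```yaml
# Illustrative FieldSpec: applies to apps/v1 Deployments only,
# targets spec/template/metadata/labels, creating it if absent.
group: apps
version: v1
kind: Deployment
path: spec/template/metadata/labels
create: true
```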
# The ingress-nginx kubectl plugin

## Installation

Install [krew](https://github.com/GoogleContainerTools/krew), then run

```console
kubectl krew install ingress-nginx
```

to install the plugin. Then run

```console
kubectl ingress-nginx --help
```

to make sure the plugin is properly installed and to get a list of commands:

```console
kubectl ingress-nginx --help
A kubectl plugin for inspecting your ingress-nginx deployments

Usage:
  ingress-nginx [command]

Available Commands:
  backends    Inspect the dynamic backend information of an ingress-nginx instance
  certs       Output the certificate data stored in an ingress-nginx pod
  conf        Inspect the generated nginx.conf
  exec        Execute a command inside an ingress-nginx pod
  general     Inspect the other dynamic ingress-nginx information
  help        Help about any command
  info        Show information about the ingress-nginx service
  ingresses   Provide a short summary of all of the ingress definitions
  lint        Inspect kubernetes resources for possible issues
  logs        Get the kubernetes logs for an ingress-nginx pod
  ssh         ssh into a running ingress-nginx pod

Flags:
      --as string                      Username to impersonate for the operation
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --cache-dir string               Default HTTP cache directory (default "/Users/alexkursell/.kube/http-cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for ingress-nginx
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use

Use "ingress-nginx [command] --help" for more information about a command.
```

## Common Flags

- Every subcommand supports the basic `kubectl` configuration flags like `--namespace`, `--context`, `--client-key` and so on.
- Subcommands that act on a particular `ingress-nginx` pod (`backends`, `certs`, `conf`, `exec`, `general`, `logs`, `ssh`) support the `--deployment <deployment>`, `--pod <pod>`, and `--container <container>` flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The `--deployment` flag defaults to `ingress-nginx-controller`, and the `--container` flag defaults to `controller`.
- Subcommands that inspect resources (`ingresses`, `lint`) support the `--all-namespaces` flag, which causes them to inspect resources in every namespace.

## Subcommands

Note that `backends`, `general`, `certs`, and `conf` require `ingress-nginx` version `0.23.0` or higher.

### backends

Run `kubectl ingress-nginx backends` to get a JSON array of the backends that an ingress-nginx controller currently knows about:

```console
$ kubectl ingress-nginx backends -n ingress-nginx
[
  {
    "name": "default-apple-service-5678",
    "service": {
      "metadata": {
        "creationTimestamp": null
      },
      "spec": {
        "ports": [
          {
            "protocol": "TCP",
            "port": 5678,
            "targetPort": 5678
          }
        ],
        "selector": {
          "app": "apple"
        },
        "clusterIP": "10.97.230.121",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    "port": 0,
    "sslPassthrough": false,
    "endpoints": [
      {
        "address": "10.1.3.86",
        "port": "5678"
      }
    ],
    "sessionAffinityConfig": {
      "name": "",
      "cookieSessionAffinity": {
        "name": ""
      }
    },
    "upstreamHashByConfig": {
      "upstream-hash-by-subset-size": 3
    },
    "noServer": false,
    "trafficShapingPolicy": {
      "weight": 0,
      "header": "",
      "headerValue": "",
      "cookie": ""
    }
  },
  {
    "name": "default-echo-service-8080",
    ...
  },
  {
    "name": "upstream-default-backend",
    ...
  }
]
```

| https://github.com/kubernetes/ingress-nginx/blob/main//docs/kubectl-plugin.md | main | ingress-nginx |
false, "endpoints": [ { "address": "10.1.3.86", "port": "5678" } ], "sessionAffinityConfig": { "name": "", "cookieSessionAffinity": { "name": "" } }, "upstreamHashByConfig": { "upstream-hash-by-subset-size": 3 }, "noServer": false, "trafficShapingPolicy": { "weight": 0, "header": "", "headerValue": "", "cookie": "" } }, { "name": "default-echo-service-8080", ... }, { "name": "upstream-default-backend", ... } ] ``` Add the `--list` option to show only the backend names. Add the `--backend ` option to show only the backend with the given name. ### certs Use `kubectl ingress-nginx certs --host ` to dump the SSL cert/key information for a given host. \*\*WARNING:\*\* This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. ```console $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- ``` ### conf Use `kubectl ingress-nginx conf` to dump the generated `nginx.conf` file. Add the `--host ` option to view only the server block for that host: ```console kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server\_name testaddr.local ; listen 80; set $proxy\_upstream\_name "-"; set $pass\_access\_scheme $scheme; set $pass\_server\_port $server\_port; set $best\_http\_host $http\_host; set $pass\_port $pass\_server\_port; location / { set $namespace ""; set $ingress\_name ""; set $service\_name ""; set $service\_port "0"; set $location\_path "/"; ... ``` ### exec `kubectl ingress-nginx exec` is exactly the same as `kubectl exec`, with the same command flags. It will automatically choose an `ingress-nginx` pod to run the command in. 
```console $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi\_params lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template ``` ### info Shows the internal and external IP/CNAMES for an `ingress-nginx` service. ```console $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 ``` Use the `--service ` flag if your `ingress-nginx` `LoadBalancer` service is not named `ingress-nginx`. ### ingresses `kubectl ingress-nginx ingresses`, alternately `kubectl ingress-nginx ing`, shows a more detailed view of the ingress definitions in a namespace. Compare: ```console $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 \* localhost 80 5d ``` vs. ```console $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 \* localhost NO echo-service 8080 2 ``` ### lint `kubectl ingress-nginx lint` can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between `ingress-nginx` versions. ```console $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... ✗ anamespace/this-nginx - Contains the removed session-cookie-hash annotation. 
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3743
✗ othernamespace/ingress-definition-blah
  - The rewrite-target annotation value does not reference a capture group
      Lint added for version 0.22.0
      https://github.com/kubernetes/ingress-nginx/issues/3174

Checking deployments...
✗ namespace2/ingress-nginx-controller
  - Uses removed config flag --sort-backends
      Lint added for version 0.22.0
      https://github.com/kubernetes/ingress-nginx/issues/3655
  - Uses removed config flag --enable-dynamic-certificates
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3808
```

To show the lints added **only** for a particular `ingress-nginx` release, use the `--from-version` and `--to-version` flags:

```console
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0
Checking ingresses...
✗ anamespace/this-nginx
  - Contains the removed session-cookie-hash annotation.
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3743

Checking deployments...
✗ namespace2/ingress-nginx-controller
  - Uses removed config flag --enable-dynamic-certificates
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3808
```
### logs

`kubectl ingress-nginx logs` is almost the same as `kubectl logs`, with fewer flags. It will automatically choose an `ingress-nginx` pod to read logs from.

```console
$ kubectl ingress-nginx logs -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-48dc3a867
  Repository: git@github.com:kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------

W0405 16:53:46.061589       7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.9
W0405 16:53:46.070093       7 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0405 16:53:46.070499       7 main.go:205] Creating API client for https://10.96.0.1:443
I0405 16:53:46.077784       7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64
I0405 16:53:46.183359       7 nginx.go:265] Starting NGINX Ingress controller
I0405 16:53:46.193913       7 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"82258915-563e-11e9-9c52-025000000001", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
...
```

### ssh

`kubectl ingress-nginx ssh` is exactly the same as `kubectl ingress-nginx exec -it -- /bin/bash`. Use it when you want to quickly be dropped into a shell inside a running `ingress-nginx` container.
```console
$ kubectl ingress-nginx ssh -n ingress-nginx
www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$
```
# Troubleshooting

## Ingress-Controller Logs and Events

There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.

### Check the Ingress Resource Events

```console
$ kubectl get ing -n <namespace-of-ingress-resource>
NAME           HOSTS      ADDRESS     PORTS     AGE
cafe-ingress   cafe.com   10.0.2.15   80        25s

$ kubectl describe ing cafe-ingress -n <namespace-of-ingress-resource>
Name:             cafe-ingress
Namespace:        default
Address:          10.0.2.15
Default backend:  default-http-backend:80 (172.17.0.5:8080)
Rules:
  Host      Path  Backends
  ----      ----  --------
  cafe.com
            /tea      tea-svc:80 (<none>)
            /coffee   coffee-svc:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default","selfLink":"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress"},"spec":{"rules":[{"host":"cafe.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}]},"status":{"loadBalancer":{"ingress":[{"ip":"169.48.142.110"}]}}}
Events:
  Type     Reason  Age   From                      Message
  ----     ------  ----  ----                      -------
  Normal   CREATE  1m    ingress-nginx-controller  Ingress default/cafe-ingress
  Normal   UPDATE  58s   ingress-nginx-controller  Ingress default/cafe-ingress
```

### Check the Ingress Controller Logs

```console
$ kubectl get pods -n <namespace-of-ingress-controller>
NAME                                        READY     STATUS    RESTARTS   AGE
ingress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m

$ kubectl logs -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.14.0
  Build:      git-734361d
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
....
```

### Check the Nginx Configuration

```console
$ kubectl get pods -n <namespace-of-ingress-controller>
NAME                                        READY     STATUS    RESTARTS   AGE
ingress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m

$ kubectl exec -it -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf
daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 523264;
worker_shutdown_timeout 240s;
events {
	multi_accept        on;
	worker_connections  16384;
	use                 epoll;
}
http {
....
```

### Check if used Services Exist

```console
$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       coffee-svc             ClusterIP   10.106.154.35    <none>        80/TCP          18m
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP         30m
default       tea-svc                ClusterIP   10.104.172.12    <none>        80/TCP          18m
kube-system   default-http-backend   NodePort    10.108.189.236   <none>        80:30001/TCP    30m
kube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   30m
kube-system   kubernetes-dashboard   NodePort    10.103.128.17    <none>        80:30000/TCP    30m
```

## Debug Logging

Using the flag `--v=XX` it is possible to increase the level of logging. This is performed by editing the deployment.

```console
$ kubectl get deploy -n <namespace-of-ingress-controller>
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default-http-backend       1         1         1            1           35m
ingress-nginx-controller   1         1         1            1           35m

$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller
# Add --v=X to "- args", where X is an integer
```

- `--v=2` shows details using `diff` about the changes in the configuration in nginx
- `--v=3` shows details about the service, Ingress rule, and endpoint changes, and it dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](https://nginx.org/en/docs/debugging_log.html)

## Authentication to the Kubernetes API Server

A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.
Both authentications must work:

```
+-------------+   service          +------------+
|             |   authentication   |            |
+  apiserver  +<-------------------+  ingress   |
|             |                    | controller |
+-------------+                    +------------+
```

**Service authentication**

The Ingress controller needs information from the apiserver. Therefore, authentication is required, which can be achieved in a couple of ways:

* _Service Account:_ This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See the 'Service Account' section for details.

* _Kubeconfig file:_ In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the `--kubeconfig` flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the `--kubeconfig` flag does not require the `--apiserver-host` flag. The format of the file is identical to `~/.kube/config`, which is used by kubectl to connect to the API server. See the 'kubeconfig' section for details.

* _Using the flag `--apiserver-host`:_ Using this flag `--apiserver-host=http://localhost:8080` it is possible to specify an unsecured API server or reach a remote kubernetes cluster using [kubectl proxy](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy).
Please do not use this approach in production.

In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side.

```
Kubernetes                                                  Workstation
+---------------------------------------------------+      +------------------+
|                                                   |      |                  |
|  +-----------+   apiserver        +------------+  |      |  +------------+  |
|  |           |   proxy            |            |  |      |  |            |  |
|  | apiserver |                    |  ingress   |  |      |  |  ingress   |  |
|  |           |                    | controller |  |      |  | controller |  |
|  |           |                    |            |  |      |  |            |  |
|  |           |                    |            |  |      |  |            |  |
|  |           |  service account/  |            |  |      |  |            |  |
|  |           |  kubeconfig        |            |  |      |  |            |  |
|  |           +<-------------------+            |  |      |  |            |  |
|  |           |                    |            |  |      |  |            |  |
|  +------+----+      kubeconfig    +------+-----+  |      |  +------+-----+  |
|         |<--------------------------------------------------------|        |
|                                                   |      |                  |
+---------------------------------------------------+      +------------------+
```

### Service Account

If using a service account to connect to the API server, the ingress-controller expects the file `/var/run/secrets/kubernetes.io/serviceaccount/token` to be present. It provides a secret token that is required to authenticate with the API server.
Verify with the following commands:

```console
# start a container that contains curl
$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh

# check if secret exists
/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt     namespace  token

# check base connectivity from cluster inside
/ $ curl -k https://kubernetes.default.svc.cluster.local
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

# connect using tokens
/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local && echo
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    ... TRUNCATED
    "/readyz/shutdown",
    "/version"
  ]
}

# when you type `exit` or `^D` the test pod will be deleted
```

If it is not working, there are two possible reasons:

1. The contents of the tokens are invalid. Find the secret name with `kubectl get secrets | grep service-account` and delete it with `kubectl delete secret <name>`. It will automatically be recreated.

2. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the `--admission-control` parameter.

   > Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.
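If you suspect the mounted token itself is corrupted (reason 1 above), note that a service-account token is a JWT: three base64url segments separated by dots. You can check offline that the payload segment at least decodes. The sketch below is illustrative only; the token is fabricated so the snippet is self-contained, whereas a real check would read `/var/run/secrets/kubernetes.io/serviceaccount/token`:

```shell
# Fabricate a JWT-shaped token so this snippet can run anywhere.
payload='{"iss":"kubernetes/serviceaccount"}'
token="$(printf 'header' | base64).$(printf '%s' "$payload" | base64).signature"

# Extract the payload (second dot-separated segment).
seg2=$(printf '%s' "$token" | cut -d. -f2)

# Pad to a multiple of 4 so strict base64 decoders accept it.
while [ $(( ${#seg2} % 4 )) -ne 0 ]; do seg2="${seg2}="; done

decoded=$(printf '%s' "$seg2" | base64 -d)
printf '%s\n' "$decoded"
```

If the decode fails or prints garbage instead of a JSON claims object, the token on disk is not a valid JWT and deleting the secret (so it is recreated) is the next step.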
More information:

- [User Guide: Service Accounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
- [Cluster Administrator Guide: Managing Service Accounts](http://kubernetes.io/docs/admin/service-accounts-admin/)

## Kube-Config

If you want to use a kubeconfig file for authentication, follow the [deploy procedure](deploy/index.md) and add the flag `--kubeconfig=/etc/kubernetes/kubeconfig.yaml` to the args section of the deployment.

## Using GDB with Nginx

[Gdb](https://www.gnu.org/software/gdb/) can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.

Note: The below is based on the nginx [documentation](https://docs.nginx.com/nginx/admin-guide/monitoring/debugging/#dumping-nginx-configuration-from-a-running-process).

1. SSH into the worker

   ```console
   $ ssh user@workerIP
   ```
2. Obtain the Docker container running nginx

   ```console
   $ docker ps | grep ingress-nginx-controller
   CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS   NAMES
   d9e1d243156a   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   19 minutes ago   Up 19 minutes           k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0
   ```

3. Exec into the container

   ```console
   $ docker exec -it --user=0 --privileged d9e1d243156a bash
   ```

4. Make sure nginx is running with `--with-debug`

   ```console
   $ nginx -V 2>&1 | grep -- '--with-debug'
   ```

5. Get the list of processes running in the container

   ```console
   $ ps -ef
   UID    PID   PPID  C STIME TTY    TIME     CMD
   root     1      0  0 20:23 ?      00:00:00 /usr/bin/dumb-init /nginx-ingres
   root     5      1  0 20:23 ?      00:00:05 /ingress-nginx-controller --defa
   root    21      5  0 20:23 ?      00:00:00 nginx: master process /usr/sbin/
   nobody 106     21  0 20:23 ?      00:00:00 nginx: worker process
   nobody 107     21  0 20:23 ?      00:00:00 nginx: worker process
   root   172      0  0 20:43 pts/0  00:00:00 bash
   ```

6. Attach gdb to the nginx master process

   ```console
   $ gdb -p 21
   ....
   Attaching to process 21
   Reading symbols from /usr/sbin/nginx...done.
   ....
   (gdb)
   ```

7. Copy and paste the following:

   ```console
   set $cd = ngx_cycle->config_dump
   set $nelts = $cd.nelts
   set $elts = (ngx_conf_dump_t*)($cd.elts)
   while ($nelts-- > 0)
     set $name = $elts[$nelts]->name.data
     printf "Dumping %s to nginx_conf.txt\n", $name
     append memory nginx_conf.txt \
       $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
   end
   ```

8. Quit GDB by pressing CTRL+D
9. Open nginx_conf.txt

   ```console
   cat nginx_conf.txt
   ```

## Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)

1. In case you face the below error while installing Nginx using the helm chart (either via helm commands or the helm_release terraform provider):

   ```
   Warning  Failed  5m5s (x4 over 6m34s)  kubelet  Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": rpc error: code = Unknown desc = failed to pull and unpack image "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to resolve reference "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to do request: Head "https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": EOF
   ```

   Then please follow the steps below.

2. During troubleshooting you can also execute the below commands to test connectivity from your local machine to the repositories:

   a. `curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null`

   ```
   (⎈ |myprompt)➜  ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
     0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
   (⎈ |myprompt)➜  ~
   ```
   b. `curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47`

   ```
   (⎈ |myprompt)➜  ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
   HTTP/2 200
   docker-distribution-api-version: registry/2.0
   content-type: application/vnd.docker.distribution.manifest.list.v2+json
   docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
   content-length: 1384
   date: Wed, 28 Sep 2022 16:46:28 GMT
   server: Docker Registry
   x-xss-protection: 0
   x-frame-options: SAMEORIGIN
   alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
   (⎈ |myprompt)➜  ~
   ```

   Redirection in the proxy is implemented to ensure the pulling of the images.

3. The recommended solution is to whitelist the below image repositories:

   ```
   *.appspot.com
   *.k8s.io
   *.pkg.dev
   *.gcr.io
   ```

   More details about the above repos:

   a. `*.k8s.io` -> To ensure you can pull any images from registry.k8s.io

   b. `*.gcr.io` -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services.

   c. `*.appspot.com` -> This is a Google domain, part of the domain used for GCR.
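To sanity-check whether a given registry host would be covered by the allow-list patterns above, plain shell glob matching is enough. This is an illustrative sketch only; `matches_allowlist` is a hypothetical helper, not part of any tool:

```shell
# Hypothetical helper: does a registry host match the suggested allow-list?
matches_allowlist() {
  case "$1" in
    *.appspot.com|*.k8s.io|*.pkg.dev|*.gcr.io) return 0 ;;
    *) return 1 ;;
  esac
}

matches_allowlist registry.k8s.io && echo "registry.k8s.io allowed"
matches_allowlist eu.gcr.io       && echo "eu.gcr.io allowed"
matches_allowlist docker.io       || echo "docker.io not in allow-list"
```

Note that `docker.io` is not matched: the list only covers the Google-hosted registries that the ingress-nginx images are served from.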
## Unable to listen on port (80/443)

One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE [linux capability](https://man7.org/linux/man-pages/man7/capabilities.7.html) to allow binding these ports as a normal user (www-data / 101). This involves two components:

1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via [setcap](https://man7.org/linux/man-pages/man8/setcap.8.html))
2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.

If you encounter this on one or some node(s) but not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case the underlying image layers were corrupted and the executable lost the capability.

### Create a test pod

The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running "sleep 3600", and exec into it for further troubleshooting.
For example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-nginx-sleep
  namespace: default
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: ##_CONTROLLER_IMAGE_##
      resources:
        requests:
          memory: "512Mi"
          cpu: "500m"
        limits:
          memory: "1Gi"
          cpu: "1"
      command: ["sleep"]
      args: ["3600"]
      ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: true
        capabilities:
          add:
            - NET_BIND_SERVICE
          drop:
            - ALL
        runAsUser: 101
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/hostname: ##_NODE_NAME_##
  tolerations:
    - key: "node.kubernetes.io/unschedulable"
      operator: "Exists"
      effect: NoSchedule
```

* update the namespace if applicable/desired
* replace `##_NODE_NAME_##` with the problematic node (or remove the nodeSelector section if the problem is not confined to one node)
* replace `##_CONTROLLER_IMAGE_##` with the same image as in use by your ingress-nginx deployment
* confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster

Apply the YAML and open a shell into the pod. Try to manually run the controller process:

```console
$ /nginx-ingress-controller
```

You should get the same error as from the ingress controller pod logs.

Confirm the capabilities are properly surfacing into the pod:

```console
$ grep CapBnd /proc/1/status
CapBnd:	0000000000000400
```

The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, then you can decode it on another linux box (capsh is not available in this container) like below, and then figure out why the specified capabilities are not propagating into the pod/container.

```console
$ capsh --decode=0000000000000400
0x0000000000000400=cap_net_bind_service
```

## Create a test pod as root

(Note, this may be restricted by PodSecurityAdmission/Standards, OPA Gatekeeper, etc.,
in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.)

To test further you may want to install additional utilities, etc. Modify the pod yaml by:

* changing runAsUser from 101 to 0
* removing the "drop..ALL" section from the capabilities.

Some things to try after shelling into this container:

Try running the controller as the www-data (101) user:

```console
$ chmod 4755 /nginx-ingress-controller
$ /nginx-ingress-controller
```

Examine the errors to see if there is still an issue listening on the port, or if it passed that and moved on to other expected errors due to running out of context.
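If neither capsh nor another Linux box is handy, the `CapBnd` mask can also be decoded by hand: each capability is one bit of the mask, and per capabilities(7) CAP_NET_BIND_SERVICE is bit 10 (hence the value `0x400`). A minimal POSIX-shell sketch:

```shell
# Decode a CapBnd mask by hand: CAP_NET_BIND_SERVICE is bit 10,
# so a mask of 0x0000000000000400 means only that capability is set.
mask=0x0000000000000400

present=$(( (mask >> 10) & 1 ))
if [ "$present" -eq 1 ]; then
  echo "cap_net_bind_service: present"
else
  echo "cap_net_bind_service: missing"
fi
```

The same shift-and-mask works for any other capability bit listed in capabilities(7).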
Install the libcap package and check the capabilities on the file:

```console
$ apk add libcap
(1/1) Installing libcap (2.50-r0)
Executing busybox-1.33.1-r7.trigger
OK: 26 MiB in 41 packages
$ getcap /nginx-ingress-controller
/nginx-ingress-controller cap_net_bind_service=ep
```

(if missing, see above about purging the image on the server and re-pulling)

Strace the executable to see what system calls are being executed when it fails:

```console
$ apk add strace
(1/1) Installing strace (5.12-r0)
Executing busybox-1.33.1-r7.trigger
OK: 28 MiB in 42 packages
$ strace /nginx-ingress-controller
execve("/nginx-ingress-controller", ["/nginx-ingress-controller"], 0x7ffeb9eb3240 /* 131 vars */) = 0
arch_prctl(ARCH_SET_FS, 0x29ea690)      = 0
...
```
# How it works

The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.

## NGINX configuration

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. _Though it is important to note that we don't reload Nginx on changes that impact only an `upstream` configuration (i.e. Endpoints change when you deploy your app)_. We use [lua-nginx-module](https://github.com/openresty/lua-nginx-module) to achieve this. Check [below](#avoiding-reloads-on-endpoints-changes) to learn more about how it's done.

## NGINX model

Usually, a Kubernetes Controller utilizes the [synchronization loop pattern][1] to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use [Kubernetes Informers][2], in particular, `FilteredSharedInformer`. These informers allow reacting to changes using [callbacks][3] when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints.
If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.

One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

The final representation of the NGINX configuration is generated from a [Go template][6] using the new model as input for the variables required by the template.

## Building the NGINX model

Building a model is an expensive operation, and for this reason the use of the synchronization loop is a must. By using a [work queue][4] it is possible to not lose changes and to remove the use of [sync.Mutex][5] to force a single execution of the sync loop; additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the [work queue][4].

Operations to build the model:

- Order Ingress rules by `CreationTimestamp` field, i.e., old rules first.
- If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
- If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
- If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
- Create a list of NGINX Servers (per hostname)
- Create a list of NGINX Upstreams
- If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
- Annotations are applied to all the paths in the Ingress.
- Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.

## When a reload is required

The next list describes the scenarios when a reload is required:

- A new Ingress Resource is created.
- A TLS section is added to an existing Ingress.
- A change in Ingress annotations that impacts more than just upstream configuration. For instance, the `load-balance` annotation does not require a reload.
- A path is added to or removed from an Ingress.
- An Ingress, Service, or Secret is removed.
- A previously missing object referenced from the Ingress becomes available, like a Service or Secret.
- A Secret is updated.

## Avoiding reloads

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.

### Avoiding reloads on Endpoints changes

On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone.
Then, for every request, Lua code running in the [`balancer_by_lua`](https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md) context detects which endpoints it should choose an upstream peer from and applies the configured load-balancing algorithm to pick one. Nginx then takes care of the rest. This way we avoid reloading Nginx on endpoint changes. _Note_ that this also covers annotation changes that affect only `upstream` configuration in Nginx. In a relatively big cluster with frequently deployed apps, this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load-balancing quality (after every reload Nginx resets the state of load balancing), and so on.

### Avoiding outage from wrong configuration

Because the ingress controller works using the [synchronization loop pattern](https://coreos.com/kubernetes/docs/latest/replication-controller.html#the-reconciliation-loop-in-detail), it applies the configuration for all matching objects. If some Ingress objects have a broken configuration, for example a syntax error in the `nginx.ingress.kubernetes.io/configuration-snippet` annotation, the generated configuration becomes invalid, Nginx does not reload, and hence no further ingresses will be taken into account. To prevent this situation from happening, the Ingress-Nginx Controller optionally exposes a [validating admission webhook server][8] to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress object to the list of ingresses, generates the configuration, and calls nginx to verify that the configuration has no syntax errors.
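The webhook flow described above can be sketched as follows. This is a hedged toy model, not the real webhook: `render_config` stands in for the `nginx.tmpl` rendering step and `config_is_valid` stands in for running `nginx -t` on the result; all names are hypothetical.

```python
# Hypothetical sketch of the validating-webhook idea: append the candidate
# ingress to the known set, render the full configuration, and admit the
# object only if the rendered configuration is valid.
from typing import List

def render_config(ingresses: List[str]) -> str:
    # Stand-in for rendering nginx.tmpl with all known ingresses.
    return "\n".join(f"server {{ # {name} }}" for name in ingresses)

def config_is_valid(config: str) -> bool:
    # Stand-in for a `nginx -t` style syntax check.
    return "syntax error" not in config

def admit(existing: List[str], candidate: str) -> bool:
    """Reject the candidate if adding it would break the whole config."""
    return config_is_valid(render_config(existing + [candidate]))
```

The key design point mirrored here is that the whole configuration is validated with the candidate included, so one broken Ingress can be rejected before it invalidates everyone else's configuration.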
[0]: https://github.com/openresty/lua-nginx-module/pull/1259
[1]: https://kubernetes.io/docs/concepts/architecture/controller/#controller-pattern
[2]: https://godoc.org/k8s.io/client-go/informers#NewFilteredSharedInformerFactory
[3]: https://godoc.org/k8s.io/client-go/tools/cache#ResourceEventHandlerFuncs
[4]: https://github.com/kubernetes/ingress-nginx/blob/main/internal/task/queue.go#L38
[5]: https://golang.org/pkg/sync/#Mutex
[6]: https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/template/nginx.tmpl
[7]: https://nginx.org/en/docs/beginners_guide.html#control
[8]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook

| https://github.com/kubernetes/ingress-nginx/blob/main//docs/how-it-works.md | main | ingress-nginx |
# e2e test suite for [Ingress NGINX Controller](https://github.com/kubernetes/ingress-nginx/tree/main/) ### [[Admission] admission controller](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L38) - [should not allow overlaps of host and paths without canary annotations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L46) - [should not allow overlaps of host and paths without canary annotations in any rule](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L63) - [should allow overlaps of host and paths with canary annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L94) - [should block ingress with invalid path](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L115) - [should return an error if there is an error validating the ingress definition](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L134) - [should return an error if there is an invalid value in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L149) - [should return an error if there is a forbidden value in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L163) - [should return an error if there is an invalid path and wrong pathType is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L177) - [should not return an error if the Ingress V1 definition is valid with Ingress Class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L211) - [should not return an error if the Ingress V1 definition is valid with IngressClass annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L227) - [should return an error if the Ingress V1 definition contains 
invalid annotations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L245) - [should not return an error for an invalid Ingress when it has unknown class](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/admission/admission.go#L260) ### [affinity session-cookie-name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L43) - [should set sticky cookie SERVERID](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L50) - [should change cookie name on ingress definition change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L72) - [should set the path to /something on the generated cookie](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L107) - [does not set the path to / on the generated cookie if there's more than one rule referring to the same backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L129) - [should set cookie with expires](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L202) - [should set cookie with domain](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L234) - [should not set cookie without domain annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L257) - [should work with use-regex annotation and session-cookie-path](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L279) - [should warn user when use-regex is true and session-cookie-path is not set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L303) - [should not set affinity across all server locations when using separate ingresses](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L329) - [should set sticky 
cookie without host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L361) - [should work with server-alias annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L381) - [should set secure in cookie with provided true annotation on http](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L421) - [should not set secure in cookie with provided false annotation on http](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L444) - [should set secure in cookie with provided false annotation on https](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinity.go#L467) ### [affinitymode](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L33) - [Balanced affinity mode should balance](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L36) - [Check persistent affinity mode](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/affinitymode.go#L69) ### [server-alias](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L31) - [should return status code 200 for host 'foo' and 404 for 'bar'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L38) - [should return status code 200 for host 'foo' and 'bar'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L64) - [should return status code 200 for hosts defined in two ingresses, different path with one alias](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/alias.go#L89) ### [app-root](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/approot.go#L28) - [should redirect to /foo](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/approot.go#L35) ### 
[auth-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L45) - [should return status code 200 when no authentication is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L52) - [should return status code 503 when authentication is configured with an invalid secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L71) - [should return status code 401 when authentication is configured but Authorization header is not configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L95) - [should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L122) - [should return status code 401 and cors headers when authentication and cors is configured but Authorization header is not configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L150) - [should return status code 200 when authentication is configured and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L178) - [should return status code 200 when authentication is configured with a map and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L205) - [should return status code 401 when authentication is configured with invalid content and Authorization header is sent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L233) - [proxy\_set\_header My-Custom-Header 42;](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L272) - [proxy\_set\_header My-Custom-Header 42;](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L292) - [proxy\_set\_header "My-Custom-Header" 
"42";](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L311) - [user retains cookie by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L420) - [user does not retain cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L431) - [user with annotated ingress retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L442) - [should return status code 200 when signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L481) - [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L490) - [keeps processing new ingresses even if one of the existing | https://github.com/kubernetes/ingress-nginx/blob/main//docs/e2e-tests.md | main | ingress-nginx |
cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L431) - [user with annotated ingress retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L442) - [should return status code 200 when signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L481) - [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L490) - [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L501) - [should overwrite Foo header with auth response](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L525) - [should return status code 200 when signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L701) - [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L710) - [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L721) - [should return status code 200 when signed in after auth backend is deleted ](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L780) - [should deny login for different location on same server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L800) - [should deny login for different servers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L828) - [should redirect to signin url when not signed in](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L857) - [should return 503 
(location was denied)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L887) - [should add error to the config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/auth.go#L895) ### [auth-tls-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L31) - [should set sslClientCertificate, sslVerifyClient and sslVerifyDepth with auth-tls-secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L38) - [should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L86) - [should 302 redirect to error page instead of 400 when auth-tls-error-page is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L116) - [should pass URL-encoded certificate to upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L163) - [should validate auth-tls-verify-client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L208) - [should return 403 using auth-tls-match-cn with no matching CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L267) - [should return 200 using auth-tls-match-cn with matching CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L296) - [should reload the nginx config when auth-tls-match-cn is updated](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L325) - [should return 200 using auth-tls-match-cn where atleast one of the regex options matches CN from client](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/authtls.go#L368) ### [backend-protocol](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L29) - [should 
set backend protocol to https:// and use proxy\_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L36) - [should set backend protocol to https:// and use proxy\_pass with lowercase annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L51) - [should set backend protocol to $scheme:// and use proxy\_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L66) - [should set backend protocol to grpc:// and use grpc\_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L81) - [should set backend protocol to grpcs:// and use grpc\_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L96) - [should set backend protocol to '' and use fastcgi\_pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/backendprotocol.go#L111) ### [canary-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L36) - [should response with a 200 status from the mainline upstream when requests are made to the mainline ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L45) - [should return 404 status for requests to the canary if no matching ingress is found](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L89) - [should return the correct status codes when endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L120) - [should route requests to the correct upstream if mainline ingress is created before the canary ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L174) - [should route requests to the correct upstream if mainline ingress is created after the canary 
ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L232) - [should route requests to the correct upstream if the mainline ingress is modified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L289) - [should route requests to the correct upstream if the canary ingress is modified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L363) - [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L445) - [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L513) - [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L594) - [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L647) - [should routes to mainline upstream when the given Regex causes error](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L692) - [should route requests to the correct upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L741) - [respects always and never values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L790) - [should route requests only to mainline if canary weight is 0](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L862) - [should route requests only to canary if canary weight is 100](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L910) - [should route requests only to canary if canary weight is equal to canary weight total](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L952) - [should route requests split between 
mainline and canary if canary weight is 50](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L995) - [should route requests split between mainline and canary if canary weight is 100 and weight total is 200](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1031) - [should not use canary as a catch-all server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1070) - [should not use canary with domain as a server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1104) - [does not crash when canary ingress has multiple paths to the same non-matching backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1138) - [always routes traffic to canary if first request was affinitized to | https://github.com/kubernetes/ingress-nginx/blob/main//docs/e2e-tests.md | main | ingress-nginx |
weight total is 200](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1031) - [should not use canary as a catch-all server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1070) - [should not use canary with domain as a server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1104) - [does not crash when canary ingress has multiple paths to the same non-matching backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1138) - [always routes traffic to canary if first request was affinitized to canary (default behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1175) - [always routes traffic to canary if first request was affinitized to canary (explicit sticky behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1242) - [routes traffic to either mainline or canary backend (legacy behavior)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/canary.go#L1310) ### [client-body-buffer-size](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L30) - [should set client\_body\_buffer\_size to 1000](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L37) - [should set client\_body\_buffer\_size to 1K](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L59) - [should set client\_body\_buffer\_size to 1k](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L81) - [should set client\_body\_buffer\_size to 1m](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L103) - [should set client\_body\_buffer\_size to 
1M](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L125) - [should not set client\_body\_buffer\_size to invalid 1b](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/clientbodybuffersize.go#L147) ### [connection-proxy-header](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/connection.go#L28) - [set connection header to keep-alive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/connection.go#L35) ### [cors-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L33) - [should enable cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L40) - [should set cors methods to only allow POST, GET](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L67) - [should set cors max-age](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L83) - [should disable cors allow credentials](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L99) - [should allow origin for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L115) - [should allow headers for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L142) - [should expose headers for cors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L158) - [should allow - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L174) - [should not allow - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L201) - [should allow correct origins - single origin for multiple cors values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L221) - [should not break 
functionality](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L272) - [should not break functionality - without `\*`](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L296) - [should not break functionality with extra domain](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L319) - [should not match](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L343) - [should allow - single origin with required port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L363) - [should not allow - single origin with port and origin without port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L391) - [should not allow - single origin without port and origin with required port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L410) - [should allow - matching origin with wildcard origin (2 subdomains)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L430) - [should not allow - unmatching origin with wildcard origin (2 subdomains)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L473) - [should allow - matching origin+port with wildcard origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L493) - [should not allow - portless origin with wildcard origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L520) - [should allow correct origins - missing subdomain + origin with wildcard origin and correct origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L540) - [should allow - missing origins (should allow all origins)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L576) - [should allow correct origin but not others - cors 
allow origin annotations contain trailing comma](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L636) - [should allow - origins with non-http[s] protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L673) ### [custom-headers-\*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L33) - [should return status code 200 when no custom-headers is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L40) - [should return status code 503 when custom-headers is configured with an invalid secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L57) - [more\_set\_headers 'My-Custom-Header' '42';](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L78) ### [custom-http-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L34) - [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L41) ### [default-backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default\_backend.go#L29) - [should use a custom default backend as upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default\_backend.go#L37) ### [disable-access-log disable-http-access-log disable-stream-access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L28) - [disable-access-log set access\_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L35) - [disable-http-access-log set access\_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L53) - [disable-stream-access-log set access\_log 
off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L71) ### [disable-proxy-intercept-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L31) - [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L39) ### [backend-protocol - FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L30) - [should use fastcgi\_pass in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L37) - [should add fastcgi\_index in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L54) - [should add fastcgi\_param in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L71) - [should return OK for service with backend protocol FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L102) ### [force-ssl-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L27) - [should redirect to https](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L34) ### [from-to-www-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L31) - [should redirect from www HTTP to HTTP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L38) - [should redirect from www HTTPS to HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L64) ### [backend-protocol - GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L45) - [should return OK for service with backend protocol 
GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L48) - [authorization metadata should be overwritten by external auth response headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L109) - [should return OK for service with backend protocol GRPCS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L170) - [should return OK when request not exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L237) - [should return Error when request exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L280) ### [http2-push-preload](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L27) - [enable the http2-push-preload directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L34) ### [allowlist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L27) - [should set valid ip allowlist range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L34) ### [denylist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L28) - [only deny explicitly denied IPs, allow all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L35) - [only allow explicitly allowed IPs, deny all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L86) ### [Annotation - limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L31) - [should limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L38) ### 
[limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L29) - [Check limit-rate annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L37) ### [enable-access-log enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L27) - | https://github.com/kubernetes/ingress-nginx/blob/main//docs/e2e-tests.md | main | ingress-nginx |