| [Read outputs only](#state-read-outputs) | Access public outputs from workspace state. |
| [Read](#state-read) | Read complete state files from the workspace. |
| [Read and write](#state-read-write) | Create new state versions in the workspace. |

### No access

No access is granted to the state file from the workspace.

### Read outputs only

Allows users to access values in the workspace's most recent Terraform state that have been explicitly marked as public outputs. Refer to [Define outputs to expose module data](/terraform/language/values/outputs#define-outputs-to-expose-module-data) to learn more.

Configuration authors often use output values to interface with other workspaces that manage loosely-coupled collections of infrastructure. Making output values readable lets people who have no direct responsibility for the managed infrastructure in one workspace still indirectly use some of its functions in their workspaces.

This permission is required to use the following features:

- Call the [`/state-versions` API endpoint](/terraform/cloud-docs/api-docs/state-version-outputs)
- Run the [`terraform output` command](/terraform/cli/commands/output)
- Use the [`terraform_remote_state` data source](/terraform/language/state/remote-state-data) against the workspace

### Read

Allows users to read complete state files from the workspace. State files are useful for identifying infrastructure changes over time, but often contain sensitive information. This permission implies permission to read outputs only.

### Read and write

Allows users to read and directly create new state versions in the workspace. This permission is required for performing local runs when the workspace's execution mode is set to **Local**. This permission is also required to use any of the [Terraform CLI's state manipulation and maintenance commands](/terraform/cli/state) against this workspace, including `terraform import`, `terraform taint`, and the various `terraform state` subcommands.
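The **Read outputs only** permission is what consuming a workspace through the `terraform_remote_state` data source relies on. The following is a minimal sketch, assuming a hypothetical organization `example-org` and an upstream workspace `networking` whose configuration declares a `vpc_id` output:

```hcl
# Read another workspace's public outputs. Only values the upstream workspace
# explicitly marks as outputs are visible here, and reading them requires at
# least "Read outputs only" state access on that workspace.
data "terraform_remote_state" "networking" {
  backend = "remote"

  config = {
    organization = "example-org" # hypothetical organization name
    workspaces = {
      name = "networking" # hypothetical upstream workspace
    }
  }
}

output "upstream_vpc_id" {
  value = data.terraform_remote_state.networking.outputs.vpc_id
}
```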
## Other controls

The following table summarizes additional control permissions for the workspace.

| Permission name | Description |
|-----------------|-------------|
| [Download Sentinel mocks](#download-sentinel-mocks) | Download data from runs for developing Sentinel policies. |
| [Lock/unlock workspace](#lock-unlock-workspace) | Manually lock the workspace to prevent runs. |
| [Manage workspace Run Tasks](#manage-workspace-run-tasks) | Associate or dissociate run tasks with the workspace. |

### Download Sentinel mocks

Allows users to download data from runs in the workspace in a format that you can use for developing Sentinel policies. This run data from Sentinel mocks is detailed and may contain unredacted sensitive information.

### Lock/unlock workspace

Allows users to manually lock the workspace to temporarily prevent runs. When a workspace's execution mode is set to **Local**, you must grant this permission so that team members can perform local CLI runs using the workspace's state.

### Manage workspace Run Tasks

Allows users to associate or dissociate run tasks with the workspace. HCP Terraform creates run tasks at the organization level, where you can manually associate or dissociate them with specific workspaces.

## HCP group roles

In an HCP Europe organization, you manage user access through groups. To learn how to set up groups and assign users to them in HCP, refer to [Groups](/hcp/docs/hcp/iam/groups). To learn more about HCP Terraform in Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

You can assign permissions to HCP groups in two ways:

- [HCP roles](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions#set-workspace-level-roles-for-hcp-europe-organizations) - You can assign HCP roles to groups in the HashiCorp Cloud Platform (HCP), and these roles automatically grant permissions in HCP Terraform.
- [HCP Terraform roles](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions#set-workspace-level-roles) - Assign additional permissions at the organization, project, and workspace level to further refine group access in HCP Terraform.

Each permission a user is granted is additive. HCP Terraform grants a user the highest permissions possible, regardless of whether that permission was set by an HCP or HCP Terraform role.

The following table shows which workspace-level permissions each HCP role automatically grants in HCP Terraform:
| Permission category | HCP Terraform workspace permission | Admin | Contributor | Viewer |
|---------------------|------------------------------------|:-----:|:-----------:|:------:|
| [Run access](#run-access) | [Read runs](#run-read) | ✅ | ✅ | ✅ |
| | [Plan runs](#run-plan) | ✅ | ✅ | ❌ |
| | [Apply runs](#run-apply) | ✅ | ✅ | ❌ |
| [Variable access](#variable-access) | [Read variables](#variable-read) | ✅ | ✅ | ✅ |
| | [Read and write variables](#variable-read-write) | ✅ | ✅ | ❌ |
| [State access](#state-access) | [Read outputs only](#state-read-outputs) | ✅ | ✅ | ✅ |
| | [Read state](#state-read) | ✅ | ✅ | ✅ |
| | [Read and write state](#state-read-write) | ✅ | ✅ | ❌ |
| [Other controls](#other-controls) | [Download Sentinel mocks](#download-sentinel-mocks) | ✅ | ✅ | ❌ |
| | [Lock/unlock workspace](#lock-unlock-workspace) | ✅ | ✅ | ❌ |
| | [Manage workspace Run Tasks](#manage-workspace-run-tasks) | ✅ | ✅ | ❌ |
# Project permissions

Project-level permissions apply to all workspaces and Stacks within a specific project.

## Background

If you are in an HCP Terraform organization, you can manage user access and permissions through teams. Refer to the following topics for information about setting permissions in HCP Terraform:

- [Set permissions](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions)
- [Organization permissions reference](/terraform/cloud-docs/users-teams-organizations/permissions/organization)
- [Workspace permissions reference](/terraform/cloud-docs/users-teams-organizations/permissions/workspace)
- [Effective permissions](/terraform/cloud-docs/users-teams-organizations/permissions#effective-permissions) provides information about competing permissions.

@include 'eu/permissions.mdx'

## Project roles and permissions

A role is a preselected set of permissions that you can assign to a team or group. The following table shows the permissions granted by each project role. Each role builds upon the previous level, with **Admin** granting the most comprehensive access.
| Permission category | Permission | [Admin](#project-admin) | [Maintain](#project-maintain) | [Write](#project-write) | [Read](#project-read) |
|---------------------|------------|:-----:|:--------:|:-----:|:----:|
| [Project access](#project-access) | [Read project](#project-read) | ✅ | ✅ | ✅ | ✅ |
| | [Update project settings](#project-update) | ✅ | ❌ | ❌ | ❌ |
| | [Delete project](#project-delete) | ✅ | ❌ | ❌ | ❌ |
| [Workspace management](#workspace-management) | [Create workspaces](#create-workspaces) | ✅ | ✅ | ❌ | ❌ |
| | [Move workspaces](#move-workspaces) | ✅ | ❌ | ❌ | ❌ |
| | [Delete workspaces](#delete-workspaces) | ✅ | ✅ | ❌ | ❌ |
| [Group management](#group-management) | [Manage team permissions](#group-manage) | ✅ | ❌ | ❌ | ❌ |
| | [Read team assignments](#group-read) | ✅ | ❌ | ❌ | ❌ |
| [Run access](#run-access) | [Apply runs](#run-apply) | ✅ | ✅ | ✅ | ❌ |
| | [Plan runs](#run-plan) | ✅ | ✅ | ✅ | ❌ |
| | [Read runs](#run-read) | ✅ | ✅ | ✅ | ✅ |
| [Variable access](#variable-access) | [Read and write variables](#variable-read-write) | ✅ | ✅ | ✅ | ❌ |
| | [Read variables](#variable-read) | ✅ | ✅ | ✅ | ✅ |
| [Variable set access](#variable-set-access) | [Manage variable sets](#variable-set-manage) | ✅ | ❌ | ❌ | ❌ |
| | [Read variable sets](#variable-set-read) | ✅ | ❌ | ❌ | ❌ |
| [State access](#state-access) | [Read and write state](#state-read-write) | ✅ | ✅ | ✅ | ❌ |
| | [Read state](#state-read) | ✅ | ✅ | ✅ | ✅ |
| | [Read outputs only](#state-read-outputs) | ✅ | ✅ | ✅ | ✅ |
| [Other controls](#other-controls) | [Download Sentinel mocks](#download-sentinel-mocks) | ✅ | ✅ | ✅ | ❌ |
| | [Lock and unlock workspaces](#lock-unlock-workspaces) | ✅ | ✅ | ✅ | ❌ |
| | [Manage workspace run tasks](#manage-workspace-run-tasks) | ✅ | ❌ | ❌ | ❌ |

In an HCP Europe organization, you can grant permissions at the project level through both HCP and HCP Terraform roles.
To learn more about assigning permissions in HCP Europe organizations, refer to [HCP group roles](#hcp-group-roles).

### Project admin

Each project has a group of permissions under the **Admin** role. This role grants permissions for the project and the workspaces and Stacks that belong to that project. Members of teams with **Admin** permissions for a project have [general workspace permissions](#workspace-permissions) for every workspace in the project, as well as **Admin** access for every Stack in the project, and the ability to do the following:

- Read and update project settings.
- Delete the project.
- Move workspaces and Stacks into or out of the project. This also requires project admin permissions for the source or destination project.
- Grant or revoke project permissions for visible teams. Project admins cannot view or manage access for teams that are [secret](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility), unless those admins are also organization owners.
- Admin access for all workspaces and Stacks in this project, including the ability to:
  - Create, read, update, and delete workspaces and Stacks in this project.
  - Initiate, cancel, or apply runs for workspaces and Stacks in the project.

### Project maintain

Assign the **Maintain** role when users are responsible for managing existing infrastructure in a single project. The role also grants the ability to create new workspaces and Stacks in that project. **Maintain** access grants full control of everything in the project, including the following permissions:

- Read the project name.
- Admin access for all workspaces and Stacks in this project, including the ability to:
  - Create, read, update, and delete workspaces and Stacks in this project.
  - Initiate, cancel, or apply runs for workspaces and Stacks in the project.

### Project write

Assign the **Write** role when users are responsible for most of the day-to-day work of provisioning and modifying managed infrastructure. **Write** access grants the following permissions:

- Read the project name.
- Write access for all workspaces and Stacks in this project, including the ability to:
  - Read workspaces and Stacks in this project.
  - Initiate, cancel, or apply runs for workspaces and Stacks in the project.

### Project read

Assign the **Read** role to users who need to view information about the status and configuration of managed infrastructure but are not responsible for maintaining that infrastructure. **Read** access grants the permissions to:

- Read the project name.
- Read access for all workspaces and Stacks in this project.
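If you manage team access as code, the project roles above can also be assigned with the `hashicorp/tfe` provider's team project access resource. The following is a minimal sketch; the team and project IDs are hypothetical placeholders:

```hcl
# Assign the "Maintain" project role to a team as code.
resource "tfe_team_project_access" "platform_maintain" {
  team_id    = "team-XXXXXXXXXXXXXXXX" # hypothetical team ID
  project_id = "prj-XXXXXXXXXXXXXXXX"  # hypothetical project ID
  access     = "maintain"              # one of "read", "write", "maintain", or "admin"
}
```

Changing `access` to `"admin"` would additionally grant project settings and team management permissions, per the table above.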
### Custom project role

Custom permissions enable you to assign specific and granular permissions to a team. You can use custom permission sets to create task-focused permission sets and control sensitive information. Stacks do not support custom group permissions.

You can create a set of custom permissions using any of the permissions listed in the [Project roles and permissions](#project-roles-and-permissions) table.

## Project access

The following table summarizes the available project access permissions. Click a permission name to learn more about that permission level.

| Permission name | Description |
|-----------------|-------------|
| [Read](#project-read) | View information about the project, including the name. |
| [Update](#project-update) | Update the project name. |
| [Delete](#project-delete) | Delete the project. |

In HCP Europe organizations, you cannot assign **Project access** to a group in HCP Terraform. Instead, assign an HCP role that grants **Project access** to that group, and HCP Terraform automatically inherits those permissions. To learn more about which HCP roles grant project access, refer to [HCP group roles](#hcp-group-roles).

### Read

Allows users to view information about the project, including the name.

### Update

Allows users to update the project name. This permission implies permission to read.

### Delete

Allows users to delete the project. This permission implies permission to update and read.

## Workspace management

The following table summarizes the available workspace management permissions within the project.

| Permission name | Description |
|-----------------|-------------|
| [Create workspaces](#create-workspaces) | Create workspaces in the project. |
| [Move workspaces](#move-workspaces) | Move workspaces into or out of the project. |
| [Delete workspaces](#delete-workspaces) | Delete workspaces in the project. |

### Create workspaces

Allows users to create workspaces in the project.
This grants read access to all workspaces in the project.
### Move workspaces

Allows users to move workspaces out of the project. A user must have this permission on both the source and destination project to successfully move a workspace from one project to another.

### Delete workspaces

Allows users to delete workspaces in the project. Depending on the [organization's settings](/terraform/cloud-docs/users-teams-organizations/organizations#general), workspace admins may only be able to delete the workspace if it is not actively managing infrastructure. Refer to [Deleting a Workspace With Resources Under Management](/terraform/cloud-docs/workspaces/settings#deleting-a-workspace-with-resources-under-management) for details.

## Group management

In HashiCorp Cloud Platform (HCP) Europe organizations, you manage user access through HCP groups, and use group management permissions instead of team management permissions. The following table summarizes the available group management permissions for the project.

| Permission name | Description |
|-----------------|-------------|
| [None](#group-none) | No access to view groups assigned to the project. |
| [Read](#group-read) | View visible groups assigned to the project. |
| [Manage](#group-manage) | Set or remove project permissions for groups. |

### None

No access to view groups assigned to the project.

### Read

Allows users to see visible groups assigned to the project.

### Manage

Allows users to set or remove project permissions for visible groups.

## Team management

The following table summarizes the available team management permissions for the project.

| Permission name | Description |
|-----------------|-------------|
| [None](#team-none) | No access to view teams assigned to the project. |
| [Read](#team-read) | View visible teams assigned to the project. |
| [Manage](#team-manage) | Set or remove project permissions for visible teams. |

### None

No access to view teams assigned to the project.

### Read

Allows users to see visible teams assigned to the project.

### Manage

Allows users to set or remove project permissions for visible teams. Project admins cannot view or manage teams with **Visibility** set to **Secret** in their team settings unless they are also organization owners. Refer to [Team visibility](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility) for more information.

## Run access

The following table summarizes the available run access permissions for workspaces within the project.

| Permission name | Description |
|-----------------|-------------|
| [Read](#run-read) | View information about workspace runs within the project. |
| [Plan](#run-plan) | Queue Terraform plans in workspaces within the project. |
| [Apply](#run-apply) | Approve and apply Terraform plans in workspaces within the project. |

### Read

Allows users to view information about remote Terraform runs, including the run history, the status of runs, the log output of each stage of a run, and configuration versions associated with a run.

### Plan

Allows users to queue Terraform plans in workspaces within the project, including both speculative plans and normal plans. Normal plans must be approved by a user with permission to apply runs. This permission implies permission to read.

### Apply

Allows users to approve and apply Terraform plans, causing changes to real infrastructure. This permission implies permission to plan and read.

## Variable access

The following table summarizes the available variable access permissions for workspaces within the project. Refer to [Manage variables and variable sets](/terraform/cloud-docs/variables/managing-variables) for more information about variables.
| Permission name | Description |
|-----------------|-------------|
| [No access](#variable-no-access) | No access to workspace variables within the project. |
| [Read](#variable-read) | View workspace variables within the project. |
| [Read and write](#variable-read-write) | Edit workspace variables within the project. |
### No access

No access is granted to the values of Terraform variables and environment variables for workspaces within the project.

### Read

Allows users to view the values of Terraform variables and environment variables for workspaces within the project. Note that variables marked as sensitive are write-only and can't be viewed by any user.

### Read and write

Allows users to read and edit the values of variables in workspaces within the project.

## Variable set access

The following table summarizes the available permissions for creating and managing sets of variables in the project. Refer to [Manage variables and variable sets](/terraform/cloud-docs/variables/managing-variables) for more information about variable sets.

| Permission name | Description |
|-----------------|-------------|
| [None](#variable-set-none) | No access to variable sets owned by the project. |
| [Read](#variable-set-read) | View variable sets owned by the project. |
| [Manage](#variable-set-manage) | Create, update, and delete variable sets owned by the project. |

### None

No access to variable sets owned by the project. However, users can view variable sets that have been applied to the project and its workspaces if their **Variable access** permission is set to **Read** or **Read and write**.

### Read

Allows users to view variable sets owned by this project.

### Manage

Allows users to read, create, update, and delete variable sets owned by the project.

## State access

The following table summarizes the available state access permissions for workspaces within the project.
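Because sensitive variables are write-only, they can be set but never read back, regardless of a user's variable access level. The following sketch sets one with the `hashicorp/tfe` provider's workspace variable resource; the workspace ID is a hypothetical placeholder:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Create an environment variable on a workspace and mark it sensitive,
# making the value write-only for all users.
resource "tfe_variable" "db_password" {
  key          = "DB_PASSWORD"
  value        = var.db_password
  category     = "env"                 # environment variable, as opposed to "terraform"
  sensitive    = true                  # no user can view the value after it is set
  workspace_id = "ws-XXXXXXXXXXXXXXXX" # hypothetical workspace ID
}
```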
Refer to [State](/terraform/language/state) to learn about state in Terraform.

| Permission name | Description |
|-----------------|-------------|
| [No access](#state-no-access) | No access to workspace state within the project. |
| [Read outputs only](#state-read-outputs) | Access public outputs from workspace state within the project. |
| [Read](#state-read) | Read complete state files from workspaces within the project. |
| [Read and write](#state-read-write) | Create new state versions in workspaces within the project. |

### No access

No access is granted to the state file from workspaces within the project.

### Read outputs only

Allows users to access values in the workspace's most recent Terraform state that have been explicitly marked as public outputs. This permission is required to access the State Version Outputs API endpoint.

### Read

Allows users to read complete state files from workspaces within the project. State files are useful for identifying infrastructure changes over time, but often contain sensitive information. This permission implies permission to read outputs only.

### Read and write

Allows users to directly create new state versions in workspaces within the project. This permission is required for performing local runs when the workspace's execution mode is set to **Local**. This permission implies permission to read.

## Other controls

The following table summarizes additional control permissions for the project.

| Permission name | Description |
|-----------------|-------------|
| [Download Sentinel mocks](#download-sentinel-mocks) | Download data from runs for developing Sentinel policies. |
| [Lock/unlock workspaces](#lock-unlock-workspaces) | Manually lock workspaces to prevent runs. |
| [Manage workspace Run Tasks](#manage-workspace-run-tasks) | Associate or dissociate run tasks with workspaces. |
### Download Sentinel mocks

Allows users to download data from runs in workspaces within the project in a format that you can use for developing Sentinel policies. This run data is very detailed and often contains unredacted sensitive information. Refer to [Generate mock Sentinel data with Terraform](/terraform/cloud-docs/workspaces/policy-enforcement/test-sentinel) for more information about Sentinel mocks.

### Lock/unlock workspaces
Allows users to manually lock workspaces within the project to temporarily prevent runs. When a workspace's execution mode is set to **Local**, enable the **Lock/unlock workspaces** permission to perform local CLI runs using the workspace's state. Refer to [Workspace settings](/terraform/cloud-docs/workspaces/settings) for information about execution modes and locking workspaces.

### Manage workspace Run Tasks

Allows users to associate or dissociate run tasks with workspaces within the project. HCP Terraform creates run tasks at the organization level, where you can manually associate or dissociate them with specific workspaces. Refer to [Set up run task integrations](/terraform/cloud-docs/integrations/run-tasks) for more information about run tasks.

## HCP group roles

In an HCP Europe organization, you manage user access through groups. To learn how to set up groups and assign users to them in HCP, refer to [Groups](/hcp/docs/hcp/iam/groups). To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

You can assign permissions to HCP groups in two ways:

- [HCP roles](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions#set-project-level-roles-for-hcp-europe-organizations) - You can assign HCP roles to groups in the HashiCorp Cloud Platform (HCP), and these roles automatically grant permissions in HCP Terraform.
- [HCP Terraform roles](/terraform/cloud-docs/users-teams-organizations/permissions/set-permissions#set-project-level-roles) - Assign additional permissions at the organization, project, and workspace level to further refine group access in HCP Terraform.

Each permission a user is granted is additive. HCP Terraform grants a user the highest permissions possible, regardless of whether that permission was set by an HCP or HCP Terraform role.

The following table lists which project-level permissions each HCP role automatically grants in HCP Terraform:

| Permission category | HCP Terraform permission name | Admin | Contributor | Viewer |
|---------------------|-------------------------------|:-----:|:-----------:|:------:|
| [Project access](#project-access) | [Read project](#project-read) | ✅ | ✅ | ✅ |
| | [Update project](#project-update) | ✅ | ❌ | ❌ |
| | [Delete project](#project-delete) | ✅ | ❌ | ❌ |
| [Workspace management](#workspace-management) | [Create workspaces](#create-workspaces) | ✅ | ❌ | ❌ |
| | [Move workspaces](#move-workspaces) | ✅ | ❌ | ❌ |
| | [Delete workspaces](#delete-workspaces) | ✅ | ❌ | ❌ |
| [Group management](#group-management) | [Manage teams](#group-manage) | ✅ | ❌ | ❌ |
| [Run access](#run-access) | [Apply runs](#run-apply) | ✅ | ❌ | ❌ |
| | [Plan runs](#run-plan) | ✅ | ❌ | ❌ |
| | [Read runs](#run-read) | ✅ | ❌ | ✅ |
| [Variable access](#variable-access) | [Read and write variables](#variable-read-write) | ✅ | ❌ | ❌ |
| | [Read variables](#variable-read) | ✅ | ❌ | ✅ |
| [Variable set access](#variable-set-access) | [Manage variable sets](#variable-set-manage) | ✅ | ❌ | ❌ |
| | [Read variable sets](#variable-set-read) | ✅ | ❌ | ❌ |
| [State access](#state-access) | [Read and write state](#state-read-write) | ✅ | ❌ | ❌ |
| | [Read state](#state-read) | ✅ | ❌ | ✅ |
| [Other controls](#other-controls) | [Download Sentinel mocks](#download-sentinel-mocks) | ✅ | ❌ | ❌ |
| | [Lock/unlock workspaces](#lock-unlock-workspaces) | ✅ | ❌ | ❌ |
| | [Manage workspace run tasks](#manage-workspace-run-tasks) | ✅ | ❌ | ❌ |
# Configure single sign-on with Microsoft Entra ID

The Microsoft Entra ID (previously Azure Active Directory) SSO integration currently supports the following SAML features:

- Service Provider (SP) initiated SSO
- Identity Provider (IdP) initiated SSO
- Just-in-Time Provisioning

For more information on the listed features, visit the [Microsoft Entra ID SAML Protocol Documentation](https://learn.microsoft.com/en-us/entra/identity-platform/single-sign-on-saml-protocol).

@include 'eu/sso.mdx'

## Configuration (Microsoft Entra ID)

1. Sign in to the Entra portal.
1. On the left navigation pane, select the **Microsoft Entra ID** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Terraform Cloud** in the search box.
1. Select **Terraform Cloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
1. On the **Terraform Cloud** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. In the **SAML Signing Certificate** section (you may need to refresh the page), copy the **App Federation Metadata Url**.

## Configuration (HCP Terraform)

1. Sign in to [HCP Terraform](https://app.terraform.io/) and select the organization you want to enable SSO for.
1. Select **Settings** from the sidebar, then **SSO**.
1. Click **Setup SSO**.
1. Select **Microsoft Entra ID** and click **Next**.
1. Provide your App Federation Metadata URL.
1. Save, and you should see a completed Terraform Cloud SAML configuration.
1. Copy the Entity ID and Reply URL.

## Configuration (Microsoft Entra ID)

1. In the Entra portal, on the **Terraform Cloud** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
1. In the **Identifier** text box, paste the **Entity ID**.
1. In the **Reply URL** text box, paste the **Reply URL**.
1. For Service Provider initiated SSO, type `https://app.terraform.io/session` in the **Sign-On URL** text box. Otherwise, leave the box blank.
1. Select **Save**.
1. On the **Single sign-on** page, download the `Certificate (Base64)` file from under **SAML Signing Certificate**.
1. On the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select your user from the **Users** list, then click the **Select** button at the bottom of the screen.
1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configuration (HCP Terraform)

To edit your Entra SSO configuration settings:

1. Go to **Public Certificate**.
1. Paste the contents of the SAML Signing Certificate you downloaded from Microsoft Entra ID.
1. Save Settings.
1. [Verify](/terraform/cloud-docs/users-teams-organizations/single-sign-on/testing) your settings and click "Enable".
1. Your Entra SSO configuration is complete and ready to [use](/terraform/cloud-docs/users-teams-organizations/single-sign-on#signing-in-with-sso).

## Team and Username Attributes

To configure team management in your Microsoft Entra ID application:

1. Navigate to the single sign-on page.
1. Edit step 2, **User Attributes & Claims**.
1. Add a new group claim.
1. In **Group Claims**, select **Security Groups**.
1. In the **Source Attribute** field, select either **sAMAccountName** to use account names or **Group ID** to use group UUIDs.
1. Check **Customize the name of the group claim**.
1. Set **Name (required)** to "MemberOf" and leave the namespace field blank.

-> **Note:** When you configure Microsoft Entra ID to use Group Claims, it provides Group UUIDs instead of human-readable names in its SAML assertions. We recommend [configuring SSO Team IDs](/terraform/cloud-docs/users-teams-organizations/single-sign-on#team-names-and-sso-team-ids) for your HCP Terraform teams to match these Entra Group UUIDs.

If you plan to use SAML to set usernames in your Microsoft Entra ID application:

1. Navigate to the single sign-on page.
1. Edit step 2, **User Attributes & Claims.**

We recommend naming the claim "username", leaving the namespace blank, and sourcing `user.displayname` or `user.mailnickname` as a starting point. If you have a Terraform Enterprise account, you can source `user.mail` or `user.userprincipalname`. Note that HCP Terraform usernames only allow lowercase letters, numbers, and dashes.

If you namespaced any of your claims, then Microsoft Entra ID passes the attribute name using the format ``. Consider this format when setting team and username attribute names.

## Troubleshooting the SAML assertion

[Use this guide](https://support.hashicorp.com/hc/en-us/articles/1500005371682-Capturing-a-SAML-Assertion) to verify and validate the claims being sent in the SAML response.

Azure AD limits you to 150 group claims in SAML tokens. If you exceed this limit, we recommend that you create a group filter to only include the necessary groups.
Refer to [Configure group claims for applications by using Microsoft Entra ID](https://learn.microsoft.com/en-us/entra/identity/hybrid/connect/how-to-connect-fed-group-claims#group-filtering) for more information.
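As a rough illustration of the username constraint above, the following Python sketch (illustrative only; the helper name is hypothetical and not part of HCP Terraform) shows how a value sourced from an attribute such as `user.displayname` might be normalized into a string containing only lowercase letters, numbers, and dashes:

```python
import re

def to_hcp_terraform_username(source_value: str) -> str:
    """Illustrative only: map an IdP attribute value (e.g. user.displayname)
    to a string using only lowercase letters, numbers, and dashes."""
    lowered = source_value.lower()
    # Replace every run of disallowed characters with a single dash.
    candidate = re.sub(r"[^a-z0-9-]+", "-", lowered)
    # Trim any leading/trailing dashes left over from the substitution.
    return candidate.strip("-")

print(to_hcp_terraform_username("Jane Doe"))           # jane-doe
print(to_hcp_terraform_username("j.doe@example.com"))  # j-doe-example-com
```

A mapping like this is something you would apply on the IdP side (for example, via a claim transformation); HCP Terraform itself does not rewrite usernames for you.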
# Link a user account for single sign-on

You have an SSO identity for every SSO-enabled HCP Terraform organization. HCP Terraform links each SSO identity to a single HCP Terraform user account. This link determines which account you can use to access each organization.

You can add and remove SSO identity links for all providers, including Microsoft Entra ID, Okta, and SAML.

@include 'eu/sso.mdx'

## Add SSO Identity Link

The first time you use SSO to log in to an organization, HCP Terraform links that SSO identity to your user account. You can only log in to that organization using the linked user account.

When HCP Terraform does not recognize the email address associated with your identity provider, it asks if you want to create a new user account with that email address. You can choose one of the following actions:

- **Create a new account:** HCP Terraform automatically links your SSO identity to this new account after creation.
- **Link to an existing account:** Click **Link SSO identity to a different account** and sign in with one of the following account types.
  - **Linked HashiCorp Cloud Platform (HCP) account**: Click **Continue with HCP account** and use your HCP credentials to sign in to HCP Terraform. HCP Terraform automatically links your SSO identity to that HCP-linked account. Refer to [Linked HCP and HCP Terraform Accounts](/terraform/cloud-docs/users-teams-organizations/users#linked-hcp-and-hcp-terraform-accounts) for more details.
  - **HCP Terraform account**: Sign in with your HCP Terraform username and password. HCP Terraform automatically links your SSO identity to that account.

## Change SSO Identity Link

HCP Terraform shows an error if you try to log in to an SSO-enabled organization with a different user account than the one linked to your SSO identity. To change this SSO identity link:

1. Sign in to [HCP Terraform](https://app.terraform.io/) using the linked account.
1. [Remove the SSO identity link](#remove-sso-identity-link) from the current account.
1. Sign out of HCP Terraform.
1. Log in and [add an SSO identity link](#add-sso-identity-link) to the desired account.

## Remove SSO Identity Link

To unlink an SSO identity from an HCP Terraform account:

1. [Sign in with SSO](/terraform/cloud-docs/users-teams-organizations/single-sign-on#signing-in-with-sso) to the linked account.
1. Click your user icon and select **Account Settings**. Your **Profile** page appears.
1. Click **SSO** in the left navigation bar. The **SSO** page appears, showing a list of all of the SSO identities associated with this account.
1. Click the **ellipses (...)** next to the association you want to unlink and select **Unlink SSO identity**. The **Unlink SSO identity** box appears.
1. Click **Unlink SSO identity**.

The SSO association is now unlinked and removed from the SSO list. The organization is still available in the **Choose an organization** menu, but HCP Terraform will prompt you to log into that organization through SSO before you can access it.
# Configure and manage single sign-on

This page is about configuring single sign-on in HCP Terraform. To set up Terraform Enterprise's single sign-on, refer to [SAML Configuration](/terraform/enterprise/saml/configuration).

HCP Terraform allows organizations to configure SAML single sign-on (SSO), an alternative to traditional user management. SSO gives owners more control over securing access to your organization’s [Projects](/terraform/cloud-docs/projects/manage), [Workspaces](/terraform/cloud-docs/workspaces), and [Managed Resources](/terraform/cloud-docs/users-teams-organizations/organizations#managed-resources). By using SSO, your organization can centralize user management for HCP Terraform and other Software-as-a-Service (SaaS) vendors, providing greater accountability and security for an organization's identity and user management.

@include 'eu/sso.mdx'

## Supported Identity Providers (IdPs)

Select your preferred provider to learn more about what is supported for that provider and how to configure SSO for it.

* [Microsoft Entra ID](/terraform/cloud-docs/users-teams-organizations/single-sign-on/entra-id)
* [Okta](/terraform/cloud-docs/users-teams-organizations/single-sign-on/okta)
* [SAML](/terraform/cloud-docs/users-teams-organizations/single-sign-on/saml)

## How SSO Works

Organization owners can enable SSO for their organization and configure an identity provider to connect to. Once SSO is enabled for an organization, all non-owner members must sign in through SSO in order to access the organization. (Owners of an SSO-enabled organization can still access the organization through username and password, to enable fixing problems with SSO.)

### SSO Identities and HCP Terraform User Accounts

SSO does not automatically provision HCP Terraform user accounts.
The first time you sign in with SSO, you must either provide a password to create a new HCP Terraform user account (using your SSO email address as the username), or link your SSO identity to an existing HCP Terraform user account. Once the SSO identity is linked, you can only log in to that organization using the linked account. You must [remove the SSO link](/terraform/cloud-docs/users-teams-organizations/single-sign-on/linking-user-account#remove-sso-identity-link) if you want to access the organization with a different user account.

If an organization's owners disable SSO, all members can continue to access the organization using their HCP Terraform or HashiCorp Cloud Platform credentials.

### Enforced Access Policy for HCP Terraform Resources

As a non-owner, when you attempt to access an organization that has SSO configured, you are redirected to the organization's SAML IdP to authenticate and authorize access using your SAML IdP credentials before you can access the organization's [Projects](/terraform/cloud-docs/projects/manage), [Workspaces](/terraform/cloud-docs/workspaces), and [Managed Resources](/terraform/cloud-docs/users-teams-organizations/organizations#managed-resources).

Owners of an SSO-enabled organization can still access the organization's resources through their HCP Terraform credentials or their HCP credentials (if linked to their HCP Terraform account). This enables a workaround for problems such as your IdP becoming unavailable, lost access to your MFA or IdP credentials, or other authentication issues.

HCP Terraform users can use their single HCP Terraform account to access resources in different organizations; however, SAML SSO does not authorize access to:

- Account Settings (such as to manage 2FA or generate/revoke User API tokens)
- Other organizations with SSO configured with a different SAML IdP. You will need to authenticate to each configured IdP separately.
- Other organizations where SSO is not configured.
In order to access these resources, you may be asked to “step-up” authentication using your HCP Terraform or HCP credentials. In most situations, a step-up HCP Terraform or HCP authentication login prompt is required immediately after SSO authentication so that HCP Terraform can establish a broad user [session](/terraform/cloud-docs/users-teams-organizations/users#sessions) and check access and authorization to different resources in HCP Terraform.

The diagram below explains the access enforcement policy and the authentication
required for an HCP Terraform user account to access different resources in HCP Terraform:

## Signing in with SSO

1. Visit [HCP Terraform](https://app.terraform.io) and sign out if you are signed in.
1. Click **Sign in via SSO**.
1. Provide your organization name and click **Next**.
1. If you've signed in to HCP Terraform with SSO before, proceed to the next step. If you're signing in for the first time under this account or for the first time accessing this organization, you'll be required to create a new account or link to an existing account. Use the links below the account creation form if you want to link your SSO identity to an existing account, then fill out and submit the relevant form.
1. You will be redirected to your SSO identity provider. Authenticate your account as necessary.
1. You are now signed in to HCP Terraform.

## Configuring SSO in HCP Terraform Free Edition

SSO is available to all HCP Terraform organizations, but is configured and managed differently in HCP Terraform Free Edition because Team Management is only available in HCP Terraform Essentials and Standard editions.

In HCP Terraform Free Edition organizations, after you successfully configure SSO, HCP Terraform automatically creates a team named `sso` and adds all current members of the `owners` team to it. In the Free Edition, you cannot modify the organization-level permissions for either the `owners` or `sso` team. These teams grant every member full administrative access to the organization, projects, workspaces, and managed resources.

After configuring SSO access, review the `owners` team membership. Members of the `owners` team have permission to bypass SSO in the event that your Identity Provider (IdP) is unavailable to service authentication requests, for example due to an IdP service outage, an administrator who forgot their SSO credentials, or lost access to their software authenticator.
An owner can use their HCP Terraform or HashiCorp Cloud Platform credentials (if linked) to bypass HCP Terraform SSO authentication at any time. To encourage least-privilege practices, HCP Terraform prompts the user who successfully configures SSO to optionally remove other users from the owners group.

## Managing Owners and SSO Team Membership in the Free Edition

The Team Management feature set is available in HCP Terraform Essentials and Standard editions only. Inviting users to any Free Edition organization adds them to the `owners` team, but not the `sso` team. Review the following to assign the proper team membership between the two teams.

For users new to HCP Terraform:

- **Assign new users to the `owners` team** by inviting the user to the organization.
- **Assign new users to the `sso` team** by asking the user to log in directly to the HCP Terraform organization via the SSO authentication login.

To manage existing users' permissions:

- **Assign existing users from the `sso` team to the `owners` team:** Remove the user from the organization. Re-invite the user.
- **Assign existing users from the `owners` team to the `sso` team:** Remove the user from the organization. Ask the user to sign in via SSO authentication login directly.

## Managing Team Membership Through SSO

HCP Terraform can automatically add users to teams based on their SAML assertion, so you can manage team membership in your directory service.

To enable team membership mapping:

1. Choose **Settings** from the sidebar, then **SSO**.
1. Toggle **Enable team management to customize your team attribute**.

When team management is enabled, you can configure which SAML attribute in the SAMLResponse will control team membership. This defaults to the `MemberOf` attribute.
The expected format of the corresponding `AttributeValue` in the SAMLResponse is either a string containing a comma-separated list of teams, or separate `AttributeValue` items specifying teams.

When users log in through SAML, Terraform automatically adds them to the teams included in their assertion and automatically removes them from teams that are not included in their assertion. This automatic mapping overrides any manually set team memberships. Each time the user logs in, their team membership is adjusted to match their SAML assertion.

HCP Terraform ignores team names that do not exactly match existing teams and will not create new teams from those listed in the assertion. If the chosen SAML attribute is not provided in the SAMLResponse, Terraform assigns users to a default team named `sso` and does not remove them from any existing teams. It is not possible to assign users to the `owners` team through this attribute.

## Team Names and SSO Team IDs

HCP Terraform expects the team names in the team membership SAML attribute to exactly match team names or configured SSO team IDs stored in HCP Terraform. Values are case-sensitive and literal. HCP Terraform does not process the value passed by the IdP. As a result, you cannot use values such as the full distinguished name (DN).

You can configure SSO Team IDs in the organization's **Teams** page. If an SSO Team ID is configured, HCP Terraform will attempt to match the chosen SAML attribute against both the team name and the SSO Team ID when mapping users to teams.
You may want to create an SSO Team ID if the team membership SAML attribute is not human-readable and is not used as the team's name in HCP Terraform. SSO Team IDs are particularly helpful if your SSO or Microsoft Entra ID provider restricts the `MemberOf` attribute in its SAML responses to Group UUIDs rather than human-readable group names. Setting the SSO Team ID allows you to maintain human-readable team names in HCP Terraform, while still managing team membership through SSO or Microsoft Entra ID.

## NameID Format

HCP Terraform requires that the NameID format in the SAML response be set to `urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress`, with a valid email address provided as the value for this attribute.
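As a rough mental model of the team-mapping rules described above, the following Python sketch (illustrative only; this is not HCP Terraform's implementation) computes the resulting team membership from an assertion's team attribute, matching case-sensitively against existing team names and configured SSO Team IDs, excluding `owners`, and falling back to the default `sso` team when the attribute is absent:

```python
def map_teams(assertion_teams, existing_teams, sso_team_ids=None):
    """Illustrative sketch of SAML team mapping.

    assertion_teams: list of values from the team attribute, or None if absent.
    existing_teams: set of team names in the organization.
    sso_team_ids: optional mapping of SSO Team ID -> team name.
    Returns the set of teams the user belongs to after login. (The real
    absent-attribute case also keeps existing memberships, which this
    simplified sketch does not model.)
    """
    sso_team_ids = sso_team_ids or {}
    if assertion_teams is None:
        # Attribute missing: the user is assigned to the default `sso` team.
        return {"sso"}
    resolved = set()
    for value in assertion_teams:
        if value in existing_teams and value != "owners":
            resolved.add(value)                # exact, case-sensitive name match
        elif value in sso_team_ids:
            resolved.add(sso_team_ids[value])  # match on a configured SSO Team ID
        # Unrecognized values are ignored; no new teams are created.
    return resolved

teams = map_teams(["ops", "Everyone", "unknown-team"], {"ops", "Everyone", "dev"})
print(sorted(teams))  # ['Everyone', 'ops']
```

The SSO Team ID branch is what lets an assertion carrying Group UUIDs still resolve to human-readable team names.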
# Test single sign-on

-> **NOTE:** To protect users from enabling faulty SAML configurations, HCP Terraform requires a successful test attempt before enabling is possible.

To test a completed SSO configuration, click "Test" on the SSO settings page.

- This will attempt to initiate SSO sign-in with your IdP.
- You will be redirected briefly to your IdP. You may need to reauthenticate depending on your session context.
- Finally, you should be redirected back to the HCP Terraform settings SSO page with a message about a successful test, and the "Enable" action should now be accessible.

If a successfully tested SSO configuration is changed in ways that may impact its ability to work correctly, the configuration reverts to an untested state.
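The test-before-enable behavior can be summarized as a small state machine. The sketch below is illustrative only (the state names are assumptions, not HCP Terraform internals): only a tested configuration can be enabled, and any material change reverts it to untested:

```python
class SSOConfig:
    """Toy model of the SSO configuration lifecycle described above."""

    def __init__(self):
        self.state = "untested"

    def test(self, passed: bool):
        # A successful test attempt unlocks the Enable action.
        self.state = "tested" if passed else "untested"

    def enable(self):
        if self.state != "tested":
            raise RuntimeError("a successful test is required before enabling")
        self.state = "enabled"

    def change_settings(self):
        # Changes that may impact correctness revert to an untested state.
        self.state = "untested"

cfg = SSOConfig()
cfg.test(passed=True)
cfg.enable()
print(cfg.state)  # enabled
```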
# Use single sign-on with SAML

The SAML SSO integration currently supports the following features of SAML 2.0:

- Service Provider (SP)-initiated SSO
- Identity Provider (IdP)-initiated SSO
- Just-in-Time Provisioning

The SAML SSO integration can be configured by providing a metadata URL, or manually with the Single Sign-on URL, Entity ID, and X.509 Certificate.

@include 'eu/sso.mdx'

## Configuration (HCP Terraform)

1. Sign in to [HCP Terraform](https://app.terraform.io/) and select the organization you want to enable SSO for.
1. Select **Settings** from the sidebar, then **SSO**.
1. Click **Setup SSO**.
1. Select **SAML** and click **Next**.
1. Configure using the IdP's metadata URL, or manually with the Single Sign-On URL, Entity ID, and X.509 Certificate.
1. Click **Save settings**.
1. [Verify](/terraform/cloud-docs/users-teams-organizations/single-sign-on/testing) your settings and click **Enable**.
1. Your SAML SSO configuration is complete and ready to [use](/terraform/cloud-docs/users-teams-organizations/single-sign-on#signing-in-with-sso).
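If you configure manually instead of via the metadata URL, all three values live in the IdP's standard SAML 2.0 metadata document. The following Python sketch (illustrative only; the sample metadata and certificate value are placeholders) shows where the Entity ID, Single Sign-On URL, and X.509 certificate are typically found:

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
DS = "http://www.w3.org/2000/09/xmldsig#"

def extract_sso_settings(metadata_xml: str) -> dict:
    """Pull the manual-configuration values out of SAML 2.0 IdP metadata."""
    root = ET.fromstring(metadata_xml)
    sso = root.find(f".//{{{MD}}}IDPSSODescriptor/{{{MD}}}SingleSignOnService")
    cert = root.find(f".//{{{DS}}}X509Certificate")
    return {
        "entity_id": root.get("entityID"),        # Entity ID
        "sso_url": sso.get("Location"),           # Single Sign-On URL
        "x509_certificate": cert.text.strip(),    # X.509 Certificate (base64)
    }

# Placeholder metadata, trimmed to the elements the function reads.
sample = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://idp.example.com/saml">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

print(extract_sso_settings(sample))
```

Real IdP metadata often carries multiple `SingleSignOnService` bindings and certificates, so treat this as a starting point rather than a complete parser.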
# Use single sign-on with Okta

The Okta SSO integration currently supports the following SAML features:

- Service Provider (SP)-initiated SSO
- Identity Provider (IdP)-initiated SSO
- Just-in-Time Provisioning

For more information on the listed features, visit the [Okta Glossary](https://help.okta.com/en/prod/Content/Topics/Reference/glossary.htm).

@include 'eu/sso.mdx'

## Configuration (Okta)

1. From your Okta Admin Dashboard, click the "Add Applications" shortcut.
1. Search for "Terraform Cloud" and select it.
1. Click "Add" on the application's page.
1. Choose a label for your application or keep the default, "Terraform Cloud".
1. Click "Done".
1. Visit the "Sign On" tab in the application.
1. Click "View SAML setup instructions" under "SAML Setup".
1. Copy the "Okta Metadata URL" under step 4.

For information on configuring automated team mapping using Okta group membership, see the [Team Mapping Configuration (Okta)](#team-mapping-configuration-okta) section below.

## Configuration (HCP Terraform)

Be sure to copy the metadata URL (from the final step of configuring Okta) before proceeding with the following steps.

1. Sign in to [HCP Terraform](https://app.terraform.io/) and select the organization you want to enable SSO for.
1. Select **Settings** from the sidebar, then **SSO**.
1. Click **Setup SSO**.
1. Select **Okta** and click **Next**.
1. Provide your Okta metadata URL and click the **Save settings** button.
1. [Verify](/terraform/cloud-docs/users-teams-organizations/single-sign-on/testing) your settings and click **Enable**.
1. Your Okta SSO configuration is complete and ready to [use](/terraform/cloud-docs/users-teams-organizations/single-sign-on#signing-in-with-sso).

## Team Mapping Configuration (Okta)

HCP Terraform can automatically add users to teams based on their SAML assertion, so you can manage team membership in your directory service.
To do this, you must specify the `MemberOf` SAML attribute, and make sure the `AttributeStatement` in the SAML Response contains a list of `AttributeValue` items in the correct format (a comma-separated list of team names). For more details, refer to [HCP Terraform SSO](/terraform/cloud-docs/users-teams-organizations/single-sign-on).

If you haven't yet completed all steps outlined in the [Configuration (Okta)](#configuration-okta-) section above, please do so before proceeding.

To enable this automated team mapping functionality, edit your Terraform Cloud Okta Application and complete the following steps:

1. Expand the "Attributes" section of the Application configuration (under the "Sign On" tab).
1. Set the "Group Attribute Statements" to the following:

   - Name: `MemberOf`
   - Name format: `Basic`
   - Filter: `Matches regex`
   - Filter value: `.*`

Once these configuration steps have been completed, **all** Okta groups to which a given user belongs will be passed in the SAML assertion upon login to HCP Terraform, which means that user will be added automatically to any teams within HCP Terraform for which there is an **exact** name match. Importantly, please note that those users will also be removed from any teams that _aren't_ included in their assertion. This overrides any manually set team memberships, so whenever a user logs in via SSO, their team membership is adjusted to match their SAML assertion.

Using the above SAML assertion as an example, the user in question would be added to the `Everyone`, `ops`, and `test` teams in HCP Terraform if those teams exist in the target organization, but those values are simply ignored if no matching team name is found.
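The two accepted `AttributeValue` shapes described above (one comma-separated string, or several separate values) can be normalized the same way. This Python sketch is illustrative only and is not HCP Terraform's implementation:

```python
def normalize_member_of(attribute_values):
    """Flatten MemberOf AttributeValue items into a list of team names.

    Accepts either a single comma-separated string or multiple separate
    values, mirroring the two accepted formats.
    """
    teams = []
    for value in attribute_values:
        # Each item may itself be a comma-separated list of team names.
        teams.extend(part.strip() for part in value.split(",") if part.strip())
    return teams

print(normalize_member_of(["Everyone,ops,test"]))        # ['Everyone', 'ops', 'test']
print(normalize_member_of(["Everyone", "ops", "test"]))  # ['Everyone', 'ops', 'test']
```

Both call forms yield the same team list, which is why either assertion shape maps to the same memberships.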
[organizations]: /terraform/cloud-docs/users-teams-organizations/organizations

[organization settings]: /terraform/cloud-docs/users-teams-organizations/organizations#organization-settings

[users]: /terraform/cloud-docs/users-teams-organizations/users

# Teams overview

> **Hands-on:** Try the [Manage Permissions in HCP Terraform](/terraform/tutorials/cloud/cloud-permissions?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial.

Teams are groups of HCP Terraform [users][] within an [organization][organizations]. If a user belongs to at least one team in an organization, they are considered a member of that organization.

@include 'tfc-package-callouts/team-management.mdx'

An organization can [grant workspace permissions to teams](/terraform/cloud-docs/users-teams-organizations/teams/manage#managing-workspace-access) that allow its members to start Terraform runs, create workspace variables, read and write state, and more. Teams can only have permissions on workspaces within their organization, although individual users can belong to multiple teams in this and other organizations.
@include 'eu/group.mdx'

## Accessing teams with the API or TFE provider

In addition to the HCP Terraform UI, you can use the following methods to manage teams:

- [Teams API](/terraform/cloud-docs/api-docs/teams) to list, create, update, and delete teams
- [Team Members API](/terraform/cloud-docs/api-docs/team-members) to add and delete users from teams
- [Team Tokens API](/terraform/cloud-docs/api-docs/team-tokens) to generate and delete tokens and list an organization's team tokens
- [Team Access API](/terraform/cloud-docs/api-docs/team-access) to manage team access to one or more workspaces
- The `tfe` provider resources [`tfe_team`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/team), [`tfe_team_members`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/team_members), and `tfe_team_access`

### API tokens

Each team can have an API token that is not associated with a specific user. You can manage a team's API token from the **Organization settings > API Tokens > Team Token** page. You can create, regenerate, and delete team tokens on the API token page. Refer to [Team API Tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens) for details.

## The owners team

Every organization has an owners team, and members of the owners team are sometimes called organization owners. An organization's creator is the first member of its owners team. You can add and remove other members in the same way as with other teams. In free organizations, the owners team is limited to five members. In paid organizations, the size of the owners team is unlimited.

You cannot delete the owners team or leave it empty. If the owners team has only one member, you must add another user before removing the current member. Refer to [organization owners](/terraform/cloud-docs/users-teams-organizations/permissions/organization#organization-owners) for more details about owners team permissions.
[permissions-citation]: #intentionally-unused---keep-for-maintainers

## Manage teams

You can manage many aspects of teams, including creating and deleting a team, team membership, and team access to workspaces, projects, Stacks, and organizations. Refer to [Manage teams](/terraform/cloud-docs/users-teams-organizations/teams/manage) to learn more.

## Team notifications

You can set up team notifications to notify team members on external systems whenever a particular action takes place. Refer to [Notifications](/terraform/cloud-docs/users-teams-organizations/teams/notifications) to learn more.
# Manage team notifications

HCP Terraform can use webhooks to notify external systems about run progress, change requests, and other events. Team notifications allow you to configure alerts that notify teams you specify whenever a certain event occurs. You can only configure team notifications to notify your team of [change requests](/terraform/cloud-docs/workspaces/change-requests).

@include 'tfc-package-callouts/notifications.mdx'

You can configure an individual team notification to notify up to twenty teams. To set up notifications for teams using the API, refer to the [Notification API](/terraform/cloud-docs/api-docs/notification-configurations#team-notification-configuration).

In HashiCorp Cloud Platform (HCP) Europe organizations, you cannot use team notifications. To learn more, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).

## Requirements

To configure team notifications, you need the [**Manage teams**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-teams) permission for the team for which you want to configure notifications.

You can only enable Slack webhook notifications for HCP Terraform or Terraform Enterprise instances that use IPv4 addresses, because Slack does not support IPv6-only networks. You can use other notification methods, such as email, Microsoft Teams, and custom webhooks, instead.

## View notification configuration settings

To view your current team notifications, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization whose team notifications you want to view.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Select the team whose notifications you want to view from your list of teams.
1. Select **Notifications** in the sidebar navigation.

HCP Terraform displays a list of any notification configurations you have set up.
A notification configuration defines how and when you want to send notifications. Once you enable a configuration, it can send notifications.

### Update and enable notification configurations

Each notification configuration includes a brief overview showing the configuration's name, type, the events that can trigger the notification, and the last time the notification was triggered. Clicking a notification configuration opens a page where you can perform the following actions:

- Enable your configuration to send notifications by toggling the switch.
- Delete a configuration by clicking **Delete notification**, then **Yes, delete notification configuration**.
- Test your notification's configuration by clicking **Send test**.
- Click **Edit notification** to edit your notification configuration.

After creating a notification configuration, you can only edit the following aspects of that configuration:

1. The configuration's name.
1. Whether the configuration notifies everyone on a team or specific members.
1. The workspace events that trigger notifications. You can choose from:
   - **All events** triggers a notification for every event in your workspace.
   - **No events** means that no workspace events trigger a notification.
   - **Only certain events** lets you specify which events trigger a notification.

After making any changes, click **Update notification** to save your changes.

## Create and configure a notification

To configure a new notification for a team or a user, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization in which you want to create a team notification.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Select the team whose notifications you want to manage from your list of teams.
1. Select **Notifications** in the sidebar navigation.
1. Click **Create a notification**.
You must complete the following fields for all new notification configurations:

1. The **Destination** where HCP Terraform should deliver either a generic or
a specifically formatted payload. Refer to [Notification payloads](#notification-payloads) for details.
1. The display **Name** for this notification configuration.
1. If you configure an email notification, you can optionally specify which **Email Recipients** will receive this notification.
1. If you choose to configure a webhook, you must also specify:
   - A **Webhook URL** for the destination of your webhook payload. Your URL must accept HTTP or HTTPS `POST` requests and be able to use the chosen payload type.
   - Optionally, a **Token**: an arbitrary secret string that HCP Terraform uses to sign its notification webhooks. Refer to [Notification authenticity](#notification-authenticity) for details. You cannot view the token after you save the notification configuration.
1. If you choose to specify either a **Slack** or **Microsoft Teams** notification, you must also configure your webhook URL for either service. For details, refer to Slack's documentation on [creating an incoming webhook](https://api.slack.com/messaging/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in teams](https://support.microsoft.com/en-us/office/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).
1. Specify which [**Workspace events**](#workspace-events) should trigger this notification.
1. After you finish configuring your notification, click **Create a notification**.

Note that to create an email notification, you must have [**Manage membership**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-membership) permissions on a team to select users from that team as email recipients.

### Workspace events

HCP Terraform can send notifications for all workspace events, no workspace events, or specific events.
The following events are available for you to specify:

| Event           | Description                                                                                                               |
| :-------------- | :------------------------------------------------------------------------------------------------------------------------ |
| Change Requests | HCP Terraform notifies this team whenever someone creates a change request on a workspace to which this team has access. |

## Enable and verify a notification

To stop HCP Terraform from sending notifications for a notification configuration, disable the **Enabled** setting on the configuration's detail page.

HCP Terraform enables notifications for email configurations by default. Before enabling any webhook notifications, HCP Terraform attempts to verify the notification's configuration by sending a test message. If the test succeeds, HCP Terraform enables the notification. To verify a notification configuration, the destination must respond with a `2xx` HTTP code. If verification fails, HCP Terraform does not enable the configuration and displays an error message.

For successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. You can also send additional test messages by clicking **Send a Test**.

## Notification Payloads

Notification payloads contain different attributes depending on the integration you specified when configuring that notification.

### Slack

Notifications to Slack contain the following information:

- Information about the change request, including the username and avatar of the person who created the change request.
- The event that triggered the notification and the time that event occurred.

### Microsoft Teams

Notifications to Microsoft Teams contain the following information:

- Information about the change request, including the username and avatar of the person who created the change request.
- The event that triggered the notification and the time that event occurred.
### Email

Email notifications contain the following information:

- Information about the change request, including the username and avatar of the person who created the change request.
- The event that triggered the notification and the time that event occurred.

### Generic

A generic notification contains information about the event that triggered it and the time that the event occurred. You can refer to the complete generic notification payload in the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-payload). You can use some of the values in the payload to retrieve additional information through the API, such as:

- The [workspace ID](/terraform/cloud-docs/api-docs/workspaces#list-workspaces)
- The [organization name](/terraform/cloud-docs/api-docs/organizations#show-an-organization)
## Notification Authenticity

Slack notifications use Slack's own protocols to verify HCP Terraform's webhook requests.

Generic notifications can include a signature to verify the request. For notification configurations that include a secret token, HCP Terraform's webhook requests include an `X-TFE-Notification-Signature` header containing an HMAC signature computed from the token using the SHA-512 digest algorithm. The notification's receiving service is responsible for validating the signature. For more information and an example of how to validate the signature, refer to the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-payload).
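As an illustration of that validation step, here is a minimal Python sketch of a receiving service checking the header. The token and body values are placeholders; the canonical example lives in the API documentation:

```python
import hashlib
import hmac

def verify_tfe_signature(payload: bytes, signature_header: str, token: str) -> bool:
    """Recompute the HMAC-SHA512 of the raw request body with the shared
    token and compare it to the X-TFE-Notification-Signature header value
    in constant time."""
    expected = hmac.new(token.encode(), payload, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always compare signatures with a constant-time function such as `hmac.compare_digest` rather than `==`, so the check does not leak timing information.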
# Manage teams

You can grant team management abilities to members of teams that have either the manage teams or the manage organization access permission. Refer to [Team Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization#team-permissions) for details. [Organization owners](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team) can also create teams, assign team permissions, and view the full list of teams. Other users can view any teams marked as visible within the organization, plus any secret teams they are members of. Refer to [Team Visibility](#team-visibility) for details.

@include 'eu/group.mdx'

## Manage a team

To manage a team, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to manage teams.
1. Choose **Settings** from the sidebar, then **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.
1. Click a team to go to its settings page, which lists the team's settings and current members. Members that have [two-factor authentication](/terraform/cloud-docs/users-teams-organizations/2fa) enabled have a **2FA** badge.

You can manage a team on its settings page by adding or removing members, changing its visibility, and controlling access to workspaces, projects, and the organization.

## Create teams

To create a new team, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to create a team.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Click **Create a team**.
1. Enter a unique team **Name** and click **Create Team**. Team names can include numbers, letters, underscores (`_`), and hyphens (`-`).

The new team's settings page appears, where you can add new members and grant permissions.
## Delete teams

~> **Important:** Team deletion is permanent, and you cannot undo it.

To delete a team, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to delete a team.
1. Choose **Settings** from the sidebar, then **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.
1. Click the team you want to delete to go to its settings page.
1. Click **Delete [team name]** at the bottom of the page. The **Deleting team "[team name]"** box appears.
1. Click **Yes, delete team** to permanently delete the team and all of its data from HCP Terraform.

## Manage team membership

Team structure often resembles your company's organizational structure.

### Add users

If the user is not yet in the organization, [invite them to join the organization](/terraform/cloud-docs/users-teams-organizations/organizations#users) and include a list of teams they should belong to in the invitation. Once the user accepts the invitation, HCP Terraform automatically adds them to those teams.

To add a user that is already in the organization:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to add a user to a team.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Click a team's name to go to its settings page.
1. Choose a user under **Add a New Team Member**. Use the text field to filter the list by username or email.
1. Click the user to add them to the team. HCP Terraform now displays the user under **Members**.

### Remove users

To remove a user from a team:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to remove a user from a team.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Click the team to go to its settings page.
1. Click **...** next to the user's name and choose **Remove from team** from the menu. HCP Terraform removes the user from the list of team members.
## Team visibility

The settings under **Visibility** allow you to control who can see a team within the organization. To edit a team's visibility:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to view teams.
1. Choose **Settings** from the sidebar, then **Teams**.
1. Click a team's name to navigate to its settings page.
1. Enable one of the following settings:
   - **Visible:** Every user in the organization can see the team and its membership. Non-members have read-only access.
   - **Secret:** The default setting; only team members and organization owners can view the team and its membership.

We recommend making the majority of teams visible to simplify workspace administration. Secret teams should only have [organization-level permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization), since workspace admins cannot manage permissions for teams they cannot view.

## Manage workspace access

You can grant teams various permissions on workspaces. Refer to [Workspace Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) for details.

HCP Terraform uses the most permissive permission level from your teams to determine what actions you can take on a particular resource. For example, if you belong to a team that only has permission to read runs for a workspace and to another team with admin access to that workspace, HCP Terraform grants you admin access.
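The most-permissive rule described above can be sketched in a few lines. The ranking below is a simplified, hypothetical ordering of permission levels, not the product's full permission model:

```python
# Hypothetical, simplified ordering of permission levels for illustration.
PERMISSION_RANK = {"read": 0, "plan": 1, "write": 2, "admin": 3}

def effective_permission(team_levels):
    """Return the most permissive level granted by any of a user's teams."""
    return max(team_levels, key=PERMISSION_RANK.__getitem__)
```

For instance, a user on a read-only team and an admin team resolves to admin access on that workspace.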
HCP Terraform grants the most permissive permissions regardless of whether an organization, project, team, or workspace set those permissions. For example, if a team has permission to read runs for a given workspace and permission to manage that workspace through the organization, then members of that team can manage that workspace. Refer to [organization permissions](/terraform/cloud-docs/users-teams-organizations/permissions#organization-permissions) and [project permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) for additional information.

Another example is when a team has organization-level permission to read runs for all workspaces and admin access to a specific workspace. HCP Terraform grants the more permissive admin permissions to that workspace.

To manage team permissions on a workspace:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace where you want to set team permissions.
1. Choose **Settings** from the sidebar, then **Team Access**.
1. Click **Add team and permissions** to select a team and assign a pre-built or custom permission set.

## Manage project access

You can grant teams permissions to manage a project and the Stacks and workspaces that belong to it. Refer to [Project Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project) for details.

## Manage organization access

Organization owners can grant teams permissions to manage policies, projects, workspaces, Stacks, team and organization membership, VCS settings, private registry providers and modules, and policy overrides across an organization. Refer to [Organization Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization) for details.
[permissions-citation]: #intentionally-unused---keep-for-maintainers
# Learn Terraform recommended practices

This guide is meant for enterprise users looking to advance their Terraform usage from a few individuals to a full organization. For Terraform code style recommended practices, refer to the [Terraform style guide](/terraform/language/style).

## Introduction

HashiCorp specializes in helping IT organizations adopt cloud technologies. Based on what we've seen work well, we believe the best approach to provisioning is **collaborative infrastructure as code**, using Terraform as the core workflow and HCP Terraform to manage the boundaries between your organization's different teams, roles, applications, and deployment tiers.

The collaborative infrastructure as code workflow is built on many other IT best practices (like using version control and preventing manual changes), and you must adopt these foundations before you can fully adopt our recommended workflow. Achieving state-of-the-art provisioning practices is a journey, with several distinct stops along the way.

This guide describes our recommended Terraform practices and how to adopt them. It covers the steps to start using our tools, with special attention to the foundational practices they rely on.

- [Part 1: An Overview of Our Recommended Workflow](/terraform/cloud-docs/recommended-practices/part1) is a holistic overview of HCP Terraform's collaborative infrastructure as code workflow. It describes how infrastructure is organized and governed, and how people interact with it.
- [Part 2: Evaluating Your Current Provisioning Practices](/terraform/cloud-docs/recommended-practices/part2) is a series of questions to help you evaluate the state of your own infrastructure provisioning practices. We define four stages of operational maturity around provisioning to help you orient yourself and understand which foundational practices you still need to adopt.
- [Part 3: How to Evolve Your Provisioning Practices](/terraform/cloud-docs/recommended-practices/part3) is a guide for how to advance your provisioning practices through the four stages of operational maturity. Many organizations are already partway through this process, so use what you learned in part 2 to determine where you are in this journey. This part is split into four pages:
  - [Part 3.1: How to Move from Manual Changes to Semi-Automation](/terraform/cloud-docs/recommended-practices/part3.1)
  - [Part 3.2: How to Move from Semi-Automation to Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.2)
  - [Part 3.3: How to Move from Infrastructure as Code to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.3)
  - [Part 3.4: Advanced Improvements to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.4)

## Next

Begin reading with [Part 1: An Overview of Our Recommended Workflow](/terraform/cloud-docs/recommended-practices/part1).
# Part 3.3: Move from infrastructure as code to collaborative infrastructure as code

Using version-controlled Terraform configurations to manage key infrastructure eliminates a great deal of technical complexity and inconsistency. Now that you have the basics under control, you're ready to focus on other problems. Your next goals are to:

- Adopt consistent workflows for Terraform usage across teams.
- Expand the benefits of Terraform beyond the core of engineers who directly edit Terraform code.
- Manage infrastructure provisioning permissions for users and teams.

[HCP Terraform](https://www.hashicorp.com/products/terraform/) is the product we've built to help you address these next-level problems. The following sections describe how to start using it most effectively.

Note: If you aren't already using mature Terraform code to manage a significant portion of your infrastructure, make sure you follow the steps in the previous section first.

## 1. Install or Sign Up for HCP Terraform

You have two options for using HCP Terraform: the SaaS hosted by HashiCorp, or a private instance you manage with Terraform Enterprise. If you have chosen the SaaS version, you can skip this step; otherwise, visit the [Terraform Enterprise documentation](/terraform/enterprise) to get started.

## 2. Learn HCP Terraform's Run Environment

Get familiar with how Terraform runs work in HCP Terraform. With Terraform Community Edition, you generally use external VCS tools to get code onto the filesystem, then execute runs from the command line or from a general-purpose CI system. HCP Terraform does things differently: a workspace is associated directly with a VCS repo, and you use HCP Terraform's UI or API to start and monitor runs. To get familiar with this operating model:

- Read the documentation on how to [perform and configure Terraform runs](/terraform/cloud-docs/workspaces/run/remote-operations) in HCP Terraform.
- Create a proof-of-concept workspace, associate it with Terraform code in a VCS repo, set variables as needed, and use HCP Terraform to perform some Terraform runs with that code.

## 3. Design Your Organization's Workspace Structure

In HCP Terraform, each Terraform configuration should manage a specific infrastructure component, and each environment of a given configuration should be a separate workspace; in other words, Terraform configurations × environments = workspaces. A workspace name should be something like "networking-dev," so you can tell at a glance which infrastructure and environment it manages.

The definition of an "infrastructure component" depends on your organization's structure. A given workspace might manage an application, a service, or a group of related services; it might provision infrastructure used by a single engineering team, or it might provision shared, foundational infrastructure used by the entire business. You should structure your workspaces to match the divisions of responsibility in your infrastructure. You will probably end up with a mixture: some components, like networking, are foundational infrastructure controlled by central IT staff; others are application-specific and should be controlled by the engineering teams that rely on them.

Also, keep in mind:

- Some workspaces publish output data to be used by other workspaces.
- The workspaces that make up a configuration's environments (app1-dev, app1-stage, app1-prod) should be run in order, to ensure code is properly verified.

The first relationship, between workspaces for different components but the same environment, creates a graph of dependencies between workspaces, and you should stay aware of it. The second relationship, between workspaces for the same component but different environments, creates a pipeline between workspaces.
HCP Terraform doesn’t currently have the ability to act on these dependencies, but features like cascading updates and promotion are coming soon, and you’ll be able to use them more easily if you already understand how your workspaces relate.
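The "configurations × environments = workspaces" arithmetic above can be sketched as a simple cross product; the component and environment names below are hypothetical:

```python
def workspace_names(components, environments):
    """Build one workspace name per (configuration, environment) pair,
    following the "<component>-<environment>" naming convention."""
    return [f"{c}-{e}" for c in components for e in environments]

# Hypothetical example: two components across three environments = six workspaces.
names = workspace_names(["networking", "app1"], ["dev", "stage", "prod"])
```

Generating names this way keeps the naming convention consistent, so you can tell at a glance which infrastructure and environment each workspace manages.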
## 4. Create Workspaces

Create workspaces in HCP Terraform, and map VCS repositories to them. Each workspace reads its Terraform code from your version control system, so you'll need to assign a repository and branch to each workspace. We recommend using the same repository and branch for every environment of a given app or service: write your Terraform code such that you can differentiate the environments via variables, and set those variables appropriately per workspace. This might not be practical for your existing code yet, in which case you can use different branches per workspace and handle promotion through your merge strategy, but we believe a model of one canonical branch works best.

## 5. Plan and Create Teams

HCP Terraform manages workspace access with teams, which are groups of user accounts. Your HCP Terraform teams should match your understanding of who's responsible for which infrastructure. That isn't always an exact match for your org chart, so make sure you spend some time thinking about this and talking to people across the organization. Keep in mind:

- Some teams need to administer many workspaces, and others only need permissions on one or two.
- A team might not have the same permissions on every workspace they use; for example, application developers might have read/write access to their app's dev and stage environments, but read-only access to prod.

Managing an accurate and complete map of how responsibilities are delegated is one of the most difficult parts of practicing collaborative infrastructure as code.
When managing team membership, you have two options:

- Manage user accounts with [SAML single sign-on](/terraform/enterprise/saml/configuration). SAML support is exclusive to Terraform Enterprise, and lets users log into HCP Terraform via your organization's existing identity provider. If your organization is at a scale where you use a SAML-compatible identity provider, we recommend this option. If your identity provider already has information about your colleagues' teams or groups, you can [manage team membership via your identity provider](/terraform/enterprise/saml/team-membership). Otherwise, you can add users to teams with the UI or with [the team membership API](/terraform/cloud-docs/api-docs/team-members).
- Manage user accounts in HCP Terraform. Your colleagues must create their own HCP Terraform user accounts, and you can add them to your organization by adding their username to at least one team. You can manage team membership with the UI or with [the team membership API](/terraform/cloud-docs/api-docs/team-members).

## 6. Assign Permissions

Assign workspace ownership and permissions to teams. HCP Terraform supports granular team permissions for each workspace. For complete information about the available permissions, see [the HCP Terraform permissions documentation](/terraform/cloud-docs/users-teams-organizations/permissions).

[permissions-citation]: #intentionally-unused---keep-for-maintainers

Most workspaces will give access to multiple teams with different permissions.
| Workspace       | Team Permissions                                                                                                                       |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| app1-dev        | Team-eng-app1: Apply runs, read and write variables. Team-owners-app1: Admin. Team-central-IT: Admin                                    |
| app1-prod       | Team-eng-app1: Queue plans, read variables. Team-owners-app1: Apply runs, read and write variables. Team-central-IT: Admin              |
| networking-dev  | Team-eng-networking: Apply runs, read and write variables. Team-owners-networking: Admin. Team-central-IT: Admin                        |
| networking-prod | Team-eng-networking: Queue plans, read variables. Team-owners-networking: Apply runs, read and write variables. Team-central-IT: Admin  |

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.3.mdx
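A permission scheme like the one in the table above can itself be managed as code with the hashicorp/tfe provider. This is a hedged sketch: the organization name, username, and workspace ID are placeholders, and mapping "Queue plans" to `plan`-level access is an assumption on our part:

```hcl
provider "tfe" {
  # Credentials are assumed to come from the TFE_TOKEN environment variable.
}

resource "tfe_team" "eng_app1" {
  name         = "Team-eng-app1"
  organization = "example-org" # placeholder organization name
}

resource "tfe_team_member" "alice" {
  team_id  = tfe_team.eng_app1.id
  username = "alice" # hypothetical HCP Terraform username
}

# Grant the team plan-level access on the production workspace,
# matching a "Queue plans, read variables" row.
resource "tfe_team_access" "eng_app1_prod" {
  team_id      = tfe_team.eng_app1.id
  workspace_id = "ws-XXXXXXXX" # placeholder workspace ID
  access       = "plan"
}
```

Managing teams and access grants in their own Terraform configuration keeps your permission map reviewable and versioned like any other infrastructure.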
## 7. Restrict Non-Terraform Access

Restrict access to cloud provider UIs and APIs.

Since HCP Terraform is now your organization’s primary interface for infrastructure provisioning, you should restrict access to any alternate interface that bypasses it. For almost all users, it should be impossible to manually modify infrastructure without using the organization’s agreed-upon Terraform workflow.

As long as no one can bypass Terraform, your code review processes and your HCP Terraform workspace permissions are the definitive record of who can modify which infrastructure. This makes everything about your infrastructure more knowable and controllable. HCP Terraform is one workflow to learn, one workflow to secure, and one workflow to audit for provisioning any infrastructure in your organization.

## Next

At this point, you have successfully adopted a collaborative infrastructure as code workflow with HCP Terraform. You can provision infrastructure across multiple providers using a single workflow, and you have a shared interface that helps manage your organization’s standards around access control and code promotion.

Next, you can make additional improvements to your workflows and practices. Continue on to [Part 3.4: Advanced Improvements to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.4).

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.3.mdx
# Part 3.2: Move from semi-automation to infrastructure as code

We define semi-automated provisioning as a mix of at least two of the following practices:

- Infrastructure as code with Terraform.
- Manual CLI or GUI processes.
- Scripts.

If that describes your current provisioning practices, your next goal is to expand your use of Terraform, reduce your use of manual processes and imperative scripts, and make sure you’ve adopted the foundational practices that make infrastructure as code more consistent and useful.

Note: If you aren’t already using infrastructure as code for some portion of your infrastructure, make sure you follow the steps in the previous section first.

## 1. Use Version Control

Choose and implement a version control system (VCS) if your organization doesn’t already use one.

You might be able to get by with a minimalist Git/Mercurial/SVN server, but we recommend adopting a more robust collaborative VCS application that supports code reviews and approvals and has APIs for accessing data and administering repositories and accounts. Bitbucket, GitLab, and GitHub are popular tools in this space.

If you already have established VCS workflows, layouts, and access control practices, great! If not, this is a good time to make these decisions. (We consider [this advice](http://www.drupalwatchdog.net/volume-4/issue-2/version-control-workflow-strategies) a good starting point.) Make sure you have a plan for who is allowed to merge changes and under what circumstances: since this code will be managing your whole infrastructure, it’s important to maintain its integrity and quality. Also, write down your organization’s expectations and socialize them widely among your teams.

Make sure you pick a VCS that HCP Terraform can access. Currently, HCP Terraform supports integrations with GitHub, GitLab, and Atlassian Bitbucket (both Server and Cloud).
## 2. Put Terraform Code in VCS Repos

Start moving infrastructure code into version control.

New Terraform code should all go into version control; if you have existing Terraform code that’s outside version control, start moving it in so that everyone in your organization knows where to look for things and can track the history and purpose of changes.

Some organizations prefer to keep each Terraform configuration in its own VCS repository, while others prefer to keep all configurations in a shared "monorepo." Both approaches are valid; if your organization doesn't already have a strong preference, we recommend separate repositories.

## 3. Create Your First Module

[Terraform modules](/terraform/language/modules/develop) are reusable configuration units. They let you manage pieces of infrastructure as a single package you can call and define multiple times in the main configuration for a workspace. An example of a good Terraform module candidate is an auto-scaling group on AWS that wraps a launch configuration, an auto-scaling group, and an EC2 Elastic Load Balancer (ELB).

If you are already using Terraform modules, make sure you’re following the best practices and keep an eye on places where your modules could improve. The diagram below can help you decide when to write a module:

## 4. Share Knowledge

Spread Terraform skills to additional teams, and improve the skills of existing infrastructure teams. In addition to internal training and self-directed learning, you might want to consider:

- Sign your teams up for [official HashiCorp Training](https://www.hashicorp.com/training/).
- Make available resources such as [Terraform Up and Running: Writing Infrastructure as Code](https://www.amazon.com/Terraform-Running-Writing-Infrastructure-Code/dp/1492046906/ref=sr_1_1?keywords=terraform+up+and+running&qid=1571263416&sr=8-1) or [Getting Started with Terraform](https://www.amazon.com/Getting-Started-Terraform-Kirill-Shirinkin/dp/1786465108/ref=sr_1_1?ie=UTF8&qid=1496138892&sr=8-1&keywords=Getting+Started+with+Terraform). These are especially valuable when nobody in your organization has used Terraform before.

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.2.mdx
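The module guidance from step 3 of Part 3.2 can be sketched as a module call. The source path, module name, and variables below are hypothetical, not from the original docs:

```hcl
# Calling a reusable auto-scaling-group module from a root configuration.
# The module itself would wrap a launch configuration, an auto-scaling
# group, and a load balancer, per the candidate described above.
module "web_asg" {
  source = "./modules/aws-asg" # hypothetical local module path

  name          = "web"
  instance_type = "t3.micro"
  min_size      = 2
  max_size      = 6
}

# Modules publish values through outputs, which the caller references
# as module.web_asg.<output_name>.
```

Defining the module once and calling it with different inputs per service is what makes the pattern pay off across teams.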
## 5. Set Guidelines

Create standard build architectures to use as guidelines for writing Terraform code.

Modules work best when they’re shared across an organization, and sharing is more effective if everyone has similar expectations around how to design infrastructure. Your IT architects should design some standardized build architectures specific to your organizational needs, to encourage building with high availability, elasticity, and disaster recovery in mind, and to support consistency across teams.

Here are a few examples of good build patterns from several cloud providers:

- AWS: the [Well-Architected Framework](https://d0.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf) and the [Architecture Center](https://aws.amazon.com/architecture/).
- Azure: [deploying Azure Reference Architectures](https://github.com/mspnp/reference-architectures) and the [Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/).
- GCP: [Building scalable and resilient web applications](https://cloud.google.com/solutions/scalable-and-resilient-apps).
- Oracle Public Cloud: [Best Practices for Using Oracle Cloud](https://docs.oracle.com/en/cloud/iaas-classic/compute-iaas-cloud/stcsg/best-practices.html#GUID-C37FDFF1-7C48-4DA8-B31F-D7D7B35674A8).
## 6. Integrate Terraform With Configuration Management

If your organization already has a configuration management tool, it’s time to integrate it with Terraform: you can use [Terraform’s provisioners](/terraform/language/resources/provisioners/syntax) to pass control to configuration management after a resource is created. Terraform should handle the infrastructure, and other tools should handle user data and applications.

If your organization doesn't use a configuration management tool yet, and the configuration of the infrastructure being managed is mutable, you should consider adopting one. This might be a large task, but it supports the same goals that drove you to infrastructure as code, by making application configuration more controllable, understandable, and repeatable across teams. If you’re just getting started, try this tutorial on how to [create a Chef cookbook](/vagrant/docs/provisioning/chef_solo) and test it locally with Vagrant. We also recommend this article about how to decide which [configuration management tool](https://www.edureka.co/blog/chef-vs-puppet-vs-ansible-vs-saltstack/) is best suited for your organization.

## 7. Manage Secrets

Integrate Terraform with [Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) or another secret management tool.

Secrets like service provider credentials must stay secret, but they also must be easy to use when needed. The best way to address both needs is a dedicated secret management tool. We believe HashiCorp’s Vault is the best choice for most people, but Terraform can integrate with other secret management tools as well.

## Next

At this point, your organization has a VCS configured, is managing key infrastructure with Terraform, and has at least one reusable Terraform module. Compared to a semi-automated practice, your organization has much better visibility into infrastructure configuration, using a consistent language and workflow.
Next, you need an advanced workflow that can scale and delegate responsibilities to many contributors. Continue on to [Part 3.3: How to Move from Infrastructure as Code to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.3).

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.2.mdx
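The Vault integration described in step 7 of Part 3.2 can be sketched with the hashicorp/vault provider. The secret path and key are hypothetical:

```hcl
provider "vault" {
  # Address and token are assumed to come from the VAULT_ADDR and
  # VAULT_TOKEN environment variables rather than being hard-coded.
}

# Read a secret from Vault's KV store instead of committing it to VCS.
data "vault_generic_secret" "db" {
  path = "secret/myapp" # hypothetical KV path
}

locals {
  # The secret's key/value pairs are exposed under .data.
  db_password = data.vault_generic_secret.db.data["password"]
}
```

Note that values read through a data source still land in Terraform state, which is one more reason to restrict who can read state files.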
# Part 3.1: Move from manual changes to semi-automation

Building infrastructure manually (with CLI or GUI tools) results in infrastructure that is hard to audit, hard to reproduce, hard to scale, and hard to share knowledge about. If your current provisioning practices are largely manual, your first goal is to begin using Terraform Community edition in a small, manageable subset of your infrastructure. Once you’ve had some small success with Terraform, you’ll have reached the semi-automated stage of provisioning maturity, and can begin to scale up and expand your Terraform usage.

Allow one individual (or a small group) in your engineering team to get familiar with Terraform by following these steps:

## 1. Install Terraform

[Follow the instructions here to install Terraform OSS](/terraform/tutorials/aws-get-started/install-cli?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS).

## 2. Write Some Code

Write your first [Terraform configuration file](/terraform/tutorials/aws-get-started/aws-build?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS).

## 3. Follow the Getting Started Tutorials

Follow the rest of the [Terraform: Get Started tutorials](/terraform/tutorials/aws-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). These tutorials walk you through [changing](/terraform/tutorials/aws-get-started/aws-change?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) and [destroying](/terraform/tutorials/aws-get-started/aws-destroy?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) resources, and more.

## 4. Implement a Real Infrastructure Project

Choose a small real-life project and implement it with Terraform. Look at your organization’s list of upcoming projects, and designate one to be a Terraform proof-of-concept.
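A first configuration (step 2 above) can be as small as a single resource. This sketch assumes AWS; the region and AMI ID are placeholders:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running `terraform init` and then `terraform plan` against this file is enough to see the core workflow in action.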
Alternately, you can choose some existing infrastructure to re-implement with Terraform. The key is to choose a project with limited scope and clear boundaries, such as provisioning infrastructure for a new application on AWS. This helps keep your team from getting overwhelmed with features and possibilities. You can also look at some [example AWS projects](https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples) to get a feel for your options. (The [AWS two-tier example](https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/two-tier) is often a good start.)

Your goal here is to build a small but reliable core of expertise with Terraform, and to demonstrate its benefits to others in the organization.

## Next

At this point, you’ve reached a semi-automated stage of provisioning practices: one or more people in the organization can write Terraform code to provision and modify resources, and a small but meaningful subset of your infrastructure is being managed as code. This is a good time to give a small demo to the rest of the team to show how easy it is to write and provision infrastructure with Terraform.

Next, it's time to transition to a more complete infrastructure as code workflow. Continue on to [Part 3.2: How to Move from Semi-Automation to Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.2).

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.1.mdx
# Part 3.4: Learn advanced workflow improvements

Now that you have a collaborative interface and workflow for provisioning, you have a solid framework for improving your practices even further. The following suggestions don’t have to be done in order, and some of them might not make sense for every business. We present them as possibilities for when you find yourself asking what’s next.

- Move more processes and resources into HCP Terraform. Even after successfully implementing HCP Terraform, there’s a good chance you still have manual or semi-automated workflows and processes. We suggest holding a discovery meeting with all of the teams responsible for keeping infrastructure running, to identify future targets for automation. You can also use your notes from the questions in section 2 as a guide, or go through old change requests or incident tickets.
- Adopt [HashiCorp Packer](https://www.packer.io/) for image creation. Packer helps you build machine images in a maintainable and repeatable way, and can amplify Terraform’s usefulness.
- Apply policy to your Terraform configurations with [Sentinel](/terraform/cloud-docs/policy-enforcement) to enforce compliance with business and regulatory rules.
- Monitor and retain Terraform Enterprise's audit logs. [Learn more about logging in Terraform Enterprise instances here.](/terraform/enterprise/admin/infrastructure/logging)
- Add infrastructure monitoring and performance metrics. This can help make environment promotion safer, and safeguard the performance of your applications. There are many tools available in this space, and we recommend monitoring both the infrastructure itself and the user’s-eye-view performance of your applications.
- Use the HCP Terraform API. The [HCP Terraform API](/terraform/cloud-docs/api-docs) can be used to integrate with general-purpose CI/CD tools to trigger Terraform runs in response to a variety of events.
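A Sentinel policy can be quite small. This sketch uses the documented `tfplan/v2` import to fail any run whose plan would delete a resource; treat it as an illustration of the policy language, not a recommended production policy:

```sentinel
import "tfplan/v2" as tfplan

# Pass only if no planned resource change includes a delete action.
main = rule {
    all tfplan.resource_changes as _, rc {
        not ("delete" in rc.change.actions)
    }
}
```

Policies like this run between plan and apply in HCP Terraform, so violations block a run before any infrastructure changes.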
Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part3.4.mdx
# Part 1: Overview of our recommended workflow

Terraform's purpose is to provide one workflow to provision any infrastructure. In this section, we'll show you our recommended practices for organizing Terraform usage across a large organization. This is the set of practices that we call "collaborative infrastructure as code."

## Fundamental Challenges in Provisioning

There are two major challenges everyone faces when trying to improve their provisioning practices: technical complexity and organizational complexity.

1. Technical complexity: Different infrastructure providers use different interfaces to provision new resources, and the inconsistency between these interfaces imposes extra costs on daily operations. These costs get worse as you add more infrastructure providers and more collaborators. Terraform addresses this complexity by separating the provisioning workload: it uses a single core engine to read infrastructure as code configurations and determine the relationships between resources, then uses many [provider plugins](/terraform/language/providers) to create, modify, and destroy resources on the infrastructure providers. These provider plugins can talk to IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. GitHub, DNSimple, Cloudflare). In other words, Terraform uses a model of workflow-level abstraction rather than resource-level abstraction. It lets you use a single workflow for managing infrastructure, but acknowledges the uniqueness of each provider instead of imposing generic concepts on non-equivalent resources.
1. Organizational complexity: As infrastructure scales, it requires more teams to maintain it. For effective collaboration, it's important to delegate ownership of infrastructure across these teams and empower them to work in parallel without conflict. Terraform and HCP Terraform can help delegate infrastructure in the same way components of a large application are delegated.
To delegate a large application, companies often split it into small, focused microservice components that are owned by specific teams. Each microservice provides an API, and as long as those APIs don't change, microservice teams can make changes in parallel despite relying on each others' functionality.

Similarly, infrastructure code can be split into smaller Terraform configurations, which have limited scope and are owned by specific teams. These independent configurations use [output values](/terraform/language/values/outputs) to publish information and [remote state data sources](/terraform/language/state/remote-state-data) to access output data from other workspaces. Just like microservices communicate and connect via APIs, Terraform workspaces connect via remote state.

Once you have loosely-coupled Terraform configurations, you can delegate their development and maintenance to different teams. To do this effectively, you need to control access to that code. Version control systems can regulate who can commit code, but since Terraform affects real infrastructure, you also need to regulate who can run the code. This is how HCP Terraform solves the organizational complexity of provisioning: by providing a centralized run environment for Terraform that supports and enforces your organization's access control decisions across all workspaces. This helps you delegate infrastructure ownership to enable parallel development.

## Personas, Responsibilities, and Desired User Experiences

There are four main personas for managing infrastructure at scale. These roles have different responsibilities and needs, and HCP Terraform supports them with different tools and permissions.

### Central IT

This team is responsible for defining common infrastructure practices, enforcing policy across teams, and maintaining shared services.
Central IT users want a single dashboard to view the status and compliance of all infrastructure, so they can quickly fix misconfigurations or malfunctions. Since HCP Terraform is tightly integrated with Terraform's run data and is designed around Terraform's concepts of workspaces and runs, it offers a more integrated workflow experience than a general-purpose CI system.

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part1.mdx
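The output/remote-state pattern described in Part 1 can be sketched as follows. The organization, workspace, output names, and the `aws_subnet.app` resource assumed to exist in the networking configuration are all hypothetical:

```hcl
# In the networking workspace's configuration, publish data for consumers
# (assumes an aws_subnet.app resource exists in that configuration):
output "subnet_id" {
  value = aws_subnet.app.id
}

# In a consuming workspace (e.g. billing-app-prod), read that output:
data "terraform_remote_state" "networking" {
  backend = "remote"

  config = {
    organization = "example-org" # placeholder organization
    workspaces = {
      name = "networking-prod"
    }
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.networking.outputs.subnet_id
}
```

Reading another workspace's state this way requires at least read access to that workspace's outputs, which is exactly the kind of boundary the per-workspace permissions described here let you enforce.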
### Organization Architect

This team defines how global infrastructure is divided and delegated to the teams within the business unit. This team also enables connectivity between workspaces by defining the APIs each workspace must expose, and sets organization-wide variables and policies. Organization Architects want a single dashboard to view the status of all workspaces and the graph of connectivity between them.

### Workspace Owner

This individual owns a specific set of workspaces, which build a given Terraform configuration across several environments. They are responsible for the health of those workspaces, managing the full change lifecycle through dev, UAT, staging, and production. They are the main approver of changes to production within their domain. Workspace Owners want:

- A single dashboard to view the status of all workspaces that use their infrastructure code.
- A streamlined way to promote changes between environments.
- An interface to set variables used by a Terraform configuration across environments.

### Workspace Contributor

Contributors submit changes to workspaces by making updates to the infrastructure as code configuration. They usually do not have approval to make changes to production, but can make changes in dev, UAT, and staging. Workspace Contributors want a simple workflow to submit changes to a workspace and promote changes between workspaces. They can edit a subset of workspace variables and their own personal variables. Workspace Contributors are often already familiar with Terraform's operating model and command line interface, and can usually adapt quickly to HCP Terraform's web interface.

## The Recommended Terraform Workspace Structure

### About Workspaces

HCP Terraform's main unit of organization is a workspace.
A workspace is a collection of everything Terraform needs to run: a Terraform configuration (usually from a VCS repo), values for that configuration's variables, and state data to keep track of operations between runs. In Terraform Community edition, a workspace is an independent state file on the local disk. In HCP Terraform, workspaces are persistent shared resources; you can assign them their own access controls, monitor their run states, and more.

### One Workspace Per Environment Per Terraform Configuration

Workspaces are HCP Terraform's primary tool for delegating control, which means their structure should match your organizational permissions structure. The best approach is to use one workspace for each environment of a given infrastructure component. In other words: Terraform configurations × environments = workspaces.

This is different from how some other tools view environments; notably, you shouldn't use a single Terraform workspace to manage everything that makes up your production or staging environment. Instead, make smaller workspaces that are easy to delegate. This also means not every configuration has to use the exact same environments; if a UAT environment doesn't make sense for your security infrastructure, you aren't forced to use one.

Name your workspaces with both their component and their environment. For example, if you have a Terraform configuration for managing an internal billing app and another for your networking infrastructure, you could name the workspaces as follows:

- billing-app-dev
- billing-app-stage
- billing-app-prod
- networking-dev
- networking-stage
- networking-prod

### Delegating Workspaces

Since each workspace is one environment of one infrastructure component, you can use per-workspace access controls to delegate ownership of components and regulate code promotion across environments. For example:

- Teams that help manage a component can start Terraform runs and edit variables in dev or staging.
- The owners or senior contributors of a component can start Terraform runs in production, after reviewing other contributors' work.

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part1.mdx
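The per-environment workspace naming convention described above can itself be expressed as code with the hashicorp/tfe provider. The organization name is a placeholder:

```hcl
locals {
  environments = ["dev", "stage", "prod"]
}

# One HCP Terraform workspace per environment of the billing app:
# billing-app-dev, billing-app-stage, and billing-app-prod.
resource "tfe_workspace" "billing_app" {
  for_each = toset(local.environments)

  name         = "billing-app-${each.key}"
  organization = "example-org" # placeholder organization name
}
```

Using `for_each` over the environment list keeps the naming convention enforced mechanically rather than by hand.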
- Central IT and organization architects can administer permissions on all workspaces, to ensure everyone has what they need to work.
- Teams that have no role managing a given component don't have access to its workspaces.

To use HCP Terraform effectively, you must make sure the division of workspaces and permissions matches your organization's division of responsibilities. If it's difficult to separate your workspaces effectively, it might reveal an area of your infrastructure where responsibility is muddled and unclear. If so, this is a chance to disentangle the code and enforce better boundaries of ownership.

## Next

Now that you're familiar with the outlines of the HCP Terraform workflow, it's time to assess your organization's provisioning practices. Continue on to [Part 2: Evaluating Your Current Provisioning Practices](/terraform/cloud-docs/recommended-practices/part2).

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part1.mdx
# Part 2: Evaluate your current provisioning practices

HCP Terraform depends on several foundational IT practices. Before you can implement HCP Terraform's collaborative infrastructure as code workflows, you need to understand which of those practices you're already using, and which ones you still need to implement.

We've written the section below in the form of a quiz or interview, with multiple-choice answers that represent the range of operational maturity levels we've seen across many organizations. You should read it with a notepad handy, and take note of any questions where your organization can improve its use of automation and collaboration. This quiz doesn't have a passing or failing score, but it's important to know your organization's answers. Once you know which of your IT practices need the most attention, Section 3 will guide you from your current state to our recommended practices in the most direct way.

## Four Levels of Operational Maturity

Each question has several answers, each of which aligns with a different level of operational maturity. Those levels are as follows:

1. **Manual**
   - Infrastructure is provisioned through a UI or CLI.
   - Configuration changes do not leave a traceable history, and aren't always visible.
   - Limited or no naming standards in place.
1. **Semi-automated**
   - Infrastructure is provisioned through a combination of UI/CLI, infrastructure as code, and scripts or configuration management.
   - Traceability is limited, since different record-keeping methods are used across the organization.
   - Rollbacks are hard to achieve due to differing record-keeping methods.
1. **Infrastructure as code**
   - Infrastructure is provisioned using Terraform OSS.
   - Provisioning and deployment processes are automated.
   - Infrastructure configuration is consistent, with all necessary details fully documented (nothing siloed in a sysadmin's head).
   - Source files are stored in version control to record editing history and, if necessary, roll back to older versions.
   - Some Terraform code is split out into modules, to promote consistent reuse of your organization's more common architectural patterns.
1. **Collaborative infrastructure as code**
   - Users across the organization can safely provision infrastructure with Terraform, without conflicts and with a clear understanding of their access permissions.
   - Expert users within an organization can produce standardized infrastructure templates, and beginner users can consume those to follow the organization's infrastructure best practices.
   - Per-workspace access control helps committers and approvers on workspaces protect production environments.
   - Functional groups that don't directly write Terraform code have visibility into infrastructure status and changes through HCP Terraform's UI.

By the end of this section, you should have a clear understanding of which operational maturity stage you are in. Section 3 will explain the recommended steps to move from your current stage to the next one.

Answering these questions will help you understand your organization's method for provisioning infrastructure, its change workflow, its operation model, and its security model. Once you understand your current practices, you can identify the remaining steps for implementing HCP Terraform.

## Your Current Configuration and Provisioning Practices

How does your organization configure and provision infrastructure today? Automated and consistent practices help make your infrastructure more knowable and reliable, and reduce the amount of time spent on troubleshooting. The following questions will help you evaluate your current level of automation for configuration and provisioning.

### Q1. How do you currently manage your infrastructure?

1. Through a UI or CLI.
This might seem like the easiest option for one-off tasks, but for recurring operations it is a big consumer of valuable engineering time. It's also difficult to track and manage changes. 1. Through reusable command line scripts, or a combination of UI and infrastructure as code. This is | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/recommended-practices/part2.mdx | main | terraform | [
1. Through reusable command line scripts, or a combination of UI and infrastructure as code. This is faster and more reliable than pure ad-hoc management and makes recurring operations repeatable, but the lack of consistency and versioning makes it difficult to manage over time.
1. Through an infrastructure as code tool (Terraform, CloudFormation). Infrastructure as code enables scalable, repeatable, and versioned infrastructure. It dramatically increases the productivity of each operator and can enforce consistency across environments when used appropriately.
1. Through a general-purpose automation framework (e.g. Jenkins + scripts, or Jenkins + Terraform). This centralizes the management workflow, albeit with a tool that isn't built specifically for provisioning tasks.

### Q2. What topology is in place for your service provider accounts?

1. Flat structure, single account. All infrastructure is provisioned within the same account.
1. Flat structure, multiple accounts. Infrastructure is provisioned using different infrastructure providers, with an account per environment.
1. Tree hierarchy. This features a master billing account, an audit/security/logging account, and project/environment-specific infrastructure accounts.

### Q3. How do you manage the infrastructure for different environments?

1. Manual. Everything is manual, with no configuration management in place.
1. Siloed. Each application team has its own way of managing infrastructure: some manually, some using infrastructure as code or custom scripts.
1. Infrastructure as code with different code bases per environment. Having different code bases for infrastructure as code configurations can lead to untracked changes from one environment to the other if there is no promotion within environments.
1. Infrastructure as code with a single code base and differing environment variables. All resources, regardless of environment, are provisioned with the same code, ensuring that changes promote through your deployment tiers in a predictable way.

### Q4. How do teams collaborate and share infrastructure configuration and code?

1. N/A. Infrastructure as code is not used.
1. Locally. Infrastructure configuration is hosted locally and shared via email, documents, or spreadsheets.
1. Ticketing system. Code is shared through journal entries in change requests or problem/incident tickets.
1. Centralized without version control. Code is stored on a shared filesystem and secured through security groups. Changes are not versioned. Rollbacks are only possible through restores from backups or snapshots.
1. Configuration stored and collaborated on in a version control system (VCS) (Git repositories, etc.). Teams collaborate on infrastructure configurations within a VCS workflow, and can review infrastructure changes before they enter production. This is the most mature approach, as it offers the best record-keeping and cross-department/cross-team visibility.

### Q5. Do you use reusable modules for writing infrastructure as code?

1. Everything is manual. No infrastructure as code currently used.
1. No modularity. Infrastructure as code is used, but primarily as one-off configurations. Users usually don't share or re-use code.
1. Teams use modules internally but do not share them across teams.
1. Modules are shared organization-wide. Similar to shared software libraries, a module for a common infrastructure pattern can be updated once and the entire organization benefits.

## Your Current Change Control Workflow

Change control is a formal process to coordinate and approve changes to a product or system. The goals of a change control process include:

* Minimizing disruption to services.
* Reducing rollbacks.
* Reducing the overall cost of changes.
* Preventing unnecessary changes.
* Allowing users to make changes without impacting changes made by other users.
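The "single code base with differing environment variables" answer from Q3 is worth making concrete. A minimal sketch of the idea, using Terraform's real `-var-file` CLI option but with a hypothetical `environments/` directory layout and helper name of our own invention:

```python
# Sketch: one shared Terraform code base, with only a per-environment
# .tfvars file selecting what differs between deployment tiers.
# The environments/<name>.tfvars paths are illustrative, not prescribed.

def plan_command(environment: str) -> list[str]:
    """Build the `terraform plan` invocation for one environment."""
    allowed = {"dev", "staging", "prod"}
    if environment not in allowed:
        raise ValueError(f"unknown environment: {environment}")
    # The configuration itself is identical everywhere; only the
    # variable file changes, so changes promote predictably dev -> prod.
    return ["terraform", "plan", f"-var-file=environments/{environment}.tfvars"]

print(plan_command("prod"))
```

Because every tier runs the same code, a change validated with `plan_command("staging")` is exactly the change that later reaches production.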
The following questions will help you assess the maturity of your change control workflow.

### Q6. How do you govern access to change infrastructure?

1. Access is not restricted or audited. Everyone on the platform team has the flexibility to create, change, and destroy all infrastructure. This leads to a complex system that is unstable and hard to manage.
1. Access is not restricted, only audited. This makes it easier to track changes after the fact, but doesn't proactively protect your infrastructure's stability.
1. Access is restricted based on service provider account level. Members of the team have admin access to different accounts based on the environment they are responsible for.
1. Access is restricted based on user roles. All access is restricted based on user roles at the infrastructure provider level.

### Q7. What is the process for changing existing infrastructure?

1. Manual changes by remotely logging into machines. Repetitive manual tasks are inefficient and prone to human error.
1. Runtime configuration management (Puppet, Chef, etc.). Configuration management tools let you make fast, automated changes based on readable and auditable code. However, since they don't produce static artifacts, the outcome of a given configuration version isn't always 100% repeatable, making rollbacks only partially reliable.
1. Immutable infrastructure (images, containers). Immutable components are replaced for every deployment (rather than being updated in place), using static deployment artifacts. If you maintain sharp boundaries between ephemeral layers and state-storing layers, immutable infrastructure can be much easier to test, validate, and roll back.

### Q8. How do you deploy applications?

1. Manually (SSH, WinRM, rsync, robocopy, etc.). Repetitive manual tasks are inefficient and prone to human error.
1. With scripts (Fabric, Capistrano, custom, etc.).
1. With a configuration management tool (Chef, Puppet, Ansible, Salt, etc.), or by passing userdata scripts to CloudFormation templates or Terraform configuration files.
1. With a scheduler (Kubernetes, Nomad, Mesos, Swarm, ECS, etc.).

## Your Current Security Model

### Q9. How are infrastructure service provider credentials managed?

1. By hardcoding them in the source code. This is highly insecure.
1. By using infrastructure provider roles (like EC2 instance roles for AWS). Since the service provider knows the identity of the machines it's providing, you can grant some machines permission to make API requests without giving them a copy of your actual credentials.
1. By using a secrets management solution (like Vault, Keywhiz, or PAR). We recommend this.
1. By using short-lived tokens. This is one of the most secure methods, since the temporary credentials you distribute expire quickly and are very difficult to exploit. However, this can be more complex to use than a secrets management solution.

### Q10. How do you control users and objects hosted by your infrastructure provider (like logins, access and role control, etc.)?

1. A common 'admin' or 'superuser' account shared by engineers. This increases the possibility of a breach into your infrastructure provider account.
1. Individual named user accounts. This makes a loss of credentials less likely and easier to recover from, but it doesn't scale very well as the team grows.
1. LDAP and/or Microsoft Entra ID integration. This is much more secure than shared accounts, but requires additional architectural considerations to ensure that the provider's access into your corporate network is configured correctly.
1. Single sign-on through OAuth or SAML. This provides token-based access into your infrastructure provider while not requiring your provider to have access to your corporate network.

### Q11. How do you track the changes made by different users in your infrastructure provider's environments?

1. No logging in place. Auditing and troubleshooting can be very difficult without a record of who made which changes when.
1. Manual changelog. Users manually write down their changes to infrastructure in a shared document. This method is prone to human error.
1. By logging all API calls to an audit trail service or log management service (like CloudTrail, Loggly, or Splunk). We recommend this. It ensures that an audit trail is available during troubleshooting and/or security reviews.

### Q12. How is the access of former employees revoked?

1. Immediately, manually. If you don't use infrastructure as code, the easiest and quickest way is to remove that employee's access manually using the infrastructure provider's console.
1. Delayed, as part of the next release. If your release process is tightly coupled and most of your security changes have to pass through a CAB (Change Advisory Board) meeting before they can be executed in production, revocation can be delayed.
1. Immediately, with a hot-fix in the infrastructure as code. This is the most secure and recommended option. Before the employee leaves the building, access must be removed.

## Assessing the Overall Maturity of Your Provisioning Practices

After reviewing all of these questions, look back at your notes and assess your organization's overall stage of maturity: are your practices mostly manual, semi-automated, infrastructure as code, or collaborative infrastructure as code? Keep your current state in mind as you read the next section.

## Next

Now that you've taken a hard look at your current practices, it's time to begin improving them. Continue on to [Part 3: How to Evolve Your Provisioning Practices](/terraform/cloud-docs/recommended-practices/part3).
# Part 3: Evolve your provisioning practices

This section describes the steps necessary to move an organization from manual provisioning processes to a collaborative infrastructure as code workflow. For each stage of operational maturity, we give instructions for moving your organization to the next stage, eventually arriving at our recommended workflow.

We've split this section into multiple pages, so you can skip instructions that you've already implemented. Look back at your notes from Part 2, and start with the page about your current level of operational maturity.

- [Part 3.1: How to Move from Manual Changes to Semi-Automation](/terraform/cloud-docs/recommended-practices/part3.1)
- [Part 3.2: How to Move from Semi-Automation to Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.2)
- [Part 3.3: How to Move from Infrastructure as Code to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.3)
- [Part 3.4: Advanced Improvements to Collaborative Infrastructure as Code](/terraform/cloud-docs/recommended-practices/part3.4)
# Overview

The HCP Terraform ecosystem features a variety of integrations to let HCP Terraform connect with third-party systems and platforms.

@include 'eu/integrations.mdx'

The following list contains HashiCorp's official HCP Terraform integrations, which use HCP Terraform's native APIs:

- The [HCP Terraform Operator for Kubernetes](/terraform/cloud-docs/integrations/kubernetes) integration can manage HCP Terraform resources with Kubernetes custom resources.
- The [ServiceNow Service Catalog for Terraform](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform) lets you provision self-serve infrastructure using ServiceNow.
- The [ServiceNow Service Graph Connector for Terraform](/terraform/cloud-docs/integrations/service-now/service-graph) integration lets you securely import HCP Terraform resources into your ServiceNow instance.
- The [HCP Terraform for AWS Service Catalog](/terraform/cloud-docs/integrations/aws-service-catalog) integration lets you create pre-approved Terraform configurations on the AWS Service Catalog.
- The [HCP Terraform for Splunk](/terraform/cloud-docs/integrations/splunk) integration lets you pull HCP Terraform logs into Splunk.

If the platform you want to integrate HCP Terraform with does not have an official integration, you can build a custom run task to integrate with a tool of your choice. Run tasks can access plan details, display custom messages in the run pipeline, and prevent runs from applying. Learn more about [run tasks](/terraform/cloud-docs/integrations/run-tasks).
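As a sketch of the run task flow just described: HCP Terraform sends your endpoint a POST describing the run, and your service later reports a verdict back to a callback URL included in that request. The callback body below follows the run task integration API as we understand it (the `task-results` type and `status` values are our reading of it, so verify the exact shape against the current run tasks documentation):

```python
# Sketch: build the callback body a custom run task would send back to
# HCP Terraform after evaluating a plan. Field names follow the run
# task integration API as we understand it; verify before relying on it.

def task_result_body(passed: bool, message: str) -> dict:
    return {
        "data": {
            "type": "task-results",
            "attributes": {
                # "passed" lets the run continue; "failed" can block the apply
                "status": "passed" if passed else "failed",
                "message": message,  # custom message shown in the run pipeline
            },
        }
    }

print(task_result_body(True, "No policy violations found"))
```

Your service would PATCH this body to the callback URL provided in the original request once its evaluation finishes.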
# Set up the ServiceNow Service Graph Connector

-> **Note:** Follow the [Configure ServiceNow Service Graph Connector for HCP Terraform](/terraform/tutorials/it-saas/servicenow-sgc) tutorial for hands-on instructions on how to import an AWS resource deployed in your HCP Terraform organization to the ServiceNow CMDB by using the Service Graph Connector for Terraform.

The ServiceNow Service Graph Connector for Terraform is a certified scoped application available in the ServiceNow Store. Search for "Service Graph Connector for Terraform" published by "HashiCorp Inc" and click **Install**.

## Prerequisites

To start using the Service Graph Connector for Terraform, you must have:

- An administrator account on a Terraform Enterprise instance or within an HCP Terraform organization.
- An administrator account on your ServiceNow vendor instance.

The Service Graph Connector for Terraform supports the following ServiceNow server versions:

- Washington DC
- Xanadu
- Yokohama

The following ServiceNow plugins are required dependencies:

- ITOM Discovery License
- Integration Commons for CMDB
- Discovery and Service Mapping Patterns
- ServiceNow IntegrationHub Standard Pack

Additionally, you can install the IntegrationHub ETL application if you want to modify the default CMDB mappings.

-> **Note:** Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store. Before installing the Service Graph Connector for Terraform, you must activate the ITOM Discovery License plugin in your production instance.

## Connect ServiceNow to HCP Terraform

-> **ServiceNow roles:** `admin`, `x_hashi_service_gr.terraform_user`

Once the integration is installed, you can proceed to the guided setup form, where you will enter your Terraform credentials. This step establishes a secure connection between HCP Terraform and your ServiceNow instance.

### Create and scope Terraform API token

In order for ServiceNow to connect to HCP Terraform, you must give it an HCP Terraform API token. The permissions of this token determine what resources the Service Graph Connector will import into the CMDB. While you could use a user API token, it could import resources from multiple organizations. By providing a team API token, you can scope permissions to only import resources from specified workspaces within a single organization.

To create a team API token:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization where you want to create a team token.
1. Choose **Settings** from the sidebar, then **Teams**.
1. In the **Team API Token** section, click **Create a team token**.

Save this token in a safe place since HCP Terraform only displays it once. You will use it to configure ServiceNow in the next section.

### Configure Service Graph Connector for Terraform API token

In the top navigation of your ServiceNow instance's control panel, click **All**, search for **Service Graph Connector for Terraform**, and click **SG-Setup**. Next, click **Get Started**.

Next, in the **Configure the Terraform connection** section, click **Get Started**. In the **Configure Terraform authentication credentials** section, click **Configure**.

If you want to route traffic between HCP Terraform and the ServiceNow instance through a MID server acting as a proxy, change the **Applies to** dropdown to "Specific MID servers" and select your previously configured MID server name. If you don't use MID servers, leave the default value.

Set the **API Key** to the HCP Terraform team API token that you created in the previous section and click **Update**.

In the **Configure Terraform authentication credentials** section, click **Mark as Complete**.
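If you prefer to script the team token step rather than use the UI, the HCP Terraform API exposes a team token endpoint. The sketch below only builds the request rather than sending it; the helper name is ours, and the endpoint path reflects the team token API as we understand it, so confirm it against the current API documentation:

```python
# Sketch: build (but do not send) the request that generates a team
# API token. The endpoint path follows the HCP Terraform team token
# API as we understand it; verify against the current API docs.

def build_team_token_request(hostname: str, team_id: str, user_token: str):
    url = f"https://{hostname}/api/v2/teams/{team_id}/authentication-token"
    headers = {
        # A token with permission to manage the team's API token.
        "Authorization": f"Bearer {user_token}",
        # The HCP Terraform API uses the JSON:API content type.
        "Content-Type": "application/vnd.api+json",
    }
    return ("POST", url, headers)

method, url, headers = build_team_token_request(
    "app.terraform.io", "team-12345abcde", "example-user-token")
print(method, url)
```

The `team-12345abcde` and `example-user-token` values are placeholders; the response would contain the token string to paste into the **API Key** field above.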
### Configure Terraform Webhook Notification token

To improve security, HCP Terraform includes an HMAC signature on all "generic" webhook notifications using a user-provided **token**. This token is an arbitrary secret string that HCP Terraform uses to sign each webhook notification. ServiceNow uses the same token to verify the request's authenticity. Refer to [Notification Authenticity](/terraform/cloud-docs/api-docs/notification-configurations#notification-authenticity) for more information.

Create a token and save it in a safe place. This secret token can be any value but should be treated as sensitive.

In the **Configure Terraform Webhook token** section, click **Configure**. In the **Token** field, enter the secret token that will be shared between HCP Terraform and your ServiceNow instance, and click **Update**.

In the **Configure Terraform Webhook token** section, click **Mark as Complete**.

### Configure Terraform connection

In the **Configure Terraform connection** section, click **Configure**.

If you are using Terraform Enterprise, set the **Connection URL** to the URL of your Terraform Enterprise instance. If you are using HCP Terraform, leave the **Connection URL** as the default value of `https://app.terraform.io`.

If you are using Terraform Enterprise, ServiceNow requires a Management, Instrumentation, and Discovery (MID) server to communicate with the Terraform Enterprise API. If you are using HCP Terraform, a MID server is optional. To use a MID server:

1. Enable the **Use MID server** option.
1. Choose **Specific MID server** from the **MID Selection** dropdown.
1. Select your previously configured and validated MID server.

Click **Update** to save these settings. In the **Configure Terraform connection** section, click **Mark as Complete**.

## Import Resources

Refer to the documentation explaining the difference between the [two modes of import](/terraform/cloud-docs/integrations/service-now/service-graph#import-methods) offered by the Service Graph Connector for Terraform. Both options may be enabled, or you may choose to enable only the webhook or scheduled import.

### Configure scheduled import

In the **Set up scheduled import job** section of the setup form, proceed to **Configure the scheduled jobs** and click **Configure**.

You can use the **Execute Now** option to run a single import job, which is useful for testing. The import set will be displayed in the table below the scheduled import form after refreshing the page. Once the import is successfully triggered, click on the **Import Set** field of the record to view the logs associated with the import run, as well as its status.

Activate the job by checking the **Activate** box. Set the **Repeat Interval** and click **Update**. Note that the import processing time depends on the number of organizations and workspaces in your HCP Terraform environment. Setting the import job to run frequently is not recommended for big environments.

You can also access the scheduler interface by searching for **Service Graph Connector for Terraform** in the top navigation menu and selecting **SG-Import Schedule**.

### Configure Terraform Webhook

In the top navigation, click **All**, search for **Scheduled Imports**, and click **Scheduled Imports**. Select the **SG-Terraform Scheduled Process State** record, then click **To edit the record click here**. Click the **Active** checkbox to enable it. Leave the default **Repeat Interval** value of 5 seconds. Click **Update**.

Next, create the webhook in HCP Terraform. Select a workspace and click **Settings > Notifications**. Click **Create a Notification**. Keep the **Destination** as the default option of **Webhook**. Choose a descriptive **Name**. Set the **Webhook URL** to `https://<your-instance-hostname>/api/x_hashi_service_gr/sg_terraform_webhook`, replacing `<your-instance-hostname>` with the hostname of your ServiceNow instance.

In the **Token** field, enter the same string you provided in the **Terraform Webhook token** section of the Service Graph guided setup form. Under **Health Events**, choose **No events**. Under **Run Events**, choose **Only certain events** and enable notifications only on **Completed** runs. Click **Create Notification**.

Trigger a run in your workspace. Once the run is successfully completed, a webhook notification request will be sent to your ServiceNow instance.

### Monitor the import job

By following these steps, you can track the status of import jobs in ServiceNow and verify the completion of the import process before accessing the imported resources in the CMDB.

For scheduled imports, navigate back to the **SG-Import Schedule** interface. For webhook imports, go to the **SG-Terraform Scheduled Process State** interface. Under the form, you will find a table containing all registered import sets. Locate and select the relevant import set record. Click on the **Import Set** field to open it and view its details. The **Outbound Http Requests** tab lists all requests made by your ServiceNow instance to HCP Terraform in order to retrieve the latest Terraform state.

Monitor the state of the import job. Wait for it to change to **Complete**, indicated by a green mark. Once the import job is complete, you can access the imported resources in the CMDB.

You can also access all import sets, regardless of the import type, by navigating to **All** and selecting **Import Sets** under the **Advanced** category.

### View resources in ServiceNow CMDB

In the top navigation of ServiceNow, click **All**, search for **CMDB Workspace**, and click **CMDB Workspace**. Perform a search by entering a Configuration Item (CI) name in the **Search** field (for example, **Virtual Machine Instance**). CI names supported by the application are listed on the [resource mapping page](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage).
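The webhook notification configured earlier through the UI can also be created with the HCP Terraform notification configurations API. Below is a hedged sketch of the JSON:API request body only; the attribute keys and the `run:completed` trigger name follow that API as we understand it, and the URL argument is a placeholder, so verify both against the current API documentation:

```python
# Sketch: request body for creating a "generic" webhook notification
# that fires only on completed runs, mirroring the UI steps above.
# Attribute names follow the notification-configurations API as we
# understand it; verify against the current docs before use.

def webhook_notification_payload(name: str, url: str, hmac_token: str) -> dict:
    return {
        "data": {
            "type": "notification-configurations",
            "attributes": {
                "destination-type": "generic",  # plain webhook destination
                "enabled": True,
                "name": name,
                "url": url,                     # the ServiceNow webhook endpoint
                "token": hmac_token,            # shared secret for the HMAC signature
                "triggers": ["run:completed"],  # notify only on completed runs
            },
        }
    }

payload = webhook_notification_payload(
    "servicenow-sgc",
    "https://example.service-now.com/api/x_hashi_service_gr/sg_terraform_webhook",
    "shared-webhook-token")
```

POSTing a body like this to the workspace's notification configurations endpoint would have the same effect as clicking through **Settings > Notifications**.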
# ServiceNow Service Graph Connector for Terraform overview

-> **Integration version:** v1.3.0

Use the Service Graph Connector for Terraform to securely import HCP Terraform resources into your ServiceNow instance. The ServiceNow Service Graph Connector for Terraform is a certified scoped application available in the [ServiceNow Store](https://store.servicenow.com/sn_appstore_store.do#!/store/application/0b0600891b52c150c216ebd56e4bcb32).

The integration is based on the [Service Graph Connector](https://www.servicenow.com/products/service-graph-connectors.html) technology, which provides a framework for discovering and mapping relationships between the organization's infrastructure and the ServiceNow Configuration Items (CIs), and then automatically updating the [ServiceNow CMDB (Configuration Management Database)](https://www.servicenow.com/products/servicenow-platform/configuration-management-database.html) with this information. This enables platform teams to gain a comprehensive view of the resources they support. The CMDB is a central repository within the ServiceNow platform that provides a single source of truth for your infrastructure and offers configurable dashboards for monitoring and reporting.

## Key benefits

- **Enhanced visibility**: The Service Graph Connector for Terraform updates the CMDB dashboards with resources deployed in HCP Terraform.
- **Improved efficiency**: By connecting Terraform to the ServiceNow CMDB, platform teams can manage and search Terraform-provisioned resources in the CMDB alongside the rest of the company's infrastructure.
- **Consistent management**: Terraform state file changes get automatically and securely updated in the ServiceNow CMDB, capturing status changes for all technical resources in a timely manner.
- **Extensibility**: ServiceNow admins can customize mappings for additional resource types, potentially working with HashiCorp's entire Terraform ecosystem made up of thousands of providers.

## Technical design

The diagram below shows how the Service Graph Connector for Terraform connects HCP Terraform to your ServiceNow instance.

The Service Graph Connector for Terraform integrates with HCP Terraform to fetch up-to-date information about your deployments. It leverages the Terraform state as the primary data source. The application doesn't make any requests to your cloud provider or require you to share any cloud credentials.

## Import methods

The integration offers two methods of importing your Terraform resources into the CMDB. You can configure the application to periodically pull all your resources in one batch. Alternatively, you can set up webhooks in your Terraform workspaces, which notify your ServiceNow instance about new deployments.

### Scheduled polling

The Service Graph Connector for Terraform can be scheduled to periodically poll HCP Terraform. Depending on the size of your infrastructure and how frequently the state of your resources needs to be refreshed in the CMDB, the polling schedule can be set anywhere from once a week to every second. This option is not recommended for big environments with thousands of Terraform workspaces, as the import job will take several hours to complete.

The scheduled job makes a request to HCP Terraform to obtain all organizations that the HCP Terraform API token provided to the application has access to. It will attempt to import all relevant resources from all workspaces within each of those organizations. The processing time depends on the number of organizations and workspaces in HCP Terraform, so configuring the import job to run frequently is not recommended for big environments.

To access the scheduler, search for **Service Graph Connector for Terraform** in the top navigation menu and select **SG-Import Schedule**. You can change the polling settings and view all previous import sets pulled into your ServiceNow instance using this method.

### HCP Terraform Webhook Notifications

You can configure [webhook notifications](/terraform/cloud-docs/workspaces/settings/notifications) for all relevant workspaces in your HCP Terraform organization. Webhooks offer an event-based approach to importing your resources. The import is triggered as soon as a Terraform run is successfully completed in HCP Terraform. Webhook POST requests are sent to an API endpoint exposed by the Service Graph Connector for Terraform in your ServiceNow instance.
Each webhook request includes an HMAC token, and the endpoint validates the signature using the secret you provide. Learn more about [HCP Terraform notification authenticity](/terraform/cloud-docs/workspaces/settings/notifications#notification-authenticity).

Internally, the application uses a scheduled job as a helper to keep track of incoming webhook requests. To activate, configure, and view the history of all webhook imports, navigate to **Scheduled Imports** and select **SG-Terraform Scheduled Process State**. By default, the job is set to run every minute.

-> **Tip:** You can enable both import options, or configure only webhooks or only the scheduled import. The [setup page](/terraform/cloud-docs/integrations/service-now/service-graph/service-graph-setup) provides configuration details for both import modes.

## ETL (Extract, Transform, Load)

After the application successfully imports the resources, they are temporarily stored in a staging database table. The import set records are then transferred to the ETL (Extract, Transform, Load) pipeline. Search for **IntegrationHub ETL** in the top navigation menu to view and edit the default ETL rules of the Service Graph Connector for Terraform. The application's ETL Transform Map is called **SG-Terraform**.

To deactivate resources that you do not want imported into the CMDB, navigate to the **Select CMDB Classes to Map Source Data** section of the application's ETL record and toggle the switch on the resource mapping record to deactivate it.
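Returning to the webhook flow described above: the connector's endpoint verifies each request's HMAC signature against the configured secret. HCP Terraform documents HMAC-SHA512 for notification signatures; the following sketch shows the general shape of such a check (the function name and sample values are illustrative, not the connector's actual code):

```python
import hashlib
import hmac

def verify_notification(body: bytes, signature_hex: str, secret: str) -> bool:
    """Recompute the payload's HMAC-SHA512 and compare it in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha512).hexdigest()
    # compare_digest avoids leaking information via comparison timing.
    return hmac.compare_digest(expected, signature_hex)
```

Requests whose signature does not match the shared secret should be discarded rather than processed.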
-> **Tip:** Run an import before you open the ETL map, because the interface requires at least one stored import set in order to display the rules.

## Supported resources

The Service Graph Connector for Terraform supports selected resources from the following cloud providers:

- AWS
- Microsoft Azure
- Google Cloud
- VMware vSphere

The [resource mapping](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage) documentation contains tables detailing the mapping of objects and attributes between HCP Terraform and ServiceNow CMDB.

## Destroyed resources

After a destroy operation completes in HCP Terraform and the application's import job finishes in your ServiceNow instance, the **Operational Status** field of every CMDB resource removed from the Terraform state during the deletion is updated to **Non-Operational**.

## Get started

Refer to the [setup page](/terraform/cloud-docs/integrations/service-now/service-graph/service-graph-setup) for information on how to configure the integration in your ServiceNow instance.
# Customize the ServiceNow Service Graph Connector for Terraform

-> **ServiceNow roles:** `admin`
-> **ServiceNow plugin requirement:** `IntegrationHub ETL`

You can update and customize the default ETL mapping rules offered by the Service Graph Connector for Terraform. To ensure that your custom rules remain intact during future updates, clone the existing ETL record and maintain it separately from the default one.

This documentation guides you through the process of mapping a resource, using an AWS virtual private cloud (VPC) as an example. Although this resource is already covered by the application, the principles discussed apply to any new resource mapping. Make all customizations from the application's scope: **Service Graph Connector for Terraform**.

## Clone the ETL map

Navigate to **IntegrationHub ETL** in the top menu. Check the **SG-Terraform** record and click **Duplicate**. Refer to the [ServiceNow documentation](https://www.servicenow.com/docs/csh?topicname=duplicate-cmdb-transform-map.html&version=latest) to create a duplicate of an existing ETL transform map.

## Build a resource in HCP Terraform

Create a new workspace in your HCP Terraform organization and create the Terraform resource that you would like to map. It helps to have a Terraform state record of the resource to ensure accurate mapping. [Configure a webhook](/terraform/cloud-docs/integrations/service-now/service-graph/service-graph-setup#configure-terraform-webhook) and initiate a Terraform run.

## Download the Terraform state

Once the run completes successfully, open your ServiceNow instance, click **All**, and navigate to **Scheduled Imports**. Open the **SG-Terraform Scheduled Process State** record and search for the import set corresponding to the latest webhook request. Click the **Import Set** field to open the import set. Wait for the import set to be successfully processed.
Since there are no existing ETL rules configured for the new resource, it is ignored during the ETL process.

Open the **Outbound Http Requests** tab to list the requests that your ServiceNow instance sent to HCP Terraform to get the latest state of the workspace.

Open the record that starts with "http://archivist.terraform.io" by clicking its timestamp. Copy the content of the URL field and open it in your browser to download the Terraform state file. Locate the resource in the state object. This JSON record will serve as the source for the future mapping.

## Identify the CI target

Pick a suitable Configuration Item (CI) target for your resource. For example, the Service Graph Connector for Terraform maps the AWS virtual private cloud (VPC) resource to Cloud Network (`cmdb_ci_network`). Refer to the [ServiceNow CMDB documentation](https://www.servicenow.com/docs/csh?topicname=cmdb-tables-details.html&version=latest) for more details on available CI tables.

## Consult the CI Class Manager

After selecting an appropriate CI target, consult the CI Class Manager for guidance on dependent relationships. Many CMDB resources rely on other CI tables, and if a related class is not properly mapped, the ETL job generates errors or warnings and fails to import your resource into the CMDB.

In the top navigation, click **All**, search for **CI Class Manager**, and click **Open Hierarchy**. Search for your target CI Class and check the **Dependent Relationships** tab to learn more about the dependent mappings the resource requires. For example, according to the **CI Class Manager**, **Cloud Network** should be hosted on **Logical Datacenter** and **Cloud Service Account**.

## Set the mapping rules

Open **IntegrationHub ETL** from the top navigation menu and select your cloned ETL map record prepared for customization. Refer to the [ServiceNow documentation](https://docs.servicenow.com/en-US/bundle/utah-servicenow-platform/page/product/configuration-management/concept/create-etl-transform-map.html) for instructions to create an ETL transform map.

Click the first **Specify Basic Details** section of the ETL Transform Map Assistant. Select the import set number containing your resource from the **Sample Import Set** dropdown and click **Mark as Complete**.

Open the **Preview and Prepare Data** section and review the imported rows. Click **Mark as Complete**.

The third section provides the interface for mapping resource attributes. Click **Select CMDB Classes to Map Source Data**, then click the **Add Conditional Class** button at the top. Set the rules that identify your resource in the import set. Use the `type` field value from the Terraform state object to identify your resource (on the ServiceNow side, field names are prefixed with `u_`). Set the target CMDB CI Class name and click **Save**.

To modify the mapping for your new Conditional Class record, select **Edit Mapping**. On the right side of the interface, drag the relevant data pills and drop them into the corresponding CMDB fields on the left side. Refer to the Terraform state record to verify the presence of attributes. For uniqueness, the **Source Native Key** value is typically mapped to the `arn` field when dealing with AWS resources. All resources mapped in the Service Graph Connector for Terraform have the **Operational status** and **Name** fields populated.

Once the mapping is completed, click the left arrow at the top to return to the list of Conditional Classes. Map two more conditional classes in the same manner, according to the rules set in the CI Class Manager: **Logical Datacenter** (**AWS Datacenter** in the case of an AWS VPC) and **Cloud Service Account**.
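The conditional-class rules above key off the resource `type` recorded in the Terraform state, which ServiceNow surfaces as a `u_`-prefixed import-set field (`u_type`). As a small sketch of locating a resource in a downloaded state file, assuming the usual v4 state layout (`resources[].instances[].attributes`); the sample state data below is invented for illustration:

```python
def find_resources(state: dict, rtype: str) -> list:
    """Collect attribute sets for every resource of a given type in a state file."""
    return [
        inst.get("attributes", {})
        for res in state.get("resources", [])
        if res.get("type") == rtype
        for inst in res.get("instances", [])
    ]

def staging_field(terraform_field: str) -> str:
    """ServiceNow import-set columns are prefixed with `u_`, e.g. `type` -> `u_type`."""
    return f"u_{terraform_field}"

# Trimmed, invented example in the shape of a v4 Terraform state file.
state = {
    "version": 4,
    "resources": [
        {
            "type": "aws_vpc",
            "name": "main",
            "instances": [
                {"attributes": {
                    "arn": "arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0abc123",
                    "cidr_block": "10.0.0.0/16",
                }}
            ],
        }
    ],
}
vpcs = find_resources(state, "aws_vpc")
```

Inspecting the state this way helps confirm which attributes (such as `arn`) are actually present before you map them to CMDB fields.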
Since the AWS cloud provider is already covered by the application, these classes are already present. Click **Edit Class** to include your newly mapped resource in the listed conditional rules. Add another **OR** condition to each of them and click **Save**.

Click **Mark as Complete** to finalize the **Select CMDB Classes to Map Source Data** section.

## Set the required relationships

Click **Add Relationships** to continue to the next section. Click the **Add Conditional Relationship** button at the top of the page. The following configuration tells the ETL that when a record with the `aws_vpc` type is found in the import set, it should be hosted on **AWS Datacenter 1**. Click **Save**.

A similar dependent relationship needs to be established from **AWS Datacenter** to **Cloud Service Account**. Since the AWS cloud provider is already covered by the application, the relationship record is already present. Click **Edit Relationship**, add another **OR** condition containing your new resource to the list, and click **Save**.

Click **Mark as Complete** to finalize the **Add Relationships** section.

## Run a test

There are two ways to test the new resource mapping. You can use the **Test and Rollback Integration Results** interface of the ETL Transform Map Assistant. Alternatively, you can initiate a new run in your HCP Terraform workspace that includes the deployment of the resource.
# ServiceNow Service Graph Connector VMware vSphere resource coverage

This page explains the rules for associating VMware vSphere resources, created via Terraform, with the classes in the ServiceNow CMDB.

## Mapping of Terraform resources to CMDB CI Classes

| vSphere resource          | Terraform resource name     | ServiceNow CMDB CI Class        | ServiceNow CMDB Category Name   |
|---------------------------|-----------------------------|---------------------------------|---------------------------------|
| vCenter server            | N/A                         | `cmdb_ci_cloud_service_account` | Cloud Service Account           |
| vSphere Datacenter        | `vsphere_datacenter`        | `cmdb_ci_vcenter_datacenter`    | VMware vCenter Datacenter       |
| vSphere Virtual Machine   | `vsphere_virtual_machine`   | `cmdb_ci_vmware_instance`       | VMware Virtual Machine Instance |
| vSphere Datastore Cluster | `vsphere_datastore_cluster` | `cmdb_ci_vcenter_datastore`     | VMware vCenter Datastore        |
| vSphere Network           | `vsphere_network`           | `cmdb_ci_vcenter_network`       | VMware vCenter Network          |
| Tags                      | N/A                         | `cmdb_key_value`                | Key Value                       |

## Resource relationships

| Child CI Class | Relationship type | Parent CI Class |
|---|---|---|
| VMware vCenter Datacenter 1 (`cmdb_ci_vcenter_datacenter`) | Hosted On::Hosts | Cloud Service Account 5 (`cmdb_ci_cloud_service_account`) |
| VMware Virtual Machine Instance 1 (`cmdb_ci_vmware_instance`) | Hosted On::Hosts | VMware vCenter Datacenter 1 (`cmdb_ci_vcenter_datacenter`) |
| VMware Virtual Machine Instance 1 (`cmdb_ci_vmware_instance`) | Reference | Key Value 27 (`cmdb_key_value`) |
| VMware vCenter Network 1 (`cmdb_ci_vcenter_network`) | Hosted On::Hosts | VMware vCenter Datacenter 1 (`cmdb_ci_vcenter_datacenter`) |
| VMware vCenter Network 1 (`cmdb_ci_vcenter_network`) | Reference | Key Value 28 (`cmdb_key_value`) |
| VMware vCenter Datastore 1 (`cmdb_ci_vcenter_datastore`) | Hosted On::Hosts | VMware vCenter Datacenter 1 (`cmdb_ci_vcenter_datacenter`) |
| VMware vCenter Datastore 1 (`cmdb_ci_vcenter_datastore`) | Reference | Key Value 29 (`cmdb_key_value`) |

## Field attributes mapping

### Cloud Service Account (`cmdb_ci_cloud_service_account`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | Defaults to "VMware_vCenter" |
| Account Id | Defaults to "VMware_vCenter" |
| Datacenter Type | Defaults to "VMware_vCenter" |
| Object ID | Defaults to "VMware_vCenter" |
| Name | Defaults to "VMware_vCenter" |
| Operational Status | Defaults to "1" ("Operational") |

### VMware vCenter Datacenter (`cmdb_ci_vcenter_datacenter`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `datacenter_id` |
| Object Id | `datacenter_id` |
| Region | `datacenter_id` |
| Name | `datacenter_id` |
| Operational Status | Defaults to "1" ("Operational") |

### VMware Virtual Machine Instance (`cmdb_ci_vmware_instance`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### VMware vCenter Network (`cmdb_ci_vcenter_network`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### VMware vCenter Datastore (`cmdb_ci_vcenter_datastore`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |
# ServiceNow Service Graph Connector Google Cloud resource coverage

This page provides details on how Google Cloud resources, set up using Terraform, correspond to the classes within the ServiceNow CMDB.

## Mapping of Terraform resources to CMDB CI Classes

| Google resource | Terraform resource name | ServiceNow CMDB CI Class | ServiceNow CMDB Category Name |
|---|---|---|---|
| Project ID | N/A | `cmdb_ci_cloud_service_account` | Cloud Service Account |
| Region (location) | N/A | `cmdb_ci_google_datacenter` | Google Datacenter |
| Virtual Machine Instance | `google_compute_instance` | `cmdb_ci_vm_instance` | Virtual Machine Instance |
| Kubernetes Cluster | `google_container_cluster` | `cmdb_ci_kubernetes_cluster` | Kubernetes Cluster |
| Google Storage | `google_storage_bucket` | `cmdb_ci_cloud_storage_account` | Cloud Storage Account |
| Google BigQuery | `google_bigquery_table` | `cmdb_ci_cloud_database` | Cloud DataBase |
| Google SQL | `google_sql_database` | `cmdb_ci_cloud_database` | Cloud DataBase |
| Google Compute Firewall | `google_compute_firewall` | `cmdb_ci_compute_security_group` | Compute Security Group |
| Cloud Function | `google_cloudfunctions_function` or `google_cloudfunctions2_function` | `cmdb_ci_cloud_function` | Cloud Function |
| Load Balancer | `google_compute_forwarding_rule` | `cmdb_ci_cloud_load_balancer` | Cloud Load Balancer |
| VPC | `google_compute_network` | `cmdb_ci_network` | Cloud Network |
| Tags | N/A | `cmdb_key_value` | Key Value |

## Resource relationships

| Child CI Class | Relationship type | Parent CI Class |
|---|---|---|
| Google Datacenter 1 (`cmdb_ci_google_datacenter`) | Hosted On::Hosts | Cloud Service Account 4 (`cmdb_ci_cloud_service_account`) |
| Google Datacenter 2 (`cmdb_ci_google_datacenter`) | Hosted On::Hosts | Cloud Service Account 4 (`cmdb_ci_cloud_service_account`) |
| Virtual Machine Instance 4 (`cmdb_ci_vm_instance`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Virtual Machine Instance 4 (`cmdb_ci_vm_instance`) | Reference | Key Value 13 (`cmdb_key_value`) |
| Cloud Network 3 (`cmdb_ci_network`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Cloud Network 3 (`cmdb_ci_network`) | Reference | Key Value 18 (`cmdb_key_value`) |
| Compute Security Group 3 (`cmdb_ci_compute_security_group`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Compute Security Group 3 (`cmdb_ci_compute_security_group`) | Reference | Key Value 21 (`cmdb_key_value`) |
| Kubernetes Cluster 3 (`cmdb_ci_kubernetes_cluster`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Kubernetes Cluster 3 (`cmdb_ci_kubernetes_cluster`) | Reference | Key Value 22 (`cmdb_key_value`) |
| Cloud DataBase 3 (`cmdb_ci_cloud_database`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Cloud DataBase 2 (`cmdb_ci_cloud_database`) | Reference | Key Value 24 (`cmdb_key_value`) |
| Cloud Function 3 (`cmdb_ci_cloud_function`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Cloud Function 3 (`cmdb_ci_cloud_function`) | Reference | Key Value 25 (`cmdb_key_value`) |
| Cloud Load Balancer 2 (`cmdb_ci_cloud_load_balancer`) | Hosted On::Hosts | Google Datacenter 1 (`cmdb_ci_google_datacenter`) |
| Cloud Load Balancer 2 (`cmdb_ci_cloud_load_balancer`) | Reference | Key Value 26 (`cmdb_key_value`) |
| Cloud Storage Account 2 (`cmdb_ci_cloud_storage_account`) | Hosted On::Hosts | Google Datacenter 2 (`cmdb_ci_google_datacenter`) |
| Cloud Storage Account 2 (`cmdb_ci_cloud_storage_account`) | Reference | Key Value 23 (`cmdb_key_value`) |

## Field attributes mapping

### Cloud Service Account (`cmdb_ci_cloud_service_account`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `project` |
| Account Id | `project` |
| Datacenter Type | Defaults to `google` |
| Object ID | `project` |
| Name | `project` |
| Operational Status | Defaults to "1" ("Operational") |

### Google Datacenter (`cmdb_ci_google_datacenter`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | Concatenation of `project` and region extracted from `id` |
| Object Id | Region extracted from `id` |
| Region | Region extracted from `id` |
| Name | Region extracted from `id` |
| Operational Status | Defaults to "1" ("Operational") |
### Virtual Machine Instance (`cmdb_ci_vm_instance`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Category | `machine_type` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Network (`cmdb_ci_network`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Compute Security Group (`cmdb_ci_compute_security_group`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `arn` |
| IP Address | `endpoint` |
| Port | Defaults to "6443" |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Object Storage (`cmdb_ci_cloud_object_storage`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `arn` |
| Object Id | `id` |
| Cloud Provider | Resource cloud provider extracted from `arn` |
| Name | `bucket` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Storage Account (`cmdb_ci_cloud_storage_account`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Name | `location` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud DataBase (`cmdb_ci_cloud_database`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | Name extracted from `id` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Function (`cmdb_ci_cloud_function`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Load Balancer (`cmdb_ci_cloud_load_balancer`)

| CMDB field | Terraform state field |
|---|---|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |
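Several Google Datacenter fields above are described as "Region extracted from `id`". The exact extraction logic is internal to the connector, but for self-link-style ids of the form `projects/<project>/regions/<region>/...` (an assumption for illustration; not every Google resource id has this shape), the idea can be sketched as:

```python
import re

def region_from_id(resource_id: str):
    """Pull the region segment from a self-link-style resource id, if present."""
    match = re.search(r"/regions/([^/]+)", resource_id)
    return match.group(1) if match else None
```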
# ServiceNow Service Graph Connector AWS resource coverage

This page details the mapping rules for importing AWS resources, provisioned with Terraform, into ServiceNow CMDB.

## Mapping of Terraform resources to CMDB CI Classes

| AWS resource | Terraform resource name | ServiceNow CMDB CI Class | ServiceNow CMDB Category Name |
|---|---|---|---|
| AWS account | N/A | `cmdb_ci_cloud_service_account` | Cloud Service Account |
| AWS region | N/A | `cmdb_ci_aws_datacenter` | AWS Datacenter |
| EC2 Instance | `aws_instance` | `cmdb_ci_vm_instance` | Virtual Machine Instance |
| S3 Bucket | `aws_s3_bucket` | `cmdb_ci_cloud_object_storage` | Cloud Object Storage |
| ECS Cluster | `aws_ecs_cluster` | `cmdb_ci_cloud_ecs_cluster` | AWS Cloud ECS Cluster |
| EKS Cluster | `aws_eks_cluster` | `cmdb_ci_kubernetes_cluster` | Kubernetes Cluster |
| VPC | `aws_vpc` | `cmdb_ci_network` | Cloud Network |
| Database Instance (*non-Aurora databases: e.g., MySQL, PostgreSQL, SQL Server, etc.*) | `aws_db_instance` | `cmdb_ci_cloud_database` | Cloud DataBase |
| RDS Aurora Cluster | `aws_rds_cluster` | `cmdb_ci_cloud_db_cluster` | Cloud DataBase Cluster |
| RDS Aurora Instance | `aws_rds_cluster_instance` | `cmdb_ci_cloud_database` | Cloud DataBase |
| DynamoDB Global Table | `aws_dynamodb_global_table` | `cmdb_ci_dynamodb_global_table` | DynamoDB Global Table |
| DynamoDB Table | `aws_dynamodb_table` | `cmdb_ci_dynamodb_table` | DynamoDB Table |
| Security Group | `aws_security_group` | `cmdb_ci_compute_security_group` | Compute Security Group |
| Lambda | `aws_lambda_function` | `cmdb_ci_cloud_function` | Cloud Function |
| Load Balancer | `aws_lb` | `cmdb_ci_cloud_load_balancer` | Cloud Load Balancer |
| Tags | N/A | `cmdb_key_value` | Key Value |

## Resource relationships

| Child CI Class | Relationship type | Parent CI Class |
|---|---|---|
| AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) | Hosted On::Hosts | Cloud Service Account 1 (`cmdb_ci_cloud_service_account`) |
| AWS Datacenter 2 (`cmdb_ci_aws_datacenter`) | Hosted On::Hosts | Cloud Service Account 6 (`cmdb_ci_cloud_service_account`) |
| Virtual Machine Instance 1 (`cmdb_ci_vm_instance`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Virtual Machine Instance 1 (`cmdb_ci_vm_instance`) | Reference | Key Value 1 (`cmdb_key_value`) |
| AWS Cloud ECS Cluster 1 (`cmdb_ci_cloud_ecs_cluster`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| AWS Cloud ECS Cluster 1 (`cmdb_ci_cloud_ecs_cluster`) | Reference | Key Value 2 (`cmdb_key_value`) |
| Cloud Object Storage 1 (`cmdb_ci_cloud_object_storage`) | Hosted On::Hosts | AWS Datacenter 2 (`cmdb_ci_aws_datacenter`) |
| Cloud Object Storage 1 (`cmdb_ci_cloud_object_storage`) | Reference | Key Value 3 (`cmdb_key_value`) |
| Kubernetes Cluster 1 (`cmdb_ci_kubernetes_cluster`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Kubernetes Cluster 1 (`cmdb_ci_kubernetes_cluster`) | Reference | Key Value 4 (`cmdb_key_value`) |
| Cloud Network 1 (`cmdb_ci_network`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Cloud Network 1 (`cmdb_ci_network`) | Reference | Key Value 5 (`cmdb_key_value`) |
| Cloud DataBase 1 (`cmdb_ci_cloud_database`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Cloud DataBase 1 (`cmdb_ci_cloud_database`) | Reference | Key Value 6 (`cmdb_key_value`) |
| Cloud DataBase Cluster 1 (`cmdb_ci_cloud_db_cluster`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Cloud DataBase Cluster 1 (`cmdb_ci_cloud_db_cluster`) | Reference | Key Value 7 (`cmdb_key_value`) |
| DynamoDB Global Table 1 (`cmdb_ci_dynamodb_global_table`) | Hosted On::Hosts | Cloud Service Account 1 (`cmdb_ci_cloud_service_account`) |
| DynamoDB Table 1 (`cmdb_ci_dynamodb_table`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| DynamoDB Table 1 (`cmdb_ci_dynamodb_table`) | Reference | Key Value 8 (`cmdb_key_value`) |
| Compute Security Group 1 (`cmdb_ci_compute_security_group`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Compute Security Group 1 (`cmdb_ci_compute_security_group`) | Reference | Key Value 10 (`cmdb_key_value`) |
| Cloud Function 1 (`cmdb_ci_cloud_function`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
| Cloud Function 1 (`cmdb_ci_cloud_function`) | Reference | Key Value 11 (`cmdb_key_value`) |
| Cloud Load Balancer 1 (`cmdb_ci_cloud_load_balancer`) | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`) |
## Field attributes mapping

### Cloud Service Account (`cmdb_ci_cloud_service_account`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | Resource account number extracted from `arn` |
| Account Id | Resource account number extracted from `arn` |
| Datacenter Type | Resource cloud provider extracted from `arn` |
| Object ID | Resource id extracted from `arn` |
| Name | Resource name extracted from `arn` |
| Operational Status | Defaults to "1" ("Operational") |

### AWS Datacenter (`cmdb_ci_aws_datacenter`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | Concatenation of region and account number extracted from `arn` |
| Object Id | Region extracted from `arn` |
| Region | Region extracted from `arn` |
| Name | Region extracted from `arn` |
| Operational Status | Defaults to "1" ("Operational") |

### Virtual Machine Instance (`cmdb_ci_vm_instance`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Placement Group ID | `placement_group` |
| IP Address | `public_ip` |
| Status | `instance_state` |
| VM Instance ID | `id` |
| Name | `id` |
| State | `state` |
| CPU | `cpu_core_count` |
| Operational Status | Defaults to "1" ("Operational") |

### AWS Cloud ECS Cluster (`cmdb_ci_cloud_ecs_cluster`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Object Storage (`cmdb_ci_cloud_object_storage`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Cloud Provider | Resource cloud provider extracted from `arn` |
| Name | `bucket` |
| Operational Status | Defaults to "1" ("Operational") |

### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| IP Address | `endpoint` |
| Port | Defaults to "6443" |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Network (`cmdb_ci_network`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud DataBase (`cmdb_ci_cloud_database`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Version | `engine_version` |
| Type | `engine` |
| TCP port(s) | `port` |
| Category | `instance_class` |
| Fully qualified domain name | `endpoint` |
| Location | Region extracted from `arn` |
| Name | `name` |
| Vendor | Resource cloud provider extracted from `arn` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud DataBase Cluster (`cmdb_ci_cloud_db_cluster`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Cluster ID | `cluster_resource_id` |
| Name | `name` |
| TCP port(s) | `port` |
| Fully qualified domain name | `endpoint` |
| Vendor | Resource cloud provider extracted from `arn` |
| Operational Status | Defaults to "1" ("Operational") |

### DynamoDB Table (`cmdb_ci_dynamodb_table`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `arn` |
| Location | Region extracted from `arn` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### DynamoDB Global Table (`cmdb_ci_dynamodb_global_table`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `arn` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Compute Security Group (`cmdb_ci_compute_security_group`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Location | Region extracted from `arn` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Function (`cmdb_ci_cloud_function`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `arn` |
| Language | `runtime` |
| Code Size | `source_code_size` |
| Location | Region extracted from `arn` |
| Name | `function_name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Load Balancer (`cmdb_ci_cloud_load_balancer`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `arn` |
| Object Id | `id` |
| Canonical Hosted Zone Name | `dns_name` |
| Canonical Hosted Zone ID | `zone_id` |
| Location | Region extracted from `arn` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |
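Many of the fields above are derived by splitting the resource's Amazon Resource Name (ARN) on `:`, whose segments follow the layout `arn:partition:service:region:account-id:resource`. A minimal Terraform sketch of that extraction, using a hypothetical ARN value in place of one read from state:

```hcl
locals {
  # Hypothetical ARN; real values come from the Terraform state file.
  example_arn = "arn:aws:rds:us-east-1:123456789012:db:my-database"

  # ARN layout: arn:partition:service:region:account-id:resource
  arn_parts      = split(":", local.example_arn)
  cloud_provider = local.arn_parts[1] # "aws"          -> Datacenter Type / Vendor
  region         = local.arn_parts[3] # "us-east-1"    -> Region / Location
  account_id     = local.arn_parts[4] # "123456789012" -> Account Id
}
```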
# ServiceNow Service Graph Connector for Terraform resource coverage overview

*Source: [index.mdx](https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/service-now/service-graph/resource-coverage/index.mdx)*

The tables provided in this section illustrate the mapping of resources from the Terraform state to the ServiceNow CMDB configuration items (CIs) by the Service Graph Connector for Terraform.

While the default ETL map provided by the application can be utilized without modification, it is also possible to customize it according to the specific requirements of your organization. Check [customizations](/terraform/cloud-docs/integrations/service-now/service-graph/customizations) for more details.

The application supports selected resources from major cloud providers. The following pages provide mapping details for each supported provider:

- [AWS](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage/aws)
- [Azure](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage/azure)
- [GCP](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage/gcp)
- [VMware vSphere](/terraform/cloud-docs/integrations/service-now/service-graph/resource-coverage/vsphere)

# Importing Tags

The Service Graph Connector for Terraform imports the Terraform tags associated with your resource into CMDB. Tags are mapped to the **Key Value** CI Class. Along with the tags assigned in your Terraform code, the integration also includes `tf_organization` and `tf_workspace` tags. These tags are used to indicate the HCP Terraform organization and workspace where the resource was provisioned.

The visibility of the **Tags** tab in CMDB varies for different configuration items. By default, not every configuration item has the **Tags** tab enabled. For instance, the **Virtual Machine Instance** class page includes the **Tags** tab, whereas the **AWS Cloud ECS Cluster** page does not. The following example illustrates how the **Tags** tab can be enabled for the **AWS Cloud ECS Cluster** CI class in CMDB.

1. Enter `cmdb_ci_cloud_ecs_cluster.list` in the search menu of your ServiceNow instance.
2. Open any record. Right-click the gray bar at the top, select **Configure**, and proceed to **Related Lists**. If you are in a different scope, click **Edit this view**.
3. Transfer **Key Value->Configuration item** from the left column to the right and click **Save**. Tags become available in CMDB for all AWS ECS cluster records.
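As an illustration, tags declared on a resource in a hypothetical configuration like the one below are imported into CMDB as **Key Value** records, with the workspace-identifying tags appended by the integration:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0abc1234" # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name        = "app-server"
    Environment = "production"
    # tf_organization and tf_workspace are added automatically by the
    # Service Graph Connector; they are not declared here.
  }
}
```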
# ServiceNow Service Graph Connector Microsoft Azure resource coverage

*Source: [azure.mdx](https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/service-now/service-graph/resource-coverage/azure.mdx)*

This page describes how Terraform-provisioned Azure resources are mapped to the classes within the ServiceNow CMDB.

## Mapping of Terraform resources to CMDB CI Classes

| Azure resource | Terraform resource name | ServiceNow CMDB CI Class | ServiceNow CMDB Category Name |
|----------------|-------------------------|--------------------------|-------------------------------|
| Azure account | N/A | `cmdb_ci_cloud_service_account` | Cloud Service Account |
| Azure region | N/A | `cmdb_ci_azure_datacenter` | Azure Datacenter |
| Resource Group | `azurerm_resource_group` | `cmdb_ci_resource_group` | Resource Group |
| Windows VM | `azurerm_windows_virtual_machine` | `cmdb_ci_vm_instance` | Virtual Machine Instance |
| Linux VM | `azurerm_linux_virtual_machine` | `cmdb_ci_vm_instance` | Virtual Machine Instance |
| AKS Cluster | `azurerm_kubernetes_cluster` | `cmdb_ci_kubernetes_cluster` | Kubernetes Cluster |
| Storage Container | `azurerm_storage_container` | `cmdb_ci_cloud_storage_account` | Cloud Storage Account |
| MariaDB Database | `azurerm_mariadb_server` | `cmdb_ci_cloud_database` | Cloud DataBase |
| MS SQL Database | `azurerm_mssql_server` | `cmdb_ci_cloud_database` | Cloud DataBase |
| MySQL Database | `azurerm_mysql_server` | `cmdb_ci_cloud_database` | Cloud DataBase |
| PostgreSQL Database | `azurerm_postgresql_server` | `cmdb_ci_cloud_database` | Cloud DataBase |
| Network security group | `azurerm_network_security_group` | `cmdb_ci_compute_security_group` | Compute Security Group |
| Linux Function App | `azurerm_linux_function_app` | `cmdb_ci_cloud_function` | Cloud Function |
| Windows Function App | `azurerm_windows_function_app` | `cmdb_ci_cloud_function` | Cloud Function |
| Virtual Network | `azurerm_virtual_network` | `cmdb_ci_network` | Cloud Network |
| Tags | N/A | `cmdb_key_value` | Key Value |

## Resource relationships

| Child CI Class | Relationship type | Parent CI Class |
|----------------|-------------------|-----------------|
| Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) | Hosted On::Hosts | Cloud Service Account 2 (`cmdb_ci_cloud_service_account`) |
| Azure Datacenter 2 (`cmdb_ci_azure_datacenter`) | Hosted On::Hosts | Cloud Service Account 3 (`cmdb_ci_cloud_service_account`) |
| Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) | Contains::Contained by | Resource Group 1 (`cmdb_ci_resource_group`) |
| Cloud Storage Account 1 (`cmdb_ci_cloud_storage_account`) | Hosted On::Hosts | Azure Datacenter 2 (`cmdb_ci_azure_datacenter`) |
| Virtual Machine Instance 2 (`cmdb_ci_vm_instance`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Virtual Machine Instance 2 (`cmdb_ci_vm_instance`) | Reference | Key Value 14 (`cmdb_key_value`) |
| Virtual Machine Instance 3 (`cmdb_ci_vm_instance`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Virtual Machine Instance 3 (`cmdb_ci_vm_instance`) | Reference | Key Value 15 (`cmdb_key_value`) |
| Kubernetes Cluster 2 (`cmdb_ci_kubernetes_cluster`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Kubernetes Cluster 2 (`cmdb_ci_kubernetes_cluster`) | Reference | Key Value 16 (`cmdb_key_value`) |
| Cloud DataBase 2 (`cmdb_ci_cloud_database`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Cloud DataBase 2 (`cmdb_ci_cloud_database`) | Reference | Key Value 9 (`cmdb_key_value`) |
| Compute Security Group 2 (`cmdb_ci_compute_security_group`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Compute Security Group 2 (`cmdb_ci_compute_security_group`) | Reference | Key Value 17 (`cmdb_key_value`) |
| Cloud Function 2 (`cmdb_ci_cloud_function`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Cloud Function 2 (`cmdb_ci_cloud_function`) | Reference | Key Value 19 (`cmdb_key_value`) |
| Cloud Network 2 (`cmdb_ci_network`) | Hosted On::Hosts | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`) |
| Cloud Network 2 (`cmdb_ci_network`) | Reference | Key Value 20 (`cmdb_key_value`) |

## Field attributes mapping

### Cloud Service Account (`cmdb_ci_cloud_service_account`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | Subscription ID extracted from `id` |
| Account Id | Subscription ID extracted from `id` |
| Datacenter Type | Defaults to `azure` |
| Object ID | Subscription ID extracted from `id` |
| Name | Subscription ID extracted from `id` |
| Operational Status | Defaults to "1" ("Operational") |
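The Subscription ID referenced above is carried inside the Azure resource `id`, which is a path of the form `/subscriptions/<subscription-id>/resourceGroups/...`. A small Terraform sketch of the extraction, using a hypothetical `id` value:

```hcl
locals {
  # Hypothetical Azure resource ID; real values come from the Terraform state.
  example_id = "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/example-rg/providers/Microsoft.Compute/virtualMachines/example-vm"

  # split on "/" yields ["", "subscriptions", "<subscription-id>", ...]
  subscription_id = split("/", local.example_id)[2]
}
```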
### Azure Datacenter (`cmdb_ci_azure_datacenter`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | Concatenation of `location` and Subscription ID |
| Object Id | `location` |
| Region | `location` |
| Name | `location` |
| Operational Status | Defaults to "1" ("Operational") |

### Virtual Machine Instance (`cmdb_ci_vm_instance`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Storage Account (`cmdb_ci_cloud_storage_account`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `resource_manager_id` |
| Object Id | `resource_manager_id` |
| Fully qualified domain name | `id` |
| Blob Service | `storage_account_name` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Resource Group (`cmdb_ci_resource_group`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Location | `location` |
| Operational Status | Defaults to "1" ("Operational") |

### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| IP Address | `fqdn` |
| Port | Defaults to "6443" |
| Name | `name` |
| Location | `location` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud DataBase (`cmdb_ci_cloud_database`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Version | `engine_version` |
| Fully qualified domain name | `fqdn` |
| Name | `name` |
| Vendor | Defaults to `azure` |
| Operational Status | Defaults to "1" ("Operational") |

### Compute Security Group (`cmdb_ci_compute_security_group`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Function (`cmdb_ci_cloud_function`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |

### Cloud Network (`cmdb_ci_network`)

| CMDB field | Terraform state field |
|------------|-----------------------|
| Source Native Key | `id` |
| Object Id | `id` |
| Name | `name` |
| Operational Status | Defaults to "1" ("Operational") |
# ServiceNow Service Catalog integration example configurations

*Source: [example-customizations.mdx](https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/service-now/service-catalog-terraform/example-customizations.mdx)*

This example use case creates a Terraform Catalog Item for requesting resources with custom variable values passed to the Terraform configuration.

## Change the scope

When you make a customization to the app, ensure you switch to the "Terraform" scope. This guarantees that all items you create are correctly assigned to that scope. To change the scope in your ServiceNow instance, click the globe icon at the top right of the screen. For detailed instructions on changing the scope, refer to the [ServiceNow documentation](https://developer.servicenow.com/dev.do#!/learn/learning-plans/xanadu/new_to_servicenow/app_store_learnv2_buildneedit_xanadu_application_scope).

## Make a copy of the existing Catalog Item

The ServiceNow Service Catalog for Terraform application provides pre-configured [Catalog Items](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/developer-reference#example-service-catalog-flows-and-actions) for immediate use. We recommend creating a copy of the most recent version of the Catalog Item to ensure you have access to the latest features and improvements. Make a copy of the most appropriate Catalog Item for your specific business requirements by following these steps:

1. Navigate to **All > Service Catalog > Catalogs > Terraform Catalog**, and review the Catalog Items based on flows, whose names use the suffix "Flow". We recommend choosing Flows over Workflows because Flows provide enhanced functionality and performance and are actively developed by ServiceNow. For more information, refer to [Catalog Items based on Flows vs. Workflows](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/developer-reference#example-service-catalog-flows-and-actions).
1. Open the Catalog Item in editing mode:
   1. Click the Catalog Item to open the request form.
   1. Click **...** in the top right corner.
   1. Select **Configure Item** from the menu.
1. Click the **Process Engine** tab in the Catalog Item configuration. Take note of the Flow name associated with the Catalog Item, because you need to create a copy of this Flow as well.
1. Start the copying process:
   1. Click the **Copy** button above the **Related Links** section.
   1. Assign a new name to the copied Catalog Item.
   1. Optionally, modify the description and short description fields. Right-click the header and select **Save**.

## Adjust the Variable Set

If a Catalog Item requires users to input variable values, you must update the variable set with those required variables. Although some default Catalog Items come with pre-defined example variables, it is common practice to remove these and replace them with your own custom variables.

1. Create a new Variable Set.
   1. On the Catalog Item's configuration page, under the **Related Links** section, click the **Variable Sets** tab.
   1. Click the **New** button to create a new variable set. Ensure that the variables in your new set match the variables required by your Terraform configuration.
   1. Select **Single-Row Variable Set** and provide a title and description.
   1. Click **Submit**. Upon submission, you will be redirected back to the Catalog Item's configuration page.
   1. Click the name of your newly created Variable Set and create your variables. You must follow the [naming convention for Terraform variables](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/developer-reference#terraform-variables-and-servicenow-variable-sets). ServiceNow offers various types of variable representation (such as strings, booleans, and dropdown menus). Refer to the [ServiceNow documentation on variables](https://docs.servicenow.com/csh?topicname=c_ServiceCatalogVariables.html&version=latest) and select the types that best suit your use case. You can also set default values for the variables in the **Default Value** tab, which ServiceNow prefills for the end users.
1. Attach the newly created Variable Set to your custom Catalog Item and remove the default Workspace Variables.
   1. Return to the **Variable Sets** tab on the Catalog Item's configuration page and click the **Edit** button.
   1. Move the "Workspace Variables" Set from the right side to the left side and click **Save**. Do not remove the "Workspace Request Create" or the "Workspace Request Update" Sets.

## Make a copy of the Flow and Action

1. Open the ServiceNow Studio by navigating to **All > Studio** and open the "Terraform" application. Once in the **Terraform** application, navigate to **Flow Designer > Flows**. Another way to access the ServiceNow Studio is to click **All**, select "Flow Designer", then select **Flows**. You can set the **Application** filter to "Terraform" to quickly find the desired Flow.
1. Open the Flow referenced in your Catalog Item. Click **...** in the top right corner of the Flow Designer interface and select **Copy flow**. Provide a name for the copied Flow and click **Copy**.
1. Customize your newly copied Flow by clicking **Edit flow**.
   1. Do not change the **Service Catalog** trigger.
   1. Update the "Get Catalog Variables" action:
      1. Keep the "Requested Item Record" in the **Submitted Request** field.
      1. Select your newly created Catalog Item from the dropdown menu for **Template Catalog Item**.
      1. Move all of your variables to the **Selected** side in the **Catalog Variables** section. Remove any previous example variables from the **Available** side.
   1. Click **Done** to finish configuring this Action.
1. Unfold the second Action in the Flow and click the arrow to open it in the Action Designer.
1. Click **...** in the top right corner and select **Copy Action**. Rename it and click **Copy**.
1. In the Inputs section, remove any previous example variables.
1. Add your custom variables by clicking the **Create Input** button. Ensure that the variable names match your Catalog Item variables and select the variable type that matches each variable. Click **Save**.
1. Open the **Script step** within the Action. Remove any example variables and add your custom variables by clicking **Create Variable** at the bottom. Enter the name of each variable and drag the corresponding data pill from the right into the **Value field**.
1. Click **Save** and then **Publish**.
1. Reopen the Flow and attach the newly created Action to the Flow after the "Get Catalog Variables" step:
   1. Remove the "Create Terraform Workspace with Vars" Action that you copied earlier and replace it with your newly created Action.
   1. Connect the new Action to the Flow by dragging and dropping the data pills from the "Get Catalog Variables" Action to the corresponding inputs of your new Action. Click **Done** to save this step.
1. Click **Save**.
1. Click **Activate** to enable the Flow and make it available for use.
1. If your flow has an **Approval Step**, open the **Ask for Approval** action and customize it by adding your approver and due date. This approver must have the **approver_user** role in order to approve requests.

## Set the Flow for your Catalog Item

1. Navigate back to the Catalog by clicking **All** and then go to **Service Catalog > Catalogs > Terraform Catalog**.
1. Locate your custom Catalog Item and click **...** at the top of the item. From the dropdown menu, select **Configure item**.
1. In the configuration settings, click the **Process Engine** tab.
1. In the **Flow** field, search for the Flow you recently created. Click the Flow, then click **Update**.

## Adjust polling schedule intervals

Customers can customize the frequency at which Terraform-related updates are retrieved by adjusting polling schedule intervals. There are three configurable polling intervals:

1. **Worker Poll Apply Run**
1. **Worker Poll Destroy Workspace**
1. **Worker Poll Run State**

To modify these intervals, navigate to **All > Flow Designer > Flows** in ServiceNow. Modify your polling frequency by changing the associated flow's **Scheduled Trigger**, allowing for more or less frequent updates in the ServiceNow ticket.

## Test the Catalog Item

The new item is now available in the Terraform Service Catalog. To make the new item accessible to your end users via the Service Portal, follow these steps:

1. Navigate to the configuration page of the item you want to make available.
1. Locate the **Catalogs** field on the configuration page and click the lock icon next to it.
1. In the search bar, type "Service Catalog" and select it from the search results. Add "Service Catalog" to the list of catalogs associated with the item. Click the lock icon again to lock the changes.
1. Click the **Update** button at the top of the page.

After completing these steps, end users will be able to access the new item through the Service Portal of your ServiceNow instance. You can access the Service Portal by navigating to **All > Service Portal Home**.
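Throughout these steps, the catalog variables in your Variable Set must match the input variables declared in the Terraform configuration behind the Catalog Item. For example, a catalog variable supplying a hypothetical `instance_type` value would correspond to a declaration like this in the configuration:

```hcl
variable "instance_type" {
  type        = string
  description = "Instance type requested through the ServiceNow Catalog Item."
  default     = "t3.micro" # prefilled for end users when a default is set
}
```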
# Set up ServiceNow Service Catalog integration for HCP Terraform -> \*\*Integration version:\*\* v2.9.0 @include 'tfc-package-callouts/servicenow-catalog.mdx' The Terraform ServiceNow Service Catalog integration enables your end-users to provision self-serve infrastructure via ServiceNow. By connecting ServiceNow to HCP Terraform, this integration lets ServiceNow users order Service Items, create workspaces, and perform Terraform runs using prepared Terraform configurations hosted in VCS repositories or as [no-code modules](/terraform/cloud-docs/workspaces/no-code-provisioning/module-design) for self-service provisioning. @include 'eu/integrations.mdx' ## Summary of the Setup Process The integration relies on Terraform ServiceNow Catalog integration software installed within your ServiceNow instance. Installing and configuring this integration requires administration in both ServiceNow and HCP Terraform. Since administrators of these services within your organization are not necessarily the same person, this documentation refers to a \*\*ServiceNow Admin\*\* and a \*\*Terraform Admin\*\*. First, the Terraform Admin configures your HCP Terraform organization with a dedicated team for the ServiceNow integration, and obtains a team API token for that team. The Terraform Admin provides the following to your ServiceNow admin: \* An Organization name \* A team API token \* The hostname of your HCP Terraform instance \* Any available no-code modules or version control repositories containing Terraform configurations \* Other required variables token, the hostname of your HCP Terraform instance, and details about no-code modules or version control repositories containing Terraform configurations and required variables to the ServiceNow Admin. Next, the ServiceNow Admin will install the Terraform ServiceNow Catalog integration to your ServiceNow instance, and configure it using the team API token and hostname. 
Finally, the ServiceNow Admin will create a Service Catalog within ServiceNow for the Terraform integration, and configure it using the version control repositories or no-code modules and variable definitions provided by the Terraform Admin.

| ServiceNow Admin | Terraform Admin |
| ---------------- | --------------- |
|  | Prepare an organization for use with the ServiceNow Catalog. |
|  | Create a team that can manage workspaces in that organization. |
|  | Create a team API token so the integration can use that team's permissions. |
|  | If using VCS repositories, retrieve the OAuth token IDs and repository identifiers that HCP Terraform uses to identify your VCS repositories. If using a no-code flow, [create a no-code ready module](/terraform/cloud-docs/workspaces/no-code-provisioning/provisioning) in your organization's private registry. Learn more in [Configure VCS Repositories or No-Code Modules](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config#configure-vcs-repositories-or-no-code-modules). |
|  | Provide the API token, OAuth token ID, repository identifiers, variable definitions, and HCP Terraform hostname to the ServiceNow Admin. |
| Install the Terraform integration application from the ServiceNow App Store. |  |
| Connect the integration application with HCP Terraform. |  |
| Add the Terraform Service Catalog to ServiceNow. |  |
| If you are using the VCS flow, configure the VCS repositories in ServiceNow. |  |
| Configure variable sets for use with the VCS repositories or no-code modules. |  |

Once these steps are complete, self-serve infrastructure will be available through the ServiceNow Catalog. HCP Terraform will provision and manage requested infrastructure and report the status back to ServiceNow.
## Prerequisites

To start using Terraform with the ServiceNow Catalog Integration, you must have:

- An administrator account on a Terraform Enterprise instance or within an HCP Terraform organization.
- An administrator account on your ServiceNow instance.
- If you are using the VCS flow, one or more [supported version control systems](/terraform/cloud-docs/vcs#supported-vcs-providers) (VCSs) with read access to repositories with Terraform configurations.
- If you are using no-code provisioning, one or more [no-code modules](/terraform/cloud-docs/workspaces/no-code-provisioning/provisioning) created in your organization's private registry. Refer to the [no-code module configuration](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config#no-code-module-configuration) for information about using no-code modules with the ServiceNow Service Catalog for Terraform.
You can use this integration on the following ServiceNow server versions:

- Xanadu
- Yokohama
- Zurich

It requires the following ServiceNow plugins as dependencies:

- Flow Designer support for the Service Catalog (`com.glideapp.servicecatalog.flow_designer`)
- ServiceNow IntegrationHub Action Step - Script (`com.glide.hub.action_step.script`)
- ServiceNow IntegrationHub Action Step - REST (`com.glide.hub.action_step.rest`)

-> **Note:** Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store.

## Configure HCP Terraform

Before installing the ServiceNow integration, the Terraform Admin needs to perform the following steps to configure and gather information from HCP Terraform.

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise.
1. Either [create an organization](/terraform/cloud-docs/users-teams-organizations/organizations#creating-organizations) or choose an existing organization where ServiceNow will create new workspaces. **Save the organization name for later.**
1. [Create a team](/terraform/cloud-docs/users-teams-organizations/teams) for that organization called "ServiceNow", and ensure that it has [permission to manage workspaces](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-workspaces). You do not need to add any users to this team.

   [permissions-citation]: #intentionally-unused---keep-for-maintainers

1.
On the "ServiceNow" team's settings page, generate a [team API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens). You can apply fine-grained permissions to team tokens at the project level by assigning teams and their associated tokens permission levels on specific projects. To learn more about the permissions necessary to provision resources, refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions). **Save the team API token for later.**
1. If you are using the [VCS flow](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config#vcs-configuration):
   1. Ensure your Terraform organization is [connected to a VCS provider](/terraform/cloud-docs/vcs). Repositories that can be connected to HCP Terraform workspaces can also be used as workspace templates in the ServiceNow integration.
   1. On your organization's VCS provider settings page (**Settings** > **VCS Providers**), find the OAuth token ID for the VCS provider(s) that you intend to use with the ServiceNow integration. HCP Terraform uses the OAuth token ID to identify and authorize the VCS provider. **Save the OAuth token ID for later.**
   1. Identify the VCS repositories in the VCS provider containing Terraform configurations that the ServiceNow Terraform integration will deploy. Take note of any Terraform or environment variables used by the repositories you select. **Save the Terraform and environment variables for later.**
1. If using the [no-code flow](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config#no-code-module-configuration), create one or more no-code modules in the private registry of your HCP Terraform organization. **Save the no-code module names for later.**
1.
Provide the following information to the ServiceNow Admin:

* The organization name
* The team API token
* The hostname of your Terraform Enterprise instance, or of HCP Terraform. The hostname of HCP Terraform is `app.terraform.io`.
* The no-code module name(s) or the OAuth token ID(s) of your VCS provider(s), and the repository identifier for each VCS repository containing Terraform configurations that will be used by the integration.
* Any Terraform or environment variables required by the configurations in the given VCS repositories.

-> **Note:** Repository identifiers are determined by your VCS provider; they typically use a format like `<ORGANIZATION>/<REPOSITORY>` or `<USER>/<REPOSITORY>`. Azure DevOps repositories use the format `<ORGANIZATION>/<PROJECT>/_git/<REPOSITORY>`. A GitHub repository hosted at `https://github.com/exampleorg/examplerepo/` would have the repository identifier `exampleorg/examplerepo`.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

For instance, if you are configuring this integration for your company, `Example Corp`, using two GitHub repositories, you would share values like the following with the ServiceNow Admin.
```markdown
Terraform Enterprise Organization Name: `ServiceNowExampleOrg`

Team API Token: `q2uPExampleELkQ.atlasv1.A7jGHmvufExampleTeamAPITokenimVYxwunJk0xD8ObVol054`

Terraform Enterprise Hostname: `terraform.corp.example`

OAuth Token ID (GitHub org: example-corp): `ot-DhjEXAMPLELVtFA`

- Repository ID (Developer Environment): `example-corp/developer-repo`
  - Environment variables:
    - `AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY`
    - `AWS_SECRET_ACCESS_KEY=ZB0ExampleSecretAccessKeyGjUiJh`
    - `AWS_DEFAULT_REGION=us-west-2`
  - Terraform variables:
    - `instance_type=t2.medium`
- Repository ID (Testing Environment): `example-corp/testing-repo`
  - Environment variables:
    - `AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY`
    - `AWS_SECRET_ACCESS_KEY=ZB0ExampleSecretAccessKeyGjUiJh`
    - `AWS_DEFAULT_REGION=us-west-2`
  - Terraform variables:
    - `instance_type=t2.large`
```

## Install the ServiceNow Integration

Before beginning setup, the ServiceNow Admin must install the Terraform ServiceNow Catalog integration software. This can be added to your ServiceNow instance from the [ServiceNow Store](https://store.servicenow.com/sn_appstore_store.do). Search for the "Terraform" integration, published by "HashiCorp Inc".

## Connect ServiceNow to HCP Terraform

-> **ServiceNow Roles:** `admin` or `x_terraform.config_user`

Once the integration is installed, the ServiceNow Admin can connect your ServiceNow instance to HCP Terraform. Before you begin, you will need the information described in the "Configure HCP Terraform" section from your Terraform Admin. Once you have this information, connect ServiceNow to HCP Terraform with the following steps.

1. Navigate to your ServiceNow Service Management screen.
1. Using the left-hand navigation, open the configuration table for the integration to manage the HCP Terraform connection.
   - Terraform > Configs
1.
Click on "New" to create a new HCP Terraform connection.
   - Set Org Name to the HCP Terraform organization name.
   - Click on the "Lock" icon to set Hostname to the hostname of your Terraform Enterprise instance. If you are using the SaaS version of HCP Terraform, the hostname is `https://app.terraform.io`. Be sure to include "https://" before the hostname.
   - Set API Team Token to the HCP Terraform team API token.
   - (Optional) To use the [MID Server](https://docs.servicenow.com/csh?topicname=mid-server-landing.html&version=latest), select the checkbox and type the `name` in the `MID Server Name` field.
1. Click "Submit".

## Create and Populate a Service Catalog

Now that you have connected ServiceNow to HCP Terraform, you are ready to create a Service Catalog using the VCS repositories or no-code modules provided by the Terraform Admin. Navigate to the [Service Catalog documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config) to begin. You can also refer to this documentation whenever you need to add or update request items.

### Team Tokens

Team-scoped tokens can help you manage a team's access to resources in HCP Terraform, but team tokens are limited within the ServiceNow context. We recommend using only one team API token, regardless of the number of teams accessing ServiceNow. To learn more about API tokens, refer to [Team tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).

### ServiceNow Developer Reference

ServiceNow developers who wish to customize the Terraform integration can refer to the [developer documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/developer-reference).

### ServiceNow Administrator's Guide

Refer to the [ServiceNow Administrator documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/admin-guide) for information about configuring the integration.
### Example Customizations

Once the ServiceNow integration is installed, you can consult the [example customizations documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/example-customizations).
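As an aside, the values assembled on this page (hostname, team API token, OAuth token ID, and repository identifier) map directly onto HCP Terraform's documented workspace-creation API, which is what ultimately backs the catalog items. The sketch below is illustrative only: it uses the `Example Corp` placeholder values from this guide, and the integration's own implementation may differ.

```python
# Hypothetical sketch of the workspace-creation call HCP Terraform exposes:
# POST /api/v2/organizations/:organization/workspaces, with the OAuth token ID
# and repository identifier supplied under the "vcs-repo" attribute.
import json
import urllib.request


def build_workspace_request(hostname, org, token, ws_name, identifier, oauth_token_id):
    """Build (but do not send) the JSON:API request to create a VCS-backed workspace."""
    payload = {
        "data": {
            "type": "workspaces",
            "attributes": {
                "name": ws_name,
                "auto-apply": True,  # the integration creates workspaces with auto-apply enabled
                "vcs-repo": {
                    "identifier": identifier,          # e.g. example-corp/developer-repo
                    "oauth-token-id": oauth_token_id,  # e.g. ot-DhjEXAMPLELVtFA
                },
            },
        }
    }
    return urllib.request.Request(
        url=f"https://{hostname}/api/v2/organizations/{org}/workspaces",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
        method="POST",
    )


# req = build_workspace_request("app.terraform.io", "ServiceNowExampleOrg",
#                               "TEAM_API_TOKEN", "example-workspace",
#                               "example-corp/developer-repo", "ot-DhjEXAMPLELVtFA")
# urllib.request.urlopen(req)  # sending the request would create the workspace
```

This is one reason the OAuth token ID and repository identifier must be gathered exactly: HCP Terraform uses them verbatim to locate the repository when the integration creates a workspace.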
# Troubleshoot the ServiceNow Service Catalog integration

This page offers troubleshooting tips for common issues with the ServiceNow Service Catalog Integration for HCP Terraform, along with instructions for finding and reading logs to diagnose and resolve issues.

## Find logs

Logs are crucial for diagnosing issues. You can find logs in ServiceNow in the following places:

### Workflow logs

To find workflow logs, click on the RITM number on a failed ticket to open the request item. Scroll down to **Related Links > Workflow Context** and open the **Workflow Log** tab.

### Flow logs

To find flow logs, click on the RITM number on a failed ticket to open the request item. Scroll down to **Related Links > Flow Context > Open Context Record** and open the **Flow engine log entries** tab.

### Application logs

To find application logs, navigate to **All > System Log > Application Logs**. Set the **Application** filter to "Terraform". Search for logs around the time your issue occurred. Some records include HTTP status codes and detailed error messages.

### Outbound requests

ServiceNow logs all outgoing API calls, including calls to HCP Terraform. To view the log of outbound requests, navigate to **All > System Logs > Outbound HTTP Requests**. To customize the table view, add columns like "URL," "URL Path," and "Application scope." Logs from the Catalog app are marked with the `x_325709_terraform` scope.

## Enable email notifications

To enable email notifications and receive updates on your requested item tickets:

1. Log in to your ServiceNow instance as an administrator.
1. Click **System Properties > Email Properties**.
1. In the **Outbound Email Configuration** panel, select **Yes** next to the checkbox with the email that ServiceNow should send notifications to.

To ensure you have relevant notifications configured in your instance:

1. Navigate to **System Notification > Email > Notifications**.
1.
Search for "Request Opened" and "Request Item Commented" and ensure they are activated. Reach out to ServiceNow customer support if you run into any issues with the global configurations.

## Common problems

This section details frequently encountered issues and how to resolve them.

### Failure to create a workspace

If you order the "create a workspace" catalog item, nothing happens in ServiceNow, and HCP Terraform does not create a workspace, there are several possible causes.

Ensure your HCP Terraform token, hostname, and organization name are correct.

1. Make sure to use a **Team API Token**. This can be found in HCP Terraform under "API Tokens".
1. Ensure the team API token has the correct permissions.
1. Double-check your organization name by copying and pasting it from HCP Terraform or Terraform Enterprise.
1. Double-check your hostname.
1. Make sure you created your team API token in the same organization you are using.
1. Test your configuration. First click **Update** to process any changes, then **Test Config** to make sure the connection is working.

Verify your VCS configuration.

1. The **Identifier** field should not have any spaces. The ServiceNow Service Catalog Integration requires that you format repository names in the `username/repo_name` format.
1. The **Name** can be anything, but it is better to avoid special characters, per naming convention.
1. Double-check the OAuth token ID in your HCP Terraform/Terraform Enterprise settings. To retrieve your OAuth token ID, navigate to your HCP Terraform organization's settings page, then click **Provider** in the left navigation bar under **Version Control**.

### Failure to successfully order any catalog item

After placing an order for any catalog item, navigate to the comments section in the newly created RITM ticket.
The latest comment will contain a response from HCP Terraform.

### Frequency of comments and outputs

When you place an order in the Terraform Catalog, ServiceNow submits and processes the order, then attaches additional comments to the order to indicate whether HCP Terraform successfully created the workspace. By default, ServiceNow polls HCP Terraform every 5 minutes for the latest status of the Terraform run, and does not show any comments until the next poll. To configure ServiceNow to poll HCP Terraform more frequently:

1. Navigate to **All > Flow designer**.
1. Set the **Application** filter to **Terraform**.
1. Under the **Name** column, click **Worker Poll Run State**.
1. Click on the trigger and adjust the interval to your desired schedule.
1. Click **Done > Save > Activate** to save your changes.

### Using the no-code modules feature

If ServiceNow fails to deploy a no-code module catalog item, verify the following:

1. Ensure that your HCP Terraform organization has an [HCP Standard tier](https://www.hashicorp.com/products/terraform/pricing) subscription.
1. Ensure the name you enter for your no-code module in the catalog user form matches the no-code module in HCP Terraform.

### Updating no-code workspaces

If the "update no-code workspace" catalog item returns the output message "No update has been made to the workspace", then you have not upgraded your no-code module in HCP Terraform.

### Application Scope

If you are making customizations and you encounter unexpected issues, make sure to change the scope from **Global** to **Terraform** and recreate your customized items in the **Terraform** scope.
For additional instructions on customizations, refer to the [example customizations](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/example-customizations) documentation.

### MID server

If you are using a MID server in your configuration, check the connectivity by using the **Test Config** button on the configurations page. Additionally, when ServiceNow provisions a MID server, navigate to **MID Servers > Servers** to check that the status is "up" and "validated".

### Configuration

Although the app allows multiple config entries, only one should be present, because multiple entries can interfere with the functionality of the app.
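When the checks above do not surface the problem, it can help to validate the team API token, hostname, and organization name outside ServiceNow by calling the HCP Terraform API directly. The sketch below is an assumption-laden illustration, not part of the Catalog app: it uses the documented workspace-listing endpoint, which a team token with "manage workspaces" permission can call. A 200 response means all three values work together; a 401 points at the token, and a 404 at the hostname or organization name.

```python
# Hypothetical standalone check for the values entered under Terraform > Configs.
import urllib.error
import urllib.request


def config_check_request(hostname: str, org: str, token: str) -> urllib.request.Request:
    """Build the GET request used to validate hostname, organization name, and token."""
    return urllib.request.Request(
        f"https://{hostname}/api/v2/organizations/{org}/workspaces",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    )


def check_config(hostname: str, org: str, token: str) -> str:
    """Report whether the token can list workspaces in the given organization."""
    try:
        with urllib.request.urlopen(config_check_request(hostname, org, token)):
            return f"OK: token can list workspaces in {org}"
    except urllib.error.HTTPError as err:
        # 401 -> bad or wrong-organization token; 404 -> wrong hostname or org name
        return f"Failed with HTTP {err.code}"


# check_config("app.terraform.io", "ServiceNowExampleOrg", "TEAM_API_TOKEN")
```

Note that the hostname here is bare (for example `app.terraform.io`), whereas the Configs form in ServiceNow expects it prefixed with `https://`.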
# Create and manage ServiceNow Service Catalog items

When using ServiceNow with the HCP Terraform integration, you will configure at least one service catalog item. You will also configure one or more version control system (VCS) repositories or no-code modules containing the Terraform configurations that will be used to provision that infrastructure. End users will request infrastructure from the service catalog, and HCP Terraform will fulfill the request by creating a new workspace, applying the configuration, and then reporting the results back to ServiceNow.

## Prerequisites

Before configuring a service catalog, you must install and configure the HCP Terraform integration software on your ServiceNow instance. These steps are covered in the [installation documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform).

Additionally, you must have the following information:

1. The no-code module name or the OAuth token ID and repository identifier for each VCS repository that HCP Terraform will use to provision infrastructure. Your Terraform Admin will provide these to you. Learn more in [Configure VCS Repositories or No-Code Modules](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config#configure-vcs-repositories-or-no-code-modules).
1. Any Terraform or environment variables required by the configurations in the given VCS repositories or no-code modules.

Once these steps are complete, in order for end users to provision infrastructure with ServiceNow and HCP Terraform, the ServiceNow Admin will perform the following steps to make Service Items available to your end users.

1. Add at least one service catalog for use with Terraform.
1. If you are using the VCS flow, configure at least one VCS repository in ServiceNow.
1. Create variable sets to define Terraform and environment variables to be used by HCP Terraform to provision infrastructure.
## Add the Terraform Service Catalog

-> **ServiceNow Role:** `admin`

First, add a Service Catalog for use with the Terraform integration. Depending on your organization's needs, you might use a single service catalog or several. If you already have a Service Catalog to use with Terraform, skip to the next step.

1. In ServiceNow, open the Service Catalog > Catalogs view by searching for "Service Catalog" in the left-hand navigation.
1. Click the plus sign in the top right.
1. Select "Catalogs > Terraform Catalog > Title and Image" and choose a location to add the Service Catalog.
1. Close the "Sections" dialog box by clicking the "x" in the upper right-hand corner.

-> **Note:** In step 1, be sure to choose "Catalogs", not "Catalog", from the left-hand navigation.

## Configure VCS Repositories or No-Code Modules

-> **ServiceNow Roles:** `admin` or `x_terraform.vcs_repositories_user`

Terraform workspaces created through the ServiceNow Service Catalog for Terraform can be associated with a VCS provider repository or be backed by a [no-code module](/terraform/cloud-docs/workspaces/no-code-provisioning/provisioning) in your organization's private registry. Administrators determine which workspace type end users can request from the Terraform Catalog. Below are the key differences between the version control and no-code approaches.

### VCS configuration

To make infrastructure available to your users through version control workspaces, you must add one or more VCS repositories containing Terraform configurations to the Service Catalog for Terraform.

1. In ServiceNow, open the "Terraform > VCS Repositories" table by searching for "Terraform" in the left-hand navigation.
1. Click "New" to add a VCS repository, and fill in the following fields:
   - Name: The name for this repository. This name will be visible to end users, and does not have to be the same as the repository name as defined by your VCS provider.
Ideally it will succinctly describe the infrastructure that will be provisioned by Terraform from the repository.
   - OAuth Token ID: The OAuth token ID from your HCP Terraform organization's VCS provider settings. This ID specifies which VCS provider configured in HCP Terraform hosts the desired repository.
   - Identifier: The VCS repository that contains the Terraform configuration for this workspace template. Repository identifiers are determined by your VCS provider; they typically use a format like `<ORGANIZATION>/<REPOSITORY>` or `<USER>/<REPOSITORY>`. Azure DevOps repositories use the format `<ORGANIZATION>/<PROJECT>/_git/<REPOSITORY>`.
   - The remaining fields are optional.
     - Branch: The branch within the repository, if different from the default branch.
     - Working Directory: The directory within the repository containing Terraform configuration.
     - Terraform Version: The version of Terraform to use. This defaults to the latest version of Terraform supported by your HCP Terraform instance.
1. Click "Submit".

After configuring your repositories in ServiceNow, the names of those repositories will be available in the "VCS Repository" dropdown menu when a user orders new workspaces through the following items in the Terraform Catalog:

- **Create Workspace**
- **Create Workspace with Variables**
- **Provision Resources**
- **Provision Resources with Variables**

### No-Code Module Configuration

In version 2.5.0 and newer, ServiceNow administrators can configure Catalog Items using [no-code modules](/terraform/cloud-docs/workspaces/no-code-provisioning/provisioning). This release introduces two new additions to the Terraform Catalog: no-code workspace create and update Items. Both utilize no-code modules from the private registry in HCP Terraform to enable end users to request infrastructure without writing code.
@include 'tfc-package-callouts/nocode.mdx'

The following Catalog Items allow you to build and manage workspaces with no-code modules:

- **Provision No-Code Workspace and Deploy Resources**: Creates a new Terraform workspace based on a no-code module of your choice, supplies required variable values, then runs and applies Terraform.
- **Update No-Code Workspace and Deploy Resources**: Updates an existing no-code workspace to the most recent no-code module version, updates that workspace's attached variable values, and then starts and applies a new Terraform run.

Administrators can skip configuring VCS repositories in ServiceNow when using no-code modules. The only input required in the no-code workspace request form is the name of the no-code module. Before utilizing a no-code module, you must publish it to your organization's private module registry. With this one-time configuration complete, ServiceNow Administrators can then call the modules through Catalog requests without repository management, simplifying the use of infrastructure as code.

> **Hands On:** Try the [Self-service enablement with HCP Terraform and ServiceNow tutorial](/terraform/tutorials/it-saas/servicenow-no-code).

## Configure a Variable Set

Most Terraform configurations can be customized with Terraform variables or environment variables. You can create a Variable Set within ServiceNow to contain the variables needed for a given configuration. Your Terraform Admin should provide these to you.

1. In ServiceNow, open the "Service Catalog > Variable Sets" table by searching for "variable sets" in the left-hand navigation.
1. Click "New" to add a Variable Set.
1. Select "Single-Row Variable Set".
   - Title: User-visible title for the variable set.
   - Internal name: The internal name for the variable set.
   - Order: The order in which the variable set will be displayed.
   - Type: Should be set to "Single Row"
   - Application: Should be set to "Terraform"
   - Display title: Whether
the title is displayed to the end user.
   - Layout: How the variables in the set will be displayed on the screen.
   - Description: A long description of the variable set.
1. Click "Submit" to create the variable set.
1. Find and click on the title of the new variable set in the Variable Sets table.
1. At the bottom of the variable set details page, click "New" to add a new variable.
   - Type: Should be "Single Line Text" for most variables, or "Masked" for variables containing sensitive values.
   - Question: The user-visible question or label for the variable.
   - Name: The internal name of the variable. This must be derived from the name of the Terraform or environment variable. Consult the table below to determine the proper prefix for each variable name.
   - Tooltip: A tooltip to display for the variable.
   - Example Text: Example text to show in the variable's input box.
1. Under the "Default Value" tab, you can set a default value for the variable.
1. Continue to add new variables corresponding to the Terraform and environment variables the configuration requires.

When the Terraform integration applies configuration, it will map ServiceNow variables to Terraform and environment variables using the following convention. ServiceNow variables that begin with "sensitive_" will be saved as sensitive variables within HCP Terraform.

| ServiceNow Variable Name | HCP Terraform Variable |
| ------------------------ | ---------------------- |
| `tf_var_VARIABLE_NAME` | Terraform Variable: `VARIABLE_NAME` |
| `tf_env_ENV_NAME` | Environment Variable: `ENV_NAME` |
| `sensitive_tf_var_VARIABLE_NAME` | Sensitive Terraform Variable (Write Only): `VARIABLE_NAME` |
| `sensitive_tf_env_ENV_NAME` | Sensitive Environment Variable (Write Only): `ENV_NAME` |

## Provision Infrastructure

Once you configure the Service Catalog for Terraform, ServiceNow users can request infrastructure to be provisioned by HCP Terraform.
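To make the prefix convention from the table above concrete, here is a small helper that applies the same mapping. It is a sketch for illustration, not code from the integration itself:

```python
# Apply the documented ServiceNow -> HCP Terraform variable-name convention:
# an optional "sensitive_" prefix, then "tf_var_" (Terraform variable) or
# "tf_env_" (environment variable), then the variable's actual name.
def map_servicenow_variable(name: str):
    """Return (category, sensitive, variable_name) for a ServiceNow variable name.

    category is "terraform", "env", or None when the name matches no prefix.
    """
    sensitive = name.startswith("sensitive_")
    if sensitive:
        name = name[len("sensitive_"):]
    if name.startswith("tf_var_"):
        return ("terraform", sensitive, name[len("tf_var_"):])
    if name.startswith("tf_env_"):
        return ("env", sensitive, name[len("tf_env_"):])
    return (None, False, name)  # not mapped to an HCP Terraform variable


# map_servicenow_variable("tf_var_instance_type")
#   -> ("terraform", False, "instance_type")
# map_servicenow_variable("sensitive_tf_env_AWS_SECRET_ACCESS_KEY")
#   -> ("env", True, "AWS_SECRET_ACCESS_KEY")
```

Note that the `sensitive_` marker comes first, so a sensitive Terraform variable is named `sensitive_tf_var_...`, not `tf_var_sensitive_...`.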
These requests will be fulfilled by HCP Terraform, which will:

1. Create a new workspace from the no-code module or the VCS repository provided by ServiceNow.
1. Configure variables for that workspace, also provided by ServiceNow.
1. Plan and apply the change.
1. Report the results, including any outputs from Terraform, to ServiceNow.

Once this is complete, ServiceNow will reflect that the Request Item has been provisioned.

-> **Note:** The integration creates workspaces with [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply) enabled. HCP Terraform will queue an apply for these workspaces whenever changes are merged to the associated VCS repositories. This is known as the [VCS-driven run workflow](/terraform/cloud-docs/workspaces/run/ui). Keep in mind that all of the ServiceNow workspaces connected to a given repository will be updated whenever changes are merged to the associated branch in that repository.

## Execution Mode

If using v2.2.0 or above, the Service Catalog app allows you to set an [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) for your Terraform workspaces. There are two modes to choose from:

- The default value is "Remote", which instructs HCP Terraform to perform runs on its disposable virtual machines.
- Selecting "Agent" mode allows you to run Terraform operations on isolated, private, or on-premises infrastructure. This option requires you to create an Agent Pool in your organization beforehand, then provide that Agent Pool's ID when you order a new workspace through the Service Catalog.

@include 'tfc-package-callouts/agents.mdx'

## Workspace Name

Version 2.4.0 of the Service Catalog for Terraform introduces the ability to set custom names for your Terraform workspaces. You can choose a prefix for your workspace name, and the Service Catalog app will append the ServiceNow RITM number to that prefix.
If you do not define a workspace prefix, ServiceNow will use the RITM number as the workspace name. Workspace names can include letters, numbers, dashes (`-`), and underscores (`_`), and should not exceed 90 characters. Refer to the
[workspace naming recommendations](/terraform/cloud-docs/workspaces/create#workspace-naming) for best practices.

## Workspace Tags

Version 2.8.0 extends support to key-value pair tags while still supporting the flat string tags that version 2.4.0 introduced. Use the "Workspace Tags" field to provide a comma-separated list of key-value pair tags in the format "env: prod, instance: test". These tags will be parsed and attached to the workspace in HCP Terraform. Tags give you an easier way to categorize, filter, and manage workspaces provisioned through the Service Catalog for Terraform. We recommend that you set naming conventions for tags with your end users to avoid variations such as `ec2`, `aws-ec2`, and `aws_ec2`. Workspace tags have a 255 character limit and can contain letters, numbers, colons, hyphens, and underscores. Refer to the [workspace tagging rules](/terraform/cloud-docs/workspaces/create#workspace-tags) for more details.

## Approval Flow

Version 2.9.0 introduces an approval step in the flow that ensures requests are reviewed and authorized before resources can be created. You can add an approval step to your existing catalog items. When a user orders resources, HCP Terraform runs a speculative plan, then pauses the flow and waits for the approver to review the request and the plan in HCP Terraform. If the approver approves the plan, HCP Terraform starts another run to apply it.
You can use this workflow with the following catalog items:

- Provision Resources with Vars with Approval
- Update Resources with Vars with Approval
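The workspace naming and tagging rules described in the sections above can be sketched as follows. This is an illustrative sketch, not the app's implementation; in particular, the `-` separator between prefix and RITM number and the function names are assumptions.

```python
import re

# Allowed characters per the naming guidance above; 90-character limit.
_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,90}$")

def build_workspace_name(ritm_number: str, prefix: str = "") -> str:
    # Assumption: the app joins the prefix and RITM number with a dash.
    name = f"{prefix}-{ritm_number}" if prefix else ritm_number
    if not _NAME_RE.match(name):
        raise ValueError(f"invalid workspace name: {name!r}")
    return name

def parse_workspace_tags(raw: str) -> dict:
    """Parse a comma-separated 'key: value' list, e.g. 'env: prod, instance: test'."""
    tags = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        key, _, value = pair.partition(":")
        tags[key.strip()] = value.strip()
    return tags
```

For example, `parse_workspace_tags("env: prod, instance: test")` yields the two key-value tags the docs describe, and `build_workspace_name("RITM0010001")` falls back to the bare RITM number when no prefix is set.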
# Terraform ServiceNow Service Catalog Integration Developer Reference

The Terraform ServiceNow integration can be customized by ServiceNow developers using the information found in this document.

## Terraform Variables and ServiceNow Variable Sets

ServiceNow has the concept of a Variable Set, which is a collection of ServiceNow Variables that can be referenced in a Flow from a Service Catalog item. The Terraform Integration codebase can create [Terraform Variables and Terraform Environment Variables](/terraform/cloud-docs/variables) via the API using the `tf_variable.createVariablesFromSet()` function. This function looks for variables following these conventions:

| ServiceNow Variable Name             | HCP Terraform Variable                                     |
| ------------------------------------ | ---------------------------------------------------------- |
| `tf_var_hcl_VARIABLE_NAME`           | Terraform Variable: `VARIABLE_NAME`                        |
| `tf_env_ENV_NAME`                    | Environment Variable: `ENV_NAME`                           |
| `sensitive_tf_var_hcl_VARIABLE_NAME` | Sensitive Terraform Variable (Write Only): `VARIABLE_NAME` |
| `sensitive_tf_env_ENV_NAME`          | Sensitive Environment Variable (Write Only): `ENV_NAME`    |

This function takes the ServiceNow Variable Set and the HCP Terraform workspace ID. It will loop through the given variable set collection and create any necessary Terraform variables or environment variables in the workspace.

## Customizing with ServiceNow "Script Includes" Libraries

The Terraform/ServiceNow Integration codebase includes [ServiceNow Script Includes Classes](https://docs.servicenow.com/csh?topicname=c_ScriptIncludes.html&version=latest) that are used to interface with HCP Terraform. The codebase also includes example catalog items and flows that implement the interface to the HCP Terraform API. These classes and examples can be used to help create ServiceNow Catalog Items customized to your specific ServiceNow instance and requirements.
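For orientation, creating a single variable through the HCP Terraform API (which `tf_variable.createVariablesFromSet()` does from ServiceNow's server-side JavaScript) looks roughly like the following Python sketch. The payload shape follows the public workspace-variables API; the helper names are illustrative, and this is a hedged sketch rather than the integration's actual code.

```python
import json
from urllib import request

API_BASE = "https://app.terraform.io/api/v2"

def variable_payload(key, value, category, sensitive=False, hcl=False):
    """JSON:API body for creating a workspace variable.
    category is "terraform" or "env"; hcl marks an HCL-encoded value,
    which matches the tf_var_hcl_ naming convention above."""
    return {"data": {"type": "vars", "attributes": {
        "key": key, "value": value, "category": category,
        "sensitive": sensitive, "hcl": hcl}}}

def create_variable(token, workspace_id, **attrs):
    # POST /workspaces/:workspace_id/vars with a bearer token.
    req = request.Request(
        f"{API_BASE}/workspaces/{workspace_id}/vars",
        data=json.dumps(variable_payload(**attrs)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/vnd.api+json"},
        method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The function loops the integration performs would call `create_variable` once per entry in the variable set, choosing `category` and `sensitive` from the variable name prefixes.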
### Script Include Classes

The ServiceNow Script Include Classes can be found in the ServiceNow Studio > Server Development > Script Include.

| Class Name             | Description                                                    |
| ---------------------- | -------------------------------------------------------------- |
| `tf_config`            | Helper to pull values from the SN Terraform Configs Table      |
| `tf_get_workspace`     | Client-callable script to retrieve workspace data              |
| `tf_http`              | ServiceNow HTTP REST wrapper for requests to the Terraform API |
| `tf_no_code_workspace` | Resources for Terraform no-code module API requests            |
| `tf_run`               | Resources for Terraform run API requests                       |
| `tf_terraform_record`  | Manage ServiceNow Terraform Table Records                      |
| `tf_test_config`       | Client-callable script to test Terraform connectivity          |
| `tf_util`              | Miscellaneous helper functions                                 |
| `tf_variable`          | Resources for Terraform variable API requests                  |
| `tf_vcs_record`        | Manage ServiceNow Terraform VCS repositories table records     |
| `tf_workspace`         | Resources for Terraform workspace API requests                 |

### Example Service Catalog Flows and Actions

The ServiceNow Service Catalog for Terraform provides sample catalog items that use **Flows** and **Workflows** as their primary process engines. **Flows** are a newer solution developed by ServiceNow and are generally preferred over **Workflows**. To see which engine an item is using, open it in edit mode and navigate to the **Process Engine** tab. For example, **Create Workspace** uses a **Workflow**, whereas **Create Workspace Flow** is built upon a **Flow**. You can access both in the **Studio**. You can also manage **Flows** in the **Flow Designer**. To manage **Workflows**, navigate to **All > Workflow Editor**.

You can find the ServiceNow Example Flows for Terraform in the **ServiceNow Studio > Flows** (or **All > Flow Designer**). Search for items that belong to the **Terraform** application.
By default, Flows execute when someone submits an order request for a catalog item based on a Flow. Admins can customize the Flows and Actions to add approval flows, set approval rules based on certain conditions, and configure multiple users or roles as approvers for specific catalog items.
| Flow Name                     | Description                                                                                                                         |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| Create Workspace              | Creates a new HCP Terraform workspace from a VCS repository.                                                                         |
| Create Workspace with Vars    | Creates a new HCP Terraform workspace from a VCS repository and creates any variables provided.                                      |
| Create Run                    | Creates and queues a new run in the HCP Terraform workspace.                                                                         |
| Apply Run                     | Applies a run in the HCP Terraform workspace.                                                                                        |
| Provision Resources           | Creates a new HCP Terraform workspace (with auto-apply), creates and queues a run, then applies the run when ready.                  |
| Provision Resources with Vars | Creates a new HCP Terraform workspace (with auto-apply), creates any variables, creates and queues a run, then applies the run when ready. |
| Provision No-Code Workspace and Deploy Resources | Creates a new HCP Terraform workspace based on a no-code module configured in the private registry (with auto-apply), creates any variables, creates and queues a run, then applies the run when ready. |
| Delete Workspace              | Creates a destroy run plan.                                                                                                          |
| Worker Poll Run State         | Polls the HCP Terraform API for the current run state of a workspace.                                                                |
| Worker Poll Apply Run         | Polls the HCP Terraform API and applies any pending Terraform runs.                                                                  |
| Worker Poll Destroy Workspace | Queries ServiceNow Terraform Records for resources marked `is_destroyable`, applies the destroy run to destroy resources, and deletes the corresponding Terraform workspace. |
| Update No-Code Workspace and Deploy Resources | Updates an existing no-code workspace to the most recent no-code module version, updates that workspace's attached variable values, and then starts a new Terraform run. |
| Update Workspace           | Updates HCP Terraform workspace configurations, such as VCS repository, description, project, execution mode, and agent pool ID (if applicable). |
| Update Workspace with Vars | Allows you to change details about the HCP Terraform workspace configurations and attached variable values.                                      |
| Update Resources           | Updates HCP Terraform workspace details and starts a new Terraform run with these new values.                                                    |
| Update Resources with Vars | Updates your existing HCP Terraform workspace and its variables, then starts a Terraform run with these updated values.                          |

## ServiceNow ACLs

Access control lists (ACLs) restrict user access to objects and operations based on granted permissions. This integration includes the following roles that can be used to manage various components.

| Access Control Roles                | Description                                                                                   |
| :---------------------------------- | --------------------------------------------------------------------------------------------- |
| `x_terraform.config_user`           | Can manage the connection from the ServiceNow application to your HCP Terraform organization. |
| `x_terraform.terraform_user`        | Can manage all of the Terraform resources created in ServiceNow.                              |
| `x_terraform.vcs_repositories_user` | Can manage the VCS repositories available for catalog items to be ordered by end-users.       |

For users who only need to order from the Terraform Catalog, we recommend creating another role with read-only permissions for `x_terraform_vcs_repositories` to view the available repositories for ordering infrastructure.

Install the Terraform ServiceNow Service Catalog integration by following [the installation guide](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform).
# Configure the ServiceNow Service Catalog integration

ServiceNow administrators have several options for configuring the Terraform integration. If you haven't yet installed the integration, see the [installation documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform).

Once the integration has been installed, you can add and customize a service catalog and VCS repositories using the [service catalog documentation](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform/service-catalog-config). You can also configure how frequently ServiceNow polls HCP Terraform using the documentation below.

## Configure Polling Workers

The integration includes three ServiceNow Scheduled Flows that poll the HCP Terraform API using ServiceNow Outbound HTTP REST requests. By default, all flow schedules are set to 5 minutes. You can customize these inside the ServiceNow Server Studio:

1. Select the Worker Poll Run State Flow.
1. Adjust the Repeat Intervals.
1. Click "Done".
1. Click "Save".
1. Click "Activate".

### Worker Poll Apply Run

This worker approves runs for any workspaces that have finished a Terraform plan and are ready to apply their changes. It also adds a comment on the request item for those workspaces, noting that a run has been triggered.

### Worker Poll Destroy Workspace

This worker looks for any records in the Terraform ServiceNow table that are marked for deletion with the value `is_destroyable` set to true. It then checks the status of the workspace to ensure it is ready to be deleted. Once the destroy run has completed, this worker sends the delete request for the workspace to Terraform.

### Worker Poll Run State

This worker synchronizes ServiceNow with the current run state of Terraform workspaces by polling the HCP Terraform API. On state changes, the worker adds a comment to the ServiceNow request item with the updated run state and other metadata.
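The destroy worker's loop described above can be sketched as follows. The real worker is a ServiceNow Scheduled Flow; here `records` and `tfc` are hypothetical stand-ins for the ServiceNow table rows and an HCP Terraform API client, so treat this as pseudocode under those assumptions.

```python
def poll_destroy_workspaces(records, tfc):
    """Sketch of Worker Poll Destroy Workspace: for each record marked
    is_destroyable, delete the workspace in HCP Terraform once its
    destroy run has finished. `records` and `tfc` are hypothetical."""
    deleted = []
    for record in records:
        if not record.get("is_destroyable"):
            continue
        workspace_id = record["workspace_id"]
        # Hypothetical status check; the worker only deletes a workspace
        # after the destroy run has completed.
        if tfc.destroy_run_completed(workspace_id):
            tfc.delete_workspace(workspace_id)
            deleted.append(workspace_id)
    return deleted
```

The key behavior to preserve in any customization is the ordering: check `is_destroyable`, confirm the destroy run finished, and only then send the workspace delete request.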
# Set up HCP Terraform for Splunk

HashiCorp HCP Terraform customers can integrate with Splunk® using the official [HCP Terraform for Splunk](https://splunkbase.splunk.com/app/5141/) app to understand HCP Terraform operations.

@include 'eu/integrations.mdx'

Audit logs from HCP Terraform are regularly pulled into Splunk, immediately giving visibility into key platform events within the predefined dashboards. Identify the most active policies, significant changes in resource operations, or filter actions by specific users within your organization. The app can be used with Splunk Cloud and Splunk Enterprise.

## Prerequisites

@include 'tfc-package-callouts/audit-trails.mdx'

Access and support for the HCP Terraform for Splunk app requires audit trails.

### Splunk Cloud

There are no special prerequisites for Splunk Cloud users.

### Splunk Enterprise

-> **Note:** This app is currently not supported on a clustered deployment of Splunk Enterprise.

#### Networking Requirements

In order for the HCP Terraform for Splunk app to function properly, it must be able to make outbound requests over HTTPS (TCP port 443) to the HCP Terraform application APIs. This may require perimeter networking as well as container host networking changes, depending on your environment. The IP ranges are documented in the [HCP Terraform IP Ranges documentation](/terraform/cloud-docs/architectural-details/ip-ranges). The services which run on these IP ranges are described in the table below.
| Hostname         | Port/Protocol  | Directionality | Purpose                                                    |
| ---------------- | -------------- | -------------- | ---------------------------------------------------------- |
| app.terraform.io | tcp/443, HTTPS | Outbound       | Polling for new audit log events via the HCP Terraform API |

### Compatibility

The current release of the HCP Terraform for Splunk app supports the following versions:

* Splunk Platform: 8.0 and later
* CIM: 4.3 and later

## Installation & Configuration

* Install the [HCP Terraform for Splunk app via Splunkbase](https://splunkbase.splunk.com/app/5141/)
* Splunk Cloud users should consult the latest instructions on how to [use IDM with cloud-based add-ons](https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Admin/IntroGDI#Use_IDM_with_cloud-based_add-ons) within the Splunk documentation.
* Click "Configure the application"
* Generate an [Audit trails token](/terraform/cloud-docs/users-teams-organizations/api-tokens#audit-trails-tokens) within HCP Terraform
* Click "complete setup"

Once configured, you'll be redirected to the Splunk search interface with the pre-configured HCP Terraform dashboards. Splunk will begin importing and indexing the last 14 days of audit log information and populating the dashboards. This process may take a few minutes to complete.

## Upgrading

To upgrade to a new version of the HCP Terraform for Splunk app, repeat the installation and configuration steps above.

## Troubleshooting

HCP Terraform only retains 14 days of audit log information. If there are connectivity issues between your Splunk service and HCP Terraform, Splunk will recover events from the last event received, up to a maximum period of 14 days.
# Set up run task integrations

In addition to using existing technology partner integrations, HashiCorp HCP Terraform customers can build their own custom run task integrations. Custom integrations have access to plan details between the plan and apply phase, can display custom messages within the run pipeline, and can prevent a run from continuing to the apply phase.

@include 'tfc-package-callouts/run-tasks.mdx'

## Prerequisites

To build a custom integration, you must have a server capable of receiving requests from HCP Terraform and responding with a status update to a supplied callback URL. When creating a run task, you supply an endpoint URL to receive the hook. HCP Terraform sends a test POST to the supplied URL, and it must respond with a 200 for the run task to be created.

This feature relies heavily on the proper parsing of [plan JSON output](/terraform/internals/json-format). When sending this output to an external system, be certain that system can properly interpret the information provided.

## Available Run Tasks

You can view the most up-to-date list of run tasks in the [Terraform Registry](https://registry.terraform.io/browse/run-tasks).

## Integration Details

When a run reaches the appropriate phase and a run task is triggered, the supplied URL receives details about the run in a payload similar to the one below. The server receiving the run task should respond `200 OK`, or Terraform retries triggering the run task. Refer to the [Run Task Integration API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks-integration) for the exact payload specification.
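Before looking at the payload itself, the receive-acknowledge-callback flow can be sketched as a minimal server. This is a hedged illustration under assumptions: the handler class, helper names, and the guard for the initial test POST are not prescribed by the API, and a production integration would add HMAC verification and error handling.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

def task_result_body(status, message):
    """JSON:API task-results payload for the callback."""
    return {"data": {"type": "task-results",
                     "attributes": {"status": status, "message": message}}}

def send_callback(callback_url, access_token, status, message):
    """Report a result to task_result_callback_url using the run's access_token."""
    req = request.Request(
        callback_url, data=json.dumps(task_result_body(status, message)).encode(),
        method="PATCH",
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/vnd.api+json"})
    request.urlopen(req)

class RunTaskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Acknowledge with 200 first; HCP Terraform retries otherwise.
        self.send_response(200)
        self.end_headers()
        # Assumption: skip the callback when the hook is the initial
        # verification POST rather than a real run task trigger.
        if payload.get("task_result_callback_url"):
            send_callback(payload["task_result_callback_url"],
                          payload["access_token"], "passed", "Plan accepted")

# To run: HTTPServer(("", 8080), RunTaskHandler).serve_forever()
```

In a real integration the handler would fetch and evaluate the plan JSON from `plan_json_api_url` before deciding between `passed` and `failed`.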
```json
{
  "payload_version": 1,
  "stage": "post_plan",
  "access_token": "4QEuyyxug1f2rw.atlasv1.iDyxqhXGVZ0ykes53YdQyHyYtFOrdAWNBxcVUgWvzb64NFHjcquu8gJMEdUwoSLRu4Q",
  "capabilities": { "outcomes": true },
  "configuration_version_download_url": "https://app.terraform.io/api/v2/configuration-versions/cv-ntv3HbhJqvFzamy7/download",
  "configuration_version_id": "cv-ntv3HbhJqvFzamy7",
  "is_speculative": false,
  "organization_name": "hashicorp",
  "plan_json_api_url": "https://app.terraform.io/api/v2/plans/plan-6AFmRJW1PFJ7qbAh/json-output",
  "run_app_url": "https://app.terraform.io/app/hashicorp/my-workspace/runs/run-i3Df5to9ELvibKpQ",
  "run_created_at": "2021-09-02T14:47:13.036Z",
  "run_created_by": "username",
  "run_id": "run-i3Df5to9ELvibKpQ",
  "run_message": "Triggered via UI",
  "task_result_callback_url": "https://app.terraform.io/api/v2/task-results/5ea8d46c-2ceb-42cd-83f2-82e54697bddd/callback",
  "task_result_enforcement_level": "mandatory",
  "task_result_id": "taskrs-2nH5dncYoXaMVQmJ",
  "vcs_branch": "main",
  "vcs_commit_url": "https://github.com/hashicorp/terraform-random/commit/7d8fb2a2d601edebdb7a59ad2088a96673637d22",
  "vcs_pull_request_url": null,
  "vcs_repo_url": "https://github.com/hashicorp/terraform-random",
  "workspace_app_url": "https://app.terraform.io/app/hashicorp/my-workspace",
  "workspace_id": "ws-ck4G5bb1Yei5szRh",
  "workspace_name": "tfr_github_0",
  "workspace_working_directory": "/terraform"
}
```

Once your server receives this payload, HCP Terraform expects you to call back to the supplied `task_result_callback_url` using the `access_token` as an [Authentication Header](/terraform/cloud-docs/api-docs#authentication) with a [jsonapi](/terraform/cloud-docs/api-docs#json-api-formatting) payload of the form:

```json
{
  "data": {
    "type": "task-results",
    "attributes": {
      "status": "running",
      "message": "Hello task",
      "url": "https://example.com",
      "outcomes": [...]
    }
  }
}
```

The request errors if HCP Terraform does not receive a progress update within 10 minutes, or if the request runs for more than 60 minutes.

HCP Terraform displays the supplied message attribute on the run details page. The status is either `running`, `passed`, or `failed`.

Here's what the data flow looks like:

Refer to the [run task integration API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks-integration#structured-results) for the exact payload specifications, and the [JSON schema for run task results](https://github.com/hashicorp/web-unified-docs/blob/main/content/terraform-docs-common/public/schema/run-tasks/runtask-result.json) for code generation and payload validation.

## Securing your Run Task

When creating your run task, you can supply an HMAC key which HCP Terraform uses to create a signature of the payload in the `X-Tfc-Task-Signature` header when calling your service. The signature is a sha512 sum of the webhook body using the provided HMAC key. The generation of the signature depends on your implementation; however, an example of how to generate a signature in bash is provided below.

```bash
$ echo -n $WEBHOOK_BODY | openssl dgst -sha512 -hmac "$HMAC_KEY"
```

## HCP Packer Run Task

> **Hands On:** Try the [Set Up HCP Terraform Run Task for HCP Packer](/packer/tutorials/hcp/setup-hcp-terraform-run-task), [Essentials tier run task image validation](/packer/tutorials/hcp/run-tasks-data-source-image-validation), and [Standard tier run task image validation](/packer/tutorials/hcp/run-tasks-resource-image-validation) tutorials to set up and test the HCP Terraform Run Task integration end to end.
[Packer](/packer) lets you create identical machine images for multiple platforms from a single source template. The [HCP Packer registry](/hcp/docs/packer) lets you track golden images, designate images for test and production environments, and query images to use in Packer and Terraform configurations.

The HCP Packer validation run task checks the image artifacts within a Terraform configuration. If the configuration references images marked as unusable (revoked), the run task fails and provides an error message containing the number of revoked artifacts and whether HCP Packer has metadata for newer versions. For HCP Packer Plus registries, run tasks also help you identify hardcoded and untracked images that may not meet security and compliance requirements.

To get started, [create an HCP Packer account](https://cloud.hashicorp.com/products/packer) and follow the instructions in the [HCP Packer Run Task](/hcp/docs/packer/store/validate-version#set-up-the-hcp-terraform-run-task-for-hcp-packer) documentation.
# HCP Terraform Operator for Kubernetes annotations and labels

This topic contains reference information about the annotations and labels the HCP Terraform and Terraform Enterprise operators use for Kubernetes.

## Annotations

| Annotation key | Target resources | Possible values | Description |
| --- | --- | --- | --- |
| `workspace.app.terraform.io/run-new` | Workspace | `"true"` | Set this annotation to `"true"` to trigger a new run. Example: `kubectl annotate workspace workspace.app.terraform.io/run-new="true"`. |
| `workspace.app.terraform.io/run-type` | Workspace | `plan`, `apply`, `refresh` | Specifies the run type. Changing this annotation does not start a new run. Refer to [Run Modes and Options](/terraform/cloud-docs/workspaces/run/modes-and-options) for more information. Defaults to `"plan"`. |
| `workspace.app.terraform.io/run-terraform-version` | Workspace | Any valid Terraform version | Specifies the Terraform version to use. Changing this annotation does not start a new run. Only valid when the annotation `workspace.app.terraform.io/run-type` is set to `plan`. Defaults to the Workspace version. |
| `app.terraform.io/paused` | CRD[All] | `"true"`, `"false"` | Set this annotation to `"true"` to pause reconciliation for the custom resource. While paused, the operator skips reconciliation for the annotated resource, even if the custom resource changes. Deletion logic will still be executed. Example: `kubectl annotate workspace app.terraform.io/paused="true"`. |

## Labels

| Label key | Target resources | Possible values | Description |
| --- | --- | --- | --- |
| `agentpool.app.terraform.io/pool-name` | Pod[Agent] | Any valid AgentPool name | Associate the resource with a specific agent pool by specifying the name of the agent pool. |
| `agentpool.app.terraform.io/pool-id` | Pod[Agent] | Any valid AgentPool ID | Associate the resource with a specific agent pool by specifying the ID of the agent pool. |
# Migrate to HCP Terraform Operator for Kubernetes v2

To upgrade from version 1 of the HCP Terraform Operator for Kubernetes to version 2, there is a one-time process that you need to complete. This process upgrades the operator to the newest version and migrates your custom resources.

## Prerequisites

The migration process requires the following tools to be installed locally:

- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
- [Helm](https://helm.sh/docs/intro/install/)

## Prepare for the upgrade

Configure an environment variable named `RELEASE_NAMESPACE` with the value of the namespace that the Helm chart is installed in.

```shell-session
$ export RELEASE_NAMESPACE=
```

Next, create an environment variable named `RELEASE_NAME` with the value of the name that you gave your installation of the Helm chart.

```shell-session
$ export RELEASE_NAME=
```

Before you migrate to HCP Terraform Operator for Kubernetes v2, you must first update v1 of the operator to the latest version, including the custom resource definitions.

```shell-session
$ helm upgrade --namespace ${RELEASE_NAMESPACE} ${RELEASE_NAME} hashicorp/terraform
```

Next, back up the workspace resources.

```shell-session
$ kubectl get workspace --all-namespaces -o yaml > backup_tfc_operator_v1.yaml
```

## Manifest schema migration

Version 2 of the HCP Terraform Operator for Kubernetes renames and moves many existing fields. When you migrate, you must update your specification to match version 2's field names.

### Workspace controller

The table below lists the field mapping of the `Workspace` controller between v1 and v2 of the operator.

| Version 1 | Version 2 | Changes between versions |
| --- | --- | --- |
| `apiVersion: app.terraform.io/v1alpha1` | `apiVersion: app.terraform.io/v1alpha2` | The `apiVersion` is now `v1alpha2`. |
| `kind: Workspace` | `kind: Workspace` | None. |
| `metadata` | `metadata` | None. |
| `spec.organization` | `spec.organization` | None. |
| `spec.secretsMountPath` | `spec.token.secretKeyRef` | In v2 the operator keeps the HCP Terraform access token in a Kubernetes Secret. |
| `spec.vcs` | `spec.versionControl` | Renamed the `vcs` field to `versionControl`. |
| `spec.vcs.token_id` | `spec.versionControl.oAuthTokenID` | Renamed the `token_id` field to `oAuthTokenID`. |
| `spec.vcs.repo_identifier` | `spec.versionControl.repository` | Renamed the `repo_identifier` field to `repository`. |
| `spec.vcs.branch` | `spec.versionControl.branch` | None. |
| `spec.vcs.ingress_submodules` | `spec.workingDirectory` | Moved. |
| `spec.variables.[*]` | `spec.environmentVariables.[*]` OR `spec.terraformVariables.[*]` | We split variables into two possible places. In v1's CRD, if `spec.variables.environmentVariable` was `true`, migrate those variables to `spec.environmentVariables`. If `false`, migrate those variables to `spec.terraformVariables`. |
| `spec.variables.[*]key` | `spec.environmentVariables.[*]name` OR `spec.terraformVariables.[*]name` | Renamed the `key` field to `name`. [Learn more](#workspace-variables). |
| `spec.variables.[*]value` | `spec.environmentVariables.[*]value` OR `spec.terraformVariables.[*]value` | [Learn more](#workspace-variables). |
| `spec.variables.[*]valueFrom` | `spec.environmentVariables.[*]valueFrom` OR `spec.terraformVariables.[*]valueFrom` | [Learn more](#workspace-variables). |
| `spec.variables.[*]hcl` | `spec.environmentVariables.[*]hcl` OR `spec.terraformVariables.[*]hcl` | [Learn more](#workspace-variables). |
| `spec.variables.[*]sensitive` | `spec.environmentVariables.[*]sensitive` OR `spec.terraformVariables.[*]sensitive` | [Learn more](#workspace-variables). |
| `spec.variables.environmentVariable` | N/A | Removed. Variables are split between `spec.environmentVariables` and `spec.terraformVariables`. |
| `spec.runTriggers.[*]` | `spec.runTriggers.[*]` | None. |
| `spec.runTriggers.[*].sourceableName` | `spec.runTriggers.[*].name` | The `sourceableName` field is now `name`. |
| `spec.sshKeyID` | `spec.sshKey.id` | Moved `sshKeyID` to `spec.sshKey.id`. |
| `spec.outputs` | N/A | Removed. |
| `spec.terraformVersion` | `spec.terraformVersion` | None. |
| `spec.notifications.[*]` | `spec.notifications.[*]` | None. |
| `spec.notifications.[*].type` | `spec.notifications.[*].type` | None. |
| `spec.notifications.[*].enabled` | `spec.notifications.[*].enabled` | None. |
| `spec.notifications.[*].name` | `spec.notifications.[*].name` | None. |
| `spec.notifications.[*].url` | `spec.notifications.[*].url` | None. |
| `spec.notifications.[*].token` | `spec.notifications.[*].token` | None. |
| `spec.notifications.[*].triggers.[*]` | `spec.notifications.[*].triggers.[*]` | None. |
| `spec.notifications.[*].recipients.[*]` | `spec.notifications.[*].emailAddresses.[*]` | Renamed the `recipients` field to `emailAddresses`. |
| `spec.notifications.[*].users.[*]` | `spec.notifications.[*].emailUsers.[*]` | Renamed the `users` field to `emailUsers`. |
| `spec.omitNamespacePrefix` | N/A | Removed. In v1, `spec.omitNamespacePrefix` is a boolean field that affects how the operator generates a workspace name. In v2, you must explicitly set workspace names in `spec.name`. |
| `spec.agentPoolID` | `spec.agentPool.id` | Moved the `agentPoolID` field to `spec.agentPool.id`. |
| `spec.agentPoolName` | `spec.agentPool.name` | Moved the `agentPoolName` field to `spec.agentPool.name`. |
| `spec.module` | N/A | Removed. You now configure modules with a separate `Module` CRD. [Learn more](#module-controller). |

Below is an example of configuring a variable in v1 of the operator.

```yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
```

In v2 of the operator, you must configure Terraform variables in `spec.terraformVariables` and environment variables in `spec.environmentVariables`.

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
```

### Module controller

HCP Terraform Operator for Kubernetes v2 configures modules in a new `Module` controller separate from the `Workspace` controller.
Below is a template of a custom resource manifest:

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name:
spec:
  organization:
  token:
    secretKeyRef:
      name:
      key:
  name: operator
```

The table below describes the mapping between the `Workspace` controller from v1 and the `Module` controller in v2 of the operator.

| Version 1 (Workspace CRD) | Version 2 (Module CRD) | Notes |
| --- | --- | --- |
| `spec.module` | N/A | In v2 of the operator a `Module` is a separate controller with its own CRD. |
| N/A | `spec.name: operator` | In v1 of the operator, the name of the generated module is hardcoded to `operator`. In v2, the default name of the generated module is `this`, but you can rename it. |
| `spec.module.source` | `spec.module.source` | This supports all Terraform [module sources](/terraform/language/modules/sources). |
| `spec.module.version` | `spec.module.version` | Refer to [module sources](/terraform/language/modules/sources) for versioning information for each module source. |
| `spec.variables.[*]` | `spec.variables.[*].name` | You should include variable names in the module. This is a reference to variables in the workspace that is executing the module. |
| `spec.outputs.[*].key` | `spec.outputs.[*].name` | You should include output names in the module. This is a reference to the output variables produced by the module. |
| `status.workspaceID` OR `metadata.namespace-metadata.name` | `spec.workspace.id` OR `spec.workspace.name` | The workspace where the module is executed. The workspace must be in the same organization. |
Below is an example migration of a `Module` between v1 and v2 of the operator:

```yaml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
spec:
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  variables:
    - key: username
      value: "user"
      hcl: true
      sensitive: false
      environmentVariable: false
    - key: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
      environmentVariable: true
```

In v2 of the operator, separate controllers manage workspaces and modules.

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: migration
spec:
  terraformVariables:
    - name: username
      value: "user"
      hcl: true
      sensitive: false
  environmentVariables:
    - name: SECRET_KEY
      value: "s3cr3t"
      hcl: false
      sensitive: false
```

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: migration
spec:
  name: operator
  module:
    source: app.terraform.io/org-name/module-name/provider
    version: 0.0.42
  workspace:
    name: migration
```

## Upgrade the operator

Download Workspace CRD patch A:

```shell-session
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_a.yaml
```

View the changes that patch A applies to the workspace CRD.

```shell-session
$ kubectl diff --filename workspaces_patch_a.yaml
```

Patch the workspace CRD with patch A. This patch adds `app.terraform.io/v1alpha2` support, but excludes `.status.runStatus` because it has a different format in `app.terraform.io/v1alpha1` and causes JSON un-marshalling issues.

!> **Upgrade warning**: Once you apply a patch, Kubernetes converts existing `app.terraform.io/v1alpha1` custom resources to `app.terraform.io/v1alpha2` according to the updated schema, meaning that v1 of the operator can no longer serve custom resources. Before patching, update your existing custom resources to satisfy the v2 schema requirements. [Learn more](#manifest-schema-migration).

```shell-session
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_a.yaml
```

Install the operator v2 Helm chart with the `helm install` command. Be sure to set the `operator.watchedNamespaces` value to the list of namespaces your Workspace resources are deployed to. If you do not provide this value, the operator watches all namespaces in the Kubernetes cluster.

```shell-session
$ helm install \
  ${RELEASE_NAME} hashicorp/hcp-terraform-operator \
  --version 2.4.0 \
  --namespace ${RELEASE_NAMESPACE} \
  --set 'operator.watchedNamespaces={white,blue,red}' \
  --set controllers.agentPool.workers=5 \
  --set controllers.module.workers=5 \
  --set controllers.workspace.workers=5
```

Next, create a Kubernetes secret to store the HCP Terraform API token following the [Usage Guide](https://github.com/hashicorp/hcp-terraform-operator/blob/main/docs/usage.md#prerequisites).
The API token can be copied from the Kubernetes secret that you created for v1 of the operator. By default, this is named `terraformrc`. Use the `kubectl get secret` command to get the API token.

```shell-session
$ kubectl --namespace ${RELEASE_NAMESPACE} get secret terraformrc -o json | jq '.data.credentials' | tr -d '"' | base64 -d
```

Update existing custom resources [according to the schema migration guidance](#manifest-schema-migration) and apply your changes.

```shell-session
$ kubectl apply --filename 
```

Download Workspace CRD patch B.

```shell-session
$ curl -sO https://raw.githubusercontent.com/hashicorp/hcp-terraform-operator/main/docs/migration/crds/workspaces_patch_b.yaml
```

View the changes that patch B applies to the workspace CRD.

```shell-session
$ kubectl diff --filename workspaces_patch_b.yaml
```

Patch the workspace CRD with patch B. This patch adds `.status.runStatus` support, which was excluded in patch A.

```shell-session
$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_b.yaml
```

The v2 operator will fail to proceed if a custom resource has the v1 finalizer `finalizer.workspace.app.terraform.io`. If you encounter an error, check the logs for more information.

```shell-session
$ kubectl logs -f 
```

Specifically, look for an error message such as the following.

```
ERROR  Migration  {"workspace": "default/", "msg": "spec contains old finalizer finalizer.workspace.app.terraform.io"}
```

The `finalizer` exists to provide greater control over the migration process. Verify the custom resource, and when you're ready to migrate it, use the `kubectl patch` command to update the `finalizer` value.

```shell-session
$ kubectl patch workspace migration --type=merge --patch '{"metadata": {"finalizers": ["workspace.app.terraform.io/finalizer"]}}'
```

Review the operator logs once more and verify there are no error messages.
```shell-session
$ kubectl logs -f 
```

The operator reconciles resources during the next sync period. The `operator.syncPeriod` configuration of the operator sets this interval, which defaults to five minutes. If you have any migrated `Module` custom resources, apply them now.

```shell-session
$ kubectl apply --filename 
```

In v2 of the operator, `applyMethod` is set to `manual` by default. In this case, a new run in a managed workspace requires manual approval. Run the following command for each `Workspace` resource to change it to `auto` approval.

```shell-session
$ kubectl patch workspace --type=merge --patch '{"spec": {"applyMethod": "auto"}}'
```
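The variable split described in the schema migration tables is mechanical enough to script. The sketch below is an illustrative, unofficial helper (not part of the operator or any HashiCorp tooling) that converts a v1 `spec.variables` list into the v2 `terraformVariables`/`environmentVariables` layout, assuming the manifest has already been loaded into a Python dictionary.

```python
# Illustrative sketch: split a v1 Workspace spec's variables into the v2 layout.
# Covers only the variable mapping described in the migration table; it is not
# an official migration tool.

def migrate_variables(v1_spec: dict) -> dict:
    """Return a v2-style spec with variables split and `key` renamed to `name`."""
    v2_spec = {k: v for k, v in v1_spec.items() if k != "variables"}
    terraform_vars, env_vars = [], []
    for var in v1_spec.get("variables", []):
        migrated = {"name": var["key"]}  # v2 renames `key` to `name`
        for field in ("value", "valueFrom", "hcl", "sensitive"):
            if field in var:
                migrated[field] = var[field]
        # v1's environmentVariable flag decides the destination list in v2.
        if var.get("environmentVariable"):
            env_vars.append(migrated)
        else:
            terraform_vars.append(migrated)
    if terraform_vars:
        v2_spec["terraformVariables"] = terraform_vars
    if env_vars:
        v2_spec["environmentVariables"] = env_vars
    return v2_spec

v1 = {
    "organization": "kubernetes-operator",
    "variables": [
        {"key": "username", "value": "user", "hcl": True,
         "sensitive": False, "environmentVariable": False},
        {"key": "SECRET_KEY", "value": "s3cr3t", "hcl": False,
         "sensitive": False, "environmentVariable": True},
    ],
}
v2 = migrate_variables(v1)
```

A sketch like this handles only the variable fields; renames such as `vcs` to `versionControl` still need to follow the mapping tables above.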
# HCP Terraform Operator for Kubernetes overview

The [HCP Terraform Operator for Kubernetes](https://github.com/hashicorp/hcp-terraform-operator) allows you to manage HCP Terraform resources with Kubernetes custom resources. You can provision infrastructure internal or external to your Kubernetes cluster directly from the Kubernetes control plane. The operator's CustomResourceDefinitions (CRD) let you dynamically create HCP Terraform workspaces with Terraform modules, populate workspace variables, and provision infrastructure with Terraform runs.

@include 'eu/integrations.mdx'

## Key benefits

The HCP Terraform Operator for Kubernetes v2 offers several improvements over v1:

- **Flexible resource management**: The operator now features multiple custom resources, each with separate controllers for different HCP Terraform resources. This provides additional flexibility and the ability to manage more custom resources concurrently, significantly improving performance for large-scale deployments.
- **Namespace management**: The `--namespace` option allows you to tailor the operator's watch scope to specific namespaces, which enables more fine-grained resource management.
- **Configurable synchronization**: The `--sync-period` option allows you to configure the synchronization frequency between custom resources and HCP Terraform, ensuring timely updates and smoother operations.

## Supported HCP Terraform features

The HCP Terraform Operator for Kubernetes allows you to create agent pools, deploy modules, and manage workspaces through Kubernetes controllers. These controllers enable you to automate and manage HCP Terraform resources using custom resources in Kubernetes.

### Agent pools

Agent pools in HCP Terraform manage the execution environment for Terraform runs. The HCP Terraform Operator for Kubernetes allows you to create and manage agent pools as part of your Kubernetes infrastructure.
The following example creates a new agent pool with the name `agent-pool-development` and generates an agent token with the name `token-red`.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: AgentPool
metadata:
  name: my-agent-pool
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: agent-pool-development
  agentTokens:
    - name: token-red
```

The operator stores the `token-red` agent token in a Kubernetes secret named `my-agent-pool-token-red`.

You can also enable agent autoscaling by providing a `.spec.autoscaling` configuration in your `AgentPool` specification.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: AgentPool
metadata:
  name: this
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: agent-pool-development
  agentTokens:
    - name: token-red
  agentDeployment:
    replicas: 1
  autoscaling:
    targetWorkspaces:
      - name: us-west-development
      - id: ws-NUVHA9feCXzAmPHx
      - wildcardName: eu-development-*
    minReplicas: 1
    maxReplicas: 3
    cooldownPeriod:
      scaleUpSeconds: 30
      scaleDownSeconds: 30
```

In the above example, the operator ensures that at least one agent pod is continuously running and dynamically scales the number of pods up to a maximum of three based on the workload or resource demand. The operator monitors resource demand by observing the load of the designated workspaces specified by the `name`, `id`, or `wildcardName` patterns. When the workload decreases, the operator scales down the number of agent pods. The operator runs plan-only operations in parallel with other operations, and automatically scales the number of agent pods to meet the demand.

Refer to the [agent pool API reference](/terraform/cloud-docs/integrations/kubernetes/api-reference#agentpool) for the complete `AgentPool` specification.
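The secret naming shown above (the `token-red` token for the `my-agent-pool` resource lands in a secret named `my-agent-pool-token-red`) suggests a `<metadata.name>-<token name>` pattern. The helper below is a sketch that predicts which secrets to look up for a given `AgentPool` manifest; the naming pattern is inferred from this single documented example, so treat it as an assumption rather than a guaranteed contract.

```python
# Sketch: predict the Kubernetes secret names that hold agent tokens for an
# AgentPool custom resource. The "<metadata.name>-<token name>" pattern is an
# assumption inferred from the doc's my-agent-pool / token-red example.

def agent_token_secret_names(agent_pool: dict) -> list:
    """Return the expected secret name for each token in spec.agentTokens."""
    pool_name = agent_pool["metadata"]["name"]
    tokens = agent_pool.get("spec", {}).get("agentTokens", [])
    return [f"{pool_name}-{token['name']}" for token in tokens]

pool = {
    "metadata": {"name": "my-agent-pool"},
    "spec": {
        "name": "agent-pool-development",
        "agentTokens": [{"name": "token-red"}],
    },
}
secrets = agent_token_secret_names(pool)
```

You could feed the resulting names to `kubectl get secret` to retrieve the generated tokens.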
### Module

The `Module` controller enforces an [API-driven Run workflow](/terraform/cloud-docs/workspaces/run/api) and lets you deploy Terraform modules within workspaces. The following example deploys version `1.0.0` of the `hashicorp/module/random` module in the `workspace-name` workspace.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: Module
metadata:
  name: my-module
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  module:
    source: hashicorp/module/random
    version: 1.0.0
  workspace:
    name: workspace-name
  variables:
    - name: string_length
  outputs:
    - name: random_string
```

The operator passes the workspace's `string_length` variable to the module and stores the `random_string` output as either a Kubernetes secret or a ConfigMap. If the workspace marks the output as `sensitive`, the operator stores `random_string` as a Kubernetes secret; otherwise, the operator stores it as a ConfigMap. The variables must be accessible within the workspace as a workspace variable, workspace variable set, or project variable set.

Refer to the [module API reference](/terraform/cloud-docs/integrations/kubernetes/api-reference#module) for the complete `Module` specification.

### Project

Projects let you organize your workspaces and scope access to workspace resources. The `Project` controller allows you to create, configure, and manage [projects](/terraform/tutorials/cloud/projects) directly from Kubernetes. The following example creates a new project named `testing`.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: Project
metadata:
  name: testing
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: project-demo
```

The `Project` controller allows you to manage team access [permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project). The following example creates a project named `testing` and grants the `qa` team admin access to the project.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: Project
metadata:
  name: testing
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: project-demo
  teamAccess:
    - team:
        name: qa
      access: admin
```

Refer to the [project API reference](/terraform/cloud-docs/integrations/kubernetes/api-reference#project) for the complete `Project` specification.

### Workspace

HCP Terraform workspaces organize and manage Terraform configurations. The HCP Terraform Operator for Kubernetes allows you to create, configure, and manage workspaces directly from Kubernetes. The following example creates a new workspace named `us-west-development`, configured to use Terraform version `1.6.2`.
This workspace has two variables, `nodes` and `rds-secret`. The variable `rds-secret` is treated as sensitive, and the operator reads the value for the variable from a Kubernetes secret named `us-west-development-secrets`.

```yaml
---
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: us-west-development
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: us-west-development
  description: US West development workspace
  terraformVersion: 1.6.2
  applyMethod: auto
  agentPool:
    name: ap-us-west-development
  terraformVariables:
    - name: nodes
      value: 2
    - name: rds-secret
      sensitive: true
      valueFrom:
        secretKeyRef:
          name: us-west-development-secrets
          key: rds-secret
  runTasks:
    - name: rt-us-west-development
      stage: pre_plan
```

In the above example, the `applyMethod` has the value of `auto`, so HCP Terraform automatically applies any changes to this workspace. The specification also configures the workspace to use the `ap-us-west-development` agent pool and run the `rt-us-west-development` run task at the `pre_plan` stage.

The operator stores the values of the workspace outputs as Kubernetes secrets or ConfigMaps. If the outputs are marked as `sensitive`, they are stored as Kubernetes secrets; otherwise, they are stored as ConfigMaps.

-> **Note**: The operator rolls back any external modifications made to the workspace to match the state specified in the custom resource definition.

Refer to the [workspace API reference](/terraform/cloud-docs/integrations/kubernetes/api-reference#workspace) for the complete `Workspace` specification.
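The storage rule for workspace and module outputs described above (sensitive outputs become Kubernetes secrets, everything else becomes a ConfigMap) can be expressed as a small routing sketch. This is only an illustration of the documented rule, not operator code, and the output records in the example are hypothetical.

```python
# Sketch of the documented output-storage rule: sensitive outputs are stored
# as Kubernetes Secrets, non-sensitive outputs as ConfigMaps. Output records
# here are hypothetical illustrations.

def output_store_kind(output: dict) -> str:
    """Return the Kubernetes resource kind the operator would use for an output."""
    return "Secret" if output.get("sensitive") else "ConfigMap"

outputs = [
    {"name": "rds_password", "sensitive": True},
    {"name": "instance_count", "sensitive": False},
]
kinds = {o["name"]: output_store_kind(o) for o in outputs}
```

Knowing which kind to expect is handy when scripting reads of output values with `kubectl get secret` versus `kubectl get configmap`.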
# HCP Terraform Operator for Kubernetes API reference

## Packages

- [app.terraform.io/v1alpha2](#appterraformiov1alpha2)

## app.terraform.io/v1alpha2

Package v1alpha2 contains API Schema definitions for the app v1alpha2 API group.

### Resource Types

- [AgentPool](#agentpool)
- [AgentToken](#agenttoken)
- [Module](#module)
- [Project](#project)
- [RunsCollector](#runscollector)
- [Workspace](#workspace)

#### AgentAPIToken

Agent Token is a secret token that an HCP Terraform Agent uses to connect to the HCP Terraform Agent Pool. More information:

- [HCP Terraform agents](/terraform/cloud-docs/agents)

_Appears in:_
- [AgentPoolSpec](#agentpoolspec)
- [AgentPoolStatus](#agentpoolstatus)
- [AgentTokenSpec](#agenttokenspec)
- [AgentTokenStatus](#agenttokenstatus)

| Field | Description |
| --- | --- |
| `name` _string_ | Agent Token name. |
| `id` _string_ | Agent Token ID. |
| `createdAt` _integer_ | Timestamp of when the agent token was created. |
| `lastUsedAt` _integer_ | Timestamp of when the agent token was last used. |

#### AgentDeployment

_Appears in:_
- [AgentPoolSpec](#agentpoolspec)

| Field | Description |
| --- | --- |
| `replicas` _integer_ | |
| `spec` _[PodSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podspec-v1-core)_ | |
| `annotations` _object (keys:string, values:string)_ | The annotations that the operator will apply to the pod template in the deployment. |
| `labels` _object (keys:string, values:string)_ | The labels that the operator will apply to the pod template in the deployment. |

#### AgentDeploymentAutoscaling

AgentDeploymentAutoscaling allows you to configure the operator to scale the deployment for an AgentPool up and down to meet demand.

_Appears in:_
- [AgentPoolSpec](#agentpoolspec)

| Field | Description |
| --- | --- |
| `maxReplicas` _integer_ | MaxReplicas is the maximum number of replicas for the Agent deployment. |
| `minReplicas` _integer_ | MinReplicas is the minimum number of replicas for the Agent deployment. |
| `targetWorkspaces` _[TargetWorkspace](#targetworkspace)_ | DEPRECATED: This field has been deprecated since 2.9.0 and will be removed in future versions. TargetWorkspaces is a list of HCP Terraform Workspaces which the agent pool should scale up to meet demand. When this field is omitted, the autoscaler will target all workspaces that are associated with the AgentPool. |
| `cooldownPeriodSeconds` _integer_ | CooldownPeriodSeconds is the time to wait between scaling events. Defaults to 300. |
| `cooldownPeriod` _[AgentDeploymentAutoscalingCooldownPeriod](#agentdeploymentautoscalingcooldownperiod)_ | CooldownPeriod configures the period to wait between scaling up and scaling down. |

#### AgentDeploymentAutoscalingCooldownPeriod

AgentDeploymentAutoscalingCooldownPeriod configures the period to wait between scaling up and scaling down.

_Appears in:_
- [AgentDeploymentAutoscaling](#agentdeploymentautoscaling)

| Field | Description |
| --- | --- |
| `scaleUpSeconds` _integer_ | ScaleUpSeconds is the time to wait before scaling up. |
| `scaleDownSeconds` _integer_ | ScaleDownSeconds is the time to wait before scaling down. |

#### AgentDeploymentAutoscalingStatus

_Appears in:_
- [AgentPoolStatus](#agentpoolstatus)

| Field | Description |
| --- | --- |
| `desiredReplicas` _integer_ | Desired number of agent replicas. |
| `lastScalingEvent` _[Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#time-v1-meta)_ | Last time the agent pool was scaled. |

#### AgentPool

AgentPool manages HCP Terraform Agent Pools, HCP Terraform Agent Tokens and can perform HCP Terraform Agent scaling.
More information:

- [Manage agent pools](/terraform/cloud-docs/agents/agent-pools)
- [Agent API tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#agent-api-tokens)
- [HCP Terraform agents](/terraform/cloud-docs/agents)

| Field | Description |
| --- | --- |
| `apiVersion` _string_ | `app.terraform.io/v1alpha2` |
| `kind` _string_ | `AgentPool` |
| `kind` _string_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). |
| `apiVersion` _string_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |
| `spec` _[AgentPoolSpec](#agentpoolspec)_ | |

#### AgentPoolDeletionPolicy

_Underlying type:_ _string_

DeletionPolicy defines the strategy the Kubernetes operator uses when you delete a resource, either manually or by a system event. You must use one of the following values:

- `retain`: When you delete the custom resource, the operator does not delete the agent pool.
- `destroy`: The operator will attempt to remove the managed HCP Terraform agent pool.

_Appears in:_
- [AgentPoolSpec](#agentpoolspec)

#### AgentPoolRef

AgentPool allows HCP Terraform to communicate with isolated, private, or on-premises infrastructure. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information:

- [HCP Terraform agents](/terraform/cloud-docs/agents)

_Appears in:_
- [AgentTokenSpec](#agenttokenspec)
- [AgentTokenStatus](#agenttokenstatus)
- [RunsCollectorSpec](#runscollectorspec)
- [RunsCollectorStatus](#runscollectorstatus)
- [WorkspaceSpec](#workspacespec)

| Field | Description |
| --- | --- |
| `id` _string_ | Agent Pool ID. Must match pattern: `^apool-[a-zA-Z0-9]+$` |
| `name` _string_ | Agent Pool name. |

#### AgentPoolSpec

AgentPoolSpec defines the desired state of AgentPool.

_Appears in:_
- [AgentPool](#agentpool)

| Field | Description |
| --- | --- |
| `name` _string_ | Agent Pool name. [More information](/terraform/cloud-docs/agents/agent-pools). |
| `organization` _string_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations). |
| `token` _[Token](#token)_ | API Token to be used for API calls. |
| `agentTokens` _[AgentAPIToken](#agentapitoken) array_ | List of the agent tokens to generate. |
| `agentDeployment` _[AgentDeployment](#agentdeployment)_ | Agent deployment settings. |
| `autoscaling` _[AgentDeploymentAutoscaling](#agentdeploymentautoscaling)_ | Agent deployment autoscaling settings. |
| `deletionPolicy` _[AgentPoolDeletionPolicy](#agentpooldeletionpolicy)_ | The Deletion Policy specifies the behavior of the custom resource and its associated agent pool when the custom resource is deleted. - `retain`: When you delete the custom resource, the operator will remove only the custom resource. The HCP Terraform agent pool will be retained. The managed tokens will remain active on the HCP Terraform side; however, the corresponding secrets and managed agents will be removed. - `destroy`: The operator will attempt to remove the managed HCP Terraform agent pool. On success, the managed agents and the corresponding secret with tokens will be removed along with the custom resource. On failure, the managed agents will be scaled down to 0, and the managed tokens, along with the corresponding secret, will be removed. The operator will continue attempting to remove the agent pool until it succeeds. Default: `retain`. |

#### AgentToken

AgentToken manages HCP Terraform Agent Tokens. More information:

- [HCP Terraform agents](/terraform/cloud-docs/users-teams-organizations/api-tokens#agent-api-tokens)

| Field | Description |
| --- | --- |
| `apiVersion` _string_ | `app.terraform.io/v1alpha2` |
| `kind` _string_ | `AgentToken` |
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). | | `apiVersion` \_string\_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). | | `metadata` \_[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)\_ | Refer to Kubernetes API documentation for fields of `metadata`. | | `spec` \_[AgentTokenSpec](#agenttokenspec)\_ | | #### AgentTokenDeletionPolicy \_Underlying type:\_ \_string\_ The
Deletion Policy defines how managed tokens and Kubernetes Secrets should be handled when the custom resource is deleted. - `retain`: When the custom resource is deleted, the operator will remove only the resource itself. The managed HCP Terraform Agent tokens will remain active on the HCP Terraform side, and the corresponding Kubernetes Secret will not be modified. - `destroy`: The operator will attempt to delete the managed HCP Terraform Agent tokens and remove the corresponding Kubernetes Secret. \_Appears in:\_ - [AgentTokenSpec](#agenttokenspec) #### AgentTokenManagementPolicy \_Underlying type:\_ \_string\_ The Management Policy defines how the controller will manage tokens in the specified Agent Pool. - `merge`: The controller will manage its tokens alongside any existing tokens in the pool, without modifying or deleting tokens it does not own. - `owner`: The controller assumes full ownership of all agent tokens in the pool, managing and potentially modifying or deleting all tokens, including those not created by it. \_Appears in:\_ - [AgentTokenSpec](#agenttokenspec) #### AgentTokenSpec AgentTokenSpec defines the desired state of AgentToken. \_Appears in:\_ - [AgentToken](#agenttoken) | Field | Description | | --- | --- | | `organization` \_string\_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations).
| | `token` \_[Token](#token)\_ | API Token to be used for API calls. | | `deletionPolicy` \_[AgentTokenDeletionPolicy](#agenttokendeletionpolicy)\_ | The Deletion Policy defines how managed tokens and Kubernetes Secrets should be handled when the custom resource is deleted. - `retain`: When the custom resource is deleted, the operator will remove only the resource itself. The managed HCP Terraform Agent tokens will remain active on the HCP Terraform side, and the corresponding Kubernetes Secret will not be modified. - `destroy`: The operator will attempt to delete the managed HCP Terraform Agent tokens and remove the corresponding Kubernetes Secret. Default: `retain`. | | `agentPool` \_[AgentPoolRef](#agentpoolref)\_ | The Agent Pool name or ID where the tokens will be managed. | | `managementPolicy` \_[AgentTokenManagementPolicy](#agenttokenmanagementpolicy)\_ | The Management Policy defines how the controller will manage tokens in the specified Agent Pool. - `merge` — the controller will manage its tokens alongside any existing tokens in the pool, without modifying or deleting tokens it does not own. - `owner` — the controller assumes full ownership of all agent tokens in the pool, managing and potentially modifying or deleting all tokens, including those not created by it. Default: `merge`. | | `agentTokens` \_[AgentAPIToken](#agentapitoken) array\_ | List of the HCP Terraform Agent tokens to manage. | | `secretName` \_string\_ | secretName specifies the name of the Kubernetes Secret where the HCP Terraform Agent tokens are stored. | #### ConfigurationVersionStatus A configuration version is a resource used to reference the uploaded configuration files. 
More information: - [Configuration versions API reference](/terraform/cloud-docs/api-docs/configuration-versions) - [The API-driven run workflow](/terraform/cloud-docs/workspaces/run/api) \_Appears in:\_ - [ModuleStatus](#modulestatus) | Field | Description | | --- | --- | | `id` \_string\_ | Configuration Version ID. | #### ConsumerWorkspace ConsumerWorkspace allows access to the state for specific workspaces within the same organization. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information: - [Remote state access controls](/terraform/cloud-docs/workspaces/state#remote-state-access-controls) \_Appears in:\_ - [RemoteStateSharing](#remotestatesharing) | Field | Description | | --- | --- | | `id` \_string\_ | Consumer Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` | | `name` \_string\_ | Consumer Workspace name. | #### CustomPermissions
Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. More information: - [Custom workspace permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) \_Appears in:\_ - [TeamAccess](#teamaccess) | Field | Description | | --- | --- | | `runs` \_string\_ | Run access. Must be one of the following values: `apply`, `plan`, `read`. Default: `read`. | | `runTasks` \_boolean\_ | Manage Workspace Run Tasks. Default: `false`. | | `sentinel` \_string\_ | Download Sentinel mocks. Must be one of the following values: `none`, `read`. Default: `none`. | | `stateVersions` \_string\_ | State access. Must be one of the following values: `none`, `read`, `read-outputs`, `write`. Default: `none`. | | `variables` \_string\_ | Variable access. Must be one of the following values: `none`, `read`, `write`. Default: `none`. | | `workspaceLocking` \_boolean\_ | Lock/unlock workspace. Default: `false`. | #### CustomProjectPermissions Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. More information: - [Custom project permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project) - [General workspace permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) \_Appears in:\_ - [ProjectTeamAccess](#projectteamaccess) | Field | Description | | --- | --- | | `projectAccess` \_[ProjectSettingsPermissionType](#projectsettingspermissiontype)\_ | Project access.
Must be one of the following values: `delete`, `read`, `update`. Default: `read`. | | `teamManagement` \_[ProjectTeamsPermissionType](#projectteamspermissiontype)\_ | Team management. Must be one of the following values: `manage`, `none`, `read`. Default: `none`. | | `createWorkspace` \_boolean\_ | Allows users to create workspaces in the project. This grants read access to all workspaces in the project. Default: `false`. | | `deleteWorkspace` \_boolean\_ | Allows users to delete workspaces in the project. Default: `false`. | | `moveWorkspace` \_boolean\_ | Allows users to move workspaces out of the project. A user must have this permission on both the source and destination project to successfully move a workspace from one project to another. Default: `false`. | | `lockWorkspace` \_boolean\_ | Allows users to manually lock the workspace to temporarily prevent runs. When a workspace's execution mode is set to "local", users must have this permission to perform local CLI runs using the workspace's state. Default: `false`. | | `runs` \_[WorkspaceRunsPermissionType](#workspacerunspermissiontype)\_ | Run access. Must be one of the following values: `apply`, `plan`, `read`. Default: `read`. | | `runTasks` \_boolean\_ | Manage Workspace Run Tasks. Default: `false`. | | `sentinelMocks` \_[WorkspaceSentinelMocksPermissionType](#workspacesentinelmockspermissiontype)\_ | Download Sentinel mocks. Must be one of the following values: `none`, `read`. Default: `none`. | | `stateVersions` \_[WorkspaceStateVersionsPermissionType](#workspacestateversionspermissiontype)\_ | State access. Must be one of the following values: `none`, `read`, `read-outputs`, `write`. Default: `none`. | | `variables` \_[WorkspaceVariablesPermissionType](#workspacevariablespermissiontype)\_ | Variable access. Must be one of the following values: `none`, `read`, `write`. Default: `none`.
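As an illustration, the custom project permissions described above combine in the `custom` block of a project team-access entry. The following is a hypothetical fragment, not a complete manifest; the team name `qa-team` is a placeholder, and every value is chosen from the allowed values listed in the table:

```yaml
# Hypothetical fragment of a Project spec's teamAccess list.
# All names and values are placeholders drawn from the tables above.
teamAccess:
  - team:
      name: qa-team            # placeholder team name
    access: custom             # enables the custom permissions block
    custom:
      projectAccess: read      # delete | read | update
      teamManagement: none     # manage | none | read
      createWorkspace: true    # also grants read access to all project workspaces
      deleteWorkspace: false
      moveWorkspace: false
      lockWorkspace: true      # needed for local CLI runs in "local" execution mode
      runs: plan               # apply | plan | read
      runTasks: false
      sentinelMocks: none      # none | read
      stateVersions: read      # none | read | read-outputs | write
      variables: read          # none | read | write
```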
| #### DeletionPolicy \_Underlying type:\_ \_string\_ DeletionPolicy defines the strategy the Kubernetes operator uses when you delete a resource, either manually or by a system event. You must use one of the following values: - `retain`: When you delete the custom resource, the operator does not delete the workspace. - `soft`: Attempts to delete the associated workspace only if it does not contain any managed resources. - `destroy`: Executes a destroy operation to remove all resources managed by the associated workspace. Once the destruction of these resources is successful, the operator deletes the workspace, and then deletes the custom resource. - `force`: Forcefully and immediately deletes the workspace and the custom resource. \_Appears in:\_ - [WorkspaceSpec](#workspacespec) #### Module Module implements API-driven Run Workflows. More information: -
[The API-driven run workflow](/terraform/cloud-docs/workspaces/run/api) | Field | Description | | --- | --- | | `apiVersion` \_string\_ | `app.terraform.io/v1alpha2` | `kind` \_string\_ | `Module` | `kind` \_string\_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). | | `apiVersion` \_string\_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). | | `metadata` \_[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)\_ | Refer to Kubernetes API documentation for fields of `metadata`. | | `spec` \_[ModuleSpec](#modulespec)\_ | | #### ModuleDeletionPolicy \_Underlying type:\_ \_string\_ Deletion Policy defines the strategies for resource deletion in the Kubernetes operator. It controls how the operator should handle the deletion of resources when triggered by a user action or system event. You must use one of the following values: - `retain`: When the custom resource is deleted, the associated module is retained.
`destroyOnDeletion` must be set to false. This is the default value. - `destroy`: Executes a destroy operation. Removes all resources and the module. \_Appears in:\_ - [ModuleSpec](#modulespec) #### ModuleOutput Module outputs to store in ConfigMap (non-sensitive) or Secret (sensitive). \_Appears in:\_ - [ModuleSpec](#modulespec) | Field | Description | | --- | --- | | `name` \_string\_ | Output name must match the module output. | | `sensitive` \_boolean\_ | Specify whether or not the output is sensitive. Default: `false`. | #### ModuleSource Module source and version to execute. \_Appears in:\_ - [ModuleSpec](#modulespec) | Field | Description | | --- | --- | | `source` \_string\_ | Non-local Terraform module source. [More information](/terraform/language/modules/sources). | | `version` \_string\_ | Terraform module version. | #### ModuleSpec ModuleSpec defines the desired state of Module. \_Appears in:\_ - [Module](#module) | Field | Description | | --- | --- | | `organization` \_string\_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations). | | `token` \_[Token](#token)\_ | API Token to be used for API calls. | | `module` \_[ModuleSource](#modulesource)\_ | Module source and version to execute. | | `workspace` \_[ModuleWorkspace](#moduleworkspace)\_ | Workspace to execute the module. | | `name` \_string\_ | Name of the module that will be uploaded and executed. Default: `this`. | | `variables` \_[ModuleVariable](#modulevariable) array\_ | Variables to pass to the module; they must exist in the Workspace. | | `outputs` \_[ModuleOutput](#moduleoutput) array\_ | Module outputs to store in ConfigMap (non-sensitive) or Secret (sensitive). | | `destroyOnDeletion` \_boolean\_ | DEPRECATED: Specify whether or not to execute a Destroy run when the object is deleted from Kubernetes. Default: `false`.
| | `restartedAt` \_string\_ | Allows executing a new Run without changing any Workspace or Module attributes. Example: ```kubectl patch KIND NAME --type=merge --patch '{"spec": {"restartedAt": "'\`date -u -Iseconds\`'"}}'``` | | `deletionPolicy` \_[ModuleDeletionPolicy](#moduledeletionpolicy)\_ | Deletion Policy defines the strategies for resource deletion in the Kubernetes operator. It controls how the operator should handle the deletion of resources when triggered by a user action or system event. You must use one of the following values: - `retain`: When the custom resource is deleted, the associated module is retained. `destroyOnDeletion` must be set to false. - `destroy`: Executes a destroy operation. Removes all
resources and the module. Default: `retain`. | #### ModuleVariable Variables to pass to the module. \_Appears in:\_ - [ModuleSpec](#modulespec) | Field | Description | | --- | --- | | `name` \_string\_ | Variable name must exist in the Workspace. | #### ModuleWorkspace Workspace to execute the module. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. \_Appears in:\_ - [ModuleSpec](#modulespec) | Field | Description | | --- | --- | | `id` \_string\_ | Module Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` | | `name` \_string\_ | Module Workspace Name. | #### Notification Notifications allow you to send messages to other applications based on run and workspace events. More information: - [Workspace notifications](/terraform/cloud-docs/workspaces/settings/notifications) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `name` \_string\_ | Notification name. | | `type` \_[NotificationDestinationType](#notificationdestinationtype)\_ | The type of the notification. Must be one of the following values: `email`, `generic`, `microsoft-teams`, `slack`. | | `enabled` \_boolean\_ | Whether the notification configuration should be enabled or not. Default: `true`. | | `token` \_string\_ | The token of the notification. | | `triggers` \_[NotificationTrigger](#notificationtrigger) array\_ | The list of run events that will trigger notifications. Trigger represents the different TFC notifications that can be sent as a run's progress transitions between different states.
There are two categories of triggers: - Health Events: `assessment:check\_failure`, `assessment:drifted`, `assessment:failed`. - Run Events: `run:applying`, `run:completed`, `run:created`, `run:errored`, `run:needs\_attention`, `run:planning`. | | `url` \_string\_ | The URL of the notification. Must match pattern: `^https?://.\*` | | `emailAddresses` \_string array\_ | The list of email addresses that will receive notification emails. It is only available for Terraform Enterprise users. It is not available in HCP Terraform. | | `emailUsers` \_string array\_ | The list of users belonging to the organization that will receive notification emails. | #### NotificationTrigger \_Underlying type:\_ \_string\_ NotificationTrigger represents the different TFC notifications that can be sent as a run's progress transitions between different states. This must be aligned with go-tfe type `NotificationTriggerType`. Must be one of the following values: `run:applying`, `assessment:check\_failure`, `run:completed`, `run:created`, `assessment:drifted`, `run:errored`, `assessment:failed`, `run:needs\_attention`, `run:planning`. \_Appears in:\_ - [Notification](#notification) #### OutputStatus Outputs status. \_Appears in:\_ - [ModuleStatus](#modulestatus) | Field | Description | | --- | --- | | `runID` \_string\_ | Run ID of the latest run that updated the outputs. | #### PlanStatus \_Appears in:\_ - [WorkspaceStatus](#workspacestatus) | Field | Description | | --- | --- | | `id` \_string\_ | Latest plan-only/speculative plan HCP Terraform run ID. | | `terraformVersion` \_string\_ | The version of Terraform to use for this run. | #### Project Project manages HCP Terraform Projects. More information: - [Manage projects](/terraform/cloud-docs/projects/manage) | Field | Description | | --- | --- | | `apiVersion` \_string\_ | `app.terraform.io/v1alpha2` | `kind` \_string\_ | `Project` | `kind` \_string\_ | Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). | | `apiVersion` \_string\_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). | | `metadata` \_[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)\_ | Refer to Kubernetes API documentation for fields of `metadata`. | | `spec` \_[ProjectSpec](#projectspec)\_ | | #### ProjectDeletionPolicy \_Underlying type:\_ \_string\_ DeletionPolicy defines the strategy the Kubernetes operator uses when you delete a project, either manually or by a system event. You must use one of the following values: - `retain`: When the custom resource is deleted, the operator will not delete the associated project. - `soft`: Attempts to remove the project. The project must be empty. \_Appears in:\_ - [ProjectSpec](#projectspec) #### ProjectSpec ProjectSpec defines the desired state of Project. More information: - [Manage projects](/terraform/cloud-docs/projects/manage) \_Appears in:\_ - [Project](#project) | Field | Description | | --- | --- | | `organization` \_string\_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations). | | `token` \_[Token](#token)\_ | API Token to be used for API calls. | | `name` \_string\_ | Name of the Project. | | `teamAccess` \_[ProjectTeamAccess](#projectteamaccess) array\_ | HCP Terraform's access model is team-based.
In order to perform an action within an HCP Terraform organization, users must belong to a team that has been granted the appropriate permissions. You can assign project-specific permissions to teams. More information: [Permissions](/terraform/enterprise/users-teams-organizations/permissions). | | `deletionPolicy` \_[ProjectDeletionPolicy](#projectdeletionpolicy)\_ | DeletionPolicy defines the strategy the Kubernetes operator uses when you delete a project, either manually or by a system event. You must use one of the following values: - `retain`: When the custom resource is deleted, the operator will not delete the associated project. - `soft`: Attempts to remove the project. The project must be empty. Default: `retain`. | #### ProjectTeamAccess HCP Terraform's access model is team-based. In order to perform an action within an HCP Terraform organization, users must belong to a team that has been granted the appropriate permissions. You can assign project-specific permissions to teams. More information: - [Permissions](/terraform/enterprise/users-teams-organizations/permissions) \_Appears in:\_ - [ProjectSpec](#projectspec) | Field | Description | | --- | --- | | `team` \_[Team](#team)\_ | Team to grant access. [More information](/terraform/cloud-docs/users-teams-organizations/teams). | | `access` \_[TeamProjectAccessType](#teamprojectaccesstype)\_ | There are two ways to choose which permissions a given team has on a project: fixed permission sets, and custom permissions. Must be one of the following values: `admin`, `custom`, `maintain`, `read`, `write`. More information: [Project permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project). | | `custom` \_[CustomProjectPermissions](#customprojectpermissions)\_ | Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. [More information](/terraform/cloud-docs/users-teams-organizations/permissions/project).
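Putting the ProjectSpec and ProjectTeamAccess fields together, a minimal `Project` manifest could look like the sketch below. The organization, Secret, and team names are placeholders, and the `token` shape assumes the operator's Token type is a standard Kubernetes Secret key reference:

```yaml
# Sketch of a Project manifest; all names are placeholders.
apiVersion: app.terraform.io/v1alpha2
kind: Project
metadata:
  name: example-project
spec:
  organization: my-org          # placeholder organization name
  token:
    secretKeyRef:               # assumed Secret key reference shape for the Token type
      name: tfc-operator        # placeholder Secret name
      key: token
  name: example-project
  teamAccess:
    - team:
        name: developers        # placeholder team name
      access: maintain          # admin | custom | maintain | read | write
  deletionPolicy: retain        # retain | soft
```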
| #### RemoteStateSharing RemoteStateSharing allows remote state access between workspaces. By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. More information: - [Accessing state from other workspaces](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `allWorkspaces` \_boolean\_ | Allow access to the state for all workspaces within the same organization. Default: `false`. | | `workspaces` \_[ConsumerWorkspace](#consumerworkspace) array\_ | Allow access to the state for specific workspaces within the same organization. | #### RunStatus \_Appears in:\_ - [ModuleStatus](#modulestatus) - [WorkspaceStatus](#workspacestatus) | Field | Description | | --- | --- | | `id` \_string\_ | Current (both active and finished) HCP Terraform run ID. | | `configurationVersion` \_string\_ | The configuration version of this run. | | `outputRunID` \_string\_ | Run ID of the latest run that could update the outputs. | #### RunTrigger RunTrigger allows you
to connect this workspace to one or more source workspaces. These connections allow runs to queue automatically in this workspace on successful apply of runs in any of the source workspaces. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information: - [Run triggers](/terraform/cloud-docs/workspaces/settings/run-triggers) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `id` \_string\_ | Source Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` | | `name` \_string\_ | Source Workspace Name. | #### RunsCollector RunsCollector scrapes HCP Terraform Run statuses from a given Agent Pool and exposes them as Prometheus-compatible metrics. More information: - [Remote Operations](/terraform/cloud-docs/run/remote-operations) | Field | Description | | --- | --- | | `apiVersion` \_string\_ | `app.terraform.io/v1alpha2` | `kind` \_string\_ | `RunsCollector` | `kind` \_string\_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). | | `apiVersion` \_string\_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values.
[More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). | | `metadata` \_[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)\_ | Refer to Kubernetes API documentation for fields of `metadata`. | | `spec` \_[RunsCollectorSpec](#runscollectorspec)\_ | | #### RunsCollectorSpec \_Appears in:\_ - [RunsCollector](#runscollector) | Field | Description | | --- | --- | | `organization` \_string\_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations). | | `token` \_[Token](#token)\_ | API Token to be used for API calls. | | `agentPool` \_[AgentPoolRef](#agentpoolref)\_ | The Agent Pool name or ID from which the controller will collect runs. [More information](/terraform/cloud-docs/run/states). | #### SSHKey SSH key used to clone Terraform modules. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information: - [Use SSH Keys for cloning modules](/terraform/cloud-docs/workspaces/settings/ssh-keys) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `id` \_string\_ | SSH key ID. Must match pattern: `^sshkey-[a-zA-Z0-9]+$` | | `name` \_string\_ | SSH key name. | #### Tag \_Underlying type:\_ \_string\_ Tags allow you to correlate, organize, and even filter workspaces based on the assigned tags. Tags must be one or more characters; can include letters, numbers, colons, hyphens, and underscores; and must begin and end with a letter or number. Must match pattern: `^[A-Za-z0-9][A-Za-z0-9:\_-]\*$` \_Appears in:\_ - [WorkspaceSpec](#workspacespec) #### TargetWorkspace TargetWorkspace is the name or ID of the workspace you want to autoscale against.
\_Appears in:\_ - [AgentDeploymentAutoscaling](#agentdeploymentautoscaling) | Field | Description | | --- | --- | | `id` \_string\_ | Workspace ID | | `name` \_string\_ | Workspace Name | | `wildcardName` \_string\_ | Wildcard Name to match workspace names using `\*` on name suffix, prefix, or both. | #### Team
Teams are groups of HCP Terraform users within an organization. If a user belongs to at least one team in an organization, they are considered a member of that organization. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information: - [Teams overview](/terraform/cloud-docs/users-teams-organizations/teams) \_Appears in:\_ - [ProjectTeamAccess](#projectteamaccess) - [TeamAccess](#teamaccess) | Field | Description | | --- | --- | | `id` \_string\_ | Team ID. Must match pattern: `^team-[a-zA-Z0-9]+$` | | `name` \_string\_ | Team name. | #### TeamAccess HCP Terraform workspaces can only be accessed by users with the correct permissions. You can manage permissions for a workspace on a per-team basis. When a workspace is created, only the owners team and teams with the "manage workspaces" permission can access it, with full admin permissions. These teams' access can't be removed from a workspace. More information: - [Manage access to workspaces](/terraform/cloud-docs/workspaces/settings/access) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `team` \_[Team](#team)\_ | Team to grant access. [More information](/terraform/cloud-docs/users-teams-organizations/teams). | | `access` \_string\_ | There are two ways to choose which permissions a given team has on a workspace: fixed permission sets, and custom permissions. Must be one of the following values: `admin`, `custom`, `plan`, `read`, `write`. [More information](/terraform/cloud-docs/users-teams-organizations/permissions/workspace). | | `custom` \_[CustomPermissions](#custompermissions)\_ | Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. [More information](/terraform/cloud-docs/users-teams-organizations/permissions/workspace). 
| #### Token Token refers to a Kubernetes Secret object within the same namespace as the Workspace object \_Appears in:\_ - [AgentPoolSpec](#agentpoolspec) - [AgentTokenSpec](#agenttokenspec) - [ModuleSpec](#modulespec) - [ProjectSpec](#projectspec) - [RunsCollectorSpec](#runscollectorspec) - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `secretKeyRef` \_[SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#secretkeyselector-v1-core)\_ | Selects a key of a secret in the workspace's namespace | #### ValueFrom ValueFrom source for the variable's value. Cannot be used if value is not empty. \_Appears in:\_ - [Variable](#variable) | Field | Description | | --- | --- | | `configMapKeyRef` \_[ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#configmapkeyselector-v1-core)\_ | Selects a key of a ConfigMap. | | `secretKeyRef` \_[SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#secretkeyselector-v1-core)\_ | Selects a key of a Secret. | #### Variable Variables let you customize configurations, modify Terraform's behavior, and store information like provider credentials. More information: - [Workspace variables](/terraform/cloud-docs/variables) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `name` \_string\_ | Name of the variable. | | `description` \_string\_ | Description of the variable. | | `hcl` \_boolean\_ | Parse this field as HashiCorp Configuration Language (HCL). This allows you to interpolate values at runtime. Default: `false`. | | `sensitive` \_boolean\_ | Sensitive variables are never shown in the UI or API. They may appear in Terraform logs if your configuration is designed to output them. Default: `false`. | | `value` \_string\_ | Value of the variable. | | `valueFrom` \_[ValueFrom](#valuefrom)\_ | Source for the variable's value. Cannot be used if value is not empty. 
| #### VariableSetStatus \_Appears in:\_ - [WorkspaceStatus](#workspacestatus) | Field | Description | | --- | --- | | `id` \_string\_ | | | `name` \_string\_ | | #### VariableStatus \_Appears in:\_ - [WorkspaceStatus](#workspacestatus) | Field | Description | | --- | --- | | `name` \_string\_ | Name of the variable. | | `id` \_string\_ | ID of the variable. | | `versionID` \_string\_ | VersionID is a hash of the variable on the TFC end. | | `valueID` \_string\_ | ValueID is a hash of the variable on the CRD end. | | `category` \_string\_ | Category of the variable. | #### VersionControl VersionControl settings for the workspace's VCS repository, enabling the UI/VCS-driven
run workflow. Omit this argument to utilize the CLI-driven and API-driven workflows, where runs are not driven by webhooks on your VCS provider. More information: - [UI and VCS-driven run workflow](/terraform/cloud-docs/workspaces/run/ui) - [Connect to VCS Providers](/terraform/cloud-docs/vcs) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `oAuthTokenID` \_string\_ | The VCS Connection (OAuth Connection + Token) to use. Must match pattern: `^ot-[a-zA-Z0-9]+$` | | `repository` \_string\_ | A reference to your VCS repository in the format `<organization>/<repository>`, where `<organization>` and `<repository>` refer to the organization and repository in your VCS provider. | | `branch` \_string\_ | The repository branch that runs will execute from. This defaults to the repository's default branch (e.g. main). | | `speculativePlans` \_boolean\_ | Whether this workspace allows automatic speculative plans on pull requests. Default: `true`. More information: [Speculative plans on pull requests](/terraform/cloud-docs/workspaces/run/ui#speculative-plans-on-pull-requests) and [Speculative plans](/terraform/cloud-docs/workspaces/run/remote-operations#speculative-plans). | | `enableFileTriggers` \_boolean\_ | File triggers allow you to queue runs in HCP Terraform when files in your VCS repository change. Default: `false`. Refer to [Automatic run triggering](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) for more information. | | `triggerPatterns` \_string array\_ | The list of pattern triggers that will queue runs in HCP Terraform when files in your VCS repository change. `spec.versionControl.fileTriggersEnabled` must be set to `true`.
Refer to [Automatic run triggering](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) for more information. | | `triggerPrefixes` \_string array\_ | The list of pattern prefixes that will queue runs in HCP Terraform when files in your VCS repository change. `spec.versionControl.fileTriggersEnabled` must be set to `true`. Refer to [Automatic run triggering](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) for more information. | #### Workspace Workspace manages HCP Terraform Workspaces. More information: - [Workspaces](/terraform/cloud-docs/workspaces) | Field | Description | | --- | --- | | `apiVersion` \_string\_ | `app.terraform.io/v1alpha2` | `kind` \_string\_ | `Workspace` | `kind` \_string\_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds). | | `apiVersion` \_string\_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources). | | `metadata` \_[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#objectmeta-v1-meta)\_ | Refer to Kubernetes API documentation for fields of `metadata`. | | `spec` \_[WorkspaceSpec](#workspacespec)\_ | | #### WorkspaceProject Projects let you organize your workspaces into groups. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory.
More information: - [Organize workspaces with projects](/terraform/tutorials/cloud/projects) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `id` \_string\_ | Project ID. Must match pattern: `^prj-[a-zA-Z0-9]+$` | | `name` \_string\_ | Project name. | #### WorkspaceRunTask Run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. More information: - [Run tasks](/terraform/cloud-docs/workspaces/settings/run-tasks) \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `id` \_string\_ | Run Task ID. Must match pattern: `^task-[a-zA-Z0-9]+$` | | `name` \_string\_ | Run Task Name. | | `enforcementLevel` \_string\_ |
Run Task Enforcement Level. Must be one of the following values: `advisory`, `mandatory`. Default: `advisory`. | | `stage` \_string\_ | Run Task Stage. Must be one of the following values: `pre\_apply`, `pre\_plan`, `post\_plan`. Default: `post\_plan`. | #### WorkspaceSpec WorkspaceSpec defines the desired state of Workspace. \_Appears in:\_ - [Workspace](#workspace) | Field | Description | | --- | --- | | `name` \_string\_ | Workspace name. | | `organization` \_string\_ | Organization name where the Workspace will be created. [More information](/terraform/cloud-docs/users-teams-organizations/organizations). | | `token` \_[Token](#token)\_ | API Token to be used for API calls. | | `applyMethod` \_string\_ | Defines whether changes are applied automatically (`auto`) or require an operator to confirm them (`manual`). Must be one of the following values: `auto`, `manual`. Default: `manual`. [More information](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply). | | `applyRunTrigger` \_string\_ | Specifies the type of apply, whether manual or auto. Must be one of the following values: `auto`, `manual`. Default: `manual`. [More information](/terraform/cloud-docs/workspaces/settings#auto-apply). | | `allowDestroyPlan` \_boolean\_ | Allows a destroy plan to be created and applied. Default: `true`. [More information](/terraform/cloud-docs/workspaces/settings#destruction-and-deletion). | | `description` \_string\_ | Workspace description.
| | `agentPool` \_[WorkspaceAgentPool](#workspaceagentpool)\_ | HCP Terraform Agents allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure. [More information](/terraform/cloud-docs/agents). | | `executionMode` \_string\_ | Define where the Terraform code will be executed. Must be one of the following values: `agent`, `local`, `remote`. Default: `remote`. [More information](/terraform/cloud-docs/workspaces/settings#execution-mode). | | `runTasks` \_[WorkspaceRunTask](#workspaceruntask) array\_ | Run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. [More information](/terraform/cloud-docs/workspaces/settings/run-tasks). | | `tags` \_[Tag](#tag) array\_ | Workspace tags are used to help identify and group together workspaces. Tags must be one or more characters; can include letters, numbers, colons, hyphens, and underscores; and must begin and end with a letter or number. | | `teamAccess` \_[TeamAccess](#teamaccess) array\_ | HCP Terraform workspaces can only be accessed by users with the correct permissions. You can manage permissions for a workspace on a per-team basis. When a workspace is created, only the owners team and teams with the "manage workspaces" permission can access it, with full admin permissions. These teams' access can't be removed from a workspace. [More information](/terraform/cloud-docs/workspaces/settings/access). | | `terraformVersion` \_string\_ | The version of Terraform to use for this workspace. If not specified, the latest available version will be used. Must match pattern: `^\\d\{1\}\\.\\d\{1,2\}\\.\\d\{1,2\}$` [More information](/terraform/cloud-docs/workspaces/settings#terraform-version) | | `workingDirectory` \_string\_ | The directory where Terraform will execute, specified as a relative path from the root of the configuration directory. 
[More information](/terraform/cloud-docs/workspaces/settings#terraform-working-directory) | | `environmentVariables` \_[Variable](#variable) array\_ | Terraform Environment variables for all plans and applies in this workspace. Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key. More information: [Workspace variables](/terraform/cloud-docs/variables) and [Environment variables](/terraform/cloud-docs/variables#environment-variables). | | `terraformVariables` \_[Variable](#variable) array\_ | Terraform variables for all plans and applies in this workspace. Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key. More information: [Workspace variables](/terraform/cloud-docs/variables) and [Terraform variables](/terraform/cloud-docs/variables#terraform-variables). | | `remoteStateSharing` \_[RemoteStateSharing](#remotestatesharing)\_ | Remote state access between workspaces. By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. [More information](/terraform/cloud-docs/workspaces/state#accessing-state-from-other-workspaces). | | `runTriggers` \_[RunTrigger](#runtrigger) array\_ |
Run triggers allow you to connect this workspace to one or more source workspaces. These connections allow runs to queue automatically in this workspace on successful apply of runs in any of the source workspaces. [More information](/terraform/cloud-docs/workspaces/settings/run-triggers). | | `versionControl` \_[VersionControl](#versioncontrol)\_ | Settings for the workspace's VCS repository, enabling the UI/VCS-driven run workflow. Omit this argument to utilize the CLI-driven and API-driven workflows, where runs are not driven by webhooks on your VCS provider. More information: [UI and VCS-driven run workflow](/terraform/cloud-docs/workspaces/run/ui) and [Connect to VCS providers](/terraform/cloud-docs/vcs) | | `sshKey` \_[SSHKey](#sshkey)\_ | SSH key used to clone Terraform modules. [More information](/terraform/cloud-docs/workspaces/settings/ssh-keys). | | `notifications` \_[Notification](#notification) array\_ | Notifications allow you to send messages to other applications based on run and workspace events. [More information](/terraform/cloud-docs/workspaces/settings/notifications). | | `project` \_[WorkspaceProject](#workspaceproject)\_ | Projects let you organize your workspaces into groups. Default: default organization project. [More information](/terraform/tutorials/cloud/projects).
| | `deletionPolicy` \_[DeletionPolicy](#deletionpolicy)\_ | The Deletion Policy specifies the behavior of the custom resource and its associated workspace when the custom resource is deleted. - `retain`: When you delete the custom resource, the operator does not delete the workspace. - `soft`: Attempts to delete the associated workspace only if it does not contain any managed resources. - `destroy`: Executes a destroy operation to remove all resources managed by the associated workspace. Once the destruction of these resources is successful, the operator deletes the workspace, and then deletes the custom resource. - `force`: Forcefully and immediately deletes the workspace and the custom resource. Default: `retain`. | | `variableSets` \_[WorkspaceVariableSet](#workspacevariableset) array\_ | HCP Terraform variable sets let you reuse variables in an efficient and centralized way. [More information](/terraform/tutorials/cloud/cloud-multiple-variable-sets). | #### WorkspaceVariableSet \_Appears in:\_ - [WorkspaceSpec](#workspacespec) | Field | Description | | --- | --- | | `id` \_string\_ | ID of the variable set. Must match pattern: `varset-[a-zA-Z0-9]+$`. [More information](/terraform/tutorials/cloud/cloud-multiple-variable-sets). | | `name` \_string\_ | Name of the variable set. [More information](/terraform/tutorials/cloud/cloud-multiple-variable-sets). |
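As an illustration of how the `WorkspaceSpec` fields documented above fit together, a minimal Workspace manifest might look like the following sketch. The organization, Secret, and variable names are placeholders chosen for this example, not values taken from the reference itself:

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: my-workspace
spec:
  organization: my-org          # placeholder organization name
  token:
    secretKeyRef:
      name: terraformrc         # Kubernetes Secret holding the API token
      key: credentials
  name: my-workspace
  applyMethod: auto             # must be `auto` or `manual` (default: `manual`)
  executionMode: remote         # one of `agent`, `local`, `remote`
  terraformVariables:
    - name: instance_count      # hypothetical Terraform variable
      value: "2"
  project:
    name: Default Project
  deletionPolicy: retain
```

After applying the manifest with `kubectl apply -f`, the operator creates the workspace in the named organization; field constraints (patterns, allowed values) are enforced as described in the tables above.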
# HCP Terraform Operator for Kubernetes metrics This topic provides reference information about the Prometheus-compatible metrics available in the HCP Terraform and Terraform Enterprise operators for Kubernetes. ## Available metrics The operator exposes all metrics provided by the controller-runtime by default. Refer to the [Kubebuilder documentation](https://book.kubebuilder.io/reference/metrics-reference.html) for a full list of available metrics. Starting with version `2.10.0`, the operator introduces HCP Terraform–specific metrics. These metrics use the prefix `hcp\_tf\_\*`. The operator exposes the following metrics. Note that the metrics are provided by specific controllers. Refer to the `Controller` column for the corresponding metric. These metrics may change in the future; refer to this documentation before upgrading to a new version of the operator. | Metric name | Type | Description | Controller | |-------------|------|-------------|------------| | `hcp\_tf\_runs{run\_status, agent\_pool\_id, agent\_pool\_name}` | Gauge | Pending runs by status. | RunsCollector | | `hcp\_tf\_runs\_total{agent\_pool\_id, agent\_pool\_name}` | Gauge | Total number of pending runs. | RunsCollector | ## Scrape metrics The operator exposes metrics in the Prometheus format for each controller. Refer to the [Prometheus data model](https://prometheus.io/docs/concepts/data\_model/) for more information on the metric format. Metrics are available at the standard `/metrics` path over the HTTPS port `8443`. The metrics are protected by [kube-rbac-proxy](https://github.com/brancz/kube-rbac-proxy). This allows RBAC-based access to the metrics within the Kubernetes cluster. How metrics are scraped will depend on how you operate your Prometheus server. The following example assumes that you use the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) to run Prometheus.
If you deployed the HCP Terraform Operator using the [Helm chart](https://artifacthub.io/packages/helm/hashicorp/hcp-terraform-operator), it creates a Kubernetes ClusterIP Service resource. Use this service as a target for Prometheus. The service name is generated using the following template: `{{ .Release.Name }}-controller-manager-metrics-service`. The following example shows a Prometheus Operator ConfigMap configured to scrape metrics from an HCP Terraform Operator Helm release named `hcpt-operator`. In this configuration, the service name is `hcpt-operator-controller-manager-metrics-service`.

```yaml
apiVersion: v1
data:
  ...
  prometheus.yml: |
    ...
    - job_name: hcpt-operator
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      scheme: https
      scrape_interval: 15s
      scrape_timeout: 5s
      static_configs:
        - targets:
            - hcpt-operator-controller-manager-metrics-service:8443
      tls_config:
        insecure_skip_verify: true
```

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/kubernetes/metrics.mdx
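If your Prometheus Operator deployment discovers scrape targets through `ServiceMonitor` resources rather than a static scrape config, an equivalent definition might look like the following sketch. This is illustrative only: the port name and the label selector depend on how your Helm release labels the metrics Service, so verify them against your cluster before use.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: hcpt-operator
spec:
  selector:
    matchLabels:
      # Assumption: verify this against the labels on your release's metrics Service.
      app.kubernetes.io/name: hcp-terraform-operator
  endpoints:
    - port: https              # the named port serving /metrics on 8443
      scheme: https
      interval: 15s
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true
```

The `bearerTokenFile` and `tlsConfig` settings mirror the static scrape config above, since kube-rbac-proxy requires an authenticated HTTPS request.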
# Set up the HCP Terraform Operator for Kubernetes

The HCP Terraform Operator for Kubernetes' CustomResourceDefinitions (CRDs) allow you to dynamically create HCP Terraform workspaces with Terraform modules, populate workspace variables, and provision infrastructure with Terraform runs. You can install the operator with the official [HashiCorp Helm chart](https://github.com/hashicorp/hcp-terraform-operator).

## Prerequisites

All HCP Terraform users can use the HCP Terraform Operator for Kubernetes. You can use the operator to manage the supported features that your organization's pricing tier enables.

## Networking requirements

The HCP Terraform Operator for Kubernetes makes outbound requests over HTTPS (TCP port 443) to the HCP Terraform application APIs. This may require perimeter networking as well as container host networking changes, depending on your environment. Refer to [HCP Terraform IP Ranges](/terraform/cloud-docs/architectural-details/ip-ranges) for more information about IP ranges. The following table lists the services that run on specific IP ranges.

| Hostname | Port/Protocol | Directionality | Purpose |
| --- | --- | --- | --- |
| `app.terraform.io` | tcp/443, HTTPS | Outbound | Dynamically managing HCP Terraform workspaces and returning the output to Kubernetes with the HCP Terraform API |

For self-managed Terraform Enterprise instances, ensure that the operator can reach your Terraform Enterprise hostname over HTTPS (TCP port 443).

## Compatibility

The HCP Terraform Operator for Kubernetes supports the following versions:

* Helm 3.0.1 and above
* Kubernetes 1.15 and above

## Install and configure

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to integrate with Kubernetes.
1. Generate a [user](/terraform/cloud-docs/users-teams-organizations/api-tokens#user-api-tokens) or [team](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens) API token in Terraform Cloud. The user or team must have permission to create workspaces and apply runs. Save the token to a file named `credentials`.
1. Set the `NAMESPACE` environment variable. This will be the namespace that you will install the Helm chart to.

   ```
   export NAMESPACE=tfc-operator-system
   ```
1. Create the namespace.

   ```
   kubectl create namespace $NAMESPACE
   ```
1. Create a [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) with the HCP Terraform API credentials.

   ```
   kubectl -n $NAMESPACE create secret generic terraformrc --from-file=credentials
   ```
1. Add sensitive variables, such as your cloud provider credentials, to the namespace.

   ```
   kubectl -n $NAMESPACE create secret generic workspacesecrets --from-literal=secret_key=abc123
   ```
1. Add the HashiCorp Helm repository.

   ```
   helm repo add hashicorp https://helm.releases.hashicorp.com
   ```
1. Install the [HCP Terraform Operator for Kubernetes with Helm](https://github.com/hashicorp/hcp-terraform-operator). By default, the operator communicates with HCP Terraform at `app.terraform.io`. The following example command installs the Helm chart for HCP Terraform:

   ```shell-session
   $ helm install --namespace ${NAMESPACE} tfc-operator hashicorp/hcp-terraform-operator
   ```

   When deploying in self-managed Terraform Enterprise, you must set the `operator.tfeAddress` to the specific hostname of the Terraform Enterprise instance:

   ```shell-session
   $ helm install --namespace ${NAMESPACE} tfc-operator hashicorp/hcp-terraform-operator \
     --set operator.tfeAddress="TERRAFORM_ENTERPRISE_HOSTNAME"
   ```

   Alternatively, you can set the `tfeAddress` configuration for Terraform Enterprise in the [value.yaml](https://github.com/hashicorp/hcp-terraform-operator/blob/main/charts/hcp-terraform-operator/values.yaml) file.
   ```yaml
   operator:
     tfeAddress:
   ```

   Run the following command to apply the value.yaml file:

   ```shell-session
   $ helm install --namespace ${NAMESPACE} tfc-operator hashicorp/hcp-terraform-operator -f value.yaml
   ```
1. To create a Terraform workspace, agent pool, or other object, refer to the example YAML manifests in the [operator repository on GitHub](https://github.com/hashicorp/hcp-terraform-operator/tree/main/docs/examples).

### Upgrade

When a new version of the HCP Terraform Operator for Kubernetes Helm chart is available from the HashiCorp Helm repository, you can upgrade with the following command.

```shell-session
$ helm upgrade --namespace ${NAMESPACE} tfc-operator hashicorp/hcp-terraform-operator
```

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/kubernetes/setup.mdx
# HCP Terraform for AWS Service Catalog overview This integration allows administrators to curate a portfolio of pre-approved Terraform configurations on AWS Service Catalog. This enables end users like engineers, database administrators, and data scientists to deploy these Terraform configurations with a single action from the AWS interface. @include 'eu/integrations.mdx' By combining HCP Terraform with AWS Service Catalog, we’re combining a self-service interface that many customers are familiar with, AWS Service Catalog, with the existing workflows and policy guardrails of HCP Terraform. @include 'tfc-package-callouts/aws-service-catalog.mdx' ## Installation and configuration To use the AWS service catalog integration with an HCP Europe organization, set the example [`tfc\_hostname` variable](https://github.com/hashicorp/aws-service-catalog-engine-for-tfc/blob/main/variables.tf#L15-L19) to `app.eu.terraform.io` in your configuration. To start using this integration, you'll need to install the [AWS Service Catalog Engine for Terraform Cloud](https://github.com/hashicorp/aws-service-catalog-engine-for-tfc) provided by HashiCorp on GitHub by following the [setup instructions](https://github.com/hashicorp/aws-service-catalog-engine-for-tfc#getting-started) provided in the README. If you run into any setup troubles along the way, the README also includes [troubleshooting steps](https://github.com/hashicorp/aws-service-catalog-engine-for-tfc#troubleshooting) that should help resolve common issues that you may encounter. With the engine installed, the necessary code and infrastructure to integrate the HCP Terraform engine with AWS Service Catalog will automatically be configured. The setup can be completed in just a few minutes, and it only needs to be done once. Once the setup is complete, you can immediately start using AWS Service Catalog to develop and manage AWS Service Catalog products, and make them accessible to your end users across all your accounts. 
## Usage You can access this new feature through the AWS Service Catalog console in any AWS region where AWS Service Catalog is supported, and follow the AWS Service Catalog Administrator Guide to [create your first Terraform product](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/getstarted-product-Terraform.html).

Source: https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/integrations/aws-service-catalog/index.mdx
# Workload identity Dynamic Provider Credentials are powered by Terraform Workload Identity, which allows HCP Terraform to present information about a Terraform workload to an external system – like its workspace, organization, or whether it’s a plan or apply – and allows other external systems to verify that the information is accurate. You can think of it like an identity card for your Terraform runs: one that comes with a way for another system to easily verify whether the card is genuine. If the other system can confirm that the ID card is legitimate, it can trust the information the card contains and use it to decide whether to let that Terraform workload in the door. The “identity card” in this analogy is a workload identity token: a JSON Web Token (JWT) that contains information about a plan or apply, is signed by HCP Terraform’s private key, and expires at the end of the plan or apply timeout. Other systems can use HCP Terraform’s [public key](https://app.terraform.io/.well-known/jwks) to verify that a token that claims to be from HCP Terraform is genuine and has not been tampered with. This workflow is built on the [OpenID Connect protocol](https://openid.net/connect/), a trusted standard for verifying identity across different systems. ## Token Specification Workload identity tokens contain useful metadata in their payloads, known as \_claims\_. This is the equivalent of the name and date of birth on an identity card. Once a cloud platform verifies a token using HCP Terraform’s public key, it can look at the claims in the identity token to either match it to the correct permissions or reject it. You don’t need to understand the full token specification and what every claim means in order to use dynamic credentials, but it’s useful for debugging. 
### Token Structure

The following example shows a decoded HCP Terraform workload identity token:

#### Header

```json
{
  "typ": "JWT",
  "alg": "RS256",
  "kid": "j-fFp9evPJAzV5I2_58HY5UvdCK6Q4LLB1rnPOUfQAk"
}
```

#### Payload

```json
{
  "jti": "1192426d-b525-4fde-9d42-f238be437bbd",
  "iss": "https://app.terraform.io",
  "aud": "my-example-audience",
  "iat": 1650486122,
  "nbf": 1650486117,
  "exp": 1650486422,
  "sub": "organization:my-org:project:Default Project:workspace:my-workspace:run_phase:apply",
  "terraform_organization_id": "org-GRNbCjYNpBB6NEH9",
  "terraform_organization_name": "my-org",
  "terraform_project_id": "prj-vegSA59s1XPwMr2t",
  "terraform_project_name": "Default Project",
  "terraform_workspace_id": "ws-mbsd5E3Ktt5Rg2Xm",
  "terraform_workspace_name": "my-workspace",
  "terraform_full_workspace": "organization:my-org:project:Default Project:workspace:my-workspace",
  "terraform_run_id": "run-X3n1AUXNGWbfECsJ",
  "terraform_run_phase": "apply"
}
```

This payload includes a number of standard claims defined in the OIDC spec as well as a number of custom claims for further customization.

### Standard Claims

| Claim | Value |
|-------|-------|
| `jti` (JWT ID) | A unique identifier for each JWT. |
| `iss` (issuer) | Full URL of HCP Terraform or the Terraform Enterprise instance which signed the JWT. |
| `iat` (issued at) | Unix timestamp when the JWT was issued. May be required by certain relying parties. |
| `nbf` (not before) | Unix timestamp when the JWT can start being used. This will be the same as `iat` for tokens issued by HCP Terraform, but may be required by certain relying parties. |
| `aud` (audience) | Intended audience for the JWT. For example, `aws.workload.identity` for AWS. This can be customized. |
| `exp` (expiration) | Unix timestamp based on the timeout of the run phase that it was issued for. This will follow the `plan` and `apply` timeouts set at the organization and site admin level. |
| `sub` (subject) | Fully qualified path to a workspace, followed by the run phase. For example: `organization:my-organization-name:project:Default Project:workspace:my-workspace-name:run_phase:apply` |
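The `sub` claim packs the workspace path and run phase into a single colon-delimited string of alternating keys and values. A hypothetical helper (not part of any official SDK) can split it back into its component fields; note that names such as `Default Project` may contain spaces, but not colons:

```python
def parse_subject(sub: str) -> dict:
    """Split an HCP Terraform `sub` claim into key/value fields.

    The subject alternates keys and values, e.g.
    organization:<name>:project:<name>:workspace:<name>:run_phase:<phase>
    """
    parts = sub.split(":")
    # Pair up even-indexed keys with odd-indexed values.
    return dict(zip(parts[::2], parts[1::2]))

sub = ("organization:my-org:project:Default Project:"
      "workspace:my-workspace:run_phase:apply")
fields = parse_subject(sub)
# fields == {"organization": "my-org", "project": "Default Project",
#            "workspace": "my-workspace", "run_phase": "apply"}
```

Splitting the subject this way is useful when auditing which workspace and phase a token was minted for.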
### Custom Claims

| Claim | Value |
|-------|-------|
| `terraform_organization_id` (organization ID) | ID of the HCP Terraform organization performing the run. |
| `terraform_organization_name` (organization name) | Human-readable name of the HCP Terraform organization performing the run. Note that organization names can be changed. |
| `terraform_project_id` (project ID) | ID of the HCP Terraform project performing the run. |
| `terraform_project_name` (project name) | Human-readable name of the HCP Terraform project performing the run. Note that project names can be changed. The default project name is `Default Project`. |
| `terraform_workspace_id` (workspace ID) | ID of the HCP Terraform workspace performing the run. |
| `terraform_workspace_name` (workspace name) | Human-readable name of the HCP Terraform workspace performing the run. Note that workspace names can be changed. |
| `terraform_full_workspace` (fully qualified workspace) | Fully qualified path to a workspace. For example: `organization:my-organization-name:project:my-project-name:workspace:my-workspace-name` |
| `terraform_run_id` (run ID) | ID of the run that the token was generated for. This is intended to aid in traceability and logging. |
| `terraform_run_phase` (run phase) | The phase of the run this token was issued for. For example, `plan` or `apply`. |

### Configuring Trust with your Cloud Platform

When configuring the trust relationship between HCP Terraform and your cloud platform, you’ll set up conditions to validate the contents of the identity token provided by HCP Terraform against your roles and policies. At a minimum, you should match against the following claims:

* `aud` - the audience value of the token. This ensures that, for example, a workload identity token intended for AWS can’t be used to authenticate to Vault.
* `sub` - the subject value, which includes the organization and workspace performing the run. If you don’t match against at least the organization name, any organization or workspace on HCP Terraform will be able to access your cloud resources!

You can match on as many claims as you want, depending on your cloud platform.
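The matching rules above can be sketched as a small validator. This is illustrative only, not any cloud platform’s actual policy engine; real platforms (AWS IAM, Vault, etc.) express equivalent conditions in their own policy languages, and the wildcard pattern shown here is an assumption of this sketch:

```python
from fnmatch import fnmatchcase

def claims_allowed(claims: dict, expected_aud: str, sub_pattern: str) -> bool:
    """Return True if a token's audience and subject satisfy policy.

    Illustrative sketch of the minimum checks described above;
    not the implementation used by any real relying party.
    """
    if claims.get("aud") != expected_aud:
        return False  # token was minted for a different platform
    # Pin at least the organization; leaving the rest as wildcards
    # is a policy choice, not a requirement.
    return fnmatchcase(claims.get("sub", ""), sub_pattern)

claims = {
    "aud": "aws.workload.identity",
    "sub": "organization:my-org:project:Default Project:"
           "workspace:my-workspace:run_phase:apply",
}
ok = claims_allowed(claims, "aws.workload.identity",
                    "organization:my-org:project:*:workspace:*:run_phase:*")
# ok is True; a token from another organization would be rejected
```

Tightening `sub_pattern` to a specific project, workspace, or run phase narrows which runs can obtain credentials.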