# What is HCP Terraform?

> **Hands On:** Try our [What is HCP Terraform - Intro and Sign Up](/terraform/tutorials/cloud-get-started/cloud-sign-up) tutorial.

[HCP Terraform](https://cloud.hashicorp.com/products/terraform) is an application that helps teams use Terraform together. It manages Terraform runs in a consistent and reliable environment, and includes easy access to shared state and secret data, access controls for approving changes to infrastructure, a private registry for sharing Terraform modules, detailed policy controls for governing the contents of Terraform configurations, and more.

HCP Terraform is available as a hosted service at [https://app.terraform.io](https://app.terraform.io). Small teams can sign up for free to connect Terraform to version control, share variables, run Terraform in a stable remote environment, and securely store remote state. Paid editions allow you to add more than five users, create teams with different levels of permissions, and collaborate more effectively. HCP Terraform **Standard** Edition allows organizations to enable audit logging, continuous validation, and automated configuration drift detection.

-> **Introducing HCP Terraform**: Effective April 22, 2024, Terraform Cloud is now HCP Terraform. HCP Terraform's functionality remains the same, and we plan to introduce new features soon to support a unified HCP experience. To learn more about HashiCorp's vision, refer to [Introducing the Infrastructure Cloud](https://www.hashicorp.com/blog/introducing-the-infrastructure-cloud).

## What is Terraform Enterprise?

Organizations with advanced security and compliance needs can purchase [Terraform Enterprise](/terraform/enterprise), our self-hosted distribution of HCP Terraform. It offers enterprises a private instance that includes the advanced features available in HCP Terraform.

Refer to the [Terraform Enterprise Documentation](/terraform/enterprise) for requirements, reference architectures, and installation instructions.
## Use HCP Terraform in Europe

The HashiCorp Cloud Platform (HCP) supports managing infrastructure in Europe. With HCP Europe, your resources are hosted, managed, and billed separately to meet [European data residency requirements](https://www.hashicorp.com/en/trust/privacy/hcp-data-privacy). HCP Terraform is available in HCP Europe, letting you manage Terraform resources in Europe with familiar workflows while adhering to additional data and privacy regulations. To learn more, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).
# Compare Stacks and workspaces

In HCP Terraform, there are two ways of organizing your infrastructure:

- Workspaces are ideal for managing a self-contained infrastructure of one Terraform root module.
- Stacks are ideal for managing multiple infrastructure modules and repeating that infrastructure at scale.

Learn whether a workspace or a Stack works best for your use case by comparing what each is best at and the features they support.

## When to use workspaces

A [workspace](/terraform/cloud-docs/workspaces) contains one Terraform root module, one set of inputs, and one state file. We recommend using workspaces if your use case meets any of the following conditions:

- You can manage your infrastructure using a single Terraform configuration.
- You require a strict separation between environments.
- You can plan and apply your infrastructure with one operation.
- You are setting up a mandatory CI/CD-driven pipeline to control promotion across environments or contexts.
- Your team uses a branch-per-environment workflow.

Workspaces provide a hard separation between environments because each workspace maintains its own isolated state file. Workspaces are suitable for strict separation requirements, such as compliance boundaries, production safeguards, or minimal blast radius scenarios.

If your team uses a branching strategy where each environment maps to a separate Git branch, workspaces can align with your workflow. You can promote changes across environments through pull requests between branches. Workspaces also work well when CI/CD pipelines control promotion across environments, especially when a promotion must be gated by approvals or automated test outcomes.

Previously, those looking to deploy repeatable infrastructure using HCP Terraform would create separate workspaces and then use run triggers or other automation tools to coordinate changes between them.
However, workspaces are not truly coupled to each other, hampering your ability to flexibly manage infrastructure as it scales. If you have a tightly orchestrated infrastructure that changes across different environments, we recommend defining a Stack. If you want to migrate an existing workspace to a Stack, refer to the [Terraform migrate CLI](/terraform/migrate/stacks).

Workspaces do support validating your code with policies, drift detection, and a range of other features that Stacks do not support. Refer to [Feature support](#feature-support) to learn more.

## When to use Stacks

[Stacks](/terraform/cloud-docs/stacks) use a component-based architecture to repeatedly deploy infrastructure, which simplifies managing your infrastructure lifecycle at scale. You define the modules that make up your Stack as components in a [component configuration](/terraform/language/stacks/component/config), then define how to repeatedly deploy that infrastructure in a [deployment configuration](/terraform/language/stacks/deploy/config).

We recommend using Stacks if your use case meets any of the following conditions:

- Your infrastructure is too large to manage as a single Terraform configuration.
- Your infrastructure cannot be planned in a single operation.
- Your infrastructure shares a common lifecycle.
- Your infrastructure does not share a lifecycle, but does have interconnected services that pass information back and forth.
- You repeat infrastructure across different environments, regions, or accounts that require consistent and synchronized deployments.
- You have a tightly orchestrated infrastructure that changes across different environments.

Stacks codify the entire behavior of your infrastructure lifecycle within version-controlled configuration files. To learn more about Stacks and review examples, refer to [Stack use cases](/terraform/language/stacks/use-cases).
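As a rough sketch of the component configuration described above, the following hypothetical `tfcomponent.hcl` file declares one component sourced from a local module. The module path, variable, and component name are illustrative placeholders, not part of any real configuration:

```hcl
# components.tfcomponent.hcl -- declares what infrastructure the Stack manages.
# The module path and variable below are hypothetical examples.

variable "region" {
  type = string
}

component "network" {
  # Each component sources a Terraform module.
  source = "./modules/network"

  # Configure the component with input arguments.
  inputs = {
    region = var.region
  }
}
```

A deployment configuration file then supplies values for `var.region` once per deployment, which is how a Stack repeats the same components across environments.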
Stacks were designed to simplify and scale Terraform workflows that become complex or repetitive using workspaces. When you use a Stack, HCP Terraform automatically recognizes the dependencies between components and defers a component's plan and apply steps until it can complete them successfully. Learn more about how Stacks plan [deferred changes](/terraform/cloud-docs/stacks/deploy/runs#deferred-changes).
If you have multiple Stacks that do not share a provisioning lifecycle, you can link those Stacks together to export data from one Stack for another to consume. Like [workspace run triggers](/terraform/cloud-docs/workspaces/settings/run-triggers), if the output value of a Stack changes after a run, HCP Terraform automatically triggers runs for any Stacks that depend on those outputs. To learn more, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

You can also define rules to automatically approve Stack deployment runs using [deployment group orchestration rules](/terraform/language/stacks/deploy/conditions). To learn more about the features that Stacks do and do not support, refer to [Feature support](#feature-support).

## Feature support

The following table shows which features are available in workspaces and Stacks:

| Feature | Workspaces | Stacks |
|---------|------------|--------|
| Infrastructure as code | ✅ | ✅ |
| Remote state storage | ✅ | ✅ |
| Remote runs (plan & apply) | ✅ | ✅ |
| Dynamic provider credentials | ✅ | ✅ |
| VCS connections | ✅ | ✅ |
| Mono-repo support | ✅ | ❌ |
| Projects | ✅ | ✅ |
| Variable sets | ✅ | ✅ |
| Self-hosted agents | ✅ | ✅ |
| Audit logs | ✅ | ✅ |
| Config-driven import | ✅ | ⚠️ Partial support |
| Team management | ✅ | ⚠️ Partial support |
| Run triggers | ✅ | ❌ |
| Linking Stacks together with `publish_output` and `upstream_input` | ❌ | ✅ |
| Deferred changes | ❌ | ✅ |
| Deployment group orchestration rules | ❌ | ✅ |
| Policy as code | ✅ | ❌ |
| Explorer | ✅ | ❌ |
| Publishing configurations to the private registry | ✅ | ❌ |
| Run tasks | ✅ | ❌ |
| Drift detection | ✅ | ❌ |
| Continuous validation | ✅ | ❌ |
| No-code provisioning | ✅ | ❌ |
| Ephemeral workspaces | ✅ | ❌ |
| Integrations with ServiceNow, AWS, and Kubernetes | ✅ | ❌ |
| HCP Waypoint support | ✅ | ❌ |
| Cost optimization | ✅ | ❌ |
| Module deprecation and revocation | ✅ | ❌ |
| Private VCS | ✅ | ✅ |
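The Stack-linking workflow mentioned above can be sketched in deployment configuration. The Stack names, organization, output name, and source address below are hypothetical placeholders; confirm the exact address format against the pass-data documentation linked earlier:

```hcl
# In the upstream Stack's tfdeploy.hcl:
# publish a value from one deployment for other Stacks to consume.
publish_output "vpc_id" {
  description = "VPC ID from the network deployment."
  value       = deployment.network.vpc_id
}

# In the downstream Stack's tfdeploy.hcl:
# declare the upstream Stack as an input source.
upstream_input "network_stack" {
  type   = "stack"
  source = "app.terraform.io/my-org/my-project/networking-stack"
}

deployment "production" {
  inputs = {
    # Consume the published value; a new upstream value triggers a run here.
    vpc_id = upstream_input.network_stack.vpc_id
  }
}
```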
# Create a Stack

Stacks enable you to provision and coordinate your infrastructure lifecycle at scale, offering an organized and reusable approach that expands upon infrastructure as code (IaC).

> **Hands-on**: Try out the [Deploy a Stack with HCP Terraform](/terraform/tutorials/cloud/stacks-deploy) tutorial to get started with Stacks quickly.

Before creating a Stack in HCP Terraform, write a [component configuration file](/terraform/language/stacks/component/config) to define your Stack's infrastructure, and a [deployment configuration file](/terraform/language/stacks/deploy/config) to tell HCP Terraform how to deploy your Stack.

## Requirements

Stacks are not available for users on legacy HCP Terraform team plans. Learn more about [migrating to current HCP Terraform plans](/terraform/cloud-docs/overview/migrate-teams-standard).

To create a Stack in HCP Terraform, you must be a member of a team in your organization with one of the following permissions:

- [Organization-level **Manage all projects**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-projects)
- [Project-level **Maintain**](/terraform/cloud-docs/users-teams-organizations/permissions/project#project-maintain) or higher

## Create a Stack

You can create a Stack using any of the following methods:

- The [HCP Terraform API](/terraform/cloud-docs/api-docs/stacks)
- The HCP Terraform UI
- The Terraform CLI

If you want to migrate an existing workspace to a Stack, refer to the [Terraform migrate CLI](/terraform/migrate/stacks).

If you are creating a Stack in an organization for the first time, you must enable Stacks for your organization. Navigate to your organization's **Settings** page, and in the **General** settings, check the box next to **Stacks**.

### HCP Terraform workflow

Stacks live alongside workspaces in a project. To create a new Stack in the HCP Terraform UI, perform the following steps:
1. Sign in to [HCP Terraform](https://app.terraform.io), and select the organization where you want to create your Stack.
1. In the navigation menu, click **Projects** under **Manage**.
1. Select the project where you want to create your Stack.
1. Click **New**, then **Stack**.
1. Select a version control provider from the list.
1. Choose an organization and repository from the filterable list. If your repository is missing, enter its ID in the text field below the list. The list only displays the first 100 repositories from your VCS provider.
1. Enter a new **Stack Name**. The name must be unique within the project and can include letters, numbers, dashes (`-`), and underscores (`_`). We recommend using 90 characters or less for the name of your Stack.
1. Optionally, add a description for your Stack.
1. By default, HCP Terraform fetches your Stack's configuration after creating your Stack. To fetch your Stack configuration manually, uncheck **Fetch configuration after HCP Terraform creates stack**.
1. Click **Create Stack**.

### Terraform CLI workflow

Stacks live alongside workspaces in an HCP Terraform project. To create a new Stack using the Terraform CLI, perform the following steps:

1. Create an account or sign in to [HCP Terraform](https://app.terraform.io).
1. Run `terraform login` to authenticate with HCP Terraform. Alternatively, you can manually configure credentials in the CLI config file or through environment variables. Refer to [CLI Configuration](/terraform/cli/config/config-file#environment-variable-credentials) for details.
1. Run the `terraform stacks create` command, replacing the placeholders with your organization name, project name, and desired Stack name:

   ```shell-session
   $ terraform stacks create -organization-name -project-name -stack-name
   ```

   A Stack name must be unique within the project and can include letters, numbers, dashes (`-`), and underscores (`_`).
   We recommend using 90 characters or less for the name of your Stack.

1. After running the command, you can view your new Stack in the HCP Terraform UI, or by running the `terraform stacks list` command:

   ```shell-session
   $ terraform stacks list -organization-name -project-name
   ```

1. After creating your Stack, you can push up your Stack component and deployment configuration files to create a new configuration version. Use the `terraform stacks configuration upload` command to manually upload
   your configuration files:

   ```shell-session
   $ terraform stacks configuration upload -organization-name -project-name -stack-name
   ```

   The Terraform CLI uploads the configuration files, and returns a new configuration version ID and sequence number:

   ```shell-session
   Uploading stack configuration...
   Configuration for Stack (id: 'st-MLQLSJVrdtGazA4aU') was uploaded
   Configuration ID: stc-6fSRO81hOzTPKMM
   Sequence Number: 1
   See run at:
   ```

A Stack configuration version is a snapshot of all of the pieces that make up your Stack. Each configuration version creates a deployment run for every deployment of your Stack in order to implement the changes in that version. To learn more, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs).

After uploading your configuration, you can watch your Stack's configuration roll out using `terraform stacks configuration watch` to review a list of the deployment groups in your configuration:

```shell-session
$ terraform stacks configuration watch -organization-name -project-name
```

The Terraform CLI then displays the status of each deployment group in your Stack:

```shell-session
[Stack Id: st-MLQLSJVrdtGazA4aU]
✓ Configuration: 'stc-6fSRO81hOzTPKMM' [Completed] [11s]
↻ Deployment Group: 'many_default' [Pending] [58s]
↻ Deployment Group: 'some_default' [Pending] [58s]
↻ Deployment Group: 'single_default' [Failed] [6s]
Press q to quit
```

You can continue to use the `terraform stacks` CLI commands to directly approve deployment runs, review configuration versions, and manage your Stack.
To learn more, refer to the [`terraform stacks` commands](/terraform/cli/commands/stacks).

Note that though you can create a Stack from the Terraform CLI, you cannot deploy a Stack locally. You can only deploy a Stack remotely in HCP Terraform, or by running a Stack on a custom HCP Terraform agent. To learn more, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs).

## Next steps

After creating your Stack, you can continue to iterate on your configuration and [review configuration versions](/terraform/cloud-docs/stacks/deploy/configuration-versions) or learn how to [review your Stack's deployment runs](/terraform/cloud-docs/stacks/deploy/runs).
# Stacks overview

As your infrastructure grows, managing Terraform configurations becomes increasingly complex. Stacks are a powerful configuration layer in Terraform that simplifies managing your infrastructure modules and then repeating that infrastructure. Stacks let you split your Terraform configuration into components and then deploy and manage those components across multiple environments. You can manage the lifecycle of each deployment separately, roll out configuration changes across your deployments, and manage your Stack as a unit in HCP Terraform.

## Background

Stacks are an alternative way to organize your infrastructure and fundamentally differ from [HCP Terraform workspaces](/terraform/cloud-docs/workspaces). Stacks are not built on top of HCP Terraform workspaces, but can exist alongside them in the same project. To learn whether a workspace or a Stack is better for your use case, refer to [Choose a workspace or Stack](/terraform/cloud-docs/stack-workspace).

> **Hands-on**: Try out the [Deploy a Stack with HCP Terraform](/terraform/tutorials/cloud/stacks-deploy) tutorial to get started with Stacks quickly.

Stacks are particularly useful when managing complex, multi-environment infrastructures where consistency and reusability are crucial. Refer to [Stack use cases](/terraform/language/stacks/use-cases) for inspiration and examples of how to use Stacks.

## Workflow

Start by creating a [component configuration file](/terraform/language/stacks/component/config) and filling it with `component` blocks. Each `component` block includes a Terraform module as its source, and you can configure each component further using input arguments. Your Stack components share a lifecycle, which you can repeatedly deploy together using HCP Terraform.

After configuring your Stack's components, you create a separate [deployment configuration file](/terraform/language/stacks/deploy/config) to define how you want to repeat your Stack's infrastructure.
Each deployment in a Stack represents a group of infrastructure that works together. You can define `deployment` blocks for your development environments, cloud provider accounts, or regions. Once ready to deploy, you can create a Stack in HCP Terraform to deploy your Stack's defined infrastructure. In HCP Terraform, you can manage your Stack, its configuration version, deployments, and deployment plans.

## Primary workflow

The overall process for managing your infrastructure using Stacks consists of the following steps.

### Write Stack configuration

Begin by [designing your Stack](/terraform/language/stacks/design) and codifying your infrastructure in a Stack configuration. To learn more, refer to [Define component configuration](/terraform/language/stacks/component/config).

### Define deployments

With your Stack configuration complete, the next step is to define how you want to deploy your Stack. In Stacks, deployments allow you to replicate infrastructure across multiple environments, regions, or accounts. Refer to [Define deployment configuration](/terraform/language/stacks/deploy/config) to learn more.

### Deploy

After writing your Stack and deployment configurations, you can deploy your Stack's defined infrastructure using HCP Terraform. In HCP Terraform, you can [create Stacks](/terraform/cloud-docs/stacks/create) that live alongside your workspaces within a specified project. You can also [reconfigure](/terraform/cloud-docs/stacks/configure) the settings of existing Stacks.

When you change your Stack configuration, you can [fetch and review configuration versions](/terraform/cloud-docs/stacks/deploy/configuration-versions) before applying them to your deployments. Changes to your Stack deployment configuration create [deployment runs](/terraform/cloud-docs/stacks/deploy/runs) that you can review and apply like normal Terraform operations.
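The deployment configuration described above can be sketched as a `tfdeploy.hcl` file with one `deployment` block per environment. The deployment names, regions, and input values below are hypothetical placeholders:

```hcl
# deployments.tfdeploy.hcl -- repeat the Stack's components per environment.
# Names and input values are illustrative, not prescriptive.

deployment "development" {
  inputs = {
    region    = "us-east-1"
    instances = 1
  }
}

deployment "production" {
  inputs = {
    region    = "us-west-2"
    instances = 2
  }
}
```

Each block provisions an independent copy of the Stack's components with its own inputs, so rolling out a configuration change creates a deployment run for every block.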
## Concurrency

Concurrency traditionally refers to the number of plan and apply operations that HCP Terraform can run simultaneously. However, HCP Terraform runs Stack operations in the same agent pool and queue as workspaces. For those using Stacks, your maximum HCP Terraform concurrency is the combination of your workspace runs and your Stack operations. Your HCP Terraform [subscription plan](https://www.hashicorp.com/products/terraform/pricing?product_intent=terraform) limits your maximum concurrency.

## Permissions

Stacks do not have separate permissions, but they do inherit project permissions. Refer to [project permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) to learn more about the available permissions.

## Access variable sets

Stacks
can access variable set values in deployments using the [`store` block](/terraform/language/block/stack/tfdeploy/store). Your Stack must have access to the variable set you are targeting, meaning it must be globally available or assigned to the project containing your Stack or to the Stack itself. To learn how to assign and create variable sets, refer to [Manage variables](/terraform/cloud-docs/variables/managing-variables#variable-sets).

## Pass data between Stacks

If you have multiple Stacks that reside in the same project and do not share a provisioning lifecycle, you can link Stacks together to export data from one Stack for another to consume. If the output value of a Stack changes after a run, HCP Terraform automatically triggers runs for any Stacks that depend on those outputs. To learn more, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

## Constraints and limitations

While Stacks provide exciting capabilities, there are some limitations to be aware of during this beta phase:

- Each Stack currently supports a maximum of 20 deployments, which may limit scalability for large environments.
- Stack deployment groups currently support only one deployment per group.
- Each Stack supports up to 100 components.
- Each Stack supports up to 10,000 resources.
- Stacks can link to up to 20 other upstream Stacks. [Learn more about passing data between Stacks](/terraform/language/stacks/deploy/pass-data).
- Stacks can expose values to up to 25 downstream Stacks. [Learn more about passing data between Stacks](/terraform/language/stacks/deploy/pass-data).
- Stacks are not available for users on legacy HCP Terraform team plans. Learn more about [migrating to a current HCP Terraform plan](/terraform/cloud-docs/overview/migrate-teams-standard).
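The variable set access described earlier might look like the following sketch of a `store` block in deployment configuration. The store labels, variable set ID, category value, and variable name here are hypothetical; confirm the exact attribute names against the `store` block reference linked above:

```hcl
# tfdeploy.hcl -- read a value from a variable set this Stack can access.
# The variable set ID and variable name are placeholder examples.
store "varset" "tokens" {
  id       = "varset-EXAMPLE000000000"
  category = "terraform"
}

deployment "production" {
  inputs = {
    # Pass the stored value into the Stack's components.
    api_token = store.varset.tokens.api_token
  }
}
```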
# Destroy a Stack

We recommend destroying a Stack in phases to ensure HCP Terraform destroys your infrastructure safely. Destroy your Stack deployments before destroying the Stack itself to ensure you do not leave any unmanaged infrastructure behind.

## Requirements

To destroy a Stack and delete or destroy a deployment in HCP Terraform, you must be a member of a team with one of the following permissions:

- [Organization-level **Manage all projects**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-projects)
- [Project-level **Maintain**](/terraform/cloud-docs/users-teams-organizations/permissions/project#project-maintain) or higher

## Delete deployments

Before destroying a Stack, we recommend destroying your deployments to remove the resources that those deployments manage. Otherwise, the infrastructure managed by your Stack's deployments continues to exist, and you will have to clean it up manually.

To destroy a deployment's infrastructure, add the `destroy` argument to every `deployment` block in your deployment configuration file. The following example destroys the `production` deployment using the `destroy` argument:

```hcl
deployment "production" {
  inputs = {
    region    = "us-west-2"
    instances = 2
  }

  destroy = true
}
```

After uploading the updated deployment configuration file and approving the subsequent plan, HCP Terraform destroys any infrastructure for the `production` deployment. After deleting all of a Stack's deployments, you can safely delete that Stack.

## Delete a Stack

Before destroying a Stack in HCP Terraform, we strongly recommend [deleting all of that Stack's deployments](#delete-deployments). Once your Stack contains no deployments, you can delete a Stack by performing the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io), and select the organization that contains your Stack.
1. In the navigation menu, click **Projects** under **Manage**.
1. Select the project containing your Stack.
1. Select **Settings** in the navigation menu.
1. Select **Destruction and Deletion**.
1. Click **Delete stack**.
1. Enter `delete` in the confirmation modal, then click **Delete Stack**.

By following the steps above, you can forcefully delete a Stack without removing its deployments first. However, your Stack's deletion does not affect the Stack's resources, so they continue to exist without management, and you have to clean them up manually.
# Stack deployment runs

HCP Terraform is the interface for keeping track of your Stack configuration over time and the corresponding deployments for each version of your configuration. Learn how HCP Terraform executes Terraform runs to keep Stack deployments up to date.

## Run environment

HCP Terraform is designed as an execution platform for Terraform, and executes runs on its own disposable virtual machines. Terraform runs managed by HCP Terraform are called remote operations. HCP Terraform executes Stack deployment runs remotely, and you cannot execute Stack runs locally.

### Protecting private environments

Your HCP Terraform agent must use v1.25.0 or above to execute Stack deployment runs. To learn more about agent versioning, refer to [Updates](/terraform/cloud-docs/agents/agents#updates).

[HCP Terraform agents](/terraform/cloud-docs/agents/) let HCP Terraform communicate with isolated, private, or on-premises infrastructure. The agent polls HCP Terraform for any changes to your configuration and executes the changes locally, so you do not need to allow public ingress traffic to your resources. Agents let you control infrastructure in private environments without modifying your network perimeter. Note that [agent hooks](/terraform/cloud-docs/agents/hooks) do not support Stack workflows.

## Configuration versions

A Stack configuration version is a snapshot of all of the pieces that make up your Stack. A Stack configuration version includes:

- Your component configuration, in `tfcomponent.hcl` files.
- Your deployment configuration, in the `tfdeploy.hcl` file.
- The modules that implement your individual Stack components.
- The input values that each deployment block passes to your Stack components.
HCP Terraform creates a new configuration version each time any of the following change:

- Your configuration changes and VCS automatically fetches that change.
- Your configuration changes and you manually fetch that change in HCP Terraform.
- You manually upload a configuration.
- An [`upstream_input`](/terraform/language/stacks/deploy/pass-data#consume-the-output-from-an-upstream-stack) that your Stack depends on changes.

## Stack deployment runs

Each configuration version creates a deployment run for every deployment of your Stack to implement the changes in that version. A deployment run consists of the steps that Terraform can perform to update that deployment to match the associated configuration version. A single run can include many plan and apply cycles, and a successful run ends with an empty plan to reflect that deployments have executed all downstream changes.

The goal of deployment runs is to apply a configuration version to every deployment in the Stack. When all deployments successfully match the same Stack configuration, the Stack achieves convergence. To learn more about interacting with your Stack's deployment runs, refer to [Review deployment runs](/terraform/cloud-docs/stacks/deploy/runs).

### Deployment run order

Each Stack in HCP Terraform maintains its own configuration versions, and each version has a queue of deployment runs. HCP Terraform executes deployment runs in the order that you approve them. When you approve a deployment plan, HCP Terraform automatically dismisses all pending plans for that deployment from other configuration versions. HCP Terraform permanently discards plans from older configuration versions because you cannot revert your deployment to a previous configuration version. Plans from newer configuration versions remain rerunnable. After the current deployment completes, you can rerun plans from newer configuration versions and approve them to update your deployment to that newer version.
### Deployment runs and state

HCP Terraform lets multiple deployment runs exist simultaneously for different configuration versions, but only one run can modify a deployment's state at a time. Deployment runs lock state while
they are applying to ensure a run can finish without interference.

When you approve a deployment run, HCP Terraform locks the deployment's state before beginning execution. This lock remains in place throughout the entire run, because a run may require multiple plan-and-apply cycles to reach convergence. The lock prevents any other approved runs from starting their execution and potentially creating conflicts.

Each step in a deployment run can acquire a state lock, but only steps that modify state actually require locking. Two step types modify state and require locks:

- Apply steps that create, update, or destroy infrastructure.
- Import steps that move existing workspace resources under the management of a Stack.

While a deployment holds a state lock, other runs for that deployment must wait. These waiting runs queue in order and cannot begin execution until the current run:

- Successfully completes all its plan-and-apply cycles
- Fails and stops execution
- Gets manually canceled by an operator

After the running deployment releases its lock, the next approved run in the queue can acquire the lock and begin its execution. To learn more about state for deployments in a Stack, refer to [State for Stacks](/terraform/cloud-docs/stacks/state).

### Deployment run modes

Deployment runs have three possible run modes:

- The default run mode, normal mode, creates plans and applies them after approval.
- The destroy run mode triggers when you delete a deployment from your configuration.
- The import run mode triggers when HCP Terraform converts a workspace into a Stack. To learn more, refer to [Terraform migrate](#LINK).

#### Destroy mode

You can trigger the destroy run mode for Stacks in two ways:

- Set the `destroy` argument to `true` on a `deployment` block.
- Remove a `deployment` block from your deployment configuration.
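The first approach might look like the following sketch of a `tfdeploy.hcl` entry. The deployment name and `region` input are hypothetical placeholders:

```hcl
# tfdeploy.hcl — marking one deployment for destruction (illustrative names)
deployment "staging" {
  inputs = {
    region = "us-west-2"
  }

  # Setting destroy to true triggers a destroy run for this deployment
  # while keeping the block, and the inputs it declares, in the configuration.
  destroy = true
}
```

Keeping the block in place, rather than deleting it, means the configuration still carries the inputs and provider authentication that the destroy run needs.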
If you remove a deployment block from your configuration, HCP Terraform starts a destroy run using the last configuration that included that deployment. If you plan to update the provider configuration in your Stack and also destroy some deployments, we recommend using the `destroy` argument to remove deployments, which ensures your configuration has the authentication necessary to destroy each deployment.

### Steps in a deployment run

A deployment run consists of the steps that Terraform can perform to update that deployment to match the associated configuration version. The number of steps is different for each plan, depending on what is necessary to get a deployment to match the change in that configuration version. A deployment run can include multiple plan-and-apply steps to ensure the downstream impacts of changes fully roll out to that deployment.

## States of a deployment run step

Each step in a deployment run matches a specific state as HCP Terraform executes a deployment run. A deployment run step exists in one of four primary states:

- [Pending](#pending)
- [Running](#running)
- [Waiting](#waiting)
- [Complete](#complete)

### Pending

Pending steps wait for previous steps to complete. Steps execute sequentially within a deployment run, so each step remains in the pending state until all preceding steps finish. For example, an apply step waits in the pending state while its associated plan step executes.

### Running

Running steps are steps that HCP Terraform is actively executing. During a running step, Terraform is doing one of the following:

- Planning what changes a deployment needs to match the configuration version.
- Applying approved changes.
- Importing existing workspace resources under the management of a Stack.

### Waiting

Runs that produce changes pause in the waiting step state to wait for operator approval.
HCP Terraform requires manual confirmation before proceeding with infrastructure changes, unless
you configure [auto-approval rules for your deployments](/terraform/language/stacks/deploy/conditions).

### Complete

The step finishes execution and enters one of three final states:

- Success
- Failed
- Cancelled

Once a step reaches the complete state, you cannot restart the deployment run. If a step fails or gets canceled, you must create a new deployment run to retry the operation.
# Configure a Stack

This guide explains configuring a Stack's name, description, project, and VCS settings in HCP Terraform.

## Requirements

To view a Stack and its configurations, you must also be a member of a team in your organization with one of the following permissions:

- [Organization-level **Manage all projects**](/terraform/cloud-docs/users-teams-organizations/permissions/organization) or higher
- [Project-level **Read**](/terraform/cloud-docs/users-teams-organizations/permissions/project) or higher to view a Stack
- [Project-level **Maintain**](/terraform/cloud-docs/users-teams-organizations/permissions/project) or higher to update a Stack

## Configure a Stack

You can configure a Stack by performing the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io) and select the organization that contains your Stack.
1. In the navigation menu, select **Projects** under **Manage**.
1. Select the project containing your Stack.
1. Select the Stack you want to configure.
1. Select **Settings** in the side navigation.
1. Update any of the following settings:
   - Name
   - Description
   - Project. Learn about [moving a Stack to another project](/terraform/cloud-docs/stacks/configure#move-a-stack-to-another-project).
   - Version control settings, including:
     - You can view any linked VCS repository, add a VCS connection, change your source repository, or disconnect your Stack from VCS. If you disconnect your Stack from VCS, you must upload further Stack configurations manually through the CLI.
     - Stacks automatically trigger new runs when you push a configuration change to a linked repository. To stop HCP Terraform from automatically fetching configuration changes, toggle **VCS Trigger Enabled**.
     - You can enable or disable **Automatic speculative plans** to have your Stack create plan-only runs for pull requests to your linked VCS repository.
     - You can also configure Stacks to work from either a branch-based or tag-based workflow.
   - You can **Enable debug logging** to download detailed logs from your Stack's runs.
   - You can change your Stack's **Execution mode** to either **Remote** or **Agent**. Refer to [Change execution mode](#change-execution-mode) to learn more.
1. Click **Save settings** to apply your changes.

## Move a Stack to another project

To move a Stack, you must have the **Manage all Projects** organization permission or explicit team admin privileges on both the source and destination projects. Moving a Stack may alter or remove access for some teams based on their [project-level permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project).

Upstream Stacks must be in the same project as their downstream Stacks. Moving an upstream Stack to another project breaks any downstream Stack inputs, affecting future plans. Move upstream and downstream Stacks together to avoid disruptions. Learn more about [passing data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

### Change execution mode

By default, HCP Terraform deploys infrastructure remotely using its own disposable virtual machines. This default execution mode is called **Remote** mode.

[HCP Terraform agents](/terraform/cloud-docs/agents/) let HCP Terraform communicate with isolated, private, or on-premises infrastructure. Your HCP Terraform agent must use v1.25.0 or above to execute Stack deployment runs. To learn more about agent versioning, refer to [Updates](/terraform/cloud-docs/agents/agents#updates).

After setting up an agent and an agent pool, you can change a Stack's execution mode to **Agent** to have HCP Terraform execute your Stack runs on an agent. To learn more about setting up agents, refer to [Install and run agents](/terraform/cloud-docs/agents/agents).
## Next steps

After configuring your Stack, you can learn more about how to [review your Stack's deployment runs](/terraform/cloud-docs/stacks/deploy/runs) or [destroy your Stack](/terraform/cloud-docs/stacks/destroy).
# State in Stacks

HCP Terraform stores the state of Stacks in files corresponding to each deployment of that Stack. Each deployment's state file includes the state of every component in that deployment.

## View state

You can view the state of a deployment in each [run](/terraform/cloud-docs/stacks/runs) of that deployment. To view the state of a deployment run, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select your organization.
2. Select the project your Stack lives in and click that Stack name.
3. Select a version of your Stack configuration.
4. Click on the deployment you want to review the state of.
5. Click **View state** in the top right-hand corner to view your deployment state at each point in that deployment run.

HCP Terraform lists the components that make up the current deployment. You can expand each component and search the available components to review the resources within those components.

Note that HCP Terraform redacts sensitive values in the UI of deployment state, but those values are still stored. To learn more about sensitive values in state, refer to [Manage sensitive data](/terraform/language/manage-sensitive-data).

## Manage state

Stacks automatically update and upgrade state whenever you plan or apply a deployment run. To update the state of a deployment, update your component configuration files and apply the corresponding deployment runs to update the state of each deployment. You cannot manually affect the state of a Stack deployment without updating your configuration.
The following actions in your component configuration update the state of a deployment when you apply the corresponding deployment runs:

- Define new `component` blocks
- Update a `component` block
- [Remove components](/terraform/language/stacks/component/manage#remove-components)
- [Update the resources in the module](/terraform/language/stacks/component/manage#manage-resources) that a `component` block sources

Adding new `deployment` blocks to your [deployment configuration file](/terraform/language/stacks/deploy/config) creates a new state file for that deployment after you apply the corresponding run. Applying a run that removes a `deployment` block, or using the `destroy=true` argument in a deployment, removes that deployment and its corresponding state file in HCP Terraform.

## State locking and runs

HCP Terraform lets multiple deployment runs exist simultaneously for different configuration versions, but only one run can modify a deployment's state at a time. Deployment runs lock state while they are applying to ensure a run can finish without interference. To learn more, refer to [deployment runs](/terraform/cloud-docs/stacks/runs).

### Access state from other Stacks

If you have multiple Stacks that do not share a provisioning lifecycle, you can link Stacks together to export data from one Stack for another to consume. If the output value of a Stack changes after a run, HCP Terraform automatically detects the change in state and triggers runs for any Stacks that depend on those outputs. To learn more and review examples, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).

## Managed resources in state

A managed resource is a resource in a state file where `mode = "managed"`. HCP Terraform reads all the deployment's state files to determine the total number of managed resources for that Stack and determine the cost for your organization.
To learn more about managed resources and billing, refer to [Estimate HCP Terraform cost](/terraform/cloud-docs/overview/estimate-hcp-terraform-cost).
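As a sketch of the Stack-linking described above, an upstream Stack can export a value in its deployment configuration with a `publish_output` block. This is a hedged illustration; the output name, description, and the `deployment.network.vpc_id` reference are hypothetical placeholders:

```hcl
# tfdeploy.hcl of an upstream Stack — exporting a value for downstream Stacks
# (the "network" deployment and vpc_id output are illustrative)
publish_output "vpc_id" {
  description = "VPC ID that downstream Stacks can consume"
  value       = deployment.network.vpc_id
}
```

When this output's value changes after a run, HCP Terraform triggers runs for any downstream Stacks that consume it.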
# Create and review configuration versions

Each Stack deployment uses a configuration version to determine what infrastructure to create, update, or delete. You can roll out changes to your Stack deployments by creating new configuration versions. HCP Terraform is the interface for keeping track of your Stack configuration over time and the corresponding deployments for each version of your configuration.

## Background

A Stack configuration version is a snapshot of all of the pieces that make up your Stack. A Stack configuration version includes:

- Your component configuration, in `tfcomponent.hcl` files.
- Your deployment configuration, in the `tfdeploy.hcl` file.
- The modules that implement your individual Stack components.
- The input values that each `deployment` block passes to your Stack components.

## Requirements

To view a Stack and its configuration versions, you must also be a member of a team in your organization with one of the following permissions:

- [Organization-level **Manage all projects**](/terraform/cloud-docs/users-teams-organizations/permissions/organization) or higher
- [Project-level **Read**](/terraform/cloud-docs/users-teams-organizations/permissions/project) or higher

## Create a configuration version

HCP Terraform creates a new configuration version each time any of the following change:

- Your configuration changes and VCS automatically fetches that change.
- Your configuration changes and you manually fetch that change in HCP Terraform.
- You manually upload a configuration.
- An [`upstream_input`](/terraform/language/stacks/deploy/pass-data#consume-the-output-from-an-upstream-stack) that your Stack depends on changes.

Each configuration version creates a deployment run for every deployment of your Stack to implement the changes in that version. To learn more about deployments and how they plan and apply infrastructure, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs).
### VCS-linked repositories

Stacks automatically detect when you push changes to your configuration from a VCS-linked repository. Every time you push changes to your repository, HCP Terraform automatically fetches your configuration and creates a new configuration version.

Whether automatic or manual, every time HCP Terraform fetches a new version of your configuration, it creates a new configuration version, even if the configuration did not change. Each configuration version creates a deployment run for every deployment of your Stack in order to implement the changes in that version. Learn more about [reviewing and approving deployment runs](/terraform/cloud-docs/stacks/deploy/runs).

#### Manually create a configuration version

If something changes outside of your VCS-linked repository, you can manually create a configuration version to update your Stack deployments. For example, if a value in a variable set changes or if someone changes resources directly in the AWS console.

If your Stack is linked to a VCS repository, you can manually generate a new configuration version by doing the following:

1. Sign in to [HCP Terraform](https://app.terraform.io), and select the organization that contains your Stack.
1. In the navigation menu, select **Projects** under **Manage**.
1. Select the project containing your Stack.
1. Select the Stack you want to review.
1. Click **Fetch configurations from VCS**.

### Upload configuration with the CLI

You can use the `terraform stacks configuration upload` command to manually upload your configuration files to create a new configuration version. This command is useful if your Stack is not linked to a VCS repository, or if you want to upload configuration changes outside of your VCS repository. For more details, refer to the [`terraform stacks configuration upload` reference](/terraform/cli/commands/stacks/configuration/upload).
### Upload configuration with the API

You can also create a configuration version using the HCP Terraform API. To learn more, refer
to the [Stacks API](/terraform/cloud-docs/api-docs/stacks/configurations#create-a-stack-configuration).

## Review configuration versions

The **Configurations** page contains a numbered sequential list of your configuration versions. Configuration versions create deployment runs for each of your Stack's deployments. HCP Terraform marks each configuration version as **Completed** if it created deployment runs for all of your Stack's deployments, or **Failed** if it could not create the necessary deployment runs.

Clicking on an individual configuration version reveals the following:

- How long ago the configuration version was created.
- The latest commit that HCP Terraform based this configuration version on, and who made that commit.
- The deployment runs that correspond to this configuration version. Each deployment run lists:
  - The name of the deployment associated with this run.
  - The name of the deployment group associated with this run.
  - The status of the run.

Clicking the ellipsis next to a run reveals options to **View deployment run** or **Approve all plans**. Note that you can only approve plans in the **Pending** state. Clicking **Details** reveals any commit message associated with the run, and whether there were any updates for components sourced from registry modules or external repositories.

Clicking on a Stack's **Inputs** reveals a list of the inputs this Stack has access to from `upstream_input` blocks, along with the upstream Stacks that supply those inputs. Clicking on a Stack's **Outputs** reveals a list of the outputs of this Stack and any downstream Stacks that intake those outputs. To learn more about passing data from one Stack to another, refer to [Pass data from one Stack to another](/terraform/language/stacks/deploy/pass-data).
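The `upstream_input` blocks listed under a Stack's **Inputs** come from its deployment configuration. The following is a hedged sketch of what consuming an upstream value might look like; the block label, the `source` address, and the `vpc_id` value are hypothetical placeholders rather than confirmed syntax:

```hcl
# tfdeploy.hcl of a downstream Stack — consuming an upstream Stack's output
# (names and the source address are illustrative)
upstream_input "network" {
  type   = "stack"
  source = "app.terraform.io/my-org/My Project/networking"
}

deployment "production" {
  inputs = {
    vpc_id = upstream_input.network.vpc_id
  }
}
```

A change to the upstream Stack's published output creates a new configuration version for this downstream Stack, as described above.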
## Next steps

Learn about [deployment runs](/terraform/cloud-docs/stacks/deploy/runs) or how to [approve or discard deployment run plans](/terraform/cloud-docs/stacks/deploy/runs) to manage your deployments.
# Review deployment runs

Deployment runs are a combination of an individual configuration version and one of your Stack's deployments. As in the traditional Terraform workflow, HCP Terraform creates runs every time a new configuration version introduces potential changes for a deployment. To learn more about how deployment runs work, including how they update Stack deployment state, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs).

You can also view, approve, or cancel deployment runs directly from the Terraform CLI. To learn more, refer to the [`terraform stacks deployment-run` commands](/terraform/cli/commands/stacks/deployment-run).

## Requirements

To view a Stack and its deployment runs, you must also be a member of a team in your organization with one of the following permissions:

- [Organization-level **View all projects**](/terraform/cloud-docs/users-teams-organizations/permissions) or higher
- [Project-level **Read**](/terraform/cloud-docs/users-teams-organizations/permissions) or higher to view deployment runs
- [Project-level **Write**](/terraform/cloud-docs/users-teams-organizations/permissions) or higher to interact with deployment runs

## View a deployment

If you are not already on your Stack's deployment page, navigate to it:

1. Sign in to [HCP Terraform](https://app.terraform.io), and select the organization that contains your Stack.
1. In the navigation menu, click **Projects** under **Manage**.
1. Select the project containing your Stack.
1. Select the Stack you want to review.
1. Select **Deployments** in the side navigation.

A Stack's **Deployments** page displays a list of all of that Stack's deployments. Each deployment lists the latest available configuration version for that deployment. Click **View run history** to view a list of all of the configuration versions a deployment has executed deployment runs for, and the status of each run.
You can review the details of a deployment run by clicking on a specific configuration version.

## View deployment runs

A deployment run is a combination of an individual configuration version and one of your Stack's deployments. You can review the latest deployment run from the **Deployments** page by clicking on the configuration version number under **Activity**. You can also view deployment runs for specific versions on the **Configurations** page by selecting a version and clicking on the name of a specific deployment.

A deployment run consists of the steps that Terraform can perform to update that deployment to match the associated configuration version. All deployment run pages display the following information:

- The ID of the deployment run.
- The name of the associated deployment.
- The configuration version number associated with this deployment run.
- The status of the deployment run.
- If a deployment run was approved, the name of the approver and any comment they left when approving.
- When HCP Terraform created the run.
- Any infrastructure resources the deployment run plans to create, update, or destroy.
- A button to **Approve all plans** for this configuration version, approving this deployment run and other associated runs for this configuration version.
- A button to **Cancel run** to discard this deployment run.

HCP Terraform stores the state of Stacks in files corresponding to each deployment of that Stack. After applying any plan, HCP Terraform displays a **View state** button next to each step of that plan to let you view a sanitized version of the state for the deployment at that step. To learn more about how state works for deployments, refer to [State in Stacks](/terraform/cloud-docs/stacks/state).

If you have enabled [debug logging](/terraform/cloud-docs/stacks/configure) for your Stack, you can download detailed logs of each step of your deployment run by expanding the **Inspect** dropdown.
### Deployment run status

A deployment run can be in one of several states:

| Status | Description |
| :---- | :---- |
| Plan: Queued | HCP Terraform is preparing to create a plan for this deployment. |
| Plan: Started | HCP Terraform is starting to create a plan for a deployment. |
| Plan: Running | HCP Terraform is creating a plan for a deployment. |
| Plan: Pending operator | HCP Terraform created a plan for a deployment and is waiting for you to approve that plan. |
| Apply: Queued | HCP Terraform is preparing to apply a plan for this deployment. |
| Errored | The deployment run has encountered an error. Review the deployment run's errors to learn more. |

## Approve plans

Like traditional Terraform plans, Stack deployment runs list the changes that occur if you approve that plan. Each deployment run lists its expected resource changes for that deployment, and you can review those changes to decide whether to apply a plan. You can manage each deployment plan independently, so any plans you approve only affect the current deployment you are interacting with.

To approve a deployment run's plan, perform the following steps:

1. On the **Configurations** page, click on a specific configuration version, then click on the ellipsis next to a deployment and select **View deployment run**.
1. On the deployment run page, you can click **Approve plan** to approve the plan for this deployment, or **Approve plan with comment** to approve the plan and add a comment.

You can also approve a deployment run's plan from the **Deployments** page:

1. On the **Deployments** page, click on the configuration version number under **Activity**.
1. Click **Approve plan** to approve the plan for this deployment, or **Approve plan with comment** to approve the plan and add a comment.
To approve all of the deployment runs for a specific configuration version, you can do either of the following:

1. Click **Approve all plans** on a deployment run to approve all pending plans for this configuration version across all deployments.
1. On the **Configurations** page, click on a specific configuration version, then click on the ellipsis next to a specific deployment and click **Approve all plans**.

You can also view, approve, or cancel deployment runs directly from the Terraform CLI. To learn more, refer to the [`terraform stacks deployment-run` commands](/terraform/cli/commands/stacks/deployment-run).

### Convergence checks

By default, each Stack has an `auto_approve` rule named `empty_plan`, which auto-approves a plan if it does not contain changes.

After applying any step in a plan, HCP Terraform automatically triggers a step called a convergence check. A convergence check is a replan to ensure components do not have any [deferred changes](#deferred-changes). HCP Terraform continues to trigger new replan steps until the convergence check returns an empty plan, indicating that the deployment has reached convergence. To learn more about deployment runs and their steps, refer to [Stack deployment runs](/terraform/cloud-docs/stacks/runs).

## Deferred changes

As with Terraform configuration files, HCP Terraform generates a dependency graph and creates resources defined in `tfcomponent.hcl` and `tfdeploy.hcl` files. When you deploy a Stack with resources that depend on resources provisioned by other components in your Stack, HCP Terraform recognizes the dependency between components and automatically defers the step in that plan until HCP Terraform can complete it successfully.

-> **Hands-on**: Complete the [Manage Kubernetes workloads with stacks](/terraform/tutorials/cloud/stacks-eks-deferred) tutorial to create plans with deferred changes.

HCP Terraform notifies you in the UI if a plan contains deferred changes.
Approving a plan with deferred changes makes HCP
Terraform automatically create a follow-up plan to properly set up resources in the order of operations those resources require.
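The built-in `empty_plan` behavior described under convergence checks can be supplemented with your own auto-approve rules in the deployment configuration. The following is a hedged sketch only; the `orchestrate` block shape, the rule name, and the `context.plan.changes.total` expression are assumptions drawn from the Stacks deployment-conditions documentation, not guaranteed syntax:

```hcl
# tfdeploy.hcl — a hypothetical auto-approve rule (sketch, not verified syntax)
orchestrate "auto_approve" "no_changes" {
  check {
    # Auto-approve only when the plan proposes zero resource changes,
    # mirroring the built-in empty_plan rule.
    condition = context.plan.changes.total == 0
    reason    = "Plan contains resource changes that require review."
  }
}
```

Refer to [auto-approval rules for your deployments](/terraform/language/stacks/deploy/conditions) for the authoritative syntax and available plan context attributes.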
-0.07297949492931366, 0.032695621252059937, 0.11361268162727356, … | 0.079331 |
# Overview @include 'tfc-package-callouts/project-permissions.mdx' Projects let you organize your workspaces and Stacks and scope access to those workspace and Stack resources. Each project has a separate permissions set that you can use to grant teams access to all workspaces and Stacks in the project, defining access control boundaries for teams and their resources. Project-level permissions are more granular than organization-level permissions, but less specific than individual workspace-level grants. When deciding how to structure your projects, consider which groups of resources need distinct access rules. You may wish to define projects by business units, departments, subsidiaries, or technical teams. > \*\*Hands On:\*\* Try our [Managing Projects](/terraform/tutorials/cloud/projects) tutorial. @include 'eu/project.mdx' ## Default project Every workspace and Stack must belong to exactly one project. By default, all workspaces belong to an organization's \*\*Default Project\*\*. You can rename the default project, but you cannot delete it. You can specify a workspace's project at the time of creation and move it to a different project later. The "Manage Workspaces" team permission lets users create and manage workspaces. Users with this permission can read and manage all workspaces, but new workspaces are automatically added to the "Default Project" and users cannot access the metadata for other projects. To create workspaces under other projects, users also need the "Manage Projects & Workspaces" permission or the admin role for the project they wish to use. ## Managing projects The "Manage all Projects" team permission lets users manage projects. Users with this permission can view, edit, delete, and assign team access to all of an organization's projects. Refer to [Managing Projects](/terraform/cloud-docs/projects/manage) for more details. 
## Execution mode By default, a project uses the organization's [default execution mode](/terraform/cloud-docs/users-teams-organizations/organizations#organization-settings), which is either \*\*Remote\*\* or \*\*Local\*\*, but you can override the organization's execution mode in your project. Any workspaces created in the project after changing the project execution mode inherit the project default. Refer to [Change the execution mode](/terraform/cloud-docs/projects/manage#change-the-execution-mode) for instructions. You can enable the following execution modes: - \*\*Organization Default\*\*: Uses the organization's execution mode. This is either \*\*Remote\*\* or \*\*Local\*\*. - \*\*Remote\*\*: Plan and apply operations run on HCP Terraform's or Terraform Enterprise's infrastructure. You and your team have the ability to review and collaborate on runs within the application. - \*\*Local\*\*: Plan and apply operations run on machines that you control. HCP Terraform and Terraform Enterprise only store and synchronize state. Stacks do not support local execution. - \*\*Agent\*\*: Plan and apply operations run on infrastructure you control, through agents that HCP Terraform or Terraform Enterprise manages. | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/projects/index.mdx | main | terraform | [
-0.032419800758361816, -0.01749548502266407, -0.02606535702943802, … | 0.058538 |
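The project and team-access model described above can also be automated with the `tfe` provider. The following is a sketch, assuming the `tfe\_project`, `tfe\_team`, and `tfe\_team\_project\_access` resources from the TFE provider; the organization, team, and project names are illustrative:

```hcl
# Sketch: create a project and grant a team admin access to it.
# "my-org" and the resource names are illustrative placeholders.
resource "tfe_project" "networking" {
  organization = "my-org"
  name         = "networking"
  description  = "Workspaces for shared networking infrastructure"
}

resource "tfe_team" "networking_admins" {
  name         = "networking-admins"
  organization = "my-org"
}

# Scope the team's permissions to this project only, rather than
# granting organization-wide access.
resource "tfe_team_project_access" "networking" {
  access     = "admin"
  team_id    = tfe_team.networking_admins.id
  project_id = tfe_project.networking.id
}
```

Scoping access at the project level like this keeps the team's permissions aligned with the access control boundaries the project defines.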
# Project Best Practices Projects let you group and scope access to your workspaces and Stacks. You can group related workspaces and Stacks into projects and give teams more permissive access to individual projects rather than granting them permissions to the entire organization. Projects offer several advantages to help you further develop your workspace and Stack grouping strategy: - \*\*Focused view\*\*: You can scope which workspaces and Stacks HCP Terraform displays by project, allowing for a more organized view. - \*\*Simplified management\*\*: You can create project-level permissions and variable sets that apply to all current and future workspaces and Stacks in the project. For example, you can create a project variable set containing your cloud provider credentials for all workspaces and Stacks in the project to access. - \*\*Reduced risk with centralized control\*\*: You can scope project permissions to only grant teams administrator access to the projects, workspaces, and Stacks they need. ## Recommendations When using projects, we recommend the following: - \*\*Automate with Terraform\*\*: Automate the creation of projects, variable sets, and teams together using the [TFE provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs). - \*\*Designate a landing zone project\*\*: Landing zone projects contain what you need to create all other projects, teams, Stacks, and workspaces. This lets you have a variable set that includes the organization token, which the TFE provider can use to create other resources in your organization. You can also create a [Sentinel policy](/terraform/cloud-docs/policy-enforcement) to prevent users in other projects from accessing the organization token. - \*\*Maintain least privilege\*\*: Restrict the number of project administrators to maintain the principle of least privilege. ## Project boundaries Finally, decide on the logical boundaries for your projects. 
Some considerations to keep in mind include: - \*\*Provider boundaries\*\*: For smaller organizations, creating one project per cloud account may make it easier to manage access. Projects can use [dynamic credentials](/terraform/tutorials/cloud/dynamic-credentials) by configuring a project variable set, avoiding hard-coded, long-lived static credentials. - \*\*Least privilege\*\*: You can create teams and grant them access to projects with workspaces or Stacks of similar areas of ownership. For example, a production networking workspace should be in a separate project from a development compute workspace. - \*\*Use variable sets\*\*: Project-wide variable sets let you configure and reuse values such as default tags for cost-codes, owners, and support contacts. Projects can own variable sets, enabling you to separate management and access to sets between projects. - \*\*Practitioner efficiency\*\*: Consider whether it makes sense for a practitioner to need to visit multiple projects to complete a deployment. ## Next steps This article introduces some considerations to keep in mind as your organization matures its project usage. Being deliberate about how you use projects to organize your infrastructure ensures smoother and safer operations. To learn more about HCP Terraform and Terraform Enterprise best practices for workspaces, refer to [Workspace Best Practices](/terraform/cloud-docs/workspaces/best-practices). To learn best practices for writing Terraform configuration, refer to the [Terraform Style Guide](/terraform/language/style). [HCP Terraform](/terraform/tutorials/cloud-get-started) provides a place to try these concepts hands-on, and you can [get started for free](https://app.terraform.io/public/signup/account). | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/projects/best-practices.mdx | main | terraform | [
-0.04388429597020149, -0.0010442158672958612, 0.0053748018108308315, … | 0.025114 |
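The "Use variable sets" recommendation above can also be codified with the TFE provider. The following is a sketch, assuming the `tfe\_variable\_set`, `tfe\_variable`, and `tfe\_project\_variable\_set` resources; the organization name, tag key, and value are illustrative:

```hcl
# Sketch: a project-wide variable set of default tags, so every
# workspace in the project reuses the same cost-code value.
resource "tfe_project" "networking" {
  organization = "my-org"   # illustrative organization name
  name         = "networking"
}

resource "tfe_variable_set" "default_tags" {
  name         = "default-tags"
  organization = "my-org"
}

resource "tfe_variable" "cost_code" {
  key             = "cost_code"
  value           = "CC-1234"          # illustrative cost code
  category        = "terraform"
  variable_set_id = tfe_variable_set.default_tags.id
}

# Attach the variable set to the project so all current and future
# workspaces in the project can access it.
resource "tfe_project_variable_set" "default_tags" {
  project_id      = tfe_project.networking.id
  variable_set_id = tfe_variable_set.default_tags.id
}
```

Because the project owns the association, adding a workspace to the project is enough for it to pick up the shared values.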
# Manage projects This topic describes how to create and manage projects in HCP Terraform and Terraform Enterprise. A project is a folder containing one or more workspaces and Stacks. @include 'eu/project.mdx' ## Requirements You must have the following permissions to manage projects: - You must be a member of a team with the \*\*Manage all Projects\*\* permissions enabled to create a project. Refer to [Organization Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization) for additional information. - You must be a member of a team with the \*\*Visible\*\* option enabled under \*\*Visibility\*\* in the organization settings to configure a new team's access to the project. Refer to [Team Visibility](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility) for additional information. - You must be a member of a team with update and delete permissions to be able to update and delete projects, respectively. To delete tags on a project, you must be a member of a team with the \*\*Admin\*\* permission group enabled for the project. To create tags for a project, you must be a member of a team with the \*\*Write\*\* permission group enabled for the project. ## View a project To view your organization's projects: 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select \*\*Projects\*\* from the sidebar. 1. Search for a project that you want to view. You can use the following methods: - Sort by column header. - Use the search bar to search on the name of a project or a tag. 1. Click on a project's name to view more details. ## Create a project 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select \*\*Projects\*\* from the sidebar. 1. Click \*\*+ New project\*\*. 1. Specify a name for the project. The name must be unique within the organization and can only include letters, numbers, inner spaces, hyphens, and underscores. 1. 
Add a description for the project. This field is optional. 1. Open the \*\*Add key value tags\*\* menu to add tags to your project. Workspaces you create within the project inherit project tags. Refer to [Define project tags](#define-project-tags) for additional information. 1. Click \*\*+ Add tag\*\* and specify a tag key and tag value. If your organization has defined reserved tag keys, they appear in the \*\*Tag key\*\* field as suggestions. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations/manage-reserved-tags) for additional information. 1. Click \*\*+ Add tag\*\* to attach any additional tags. 1. Click \*\*Create\*\* to finish creating the project. HCP Terraform returns a new project page displaying all the project information. ## Edit a project 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select \*\*Projects\*\* from the sidebar. 1. Click the name of the project you want to edit. 1. Choose \*\*Settings\*\* from the sidebar. On the \*\*General settings\*\* page, you can update the project name, project description, and delete the project. On the \*\*Team access\*\* page, you can modify team access to the project. ## Automatically destroy inactive workspaces @include 'tfc-package-callouts/ephemeral-workspaces.mdx' You can configure HCP Terraform to automatically destroy each workspace's infrastructure in a project after a period of inactivity. A workspace is inactive if the workspace's state has not changed within your designated time period. If you configure a project to auto-destroy its infrastructure when inactive, any run that updates Terraform state further delays the scheduled auto-destroy time by the length of your designated timeframe. The user interface does not prompt you to approve automated destroy plans. We recommend only using this setting for development environments. To schedule an auto-destroy run after a period of workspace inactivity: 1. 
Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the project with workspaces you want to destroy. 1. | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/projects/manage.mdx | main | terraform | [
-0.042104799300432205, -0.010487695224583149, -0.057564809918403625, … | 0.038298 |
does not prompt you to approve automated destroy plans. We recommend only using this setting for development environments. To schedule an auto-destroy run after a period of workspace inactivity: 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the project with workspaces you want to destroy. 1. Choose \*\*Settings\*\* from the sidebar, then \*\*Auto-destroy Workspaces\*\*. 1. Click \*\*Set up default\*\*. 1. Select or customize a desired timeframe of inactivity. 1. Click \*\*Confirm default\*\*. You can configure an individual workspace's auto-destroy settings to override this default configuration. Refer to [automatically destroy workspaces](/terraform/cloud-docs/workspaces/settings/deletion#automatically-destroy) for more information. ## Delete a project You can only delete projects that do not contain Stacks or workspaces. To delete an empty project: 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise. 1. Click \*\*Projects\*\*. 1. Search for a project that you want to review by scrolling down the table or searching for a project name in the search bar above the project table. 1. Choose \*\*Settings\*\* from the sidebar. 1. Click the \*\*Delete\*\* button. A \*\*Delete project\*\* modal appears. 1. Click the \*\*Delete\*\* button to confirm the deletion. HCP Terraform returns to the \*\*Projects\*\* view with the deleted project removed from the list. ## Define project tags You can define tags stored as key-value pairs to help you organize your projects and track resource consumption. Workspaces created in the project automatically inherit the tags, but workspace administrators with appropriate permissions can attach new key-value pairs to their workspaces to override inherited tags. Refer to [Create workspace tags](/terraform/cloud-docs/workspaces/tags) for additional information about using tags in workspaces. 
The following rules apply to tag keys and values: - Tags must be one or more characters. - Tags have a 255 character limit. - Tags can include letters, numbers, colons, hyphens, and underscores. - Tag values are optional. - You can create up to 10 unique tags per workspace and 10 unique tags per project. As a result, each workspace can have up to 20 tags. - You cannot use the following strings at the beginning of a tag key: - `hcp` - `hc` - `ibm` ## Change the execution mode In HashiCorp Cloud Platform (HCP) Europe organizations, you cannot set the execution mode at the project level. Instead, you can set an execution mode at the [organization](/terraform/cloud-docs/users-teams-organizations/organizations#general) or [workspace](/terraform/cloud-docs/workspaces/settings#execution-mode) level. To learn more about HCP Europe, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe). Navigate to the organization settings and choose the execution mode. Enabling the \*\*Remote\*\* execution mode instructs HCP Terraform or Terraform Enterprise to perform Terraform runs on its own disposable virtual machines. This provides a consistent and reliable run environment for workspaces and enables advanced features such as Sentinel policy enforcement, cost estimation, notifications, and version control integration. If you are using a Stack and want to learn about the available execution modes, refer to [deployment runs](/terraform/cloud-docs/stacks/runs). To disable remote execution for a project, enable \*\*Local\*\* execution mode. This mode lets you perform Terraform runs locally with the [CLI-driven run workflow](/terraform/enterprise/run/cli). Stacks do not support local execution. To let HCP Terraform or Terraform Enterprise communicate with isolated, private, or on-premises infrastructure, consider using [HCP Terraform agents](/terraform/cloud-docs/agents). 
By deploying a lightweight agent, you can establish a simple connection between your environment and HCP Terraform or Terraform Enterprise. Changing your project's execution mode after a workspace run has already been planned causes the run to error when it is applied. To minimize the number of runs that error when changing your project's execution mode for your workspaces, complete the following steps: 1. Disable [auto-apply](/terraform/enterprise/workspaces/settings#auto-apply) if it is enabled. 1. Complete any runs that are no longer in the [pending | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/projects/manage.mdx | main | terraform | [
-0.029684212058782578, 0.04947930574417114, 0.09923548251390457, … | -0.004773 |
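The agent-based execution described above can also be configured per workspace with the TFE provider. The following is a sketch, assuming the `tfe\_agent\_pool`, `tfe\_workspace`, and `tfe\_workspace\_settings` resources; the names are illustrative:

```hcl
# Sketch: route a workspace's runs through a self-hosted agent pool
# so HCP Terraform can reach isolated or on-premises infrastructure.
resource "tfe_agent_pool" "private_network" {
  name         = "private-network-pool"   # illustrative pool name
  organization = "my-org"                 # illustrative organization
}

resource "tfe_workspace" "example" {
  name         = "networking-prod-us-east"
  organization = "my-org"
}

resource "tfe_workspace_settings" "example" {
  workspace_id   = tfe_workspace.example.id
  execution_mode = "agent"   # "remote", "local", or "agent"
  agent_pool_id  = tfe_agent_pool.private_network.id
}
```

As the steps above describe, change the execution mode only after completing or locking out in-flight runs to avoid errored applies.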
been planned causes the run to error when it is applied. To minimize the number of runs that error when changing your project's execution mode for your workspaces, complete the following steps: 1. Disable [auto-apply](/terraform/enterprise/workspaces/settings#auto-apply) if it is enabled. 1. Complete any runs that are no longer in the [pending stage](/terraform/enterprise/run/states#the-pending-stage). 1. [Lock](/terraform/enterprise/workspaces/settings#locking) your workspace to prevent any new runs. 1. Change the execution mode. 1. Enable [auto-apply](/terraform/enterprise/workspaces/settings#auto-apply), if it was enabled before changing your execution mode. 1. [Unlock](/terraform/enterprise/workspaces/settings#locking) your workspace. | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/projects/manage.mdx | main | terraform | [
0.005996487569063902, 0.014871218241751194, 0.07447915524244308, … | -0.007567 |
# Create workspaces This topic describes how to create and manage workspaces in the HCP Terraform and Terraform Enterprise UI. A workspace is a group of infrastructure resources managed by Terraform. Refer to [Workspaces overview](/terraform/cloud-docs/workspaces) for additional information. > \*\*Hands-on:\*\* Try the [Get Started - HCP Terraform](/terraform/tutorials/cloud-get-started?utm\_source=WEBSITE&utm\_medium=WEB\_IO&utm\_offer=ARTICLE\_PAGE&utm\_content=DOCS) tutorials. ## Introduction Create new workspaces when you need to manage a new collection of infrastructure resources. You can use the following methods to create workspaces: - HCP Terraform UI: Refer to [Create a workspace](#create-a-workspace) for instructions. - Workspaces API: Send a `POST` call to the `/organizations/:organization\_name/workspaces` endpoint to create a workspace. Refer to the [API documentation](/terraform/cloud-docs/api-docs/workspaces#create-a-workspace) for instructions. - Terraform Enterprise provider: Install the `tfe` provider and add the `tfe\_workspace` resource to your configuration. Refer to the [`tfe` provider documentation](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/workspace) in the Terraform registry for instructions. - No-code provisioning: Use a no-code module from the registry to create a new workspace and deploy the module's resources. Refer to [Provisioning No-Code Infrastructure](/terraform/cloud-docs/workspaces/no-code-provisioning/provisioning) for instructions. Each workspace belongs to a project. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information. ## Requirements You must be a member of a team with one of the following permissions enabled to create and manage workspaces: - \*\*Manage all projects\*\* - \*\*Manage all workspaces\*\* - \*\*Admin\*\* permission group for a project. 
[permissions-citation]: #intentionally-unused---keep-for-maintainers ## Workspace naming We recommend using consistent and informative names for new workspaces. One common approach is combining the workspace's important attributes in a consistent order. Attributes can be any defining characteristic of a workspace, such as the component, the component's run environment, and the region where the workspace is provisioning infrastructure. This strategy could produce the following example workspace names: - networking-prod-us-east - networking-staging-us-east - networking-prod-eu-central - networking-staging-eu-central - monitoring-prod-us-east - monitoring-staging-us-east - monitoring-prod-eu-central - monitoring-staging-eu-central You can add additional attributes to your workspace names as needed. For example, you may add the infrastructure provider, datacenter, or line of business. We recommend using 90 characters or less for the name of your workspace. ## Create a workspace [workdir]: /terraform/cloud-docs/workspaces/settings#terraform-working-directory [trigger]: /terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering [branch]: /terraform/cloud-docs/workspaces/settings/vcs#vcs-branch [submodules]: /terraform/cloud-docs/workspaces/settings/vcs#include-submodules-on-clone Complete the following steps to use the HCP Terraform or Terraform Enterprise UI to create a workspace: 1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and choose your organization. 1. Click \*\*New\*\* and choose \*\*Workspace\*\* from the drop-down menu. 1. If you have multiple projects, HCP Terraform may prompt you to choose the project to create the workspace in. Only users on teams with permissions for the entire project or the specific workspace can access the workspace. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information. 1. Choose a workflow type. 1. 
Complete the following steps if you are creating a workspace that follows the VCS workflow: 1. Choose an existing version control provider from the list or configure a new system. You must enable the workspace project to connect to your provider. Refer to [Connecting VCS Providers](/terraform/cloud-docs/vcs) for more details. 1. If you choose the \*\*GitHub App\*\* provider, choose an organization and repository when prompted. The list only displays the first 100 repositories from your VCS provider. If your repository is missing from the list, enter the repository ID in the text field. 1. Refer to the following topics for information about configuring workspace settings in the \*\*Advanced options\*\* screen: - [Terraform Working Directory][workdir] - [Automatic Run Triggering][trigger] - [VCS branch][branch] - [Include submodules on clone][submodules] 1. Specify a name for the workspace. VCS workflow workspaces default to the name of the repository. The name must be unique within the organization and can include letters, numbers, hyphens, and underscores. Refer to [Workspace naming](#workspace-naming) for additional information. 1. Add an optional description for | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/workspaces/create.mdx | main | terraform | [
-0.027040041983127594, -0.0357428602874279, -0.01677946373820305, … | 0.05832 |
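Besides the UI workflow above, the `tfe\_workspace` resource named in the introduction can create the same workspace in configuration. The following is a sketch combining it with the naming convention from the "Workspace naming" section; the organization and project names are illustrative:

```hcl
# Sketch: create a workspace in a specific project, using the
# component-environment-region naming convention described above.
resource "tfe_project" "networking" {
  organization = "my-org"   # illustrative organization name
  name         = "networking"
}

resource "tfe_workspace" "networking_prod_us_east" {
  name         = "networking-prod-us-east"
  organization = "my-org"
  project_id   = tfe_project.networking.id
  description  = "Production networking for us-east"
}
```

Omitting `project_id` places the workspace in the organization's Default Project, matching the UI behavior.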
[Include submodules on clone][submodules] 1. Specify a name for the workspace. VCS workflow workspaces default to the name of the repository. The name must be unique within the organization and can include letters, numbers, hyphens, and underscores. Refer to [Workspace naming](#workspace-naming) for additional information. 1. Add an optional description for the workspace. The description appears at the top of the workspace in the HCP Terraform UI. 1. Click \*\*Create workspace\*\* to finish. For CLI- or API-driven workflows, the system opens the new workspace overview. For version control workspaces, the \*\*Configure Terraform variables\*\* page appears. ### Configure Terraform variables for VCS workflows After you create a new workspace from a version control repository, HCP Terraform scans its configuration files for [Terraform variables](/terraform/cloud-docs/variables#terraform-variables) and displays variables without default values or variables that are undefined in an existing [global or project-scoped variable set](/terraform/cloud-docs/variables/managing-variables#variable-sets). Terraform cannot perform successful runs in the workspace until you set values for these variables. Choose one of the following actions: - To skip this step, click \*\*Go to workspace overview\*\*. You can [load these variables from files](/terraform/cloud-docs/variables/managing-variables#loading-variables-from-files) or create and set values for them later from within the workspace. HCP Terraform does not automatically scan your configuration again; you can only add variables from within the workspace individually. - To configure variables, enter a value for each variable on the page. You may want to leave a variable empty if you plan to provide it through another source, like an `auto.tfvars` file. Click \*\*Save variables\*\* to add these variables to the workspace. 
## Next steps If you have already configured all Terraform variables, we recommend [manually starting a run](/terraform/cloud-docs/workspaces/run/ui#manually-starting-runs) to prepare VCS-driven workspaces. You may also want to do one or more of the following actions: - [Upload configuration versions](/terraform/cloud-docs/workspaces/configurations#providing-configuration-versions): If you chose the API or CLI-Driven workflow, you must upload configuration versions for the workspace. - [Edit environment variables](/terraform/cloud-docs/variables): Shell environment variables store credentials and customize Terraform's behavior. - [Edit additional workspace settings](/terraform/cloud-docs/workspaces/settings): This includes notifications, permissions, and run triggers to start runs automatically. - [Learn more about running Terraform in your workspace](/terraform/cloud-docs/workspaces/run/remote-operations): This includes how Terraform processes runs within the workspace, run modes, run states, and other operations. - [Create workspace tags](/terraform/cloud-docs/workspaces/tags): Add tags to your workspaces so that you can organize and track them. - [Browse workspaces](/terraform/cloud-docs/workspaces/browse): Use the interfaces available in the UI to browse, sort, and filter workspaces so that you can track resource consumption. ### VCS Connection If you connected a VCS repository to the workspace, HCP Terraform automatically registers a webhook with your VCS provider. A workspace with no runs will not accept new runs from a VCS webhook, so you must [manually start at least one run](/terraform/cloud-docs/workspaces/run/ui#manually-starting-runs). After you manually start a run, HCP Terraform automatically queues a plan when new commits appear in the selected branch of the linked repository or someone opens a pull request on that branch. Refer to [Webhooks](/terraform/cloud-docs/vcs#webhooks) for more details. 
| https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/workspaces/create.mdx | main | terraform | [
-0.046445462852716446, -0.030843647196888924, 0.04293088614940643, … | 0.045347 |
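The Terraform and environment variables described above can also be set in configuration instead of the UI. The following is a sketch, assuming the `tfe\_variable` resource; the keys, values, and workspace name are illustrative:

```hcl
# Sketch: set workspace variables with the tfe provider rather than
# entering them on the "Configure Terraform variables" page.
resource "tfe_workspace" "example" {
  name         = "networking-prod-us-east"   # illustrative workspace
  organization = "my-org"
}

# A Terraform variable, matching a variable declared in the
# workspace's configuration files.
resource "tfe_variable" "instance_type" {
  key          = "instance_type"
  value        = "t3.micro"
  category     = "terraform"
  workspace_id = tfe_workspace.example.id
}

# A shell environment variable, for example a provider setting.
resource "tfe_variable" "aws_region" {
  key          = "AWS_REGION"
  value        = "us-east-1"
  category     = "env"
  workspace_id = tfe_workspace.example.id
}
```

The `category` argument distinguishes Terraform variables from environment variables, mirroring the two variable types the documentation describes.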
# JSON data filtering Certain pages where JSON data is displayed, such as the [state viewer](/terraform/cloud-docs/workspaces/state) and [policy check JSON data viewer](/terraform/cloud-docs/policy-enforcement/sentinel/json), allow you to filter the results. This enables you to see just the data you need, and even create entirely new datasets to see data in the way you want to see it!  -> \*\*NOTE:\*\* \_Filtering\_ the data in the JSON viewer is separate from \_searching\_ it. To search, press Control-F (or Command-F on macOS). You can search and apply a filter at the same time. ## Entering a Filter To filter the data, enter the filter in the aptly named \*\*filter\*\* box in the JSON viewer, then press \*\*Apply\*\* or the enter key on your keyboard to apply the filter. The filtered results, if any, are displayed in the result box. Clearing the filter restores the original JSON data.  ## Filter Language The JSON filter language is a small subset of the [jq](https://stedolan.github.io/jq/) JSON filtering language. Selectors, literals, indexes, slices, iterators, and pipes are supported, as are array and object construction. At this time, parentheses and more complex operations such as mathematical operators, conditionals, and functions are not supported. Below is a quick reference of some of the more basic functions to get you started. ### Selectors Selectors allow you to pick an index out of a JSON object, and are written as `.KEY.SUBKEY`. For example, given an object of `{"foo": {"bar": "baz"}}` and the filter `.foo.bar`, the result would be displayed as `"baz"`. A single dot (`.`) without anything else always denotes the current value, unaltered. ### Indexes Indexes can be used to fetch array elements, or select non-alphanumeric object fields. They are written as `[0]` or `["foo-bar"]`, depending on the purpose. 
Given an object of `{"foo-bar": ["baz", "qux"]}` and the filter of `.["foo-bar"][0]`, the result would be displayed as `"baz"`. ### Slices Arrays can be sliced to get a subset of an array. The syntax is `[LOW:HIGH]`. Given an array of `[0, 1, 2, 3, 4]` and the filter of `.[1:3]`, the result would be displayed as `[1, 2]`. This also illustrates that the result of the slice operation is always of length HIGH-LOW. Slices can also be applied to strings, in which case a substring is returned with the same rules applied, with the first character of the string being index 0. ### Iterators Iterators can iterate over arrays and objects. The syntax is `[]`. Iterators iterate over the \_values\_ of an object only. So given an object of `{"foo": 1, "bar": 2}`, the filter `.[]` would yield an iteration of `1, 2`. Note that iteration results are not necessarily always arrays. Iterators are handled in a special fashion when dealing with pipes and object creators (see below). ### Array Construction Wrapping an expression in brackets (`[ ... ]`) creates an array with the sub-expressions inside the array. The results are always concatenated. For example, for an object of `{"foo": [1, 2], "bar": [3, 4]}`, the construction expressions `[.foo[], .bar[]]` and `[.[][]]` are the same, producing the resulting array `[1, 2, 3, 4]`. ### Object Construction Wrapping an expression in curly braces `{KEY: EXPRESSION, ...}` creates an object. Iterators work uniquely with object construction in that an object is constructed for each \_iteration\_ that the iterator produces. As a basic example, consider an array `[1, 2, 3]`. While the expression `{foo: .}` will produce `{"foo": [1, 2, 3]}`, adding an iterator to the expression so that it reads `{foo: .[]}` will produce 3 individual objects: `{"foo": 1}`, `{"foo": 2}`, and `{"foo": 3}`. | https://github.com/hashicorp/web-unified-docs/blob/main//content/terraform-docs-common/docs/cloud-docs/workspaces/json-filtering.mdx | main | terraform | [
### Pipes

Pipes feed the result of one expression into another, which you can use to rewrite expressions and reduce complexity. Iterators work with pipes in a fashion similar to object construction: the expression on the right-hand side of the pipe is evaluated once for every iteration. For example, for the object `{"foo": {"a": 1}, "bar": {"a": 2}}`, the expressions `{z: .[].a}` and `.[] | {z: .a}` produce the same results: `{"z": 1}` and `{"z": 2}`.
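The iterator, pipe, and construction behavior described above can likewise be sketched with Python comprehensions (illustrative only, not HashiCorp code):

```python
# filter: .[] -- iterate over an object's *values*
obj = {"foo": {"a": 1}, "bar": {"a": 2}}
iterated = list(obj.values())

# filter: .[] | {z: .a} (equivalently {z: .[].a}) -- the right-hand
# side of the pipe is evaluated once per iteration, yielding one
# object per iteration
piped = [{"z": v["a"]} for v in obj.values()]
assert piped == [{"z": 1}, {"z": 2}]

# filter: [.foo[], .bar[]] -- array construction concatenates the
# results of its sub-expressions
data = {"foo": [1, 2], "bar": [3, 4]}
constructed = [*data["foo"], *data["bar"]]
assert constructed == [1, 2, 3, 4]
```

Note how the pipe does not produce a single array but one result per iteration, which is why `{z: .[].a}` yields two separate objects rather than one object containing a list.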
# Health

HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. Health assessments include the following types of evaluations:

- [Drift detection](#drift-detection) determines whether your real-world infrastructure matches your Terraform configuration.
- [Continuous validation](#continuous-validation) determines whether custom conditions in the workspace's configuration continue to pass after Terraform provisions the infrastructure.

When you enable health assessments, HCP Terraform periodically runs health assessments for your workspace. Refer to [Health Assessment Scheduling](#health-assessment-scheduling) for details.

@include 'tfc-package-callouts/health-assessments.mdx'

## Permissions

Working with health assessments requires the following permissions:

- To view health status for a workspace, you need read access to that workspace.
- To change organization health settings, you must be an [organization owner](/terraform/cloud-docs/users-teams-organizations/permissions/organization#organization-owners).
- To change a workspace's health settings, you must be an [administrator for that workspace](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#workspace-admin).
- To trigger [on-demand health assessments](/terraform/cloud-docs/workspaces/health#on-demand-assessments) for a workspace, you must be an [administrator for that workspace](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#workspace-admin).
## Workspace requirements

Workspaces require the following settings to receive health assessments:

- Terraform version 0.15.4+ for drift detection only
- Terraform version 1.3.0+ for drift detection and continuous validation
- [Remote execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) or [Agent execution mode](/terraform/cloud-docs/agents/agent-pools#configure-workspaces-to-use-the-agent) for Terraform runs

The latest Terraform run in the workspace must have been successful. If the most recent run ended in an errored, canceled, or discarded state, HCP Terraform pauses health assessments until there is a successfully applied run.

The workspace must also have at least one run in which Terraform successfully applies a configuration. HCP Terraform does not perform health assessments in workspaces with no real-world infrastructure.

## Enable health assessments

You can enforce health assessments across all eligible workspaces in an organization within the [organization settings](/terraform/cloud-docs/users-teams-organizations/organizations#health). Enforcing health assessments at the organization level overrides workspace-level settings. You can only enable health assessments within a specific workspace when HCP Terraform is not enforcing health assessments at the organization level.

To enable health assessments within a workspace:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace you want to enable health assessments on.
1. Verify that your workspace satisfies the [requirements](#workspace-requirements).
1. Go to the workspace and click **Settings**, then click **Health**.
1. Select **Enable** under **Health Assessments**.
1. Click **Save settings**.
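The same setting can also be toggled programmatically through the Workspaces API. The sketch below is illustrative and assumes the `assessments-enabled` workspace attribute and a `$TOKEN` with workspace admin rights; the organization and workspace names are placeholders, so verify the attribute name against the current Workspaces API documentation before relying on it:

```shell
# Hedged sketch: enable health assessments on a workspace via the API.
# "my-org" and "my-workspace" are hypothetical names.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data '{"data": {"type": "workspaces", "attributes": {"assessments-enabled": true}}}' \
  https://app.terraform.io/api/v2/organizations/my-org/workspaces/my-workspace
```

This mirrors the UI steps above and is useful when enabling the setting across many workspaces with a script.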
## Health assessment scheduling

When you enable health assessments for a workspace, HCP Terraform runs the first health assessment based on whether there are active Terraform runs for the workspace:

- **No active runs:** A few minutes after you enable the feature.
- **Active speculative plan:** A few minutes after that plan is complete.
- **Other active runs:** During the next assessment period.

After the first health assessment, HCP Terraform starts a new health assessment during the next assessment period if there are no active runs in the workspace. Health assessments may take longer to complete when you enable health assessments in many workspaces at once or when your workspace contains a complex configuration with many resources.

A health assessment never interrupts or interferes with runs. If you start a new run during a health assessment, HCP Terraform cancels the current assessment and runs the next assessment during the next assessment period. This behavior may prevent HCP Terraform from performing health assessments in workspaces with frequent runs.

HCP Terraform pauses health assessments if the latest run ended in an errored state. This behavior occurs for all run types, including plan-only runs and speculative plans. Once the workspace completes a successful run, HCP Terraform restarts health assessments during the next assessment period.

Terraform Enterprise administrators can modify their installation's [assessment frequency and maximum number of concurrent assessments](/terraform/enterprise/admin/application/general#health-assessments) from the admin settings
console.

### On-demand assessments

-> **Note:** On-demand assessments are only available in the HCP Terraform user interface.

If you are an administrator for a workspace and it satisfies all [assessment requirements](/terraform/cloud-docs/workspaces/health#workspace-requirements), you can trigger a new assessment by clicking **Start health assessment** on the workspace's **Health** page.

After clicking **Start health assessment**, the workspace displays a message in the bottom left-hand corner of the page to indicate whether it successfully triggered a new assessment. The time it takes to complete an assessment can vary based on network latency and the number of resources managed by the workspace.

You cannot trigger another assessment while one is in progress. An on-demand assessment resets the scheduling for automated assessments, so HCP Terraform waits until the next scheduled period to run the next assessment.

### Concurrency

If you enable health assessments on multiple workspaces, assessments may run concurrently. Health assessments do not affect your concurrency limit. HCP Terraform also monitors and controls health assessment concurrency to avoid issues for large-scale deployments with thousands of workspaces. However, HCP Terraform performs health assessments in batches, so health assessments may take longer to complete when you enable them in a large number of workspaces.
### Notifications

HCP Terraform sends [notifications](/terraform/cloud-docs/workspaces/settings/notifications) about health assessment results according to your workspace's settings.

## Workspace health status

On the organization's **Workspaces** page, HCP Terraform displays a **Health warning** status for workspaces with infrastructure drift or failed continuous validation checks.

On the right of a workspace's overview page, HCP Terraform displays a **Health** bar that summarizes the results of the last health assessment.

- The **Drift** summary shows the total number of resources in the configuration and the number of resources that have drifted.
- The **Checks** summary shows the number of passed, failed, and unknown statuses for objects with continuous validation checks.

### View workspace health in explorer

The [Explorer page](/terraform/cloud-docs/workspaces/explorer) presents a condensed overview of the health status of the workspaces within your organization. You can see the following information:

- Workspaces that are monitoring workspace health
- Status of any configured continuous validation checks
- Count of drifted resources for each workspace

For additional details on the data available for reporting, refer to the [Explorer](/terraform/cloud-docs/workspaces/explorer) documentation.

## Drift detection

Drift detection helps you identify situations where your actual infrastructure no longer matches the configuration defined in Terraform. This deviation is known as _configuration drift_. Configuration drift occurs when changes are made outside Terraform's regular process, leading to inconsistencies between the remote objects and your configured infrastructure. For example, a teammate could create configuration drift by directly updating a storage bucket's settings with conflicting configuration settings in the cloud provider's console.
Drift detection detects these differences and recommends steps to address and rectify the discrepancies.

Configuration drift differs from state drift, and drift detection does not detect state drift. Configuration drift happens when external changes affecting remote objects invalidate your infrastructure configuration. State drift occurs when external changes affecting remote objects _do not_ invalidate your infrastructure configuration. Refer to [Refresh-Only Mode](/terraform/cloud-docs/workspaces/run/modes-and-options#refresh-only-mode) to learn more about remediating state drift.

### View workspace drift

To view the drift detection results from the latest health assessment, go to the workspace and click **Health > Drift**. If there is configuration drift, HCP Terraform proposes the necessary changes to bring
the infrastructure back in sync with its configuration.

### Resolve drift

You can use one of the following approaches to correct configuration drift:

- **Overwrite drift:** If you do not want the drift's changes, queue a new plan and apply the changes to revert your real-world infrastructure to match your Terraform configuration.
- **Update Terraform configuration:** If you want the drift's changes, modify your Terraform configuration to include the changes and push a new configuration version. This prevents Terraform from reverting the drift during the next apply. Refer to the [Manage Resource Drift](/terraform/tutorials/state/resource-drift) tutorial for a detailed example.

## Continuous validation

Continuous validation regularly verifies whether your configuration's custom assertions continue to pass, validating your infrastructure. For example, you can monitor whether your website returns an expected status code, or whether an API gateway certificate is valid. Identifying failed assertions helps you resolve the failure and prevent errors during your next Terraform operation.

Continuous validation evaluates preconditions, postconditions, and check blocks as part of an assessment, but we recommend using [check blocks](/terraform/language/checks) for post-apply monitoring. Use check blocks to create custom rules to validate your infrastructure's resources, data sources, and outputs.

### Preventing false positives

Health assessments create a speculative plan to access the current state of your infrastructure.
Terraform evaluates any check blocks in your configuration as the last step of creating the speculative plan. If your configuration relies on data sources and the values queried by a data source change between the time of your last run and the assessment, the speculative plan includes those changes. HCP Terraform never modifies your infrastructure as part of an assessment, but it can use those updated values to evaluate checks. This may lead to false positive alerts, since your infrastructure has not actually changed yet.

To ensure your checks evaluate the current state of your configuration instead of a possible future change, use nested data sources that query your actual resource configuration rather than a computed latest value. Refer to the [AMI image scenario](#asserting-up-to-date-amis-for-compute-instances) below for an example.

### Example use cases

Review the provider documentation for `check` block examples with [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/continuous-validation-examples), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/tfc-check-blocks), and [GCP](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/google-continuous-validation).

#### Monitoring the health of a provisioned website

The following example uses the [HTTP](https://registry.terraform.io/providers/hashicorp/http/latest/docs) Terraform provider and a [scoped data source](/terraform/language/checks#scoped-data-sources) within a [`check` block](/terraform/language/checks) to assert that the Terraform website returns a `200` status code, indicating it is healthy.
```hcl
check "health_check" {
  data "http" "terraform_io" {
    url = "https://www.terraform.io"
  }

  assert {
    condition     = data.http.terraform_io.status_code == 200
    error_message = "${data.http.terraform_io.url} returned an unhealthy status code"
  }
}
```

Continuous validation alerts you if the website returns any status code besides `200` while Terraform evaluates this assertion. You can also find failures on your workspace's [Continuous Validation Results](#view-continuous-validation-results) page. You can configure continuous validation alerts in your workspace's [notification settings](/terraform/cloud-docs/workspaces/settings/notifications).

#### Monitoring certificate expiration

[Vault](https://www.vaultproject.io/) lets you secure, store, and tightly control access to tokens, passwords, certificates, encryption keys, and other sensitive data. The following example uses a `check` block to monitor for the expiration of a Vault certificate.

```hcl
resource "vault_pki_secret_backend_cert" "app" {
  backend     = vault_mount.intermediate.path
  name        = vault_pki_secret_backend_role.test.name
  common_name = "app.my.domain"
}

check "certificate_valid" {
  assert {
    condition     = !vault_pki_secret_backend_cert.app.renew_pending
    error_message = "Vault cert is ready to renew."
  }
}
```
#### Asserting up-to-date AMIs for compute instances

[HCP Packer](/hcp/docs/packer) stores metadata about your [Packer](https://www.packer.io/) images. The following example check fails when there is a newer AMI version available.

```hcl
data "hcp_packer_artifact" "hashiapp_image" {
  bucket_name  = "hashiapp"
  channel_name = "latest"
  platform     = "aws"
  region       = "us-west-2"
}

resource "aws_instance" "hashiapp" {
  ami                         = data.hcp_packer_artifact.hashiapp_image.external_identifier
  instance_type               = var.instance_type
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.hashiapp.id
  vpc_security_group_ids      = [aws_security_group.hashiapp.id]
  key_name                    = aws_key_pair.generated_key.key_name

  tags = {
    Name = "hashiapp"
  }
}

check "ami_version_check" {
  data "aws_instance" "hashiapp_current" {
    instance_tags = {
      Name = "hashiapp"
    }
  }

  assert {
    condition     = aws_instance.hashiapp.ami == data.hcp_packer_artifact.hashiapp_image.external_identifier
    error_message = "Must use the latest available AMI, ${data.hcp_packer_artifact.hashiapp_image.external_identifier}."
  }
}
```

### View continuous validation results

To view the continuous validation results from the latest health assessment, go to the workspace and click **Health > Continuous validation**. The page shows all of the resources, outputs, and data sources with custom assertions that HCP Terraform evaluated. Next to each object, HCP Terraform reports whether the assertion passed or failed. If one or more assertions fail, HCP Terraform displays the error messages for each assertion.
The health assessment page displays each assertion by its [named value](/terraform/language/expressions/references). A `check` block's named value combines the prefix `check` with its configuration name.

If your configuration contains multiple [preconditions and postconditions](/terraform/language/expressions/custom-conditions#preconditions-and-postconditions) within a single resource, output, or data source, HCP Terraform does not show the results of individual conditions unless they fail. If all custom conditions on the object pass, HCP Terraform reports that the entire check passed. The assessment results display the results of any preconditions and postconditions alongside the results of any assertions from `check` blocks, identified by the named values of their parent block.
# Workspaces

This topic provides an overview of the workspaces resource in HCP Terraform and Terraform Enterprise. A workspace is a group of infrastructure resources managed by Terraform.

## Introduction

Working with Terraform involves managing collections of infrastructure resources, and most organizations manage many different collections.

When run locally, Terraform manages each collection of infrastructure with a persistent working directory, which contains a configuration, state data, and variables. Since Terraform CLI uses content from the directory it runs in, you can organize infrastructure resources into meaningful groups by keeping their configurations in separate directories.

HCP Terraform manages infrastructure collections with workspaces instead of directories. A workspace contains everything Terraform needs to manage a given collection of infrastructure, and separate workspaces function like completely separate working directories.

> **Hands-on:** Try the [Create a Workspace](/terraform/tutorials/cloud-get-started/cloud-workspace-create) tutorial.
## Workspace Contents

HCP Terraform workspaces and local working directories serve the same purpose, but they store their data differently:

| Component               | Local Terraform                                               | HCP Terraform                                                              |
| ----------------------- | ------------------------------------------------------------- | -------------------------------------------------------------------------- |
| Terraform configuration | On disk                                                       | In linked version control repository, or periodically uploaded via API/CLI |
| Variable values         | As `.tfvars` files, as CLI arguments, or in shell environment | In workspace                                                               |
| State                   | On disk or in remote backend                                  | In workspace                                                               |
| Credentials and secrets | In shell environment or entered at prompts                    | In workspace, stored as sensitive variables                                |

In addition to the basic Terraform content, HCP Terraform keeps some additional data for each workspace:

- **State versions:** Each workspace retains backups of its previous state files. Although only the current state is necessary for managing resources, the state history can be useful for tracking changes over time or recovering from problems. Refer to [Terraform State in HCP Terraform](/terraform/cloud-docs/workspaces/state) for more details.
- **Run history:** When HCP Terraform manages a workspace's Terraform runs, it retains a record of all run activity, including summaries, logs, a reference to the changes that caused the run, and user comments. Refer to [Viewing and Managing Runs](/terraform/cloud-docs/workspaces/run/manage) for more details.

The top of each workspace shows a resource count, which reflects the number of resources recorded in the workspace's state file. This includes both managed [resources](/terraform/language/resources/syntax) and [data sources](/terraform/language/data-sources).

## Terraform Runs

For workspaces with remote operations enabled (the default), HCP Terraform performs Terraform runs on its own disposable virtual machines, using that workspace's configuration, variables, and state.
Refer to [Terraform Runs and Remote Operations](/terraform/cloud-docs/workspaces/run/remote-operations) for more details.

## HCP Terraform vs. Terraform CLI Workspaces

Both HCP Terraform and Terraform CLI have features called workspaces, but they function differently.

- HCP Terraform workspaces are required. They represent all of the collections of infrastructure in an organization. They are also a major component of role-based access in HCP Terraform. You can grant individual users and user groups permissions for one or more workspaces that dictate whether they can manage variables, perform runs, etc. You cannot manage resources in HCP Terraform without creating at least one workspace.
- Terraform CLI workspaces are associated with a specific working directory and isolate multiple state files in the same working directory, letting you manage multiple groups of resources with a single configuration. The Terraform CLI does not require you to create CLI workspaces. Refer to [Workspaces](/terraform/language/state/workspaces) in the Terraform Language
documentation for more details.

## Planning and Organizing Workspaces

We recommend that organizations break down large monolithic Terraform configurations into smaller ones, then assign each one to its own workspace and delegate permissions and responsibilities for them. HCP Terraform can manage monolithic configurations just fine, but managing infrastructure as smaller components is the best way to take full advantage of HCP Terraform's governance and delegation features.

For example, the code that manages your production environment's infrastructure could be split into a networking configuration, the main application's configuration, and a monitoring configuration. After splitting the code, you would create "networking-prod", "app1-prod", and "monitoring-prod" workspaces, and assign separate teams to manage them.

Much like splitting monolithic applications into smaller microservices, this enables teams to make changes in parallel. In addition, it makes it easier to re-use configurations to manage other environments of infrastructure ("app1-dev," etc.).

In Terraform Enterprise, administrators can use [Admin Settings](/terraform/enterprise/api-docs/admin/settings) to set the maximum number of workspaces for any single organization. You can also set a workspace limit with the [tfe-terraform-provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/organization#workspace_limit).

## Organize Workspaces with Projects

Projects let you organize your workspaces into groups.

@include 'tfc-package-callouts/project-workspaces.mdx'

Refer to [Organize Workspaces with Projects](/terraform/cloud-docs/projects/manage) for more details.

## Creating Workspaces

You can create workspaces through the [HCP Terraform UI](/terraform/cloud-docs/workspaces/create), the [Workspaces API](/terraform/cloud-docs/api-docs/workspaces), or the [HCP Terraform CLI integration](/terraform/cli/cloud).
## Import existing infrastructure resources

You can search your existing infrastructure for resources and import them into your workspace so that you can manage them with Terraform. Refer to [Import existing resources](/terraform/enterprise/workspaces/import) for instructions.

## Workspace Health

@include 'tfc-package-callouts/health-assessments.mdx'

HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. Health assessments include the following types of evaluations:

- Drift detection determines whether your real-world infrastructure matches your Terraform configuration.
- Continuous validation determines whether custom conditions in the workspace's configuration continue to pass after Terraform provisions the infrastructure.

You can enforce health assessments for all eligible workspaces or let each workspace opt in to health assessments through workspace settings. Refer to [Health](/terraform/cloud-docs/workspaces/health) in the workspaces documentation for more details.
# Import existing resources to state

Terraform can search your existing infrastructure for resources, letting you import any unmanaged resources into an HCP Terraform workspace in bulk. For instructions on importing single resources or small batches of resources in your configuration, refer to [Import a single resource](/terraform/language/import/single-resource).

## Overview

You can search for unmanaged resources in [UI and VCS-driven](/terraform/cloud-docs/run/ui) and [CLI-driven](/terraform/cloud-docs/run/cli) workspaces. HCP Terraform presents results so that you can use the UI to import the resources and begin managing them as code.

Complete the following steps to search for resources and import them into your Terraform state:

1. **Define queries**: Add `list` blocks to your Terraform configuration. If you are using the VCS-driven workflow, commit this change and push it to the repository associated with your workspace in HCP Terraform. Refer to [Import resources in bulk](/terraform/language/import/bulk) for more information.
1. **Run the queries**: You can run queries in the HCP Terraform UI or run the Terraform CLI on your local workstation.
1. **Review search results**: HCP Terraform shows the management status for resources it finds.
1. **Generate code**: HCP Terraform generates `import` and `resource` blocks for the resources it discovers.
1. **Apply the configuration**: Copy the generated code to your Terraform configuration and run it to finish importing the resources.

### HCP Terraform agents

If you use HCP Terraform agents for your runs, you must enable the `query` operation when starting the agent pool. Refer to [Install and run agents](/terraform/cloud-docs/agents/agents) for more information.

## Requirements

You must enable Terraform 1.14.0 or newer for your workspace to access the **Search & Import** page.
Refer to the [general workspace settings documentation](/terraform/cloud-docs/workspaces/settings#general) for more information about changing workspace settings.

HCP Terraform identifies resources managed by other workspaces when the workspace uses Terraform v1.12 and newer.

## Define queries

Add `list` blocks to your workspace's Terraform configuration to create search queries you want to run against your existing infrastructure. Refer to [Import resources in bulk](/terraform/language/import/bulk) for instructions on how to define queries. If HCP Terraform is connected to your VCS, commit the configuration to version control.

You can also connect your local workstation to HCP Terraform with the CLI-driven workflow and use the Terraform CLI to perform operations in your HCP Terraform workspace. Refer to [Connect to HCP Terraform](/terraform/cli/cloud/settings) for instructions.

## Run queries

Complete the following steps after defining queries and copying them to your HCP Terraform workspace:

1. [Log into HCP Terraform](https://app.terraform.io) and navigate to your workspace.
1. Click **Search & Import** in the sidebar menu. The page shows previously completed queries, as well as any queries that are in progress.
1. Click **New Query** to start a query. As the query progresses, HCP Terraform loads the results onto the page.

## Review queries

In the **IaC** column, HCP Terraform indicates one of the following statuses for resources returned by the query:

- **Managed** indicates that the resource has the same [identity](/terraform/language/import#resource-identity) as a resource applied by a similar provider version.
- **Unknown** indicates that the resource has the same [identity](/terraform/language/import#resource-identity) as a resource applied by an older version of Terraform or an older provider version.
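A hedged sketch of what a query definition might look like. The authoritative `list` block syntax is in the bulk import documentation linked above; the resource type, label, and `provider` argument below are illustrative assumptions, not a verified configuration:

```hcl
# Hypothetical query: ask the AWS provider to list existing EC2
# instances so HCP Terraform can report their management status.
list "aws_instance" "all" {
  provider = aws
}
```

Committing a file like this to the workspace's configuration is what makes the query available on the **Search & Import** page.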
- **Unmanaged** indicates that there are no managed resources of this type, or that Terraform can't attribute this resource to a similar provider and Terraform version.

Over time, providers may change their [resource identity](/terraform/language/import#resource-identity) definitions, but HCP Terraform attempts to capture all resource identities as providers evolve. When a schema for a resource type changes between versions, HCP Terraform may list a resource as **Unknown** instead of **Unmanaged** if the resource was applied by one version of a provider but queried by a different version.
You can perform the following actions in the results area:

- Use the search bar and filters to sort and filter the results.
- Click a resource in the **Resource type** column to view details made available by the provider.
- Select the resources that you want to import into your workspace. Refer to [Generate code](#generate-code) for instructions.

## Generate code

Select one or more resource instances in the search results area and click **Generate configuration**. HCP Terraform generates `import` and `resource` blocks that you can add to your configuration to import the resource instances into your state.

## Import resources

Copy the code into your Terraform configuration and apply it. Before applying the configuration, we recommend running the [`terraform fmt` command](/terraform/cli/commands/fmt) to ensure that the code is properly formatted. Use one of the following methods to apply the configuration:

- Navigate to your workspace in HCP Terraform and click **New run**.
- Run the `terraform apply` command on your workstation.
- If your workspace is configured to run on VCS changes, check the updated configuration into your VCS to trigger a new run.

## Next steps

After importing the resources to state, you can delete the generated `import` blocks or keep them as a historical record.
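The generated code pairs an `import` block with a matching `resource` block. The following is a hypothetical sketch for an AWS S3 bucket; the resource type, name, and ID are illustrative assumptions, not output copied from HCP Terraform:

```hcl
# The import block maps the existing object's identity to a
# resource address so that `terraform apply` adopts it into state.
import {
  to = aws_s3_bucket.logs
  id = "example-logs-bucket"
}

resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
}
```

After copying blocks like these into your configuration, running `terraform plan` shows which resources will be imported before you apply.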
# Explorer for workspace visibility

@include 'beta/explorer.mdx'

As your organization grows, keeping track of your sprawling infrastructure estate can become increasingly complicated. The explorer for workspace visibility helps surface a wide range of valuable information from across your organization.

Open the explorer for workspace visibility by clicking **Explorer** in your organization's top-level side navigation. The **Explorer** page displays buttons grouped by **Types** and **Use Cases**. Each button offers a new view into your organization or workspace's data. Clicking a button triggers the explorer to perform a query and display the results in a table of data.

The **Types** buttons present generic use cases. For example, "Workspaces" displays a paginated and unfiltered list of your organization's workspaces and each workspace's accompanying data. The **Use Cases** buttons present sorted and filtered results to give you a focused view of your organizational data.

You can sort each column of the explorer results table. Clicking a hyperlinked field shows increasingly specific views of your data. For example, a workspace's modules count field links to a view of that workspace's associated modules.

Clearing a query takes you back to the explorer landing page. To clear a query, click the back arrow at the top left of your current explorer view page.

## Permissions

The explorer for workspace visibility requires access to a broad range of an organization's data.
To use the explorer, you must have either of the following organization permissions:

- [Organization owner](/terraform/cloud-docs/users-teams-organizations/permissions/organization#organization-owners)
- [View all workspaces](/terraform/cloud-docs/users-teams-organizations/permissions/organization#view-all-workspaces) or greater

## Types

The explorer for workspace visibility supports four types:

- [Workspaces](/terraform/cloud-docs/workspaces)
- [Modules](/terraform/language/modules)
- [Providers](/terraform/language/providers)
- [Terraform Versions](/terraform/language/upgrade-guides#upgrading-to-terraform-v1-4)

## Use cases

The explorer for workspace visibility provides the following queries for specific use cases:

- **Top module versions** shows modules sorted by usage frequency.
- **Latest Terraform versions** displays a sorted list of Terraform versions in use.
- **Top provider versions** lists providers sorted by usage frequency.
- **Workspaces without VCS** lists workspaces not backed by VCS.
- **Workspace VCS source** lists VCS-backed workspaces sorted by repository name.
- **Workspaces with failed checks** lists workspaces that failed at least one [continuous validation](/terraform/enterprise/workspaces/health#continuous-validation) check.
- **Drifted workspaces** displays [workspaces with drift](/terraform/enterprise/workspaces/health#drift-detection) and relevant drift information.
- **Workspace VCS source** displays a subset of workspace data sorted by the underlying VCS repository.
- **All workspace versions** is a simplified view of your workspaces with current run and version information.
- **Runs by status** provides a run-focused view by sorting workspaces by their current run status.
- **Top Terraform versions** lists all Terraform versions by usage frequency.
- **Latest updated workspaces** displays your most recently updated workspaces.
- **Oldest applied workspaces** sorts workspaces by the date of the current applied run.

## Custom filter conditions

The explorer's query builder allows you to execute queries with custom filter conditions against any of the supported [types](#types). To use the query builder, select a type or use case from the explorer home page. Expand the **Modify conditions** section to show the filter conditions in use for the current query and to define new filter conditions.

Each filter condition is represented by a row of inputs made up of a target field, an operator, and a value.

1. Choose a target field from the first dropdown to select the field that the explorer runs the query against. The options available will vary based on the target field's data type.
1. Choose an operator from the second dropdown. The options available will vary based on the target field's data type.
1. Provide a value in the third field to compare against the target field's value.
1. Click **Run Query** to evaluate the filter conditions.

-> **Tip:** Inspect the filter conditions used by the various pre-canned [**Use Cases**](#use-cases) to learn how they are constructed.

You can create multiple filter conditions for a query. When you provide multiple conditions, the explorer evaluates them at query time with a logical AND. To add a new condition, use the **Add condition** button below the condition list. To remove a condition, use the trash bin button on the right-hand side of the condition.

## Save a view

You can save explorer views to revisit a custom query or use case. HCP Terraform and Terraform Enterprise do not save query results or query history. Returning to a saved view re-runs the query and shows current results only.

You can save the explorer's view of your data by performing the following steps:

1. Navigate to the **Explorer** page in the sidebar of your organization.
1. Click a tile in the **Types** or **Use cases** section.
1. Define a query using the query building interface.
1. By default, the explorer displays all available information, but you can adjust which columns you want your view to include.
1. Open the **Actions** dropdown menu and select **Save view**, which saves the last query you performed in the explorer.
1. Specify a new, unique name for your saved view.
1. Click **Save**.

When the explorer saves a view, it saves the last query it performed. If you change a query and do not rerun it, the explorer does not save those changes. After you have saved a view of your data, you can access it from the explorer's main page underneath the **Saved views** tab.
Saved views keep track of the following attributes:

- The name of the saved view.
- The type of data you are querying: module, workspace, provider, or Terraform versions.
- The owner of the saved view.
- When the saved view was last updated.

You can rename or delete a saved view from the **Saved views** tab by opening the ellipsis menu next to a view and selecting either **Rename** or **Delete**.

### Manage a saved view

Complete the following steps to update a saved view:

1. Open the view in the explorer and make changes.
1. Open the **Actions** dropdown menu and select **Save view**.

Complete the following steps to save a new view based on an existing saved view:

1. Open a saved view in the explorer and make changes.
1. Open the **Actions** dropdown menu and select **Save as**.
1. Enter a name for the new saved view.
1. Click **Save**.

Complete the following steps to delete a saved view:

1. Open a saved view in the explorer.
1. Choose **Delete view** from the **Actions** drop-down menu.
1. Click **Delete** when prompted to confirm that you want to permanently delete the view.

@include 'beta/explorer-limitations.mdx'
# Workspace Best Practices

An HCP Terraform workspace manages a single state file and the lifecycle of its resources. It is the smallest collection of HCP Terraform-managed infrastructure. Any operation on a resource can potentially affect other resources managed in the same state file, so it is best to keep the potential blast radius of your operations small. To do so, manage resources in separate workspaces when possible, grouping together only necessary and logically-related resources. For example, even though your application may require both compute resources and a database, these resources can operate independently and should be in their own workspaces.

Scoping your configuration and planning your workspace strategy early in your adoption of HCP Terraform and Terraform Enterprise will simplify your operations and make them safer.

## Name your Workspace

We recommend using the following naming convention so you can identify and associate workspaces with specific components of your infrastructure:

`<business-unit>-<application>-<layer>-<environment>`

- `<business-unit>`: The business unit or team that owns the workspace.
- `<application>`: The name of the application or service that the workspace manages.
- `<layer>`: The layer of the infrastructure that the workspace manages (for example, network, compute, filestore).
- `<environment>`: The environment that the workspace manages (for example, prod, staging, QA, dev).

For example, a workspace named `finance-billing-network-prod` manages the networking layer of the finance team's billing application in production. If your application team does not have a `layer`, use `main` or `app` in its place to maintain consistency across the organization.

## Group by volatility

Volatility refers to the rate of change of the resources in a workspace. Infrastructure such as databases, VPCs, and subnets changes much less frequently than infrastructure such as your web servers. By exposing your long-living infrastructure to unnecessary volatility, you introduce more opportunities for accidental changes. When planning your workspace organization, group resources by volatility.
For example, you can group tightly-coupled resources like networking, security, and identity in a shared workspace, and give compute, storage, and databases separate workspaces because they change at different frequencies. You may scale compute instances multiple times a day, but your database instances probably change far less frequently. By grouping these parts of your infrastructure into separate workspaces, you decouple unrelated resources and reduce the risk of unexpected changes.

## Determine stateful vs stateless infrastructure

Stateful resources are ones that you cannot delete and recreate because they persist data, such as databases and object storage. By managing stateful resources independently of stateless ones, such as separating databases from compute instances, you limit the blast radius of operations that cause resource recreation and help protect against accidental data loss.

Consider the workspace structure in the [Volatility section](#group-by-volatility). You could potentially manage filestore and database resources together, as they are both stateful resources. Your compute resources are stateless and should still have a separate workspace.

## Separate privileges and responsibilities

A best practice is to split up workspaces based on team responsibilities and required privileges. For example, consider an application that requires separate developer and production environments, each with special networking and application infrastructure. One approach is to create four different workspaces, two for the developer environment and two for production, where only the networking team has access to the networking workspaces. In this setup, only the networking team needs permissions to manage the resources in the networking workspaces, and others cannot manage those workspace resources.
If a workspace's scope is too large, a user might need more permissions than appropriate in order to perform operations in the workspace.

Splitting your workspaces by team also helps limit the responsibility per workspace and allows teams to maintain distinct areas of ownership. If you need to reference attributes of resources managed in other workspaces, you can share the outputs using the [tfe_outputs](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs) data source. By limiting the scope of each workspace and sharing just the required outputs with others, you reduce the risk of leaking potentially sensitive information in a workspace's state. To share outputs from a workspace, you must explicitly enable remote state sharing in the workspace settings.

## Avoid large Terraform plans and applies

HCP Terraform and Terraform Enterprise execute workloads using agents. Every time an agent refreshes a workspace's state, it builds a [dependency graph](/terraform/internals/graph) of the resources to determine how to sequence operations in the workspace. As the number of resources your workspace manages grows, these graphs become larger and more complex, and they require more worker RAM to build. If your agent's performance degrades or workloads take longer to complete, we suggest exploring ways to split up the workspace to reduce the size of the dependency graph.

## Determine workspace concurrency vs Terraform parallelism

Concurrency refers to the number of plan and apply operations that HCP Terraform or Terraform Enterprise can run simultaneously. In HCP Terraform, your edition limits the maximum concurrency for your organization.
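The cross-workspace output sharing described in the section above, using the `tfe_outputs` data source, can be sketched as follows. The organization, workspace, and output names are hypothetical, and the producing workspace must have remote state sharing enabled:

```hcl
# Read outputs that the networking workspace shares through
# remote state sharing.
data "tfe_outputs" "network" {
  organization = "example-org"
  workspace    = "platform-app-network-prod"
}

locals {
  # `values` exposes the shared output values; the output name
  # below is an assumption for illustration.
  private_subnet_id = data.tfe_outputs.network.values.private_subnet_id
}
```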
Refer to [HCP Terraform pricing](https://www.hashicorp.com/products/terraform/pricing?product_intent=terraform) for details. Terraform Enterprise lets you configure the concurrency, but defaults to 10 concurrent runs. As you increase concurrency, the amount of memory your Terraform Enterprise installation requires increases as well. Refer to the [Capacity and performance](/terraform/enterprise/replicated/architecture/system-overview/capacity) documentation for more information.

Parallelism refers to the number of tasks the Terraform CLI performs simultaneously in a single workload. By default, Terraform performs a maximum of 10 operations in parallel. When running a `terraform apply` command, Terraform refreshes each resource in the state file and compares it to the remote object. Every resource refresh, creation, update, or destruction is an individual operation. If your workload creates 11 resources, Terraform starts by creating the first 10 resources in its dependency graph, and begins creating the 11th once it finishes creating one of the first 10.

You can [increase the parallelism](/terraform/cloud-docs/variables#parallelism) of Terraform, but this increases a run's CPU usage. We recommend that you instead break down large Terraform configurations into smaller ones with fewer resources when possible. Long-running Terraform workloads are an early sign of a bloated workspace scope.

## Next steps

This article introduces some considerations to keep in mind as your organization matures its workspace usage. Being deliberate about how you organize your infrastructure will ensure smoother and safer operations. [HCP Terraform](/terraform/tutorials/cloud-get-started) provides a place to try these concepts hands-on, and you can [get started for free](https://app.terraform.io/public/signup/account).
To learn more about HCP Terraform and Terraform Enterprise best practices, refer to [Project Best Practices](/terraform/cloud-docs/projects/best-practices). To learn best practices for writing Terraform configuration, refer to the [Terraform Style Guide](/terraform/language/style).
# Browse workspaces

This topic describes how to browse, sort, and filter workspaces in the UI so that you can track consumption across your organizations.

## Overview

HCP Terraform and Terraform Enterprise include several interfaces for browsing, sorting, and filtering resource data so that you can effectively manage workspaces and projects. You can also use the interfaces together, such as applying a tag filter and sorting by workspace name, to refine results.

### Explorer view

The explorer for workspace visibility surfaces a wider range of valuable information from across your workspaces. Refer to [Explorer for workspace visibility](/terraform/cloud-docs/workspaces/explorer) for additional information.

## Requirements

You must be a member of a team with **Read** permissions enabled for Terraform runs to view the workspaces associated with a run. Refer to the [permissions reference](/terraform/cloud-docs/users-teams-organizations/permissions) for additional information.

If your organization contains many workspaces, you can use the filter tools at the top of the list to find the workspaces you are interested in.

## Find a workspace

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select your organization.
1. Click **Workspaces** to view the workspaces you have access to.
1. To view the projects you have access to, click either **Projects** in the sidebar menu or the drawer icon in the **Workspaces** bar.
1. If your organization contains several workspaces or projects, you can paginate through the workspace screen or project drawer to find the workspace you are looking for.
1. You can also use the search bar in the **Workspace** drawer to find a project by name.

## Filter workspaces

You can use the following interfaces to sort and filter workspaces:

- Click on a run status button to filter workspaces by one of the most common run statuses.
  You can filter by one of the following statuses:

  - Needs attention
  - Errored
  - Running
  - On hold
  - Applied

- Choose one or more tag keys, values, or key-value pairs from the **Tags** drop-down to filter workspaces by tag.
- Choose one or more run statuses from the **Status** drop-down to filter workspaces by run status. The **Status** drop-down lists all available run statuses, including the common statuses available in the run status button bar.
- The tag filter shows a list of tags added to all workspaces, limited to the first 1,000 tags alphabetically. Choosing one or more tags shows only the workspaces tagged with all of the chosen tags.
- Choose a health assessment label from the **Health** drop-down to filter workspaces according to the latest health assessment results. You can filter according to the following labels:
  - Drifted
  - Health error
  - Check failed

## Sort workspaces

Click on a column header to sort workspaces by trait. Traits appear in either ascending or descending alphabetical order. You can sort according to the following traits:

- Workspace name
- Run status
- Repository
- Latest change
- Tag
# Manage Terraform configurations

[remote operations]: /terraform/cloud-docs/workspaces/run/remote-operations

[execution mode]: /terraform/cloud-docs/workspaces/settings#execution-mode

[Terraform configuration]: /terraform/language

Each HCP Terraform workspace is associated with a particular [Terraform configuration][], which is expected to change and evolve over time. Since every organization has its own preferred source code control practices, HCP Terraform does not provide integrated version management. Instead, it expects Terraform configurations to be managed in your existing version control system (VCS).

In order to perform [remote Terraform runs][remote operations] for a given workspace, HCP Terraform needs to periodically receive new versions of its configuration. Usually, this can be handled automatically by connecting a workspace to a VCS repository.

-> **Note:** If a workspace's [execution mode is set to local][execution mode], it doesn't require configuration versions, since HCP Terraform won't perform runs for that workspace.

## Providing Configuration Versions

There are two ways to provide configuration versions for a workspace:

- **With a connected VCS repository.** HCP Terraform can automatically fetch content from supported VCS providers, and uses webhooks to get notified of code changes. This is the most convenient way to use HCP Terraform. See [The UI- and VCS-driven Run Workflow](/terraform/cloud-docs/workspaces/run/ui) for more information. A VCS connection can be configured [when a workspace is created](/terraform/cloud-docs/workspaces/create), or later in its [version control settings](/terraform/cloud-docs/workspaces/settings/vcs).

  -> **Note:** When a workspace is connected to a VCS repository, directly uploaded configuration versions can only be used for [speculative plans](/terraform/cloud-docs/workspaces/run/remote-operations#speculative-plans).
  This helps ensure your VCS remains the source of truth for all real infrastructure changes.

- **With direct uploads.** You can use a variety of tools to directly upload configuration content to HCP Terraform:

  - **Terraform CLI:** With the [CLI integration](/terraform/cli/cloud) configured, the `terraform plan` and `terraform apply` commands will perform remote runs by uploading a configuration from a local working directory. See [The CLI-driven Run Workflow](/terraform/cloud-docs/workspaces/run/cli) for more information.
  - **API:** HCP Terraform's API can accept configurations as `.tar.gz` files, which can be uploaded by a CI system or other workflow tools. See [The API-driven Run Workflow](/terraform/cloud-docs/workspaces/run/api) for more information.

  When configuration versions are provided via the CLI or API, HCP Terraform can't automatically react to code changes in the underlying VCS repository.

## Code Organization and Repository Structure

### Organizing Separate Configurations

Most organizations either keep each Terraform configuration in a separate repository, or keep many Terraform configurations as separate directories in a single repository (often called a "monorepo"). HCP Terraform works well with either approach, but monorepos require some extra configuration:

- Each workspace must [specify a Terraform working directory](/terraform/cloud-docs/workspaces/settings#terraform-working-directory), so HCP Terraform knows which configuration to use.
- If the repository includes any shared Terraform modules, you must add those directories to the [automatic run triggering setting](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) for any workspace that uses those modules.

-> **Note:** If your organization does not have a strong preference, we recommend using separate repositories for each configuration and using the private module registry to share modules.
This allows for faster module development, since you don't have to update every configuration that consumes a module at the same time as the module itself.

### Organizing Multiple Environments for a Configuration

There are also a variety of ways to handle multiple environments. The most common approaches are:

- All environments use the same main branch, and environment differences are handled with Terraform variables. To protect production environments, wait to apply runs until their changes are verified in staging.
- Different environments use different long-lived VCS branches. To protect production environments, merge changes to the production branch after they have been verified in staging.
- Different environments use completely separate configurations, and shared behaviors are handled with shared Terraform modules. To protect production
environments, verify new module versions in staging before updating the version used in production.

HCP Terraform works well with all of these approaches. If you use long-lived branches, be sure to specify which branch to use in each workspace's VCS connection settings.

## Archiving Configuration Versions

Once all runs using a particular configuration version are complete, HCP Terraform no longer needs the associated `.tar.gz` file and may discard it to save storage space. This process is handled differently depending on how the configuration version was created:

- **Created with a connected VCS repository.** HCP Terraform will automatically archive VCS configuration versions once all runs are completed and they are no longer current for any workspace. HCP Terraform will re-fetch the configuration files from VCS as needed for new runs.
- **Created with direct uploads via the API or CLI.** HCP Terraform does not archive CLI and API configuration versions automatically, because it cannot re-fetch the files for new runs. However, you can use the [Archive a Configuration Version](/terraform/cloud-docs/api-docs/configuration-versions#archive-a-configuration-version) endpoint to archive them manually.

For Terraform Enterprise customers upgrading from a previous version, the functionality has a backfill capability that cleans up space for historical runs in batches. In each organization, Terraform Enterprise archives a batch of 100 configurations each time a run completes or a new configuration version is uploaded. This gradually frees up existing object storage space over time.
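The CLI-driven upload path described earlier relies on connecting your working directory to HCP Terraform with a `cloud` block. A minimal sketch, assuming a hypothetical organization and workspace name:

```hcl
terraform {
  cloud {
    # Hypothetical names; replace with your own organization and workspace.
    organization = "example-org"

    workspaces {
      name = "networking-prod"
    }
  }
}
```

With this block in place, running `terraform plan` or `terraform apply` locally uploads the working directory's configuration to HCP Terraform as a new configuration version and performs a remote run.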
# Create workspace tags

This topic describes how to create and attach tags to your workspaces.

## Overview

Tagging workspaces helps organization administrators organize, sort, and filter workspaces so that they can track resource consumption. For example, you could add a `cost-center` tag so that administrators can sort workspaces according to cost center.

HCP Terraform stores tags as key-value pairs or as key-only tags. Key-only tags enable you to associate a single Terraform configuration file with several workspaces according to tag. Refer to the following topics in the Terraform CLI and configuration language documentation for additional information:

- [`terraform{}.cloud{}.workspaces` reference](/terraform/language/terraform#terraform-cloud-workspaces)
- [Define connection settings](/terraform/cli/cloud/settings#define-connection-settings)

### Reserved tags

You can reserve a set of tag keys for each organization. Reserved tag keys appear as suggestions when people create tags for projects and workspaces so that you can use consistent terms for tags. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations/manage-reserved-tags) for additional information.

### Single-value tags

Your system may contain single-value tags created using Terraform v1.10 and older. You can migrate existing single-value tags to the key-value scheme. Refer to [Migrate single-value tags](#migrate-single-value-tags) for instructions.

## Requirements

- You must be a member of a team with the **Write** permission group enabled for the workspace to create tags for a workspace.
- You must have one of the following permissions to create and manage tags:
  - **Admin** permission group for the workspace.
  - **Manage all workspaces** permissions for the organization. This permission allows you to manage tags for all workspaces.

You cannot create tags for a workspace using the CLI.

## Define tags

1. Open your workspace.
1.
Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card on the right sidebar to open the **Manage workspace tags** drawer.
1. Click **+Add tag** and perform one of the following actions:
   - Specify a key-value pair: Lets you sort, filter, and search on either key or value.
   - Specify a tag key and leave the **Value** field empty: Lets you sort, filter, and search on only the key name.
   - Choose a reserved key from the suggested tag key list and specify a value: Ensures that you are using the key name consistently and lets you sort, filter, and search on either key or value.
   - Choose a reserved key from the suggested tag key list and leave the **Value** field empty: Ensures that you are using the key name consistently and lets you sort, filter, and search on only the key name.

   Refer to [Tag syntax](#tag-syntax) for information about supported characters.
1. Tags inherited from the project appear in the **Inherited Tags** section. You can attach new key-value pairs to their projects to override inherited tags. Refer to [Manage projects](/terraform/cloud-docs/projects/manage) for additional information about using tags in projects.

   You cannot override reserved tag keys when the **Disable overrides** option is enabled. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations/manage-reserved-tags) for additional information.

   You can also click tag links in the **Inherited Tags** section to view workspaces that use the same tag.
1. Click **Save**.

Tags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](/terraform/cloud-docs/users-teams-organizations/organizations) for additional information.

## Update tags

1. Open your workspace.
1.
Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card on the right sidebar to open the **Manage workspace tags** drawer.
1. In the **Tags applied to this resource** section, modify a key, value, or both and click **Save**.

## Migrate single-value tags

You can use the API to convert existing single-value tags to key-value tags. You must have permissions in
the workspace to perform the following task. Refer to [Requirements](#requirements) for additional information.

Terraform v1.10 and older adds single-value workspace tags defined in the associated Terraform configuration to workspaces selected by the configuration. As a result, your workspace may include duplicate tags. Refer to the [Terraform reference documentation](/terraform/language/terraform#terraform-cloud-workspaces) for additional information.

### Re-create existing workspace tags as resource tags

1. Send a `GET` request to the [`/organizations/:organization_name/tags`](/terraform/cloud-docs/api-docs/organization-tags#list-tags) endpoint to request all workspaces for your organization. The response may span several pages.
1. For each workspace, check the `tag-names` attribute for existing tags.
1. Send a `PATCH` request to the [`/workspaces/:workspace_id`](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) endpoint and include the `tag-binding` relationship in the request body for each workspace tag.

### Delete single-value workspace tags

1. Send a `GET` request to the [`/organizations/:organization_name/tags`](/terraform/cloud-docs/api-docs/organization-tags#list-tags) endpoint to request all workspaces for your organization.
1. Enumerate the external IDs for all tags.
1. Send a `DELETE` request to the [`/organizations/:organization_name/tags`](/terraform/cloud-docs/api-docs/organization-tags#delete-tags) endpoint to delete tags.

## Tag syntax

The following rules apply to tags:

- Tags must be one or more characters.
- Tags have a 255 character limit.
- Tags can include letters, numbers, colons, hyphens, and underscores.
- For tags stored as key-value pairs, tag values are optional.
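The `PATCH` step of the migration sends a JSON:API body that re-creates each legacy tag as a `tag-bindings` relationship entry. The following is a minimal sketch with hypothetical tag names; it treats everything after the first colon as the value and gives key-only tags an empty value, which is an assumption — check the Update a Workspace API documentation for the exact payload shape:

```python
import json

# Hypothetical single-value tags as read from a workspace's `tag-names`
# attribute in the list-tags response.
legacy_tags = ["prod", "team:platform"]

def to_tag_binding(tag):
    # Split "key:value" tags; anything without a colon becomes key-only
    # (empty value -- an assumption, see the lead-in above).
    key, _, value = tag.partition(":")
    return {"type": "tag-bindings", "attributes": {"key": key, "value": value}}

# Request body for PATCH /workspaces/:workspace_id.
payload = {
    "data": {
        "type": "workspaces",
        "relationships": {
            "tag-bindings": {"data": [to_tag_binding(t) for t in legacy_tags]},
        },
    }
}

print(json.dumps(payload, indent=2))
```
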
# Manage workspace state

Each HCP Terraform workspace has its own separate state data, used for runs within that workspace.

-> **API:** See the [State Versions API](/terraform/cloud-docs/api-docs/state-versions).

## State Usage in Terraform Runs

In [remote runs](/terraform/cloud-docs/workspaces/run/remote-operations), HCP Terraform automatically configures Terraform to use the workspace's state; the Terraform configuration does not need an explicit backend configuration. (If a backend configuration is present, it will be overridden.)

In local runs (available for workspaces whose execution mode setting is set to "local"), you can use a workspace's state by configuring the [CLI integration](/terraform/cli/cloud) and authenticating with a user token that has permission to read and write state versions for the relevant workspace. When using a Terraform configuration that references outputs from another workspace, the authentication token must also have permission to read state outputs for that workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

During an HCP Terraform run, Terraform incrementally creates intermediate state versions and marks them as finalized once it uploads the state content. When a workspace is unlocked, HCP Terraform selects the latest state and sets it as the current state version, deletes all other intermediate state versions that were saved as recovery snapshots for the duration of the lock, and discards all pending intermediate state versions that were superseded by newer state versions.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

## State Versions

In addition to the current state, HCP Terraform retains historical state versions, which can be used to analyze infrastructure changes over time. You can view a workspace's state versions from its **States** tab. Each state in the list indicates which run and which VCS commit (if applicable) it was associated with.
Click a state in the list for more details, including a diff against the previous state and a link to the raw state file.

## Managed Resources Count

-> **Note:** A managed resources count for each organization is available in your organization's settings.

Your organization's managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces.

HCP Terraform reads all the workspaces' state files to determine the total number of managed resources. Each [resource](/terraform/language/resources/syntax) in the state equals one managed resource. HCP Terraform includes resources in modules and each resource instance created with the `count` or `for_each` meta-arguments. For example, `resource "aws_instance" "servers" { count = 10 }` creates ten separate managed resources in state. HCP Terraform does not include [data sources](/terraform/language/data-sources) in the count.

### Examples - Managed Resources

The following Terraform state excerpt describes a `random` resource. HCP Terraform counts `random` as one managed resource because `"mode": "managed"`.

```json
"resources": [
  {
    "mode": "managed",
    "type": "random_pet",
    "name": "random",
    "provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
    "instances": [
      {
        "schema_version": 0,
        "attributes": {
          "id": "puma",
          "keepers": null,
          "length": 1,
          "prefix": null,
          "separator": "-"
        },
        "sensitive_attributes": []
      }
    ]
  }
]
```

A single resource configuration block can describe multiple resource instances with the [`count`](/terraform/language/meta-arguments/count) or [`for_each`](/terraform/language/meta-arguments/for_each) meta-arguments. Each of these instances counts as a managed resource. The following example shows a Terraform state excerpt with two instances of an `aws_subnet` resource. HCP Terraform counts each instance of `aws_subnet` as a separate managed resource.
```json
{
  "module": "module.vpc",
  "mode": "managed",
  "type": "aws_subnet",
  "name": "public",
  "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
  "instances": [
    {
      "index_key": 0,
      "schema_version": 1,
      "attributes": {
        "arn": "arn:aws:ec2:us-east-2:561656980159:subnet/subnet-024b05c4fba9c9733",
        "assign_ipv6_address_on_creation": false,
        "availability_zone": "us-east-2a",
        ##...
        "private_dns_hostname_type_on_launch": "ip-name",
        "tags": { "Name": "-public-us-east-2a" },
        "tags_all": { "Name": "-public-us-east-2a" },
        "timeouts": null,
        "vpc_id": "vpc-0f693f9721b61333b"
      },
      "sensitive_attributes": [],
      "private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9",
      "dependencies": [
        "data.aws_availability_zones.available",
        "module.vpc.aws_vpc.this",
        "module.vpc.aws_vpc_ipv4_cidr_block_association.this"
      ]
    },
    {
      "index_key": 1,
      "schema_version": 1,
      "attributes": {
        "arn": "arn:aws:ec2:us-east-2:561656980159:subnet/subnet-08924f16617e087b2",
        "assign_ipv6_address_on_creation": false,
        "availability_zone": "us-east-2b",
        ##...
        "private_dns_hostname_type_on_launch": "ip-name",
        "tags": { "Name": "-public-us-east-2b" },
        "tags_all": { "Name": "-public-us-east-2b" },
        "timeouts": null,
        "vpc_id": "vpc-0f693f9721b61333b"
      },
      "sensitive_attributes": [],
      "private": "eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9",
      "dependencies": [
        "data.aws_availability_zones.available",
        "module.vpc.aws_vpc.this",
        "module.vpc.aws_vpc_ipv4_cidr_block_association.this"
      ]
    }
  ]
}
```

### Example - Excluded Data Source

The following Terraform state excerpt describes an `aws_availability_zones` data source. HCP Terraform does not include `aws_availability_zones` in the managed resource count because `"mode": "data"`.
```json
"resources": [
  {
    "mode": "data",
    "type": "aws_availability_zones",
    "name": "available",
    "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
    "instances": [
      {
        "schema_version": 0,
        "attributes": {
          "all_availability_zones": null,
          "exclude_names": null,
          "exclude_zone_ids": null,
          "filter": null,
          "group_names": [
            "us-east-2"
          ],
          "id": "us-east-2",
          "names": [
            "us-east-2a",
            "us-east-2b",
            "us-east-2c"
          ],
          "state": null,
          "zone_ids": [
            "use2-az1",
            "use2-az2",
            "use2-az3"
          ]
        },
        "sensitive_attributes": []
      }
    ]
  }
]
```

## State Manipulation

Certain tasks (including importing resources, tainting resources, moving or renaming existing resources to match a changed configuration, and more) may require modifying Terraform state outside the context of a run, depending on which version of Terraform your HCP Terraform workspace is configured to use.

Newer Terraform features like [`moved` blocks](/terraform/language/modules/develop/refactoring), [`import` blocks](/terraform/language/import), and the [`replace` option](/terraform/cloud-docs/workspaces/run/modes-and-options#replacing-selected-resources) allow you to accomplish these tasks using the usual plan and apply workflow. However, if the Terraform version you're using doesn't support these features, you may need to fall back to manual state manipulation.

Manual state manipulation in HCP Terraform workspaces, with the exception of [rolling back to a previous state version](#rolling-back-to-a-previous-state), requires the use of the Terraform CLI, using the same commands as would be used in a local workflow (`terraform import`, `terraform taint`, etc.). To manipulate state, you must configure the [CLI integration](/terraform/cli/cloud) and authenticate with a user token that has permission to read and write state versions for the relevant workspace.
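The counting rules described under Managed Resources Count above can be sketched as a small tally over a state file's `resources` list: each instance of a `"mode": "managed"` resource counts once, and data sources are skipped. This is an illustrative reimplementation, not HCP Terraform's actual code:

```python
import json

# A small state excerpt modeled on the examples above: one managed
# resource with two instances, plus one excluded data source.
state = json.loads("""
{
  "resources": [
    {"mode": "managed", "type": "aws_subnet", "name": "public",
     "instances": [{"index_key": 0}, {"index_key": 1}]},
    {"mode": "data", "type": "aws_availability_zones", "name": "available",
     "instances": [{}]}
  ]
}
""")

def managed_resource_count(state):
    # Count every instance of every resource whose mode is "managed";
    # instances created with count/for_each each count once.
    return sum(
        len(resource.get("instances", []))
        for resource in state.get("resources", [])
        if resource.get("mode") == "managed"
    )

print(managed_resource_count(state))  # the two aws_subnet instances -> 2
```
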
([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

### Rolling Back to a Previous State

You can roll back to a previous, known good state version using the HCP Terraform UI. Navigate to the state you want to roll back to and click the **Advanced** toggle button. This option requires that you have access to create new state and that you lock the workspace. It works by duplicating the state that you specify and making it the workspace's current state version. The workspace remains locked. To undo the rollback operation, roll back to the state version that was previously the latest state.

-> **Note:** You can roll back to any prior state, but you should use caution because replacing state improperly can result in orphaned or duplicated infrastructure resources. This feature is provided as a convenient alternative to manually downloading older state and using state manipulation commands in the CLI to push it to HCP Terraform.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

## Accessing State from Other Workspaces

-> **Note:** Provider-specific [data sources](/terraform/language/data-sources) are usually the most resilient way to share information between separate Terraform configurations. `terraform_remote_state` is more flexible, but we recommend using specialized data sources whenever it is convenient to do so.

Terraform's built-in [`terraform_remote_state` data source](/terraform/language/state/remote-state-data) lets you share arbitrary information between configurations via root module [outputs](/terraform/language/values/outputs).

HCP Terraform automatically manages API credentials for `terraform_remote_state` access during [runs managed by HCP Terraform](/terraform/cloud-docs/workspaces/run/remote-operations#remote-operations). This means you do not usually need to include an API token in a `terraform_remote_state` data source's configuration.
## Upgrading State

You can upgrade a workspace's state version to a new Terraform version without making any configuration changes. To upgrade, we recommend the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace you want to upgrade.
1. Run a [speculative plan](/terraform/cloud-docs/workspaces/run/ui#testing-terraform-upgrades-with-speculative-plans) to test whether your configuration is compatible with the new Terraform version. You can run speculative plans with a Terraform version that is different than the one currently selected for the workspace.
1. Select **Settings > General** and select the desired new **Terraform Version**.
1. Click **+ New run** and then select **Allow empty apply** as the run type. An [empty apply](/terraform/cloud-docs/workspaces/run/modes-and-options#allow-empty-apply) allows Terraform to apply a plan that produces no infrastructure changes. Terraform upgrades the state file version during the apply process.

-> **Note:** If the desired Terraform version is incompatible with a workspace's existing state version, the run fails and HCP Terraform prompts you to run an apply with a compatible version first. Refer to the [Terraform upgrade guides](/terraform/language/upgrade-guides) for details about upgrading between versions.

### Remote State Access Controls

Remote state access between workspaces is subject to access controls:

- Only workspaces within the same organization can access each other's state.
- The workspace whose state is being read must be configured to allow that access.

State access permissions are configured on a workspace's [general settings page](/terraform/cloud-docs/workspaces/settings). There are two ways a workspace can allow access:

- Globally, to all workspaces within the same organization.
- Selectively, to a list of specific approved workspaces.
By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.

-> **Note:** The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change defaulted to allowing global access within their organization. These workspaces can be changed to more restrictive access at any time on their [general settings page](/terraform/cloud-docs/workspaces/settings). Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access.

### Data Source Configuration

To configure a `tfe_outputs` data source that references an HCP Terraform workspace, specify the organization and workspace in the `config` argument. You must still properly configure the `tfe` provider with a valid authentication token and correct permissions to HCP Terraform.

```hcl
data "tfe_outputs" "vpc" {
  config = {
    organization = "example_corp"
    workspaces = {
      name = "vpc-prod"
    }
  }
}

resource "aws_instance" "redis_server" {
  # Terraform 0.12 and later: use the "outputs." attribute
  subnet_id = data.tfe_outputs.vpc.outputs.subnet_id
}
```

-> **Note:** Remote state access controls do not apply when using the `tfe_outputs` data source.
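Besides `terraform_remote_state` and `tfe_outputs`, a workspace's current outputs can be read directly from the API. The following is a minimal request sketch, assuming a hypothetical workspace ID and token and the workspace `current-state-version-outputs` endpoint:

```python
import urllib.request

# Hypothetical values -- substitute a real API token and workspace ID.
TOKEN = "xxxxxx.atlasv1.example"
WORKSPACE_ID = "ws-example123"

# A plain GET against the workspace's current state version outputs.
req = urllib.request.Request(
    url=f"https://app.terraform.io/api/v2/workspaces/{WORKSPACE_ID}/current-state-version-outputs",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# urllib.request.urlopen(req) would return a JSON:API document whose
# `data` entries carry each output's name, value, and sensitivity flag;
# it is omitted here because it needs live credentials.
print(req.get_method(), req.full_url)
```
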
# Workspace settings

You can change a workspace's settings after creation. Workspace settings are separated into several pages.

- [General](#general): Settings that determine how the workspace functions, including its name, description, associated project, Terraform version, and execution mode.
- [Health](/terraform/cloud-docs/workspaces/health): Settings that let you configure health assessments, including drift detection and continuous validation.
- [Locking](#locking): Locking a workspace temporarily prevents new plans and applies.
- [Notifications](#notifications): Settings that let you configure run notifications.
- [Policies](#policies): Settings that let you toggle between Sentinel policy evaluation experiences.
- [Run Triggers](#run-triggers): Settings that let you configure run triggers. Run triggers allow runs to queue automatically in your workspace when runs in other workspaces are successful.
- [SSH Key](#ssh-key): Set a private SSH key for downloading Terraform modules from Git-based module sources.
- [Team Access](#team-access): Settings that let you manage which teams can view the workspace and use it to provision infrastructure.
- [Version Control](#version-control): Manage the workspace's VCS integration.
- [Destruction and Deletion](#destruction-and-deletion): Remove a workspace and the infrastructure it manages.

Changing settings requires admin access to the relevant workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

-> **API:** See the [Update a Workspace endpoint](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) (`PATCH /organizations/:organization_name/workspaces/:name`).

## General

General settings let you change a workspace's name, description, the project it belongs to, and details about how Terraform runs operate. After changing these settings, click **Save settings** at the bottom of the page.
### ID

Every workspace has a unique ID that you cannot change. You may need to reference the workspace's ID when using the [HCP Terraform API](/terraform/cloud-docs/api-docs).

Click the icon beside the ID to copy it to your clipboard.

### Name

The display name of the workspace.

!> **Warning:** Some API calls refer to a workspace by its name, so changing the name may break existing integrations.

### Project

The [project](/terraform/cloud-docs/projects) that this workspace belongs to. Changing the workspace's project can change the read and write permissions for the workspace and which users can access it.

To move a workspace, you must have the "Manage all Projects" organization permission or explicit team admin privileges on both the source and destination projects. Remember that moving a workspace to another project may affect user visibility for that project's workspaces. Refer to [Project Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project) for details on workspace access.

### Description (Optional)

Enter a brief description of the workspace's purpose or types of infrastructure.

### Execution Mode

Whether to use HCP Terraform as the Terraform execution platform for this workspace.

By default, HCP Terraform uses a project's [default execution mode](/terraform/cloud-docs/users-teams-organizations/organizations#organization-settings) to choose the execution platform for a workspace. Alternatively, you can instead choose a custom execution mode for a workspace.

Specifying the "Remote" execution mode instructs HCP Terraform to perform Terraform runs on its own disposable virtual machines. This provides a consistent and reliable run environment and enables advanced features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.

To disable remote execution for a workspace, change its execution mode to "Local".
This mode lets you perform Terraform runs locally with the [CLI-driven run workflow](/terraform/cloud-docs/workspaces/run/cli). The workspace will store state, which Terraform can access with the [CLI integration](/terraform/cli/cloud). HCP Terraform does not evaluate workspace variables or variable sets in local execution mode.

If you instead need to allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure, consider using [HCP Terraform agents](/terraform/cloud-docs/agents). By deploying a lightweight agent, you can establish a simple connection between your environment and HCP Terraform.

Changing your workspace's execution mode after a run has already been planned will cause the run to error when it is applied. To minimize the number of runs that error when changing your
workspace's execution mode, you should:

1. Disable [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply) if you have it enabled.
1. Complete any runs that are no longer in the [pending stage](/terraform/cloud-docs/workspaces/run/states#the-pending-stage).
1. [Lock](/terraform/cloud-docs/workspaces/settings#locking) your workspace to prevent any new runs.
1. Change the execution mode.
1. Enable [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply), if you had it enabled before changing your execution mode.
1. [Unlock](/terraform/cloud-docs/workspaces/settings#locking) your workspace.

### Auto-apply

Whether or not HCP Terraform should automatically apply a successful Terraform plan. If you choose manual apply, an operator must confirm a successful plan and choose to apply it.

The main auto-apply setting affects runs created by the HCP Terraform user interface, API, and version control webhooks. HCP Terraform also has a separate setting for runs created by [run triggers](/terraform/cloud-docs/workspaces/settings/run-triggers) from another workspace.

Auto-apply has the following exceptions:

- Runs created by the Terraform CLI must use the `-auto-approve` flag to control auto-apply of a particular run.
- Plans queued by users without permission to apply runs for the workspace must be approved by a user who does have permission. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

### Terraform Version

The Terraform version to use for all operations in the workspace.
The default value is whichever release was current when HCP Terraform created the workspace. You can also update a workspace's Terraform version to an exact version or a valid [version constraint](/terraform/language/expressions/version-constraints).

> **Hands-on:** Try the [Upgrade Terraform Version in HCP Terraform](/terraform/tutorials/cloud/cloud-versions) tutorial.

-> **API:** You can specify a Terraform version when you [create a workspace](/terraform/cloud-docs/api-docs/workspaces#create-a-workspace) with the API.

### Terraform Working Directory

The directory where Terraform will execute, specified as a relative path from the root of the configuration directory. Defaults to the root of the configuration directory.

HCP Terraform will change to this directory before starting a Terraform run, and will report an error if the directory does not exist.

Setting a working directory creates a default filter for automatic run triggering, and sometimes causes CLI-driven runs to upload additional configuration content.

#### Default Run Trigger Filtering

In VCS-backed workspaces that specify a working directory, HCP Terraform assumes that only changes within that working directory should trigger a run. You can override this behavior with the [Automatic Run Triggering](/terraform/cloud-docs/workspaces/settings/vcs#automatic-run-triggering) settings.

#### Parent Directory Uploads

If a working directory is configured, HCP Terraform always expects the complete shared configuration directory to be available, since the configuration might use local modules from outside its working directory.

In [runs triggered by VCS commits](/terraform/cloud-docs/workspaces/run/ui), this is automatic.
In [CLI-driven runs](/terraform/cloud-docs/workspaces/run/cli), Terraform's CLI sometimes uploads additional content:

- When the local working directory _does not match_ the name of the configured working directory, Terraform assumes it is the root of the configuration directory, and uploads only the local working directory.
- When the local working directory _matches_ the name of the configured working directory, Terraform uploads one or more parents of the local working directory, according to the depth of the configured working directory. (For example, a working directory of `production` is only one level deep, so Terraform would upload the immediate parent directory. `consul/production` is two levels deep, so Terraform would upload the parent and grandparent directories.)

If you use the working directory setting, always run Terraform from a complete copy of the configuration directory. Moving one subdirectory to a new location can result in unexpected content uploads.

### Remote State Sharing

These options let
### Remote State Sharing

These options let you choose which workspaces in the organization can access the state of the workspace during [runs managed by HCP Terraform](/terraform/cloud-docs/workspaces/run/remote-operations#remote-operations). The [`terraform_remote_state` data source](/terraform/language/state/remote-state-data) relies on state sharing to access workspace outputs.

You can enable one of the following options:

- **Share with all workspaces in this organization**: All other workspaces in the organization can access this workspace's state during runs.
- **Share with all workspaces in this project**: All other workspaces in the same project can access this workspace's state during runs.
- **Share with specific workspaces**: Lets you specify a list of workspaces in the organization that can access this workspace's state. The workspace selector is searchable. If you don't initially see a workspace you're looking for, type part of its name.

By default, the **Share with specific workspaces** option is enabled for new workspaces in HCP Terraform, and the workspace selector is empty. As a result, other workspaces can't access the new workspace's state.

Terraform Enterprise administrators can enable remote state sharing between workspaces globally so that new workspaces share state during runs by default. The option you choose in the workspace settings overrides the global default. Refer to [Remote State Sharing](/terraform/enterprise/application-administration/general#remote-state-sharing) in the administration documentation for more information.
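As a sketch of the consuming side, a workspace that has been granted access can read the source workspace's outputs with the `terraform_remote_state` data source and the `remote` backend type. The organization and workspace names below are placeholders, and the example assumes the source workspace defines an output named `vpc_id`:

```hcl
# Read outputs from a workspace that shares its state with this one.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "example-org" # placeholder organization name
    workspaces = {
      name = "network-production" # placeholder source workspace
    }
  }
}

# Reference an output defined in the source workspace.
output "shared_vpc_id" {
  value = data.terraform_remote_state.network.outputs.vpc_id
}
```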
We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.

To configure remote state sharing, you must have read access for the destination workspace. If you do not have access to the destination workspace due to scoped project or workspace permissions, you will not have complete visibility into the list of other workspaces that can access its state.

The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change default to allowing global access within their organization. You can change workspaces to more restrictive access at any time.

### User Interface

Select the user experience for displaying plan and apply details.

By default, **Structured Run Output** is enabled. This mode displays your plan and apply results in a human-readable format. This includes nodes that you can expand to view details about each resource and any configured output. Enable the **Console UI** option to stream live text logging to the UI in real time. This experience resembles the CLI output.

When **Structured Run Output** is enabled, your workspace must be configured to use Terraform version 1.0.5 or higher for full functionality. Workspaces using Terraform 0.15.2 and older may deliver partial functionality. The **Console UI** option is enabled by default for workspaces that use Terraform 0.15.2 and older.

## Locking

~> **Important:** Unlike other settings, locks can also be managed by users with permission to lock and unlock the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

If you need to prevent Terraform runs for any reason, you can lock a workspace.
This prevents all applies (and many kinds of plans) from proceeding, and affects runs created via UI, CLI, API, and automated systems. To enable runs again, a user must unlock the workspace.
Two kinds of run operations can ignore workspace locking because they cannot affect resources or state and do not attempt to lock the workspace themselves:

- Plan-only runs.
- The planning stages of [saved plan runs](/terraform/cloud-docs/workspaces/run/modes-and-options.mdx#saved-plans). You can only _apply_ a saved plan if the workspace is unlocked, and applying that plan locks the workspace as usual. Terraform Enterprise does not yet support this workflow.

Locking a workspace also restricts state uploads. In order to upload state, the workspace must be locked by the user who is uploading state.

Users with permission to lock and unlock a workspace can't unlock a workspace which was locked by another user. Users with admin access to a workspace can force unlock a workspace even if another user has locked it.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

Locks are managed with a single "Lock/Unlock/Force unlock" button. HCP Terraform asks for confirmation when unlocking. You can also manage the workspace's lock from the **Actions** menu.

## Notifications

The "Notifications" page allows HCP Terraform to send webhooks to external services whenever specific run events occur in a workspace.

See [Run Notifications](/terraform/cloud-docs/workspaces/settings/notifications) for detailed information about configuring notifications.

## Policies

HCP Terraform offers two experiences for Sentinel policy evaluations. On the "Policies" page, you can adjust your **Sentinel Experience** settings to your preferred experience. By default, HCP Terraform enables the newest policy evaluation experience.

To toggle between the two Sentinel policy evaluation experiences, click the **Enable the new Sentinel policy experience** toggle under the **Sentinel Experience** heading. HCP Terraform persists your changes automatically.
If HCP Terraform is performing a run on a different page, you must refresh that page to see changes to your policy evaluation experience.

## Run Triggers

The "Run Triggers" page configures connections between a workspace and one or more source workspaces. These connections, called "run triggers", allow runs to queue automatically in a workspace on successful apply of runs in any of the source workspaces.

See [Run Triggers](/terraform/cloud-docs/workspaces/settings/run-triggers) for detailed information about configuring run triggers.

## SSH Key

If a workspace's configuration uses [Git-based module sources](/terraform/language/modules/sources) to reference Terraform modules in private Git repositories, Terraform needs an SSH key to clone those repositories. The "SSH Key" page lets you choose which key it should use.

See [Using SSH Keys for Cloning Modules](/terraform/cloud-docs/workspaces/settings/ssh-keys) for detailed information about this page.

## Team Access

The "Team Access" page configures which teams can perform which actions on a workspace.

See [Managing Access to Workspaces](/terraform/cloud-docs/workspaces/settings/access) for detailed information.

## Version Control

The "Version Control" page configures an optional VCS repository that contains the workspace's Terraform configuration. Version control integration is only relevant for workspaces with [remote execution](#execution-mode) enabled.

See [VCS Connections](/terraform/cloud-docs/workspaces/settings/vcs) for detailed information about this page.

## Destruction and Deletion

The **Destruction and Deletion** page allows [admin users](/terraform/cloud-docs/users-teams-organizations/permissions) to delete a workspace's managed infrastructure or delete the workspace itself. Refer to [Destruction and Deletion](/terraform/cloud-docs/workspaces/settings/deletion) for detailed information about this page.
# Manage access to workspaces

@include 'tfc-package-callouts/team-management.mdx'

HCP Terraform workspaces can only be accessed by users with the correct permissions. You can manage permissions for a workspace on a per-team basis.

Teams with [admin access](/terraform/cloud-docs/users-teams-organizations/permissions) on a workspace can manage permissions for other teams on that workspace. Since newly created workspaces don't have any team permissions configured, the initial setup of a workspace's permissions requires the owners team or a team with permission to manage workspaces. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

-> **API:** See the [Team Access APIs](/terraform/cloud-docs/api-docs/team-access). **Terraform:** See the `tfe` provider's [`tfe_team_access`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/team_access) resource.

## Background

HCP Terraform manages users' permissions to workspaces with teams.

- [Workspace-level permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) can be granted to an individual team on a particular workspace. These permissions can be managed on the workspace by anyone with admin access to the workspace.
- In addition, some [organization-level permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization) can be granted to a team which apply to every workspace in the organization. For example, the [manage all workspaces](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-workspaces) and [manage all projects](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-projects) permissions grant the workspace-level admin permission to every workspace in the organization. Organization-level permissions can only be managed by organization owners.
## Managing Workspace Access Permissions

When a user creates a workspace, the following teams can access that workspace with full admin permissions:

- [the owners team](/terraform/cloud-docs/users-teams-organizations/teams#the-owners-team)
- teams with "Manage all workspaces" and/or "Manage all projects" [organization permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project)
- teams with "Project Admin" project permissions

You cannot override these teams' permissions through the workspace's specific permissions.

To manage a team's access to a workspace, select "Team Access" from the workspace's "Settings" menu. This screen displays all teams granted workspace-level permissions to the workspace.

To add a team, select "Add team and permissions". HCP Terraform displays the teams you can grant workspace access to. Select a team to continue and configure that team's permissions.

There are four [fixed permissions sets](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) available for basic usage: Read, Plan, Write, and Admin. To enable finer-grained selection of non-admin permissions, select "Customize permissions for this team". On this screen, you can select specific permissions to grant the team for the workspace.

For more information on permissions, see [the documentation on Workspace Permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace).
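The `tfe_team_access` resource mentioned above can grant one of the fixed permission sets in code. A minimal sketch, assuming `tfe_team` and `tfe_workspace` resources named `deployers` and `production` exist elsewhere in the configuration:

```hcl
resource "tfe_team_access" "deployers" {
  access       = "write"                     # one of: read, plan, write, admin
  team_id      = tfe_team.deployers.id       # assumed tfe_team resource
  workspace_id = tfe_workspace.production.id # assumed tfe_workspace resource
}
```

For finer-grained, non-admin permissions, the resource also supports a `permissions` block in place of the `access` argument; see the provider documentation for the available attributes.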
# Destroy infrastructure resources and delete workspaces

HCP Terraform workspaces have two primary delete actions:

- [Destroying infrastructure](#destroy-infrastructure) deletes resources managed by the HCP Terraform workspace by triggering a destroy run.
- [Deleting a workspace](#delete-workspaces) deletes the workspace itself without triggering a destroy run.

In general, you should perform both actions in the above order when destroying a workspace to ensure resource cleanup for all of a workspace's managed infrastructure.

## Destroy Infrastructure

Destroy plans delete the infrastructure managed by a workspace. We recommend destroying the infrastructure managed by a workspace _before_ deleting the workspace itself. Otherwise, the infrastructure resources will continue to exist but will become unmanaged, and you must go into your infrastructure providers to delete the resources manually.

Before queuing a destroy plan, enable the **Allow destroy plans** toggle setting on this page.

### Automatically Destroy

@include 'tfc-package-callouts/ephemeral-workspaces.mdx'

Configuring automatic infrastructure destruction for a workspace requires [admin permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#workspace-admin) for that workspace.

There are two main ways to automatically destroy a workspace's resources:

- Schedule a run to destroy all resources in a workspace at a specific date and time.
- Configure HCP Terraform to destroy a workspace's infrastructure after a period of workspace inactivity.

Workspaces can inherit auto-destroy settings from their project. Refer to [managing projects](/terraform/cloud-docs/projects/manage#automatically-destroy-inactive-workspaces) for more information. You can configure an individual workspace's auto-destroy settings to override the project's configuration.

You can reduce your spending on infrastructure by automatically destroying temporary resources like development environments.
After HCP Terraform performs an auto-destroy run, it unsets the `auto-destroy-at` field on the workspace. If you continue using the workspace, you can schedule another future auto-destroy run to remove any new resources.

!> **Note:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. We recommend only using this setting for development environments.

You can schedule an auto-destroy run using the HCP Terraform web user interface, or the [workspace API](/terraform/cloud-docs/api-docs/workspaces).

You can also schedule [notifications](/terraform/cloud-docs/workspaces/settings/notifications) to alert you 12 and 24 hours before an auto-destroy run, and to report auto-destroy run results.

#### Destroy at a specific day and time

To schedule an auto-destroy run at a specific time in HCP Terraform:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace you want to destroy.
1. Choose **Settings** from the sidebar, then **Destruction and Deletion**.
1. Under **Automatically destroy**, click **Set up auto-destroy**.
1. Enter the desired date and time. HCP Terraform defaults to your local time zone for scheduling and displays how long until the scheduled operation.
1. Click **Confirm auto-destroy**.

To cancel a scheduled auto-destroy run in HCP Terraform:

1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.
1. Under **Automatically destroy**, click **Edit** next to your scheduled run's details.
1. Click **Remove**.

#### Destroy if a workspace is inactive

You can configure HCP Terraform to automatically destroy a workspace's infrastructure after a period of inactivity. A workspace is _inactive_ if the workspace's state has not changed within your designated time period.
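The scheduled path above can also be expressed as configuration with the `tfe` provider. This is a sketch under stated assumptions: the `auto_destroy_at` argument name and its availability depend on your `tfe` provider version, and the timestamp, workspace, and organization values are placeholders:

```hcl
resource "tfe_workspace" "dev_sandbox" {
  name         = "dev-sandbox" # placeholder workspace name
  organization = "example-org" # placeholder organization

  # Assumed argument: schedules automatic destruction at a fixed time.
  # Verify the exact name and format in the tfe provider documentation.
  auto_destroy_at = "2024-12-31T00:00:00Z" # placeholder RFC3339 timestamp
}
```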
!> **Caution:** As opposed to configuring an auto-destroy run for a specific date and time, this setting _persists_ after queueing auto-destroy runs. If you configure a workspace to auto-destroy its infrastructure when inactive, any run that updates Terraform state further delays the scheduled auto-destroy time by the length of your designated timeframe.
To schedule an auto-destroy run after a period of workspace inactivity:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace you want to destroy.
1. Choose **Settings** from the sidebar, then **Destruction and Deletion**.
1. Under **Automatically destroy**, click **Set up auto-destroy**.
1. Click the **Destroy if inactive** toggle.
1. Select or customize a desired timeframe of inactivity.
1. Click **Confirm auto-destroy**.

When configured for the first time, the auto-destroy duration setting displays the scheduled date and time that HCP Terraform will perform the auto-destroy run. Subsequent auto-destroy runs and Terraform runs that update state both update the next scheduled auto-destroy date. After HCP Terraform completes a manual or automatic destroy run, it waits until further state updates to schedule a new auto-destroy run.

To remove your workspace's auto-destroy run:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace you want to disable the auto-destroy run for.
1. Choose **Settings** from the sidebar, then **Destruction and Deletion**.
1. Under **Auto-destroy settings**, click **Edit** to change the auto-destroy settings.
1. Click **Remove**.

When you move a workspace to a different project, it inherits the auto-destroy settings from the new project. If you configured the workspace to override the previous project's auto-destroy settings, it retains the override configuration in the new project.

## Delete Workspace

Terraform does not automatically destroy managed infrastructure when you delete a workspace. After you delete the workspace and its state file, Terraform can _no longer track or manage_ that infrastructure. You must manually delete or [import](/terraform/cli/commands/import) any remaining resources into another Terraform workspace.
By default, [workspace administrators](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#workspace-admin) can only delete unlocked workspaces that are not managing any infrastructure. Organization owners can force delete a workspace to override these protections. Organization owners can also configure the [organization's settings](/terraform/cloud-docs/users-teams-organizations/organizations#general) to let workspace administrators force delete their own workspaces.

## Data Retention Policies

Data retention policies are exclusive to Terraform Enterprise, and not available in HCP Terraform. [Learn more about Terraform Enterprise](https://developer.hashicorp.com/terraform/enterprise).

Define configurable data retention policies for workspaces to help reduce object storage consumption. You can define a policy that allows Terraform to _soft delete_ the backing data associated with configuration versions and state versions. Soft deleting refers to marking a data object for garbage collection so that Terraform can automatically delete the object after a set number of days.

Once an object is soft deleted, any attempts to read the object will fail. Until the garbage collection grace period elapses, you can still restore an object using the APIs described in the [configuration version documentation](/terraform/enterprise/api-docs/configuration-versions) and [state version documentation](/terraform/enterprise/api-docs/state-versions). After the garbage collection grace period elapses, Terraform permanently deletes the archivist storage.

The [organization policy](/terraform/enterprise/users-teams-organizations/organizations#destruction-and-deletion) is the default policy applied to workspaces, but members of individual workspaces can override the policy for their workspaces. The workspace policy always overrides the organization policy.
A workspace admin can set or override the following data retention policies:

- **Organization default policy**
- **Do not auto-delete**
- **Auto-delete data**

Setting the data retention policy to **Organization default policy** disables the other data retention policy settings.
# Use SSH Keys for cloning modules

Terraform configurations can pull in Terraform modules from [a variety of different sources](/terraform/language/modules/sources), and private Git repositories are a common source for private modules.

-> **Note:** The [private module registry](/terraform/cloud-docs/registry) is an easier way to manage private Terraform modules in HCP Terraform, and doesn't require setting SSH keys for workspaces. The rest of this page only applies to configurations that fetch modules directly from a private Git repository.

To access a private Git repository, Terraform either needs login credentials (for HTTPS access) or an SSH key. HCP Terraform can store private SSH keys centrally, and you can easily use them in any workspace that clones modules from a Git server.

-> **Note:** SSH keys for cloning Terraform modules from Git repos are only used during Terraform runs. They are managed separately from any [keys used for bringing VCS content into HCP Terraform](/terraform/cloud-docs/vcs#ssh-keys).

HCP Terraform manages SSH keys used to clone Terraform modules at the organization level, and allows multiple keys to be added for the organization. You can add or delete keys via the organization's settings. Once a key is uploaded, the text of the key is not displayed to users.

To assign a key to a workspace, go to its settings and choose a previously added key from the "SSH Key" drop-down menu in the Integrations section. Each workspace can only use one SSH key.

-> **API:** See the [SSH Keys API](/terraform/cloud-docs/api-docs/ssh-keys) and [Assign an SSH Key to a Workspace endpoint](/terraform/cloud-docs/api-docs/workspaces#assign-an-ssh-key-to-a-workspace). **Terraform:** See the `tfe` provider's [`tfe_ssh_key`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/ssh_key) resource.

## Adding Keys

To add a key:
1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and choose the organization you want to add a key to.
1. Choose **Settings** from the sidebar, then **SSH Keys**. This page has a form for adding new keys and a list of existing keys.
1. Obtain a PEM formatted SSH keypair that HCP Terraform can use to download modules during a Terraform run. You might already have an appropriate key. If not, create one on a secure workstation and distribute the public key to your VCS provider(s). Do not use or generate a key that has a passphrase; Git runs non-interactively and cannot prompt for one. The exact command to create a PEM formatted SSH keypair depends on your operating system. The following example command creates a `service_terraform` file with the private key and a `service_terraform.pub` file with the public key.

   ```bash
   ssh-keygen -t rsa -m PEM -f "/Users//.ssh/service_terraform" -C "service_terraform_enterprise"
   ```

1. Enter a name for the key in the **Name** field. Choose something identifiable. Keys are only listed by name; HCP Terraform retains the text of each private key, but never displays it for any purpose.
1. Paste the text of the private key in the **Private SSH Key** field.
1. Click **Add Private SSH Key**.

The new key appears in the list of keys on the page. If you upload an invalid SSH key, upload the correct key and push a new commit for the new key to take effect.

## Deleting Keys

Before deleting a key, you should assign a new key to any workspaces that are using it. Otherwise, workspaces using the deleted key can no longer clone modules from private repositories, which might cause Terraform runs to fail.

To delete a key:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and choose the organization you want to delete a key from.
1. Choose **Settings** from the sidebar, then **SSH Keys**.
1. Find the key you want to delete and click **Delete**.
## Assigning Keys to Workspaces

To assign a key to a workspace, navigate to that workspace's page and choose "SSH Key" from the "Settings" menu. Select a named key from the "SSH Key" dropdown menu, then click the "Update SSH key" button.

In subsequent runs, HCP Terraform will use the selected SSH key in this workspace when cloning modules from Git.
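The same key management and assignment can be expressed with the `tfe` provider's `tfe_ssh_key` resource referenced earlier. This minimal sketch uses a placeholder organization and assumes the private key material is supplied through a sensitive variable rather than committed to the configuration:

```hcl
variable "module_clone_key" {
  type      = string
  sensitive = true # private key material; supply securely, never commit it
}

resource "tfe_ssh_key" "modules" {
  name         = "service_terraform" # display name shown in the key list
  organization = "example-org"       # placeholder organization
  key          = var.module_clone_key
}

resource "tfe_workspace" "app" {
  name         = "app-production" # placeholder workspace
  organization = "example-org"
  ssh_key_id   = tfe_ssh_key.modules.id
}
```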
# Workspace notifications

HCP Terraform can use webhooks to notify external systems about run progress and other events. Each workspace has its own notification settings and can notify up to 20 destinations.

-> **Note:** [Speculative plans](/terraform/cloud-docs/workspaces/run/modes-and-options#plan-only-speculative-plan) and workspaces configured with `Local` [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) do not support notifications.

Configuring notifications requires admin access to the workspace. Refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions) for details.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

-> **API:** Refer to [Notification Configuration APIs](/terraform/cloud-docs/api-docs/notification-configurations).

## Viewing and Managing Notification Settings

To add, edit, or delete notifications for a workspace, go to the workspace and click **Settings > Notifications**. The **Notifications** page appears, showing existing notification configurations.

## Creating a Notification Configuration

A notification configuration specifies a destination URL, a payload type, and the events that should generate a notification. To create a notification configuration:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and select the workspace you want to configure notifications for.
1. Click **Settings**, then **Notifications**.
1. Click **Create a Notification**. The **Create a Notification** form appears.
1. Configure the notifications:

   - **Destination:** HCP Terraform can deliver either a generic payload or a payload formatted specifically for Slack, Microsoft Teams, or Email. Refer to [Notification Payloads](#notification-payloads) for details.
   - **Name:** A display name for this notification configuration.
   - **Webhook URL:** This URL is only available for generic, Slack, and Microsoft Teams webhooks.
     The webhook URL is the destination for the webhook payload. This URL must accept HTTP or HTTPS `POST` requests and should be able to use the chosen payload type. For details, refer to Slack's documentation on [creating an incoming webhook](https://api.slack.com/messaging/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in teams](https://support.microsoft.com/en-us/office/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).

   - **Token** (Optional): This field is only available for generic webhooks. A token is an arbitrary secret string that HCP Terraform will use to sign its notification webhooks. Refer to [Notification Authenticity][inpage-hmac] for details. You cannot view the token after you save the notification configuration.
   - **Email Recipients:** This field is only available for emails. Select users that should receive notifications.
   - **Workspace Events:** HCP Terraform can send notifications for all events or only for specific events. The following events are available:

     - **Check failed**: HCP Terraform detected one or more failed continuous validation checks. This notification is only available if you enable [health assessments](/terraform/cloud-docs/workspaces/health) for the workspace.
     - **Drift detected**: HCP Terraform detected configuration drift for the first time, or a previously detected drift has changed. This notification is only available if you enable health assessments for the workspace.
     - **Health assessment errored**: A health assessment failed. This notification is only available if you enable health assessments for the workspace. Health assessments fail when HCP Terraform cannot perform drift detection, continuous validation, or both. The notification does not specify the cause of the failure, but you can use the [Assessment Result](/terraform/cloud-docs/api-docs/assessment-results) logs to help diagnose the issue.
     - **Auto destroy reminder**: Sends reminders 12 and 24 hours before a scheduled auto destroy run.
     - **Auto destroy results**: HCP Terraform performed an auto destroy run in the workspace. Reports both successful and errored runs.

   @include 'tfc-package-callouts/health-assessments.mdx'

   - **Run Events:** HCP Terraform can send notifications for all run events or only specific events. The following events are available:
     - **Created**: A run begins and enters the [Pending stage](/terraform/enterprise/run/states#the-pending-stage).
     - **Planning**: A run acquires the lock and starts to execute.
     - **Needs Attention**: A plan has changes and Terraform requires user input to continue. This input may include approving the plan or a [policy override](/terraform/enterprise/run/states#the-policy-check-stage).
     - **Applying**: A run enters the [Apply stage](/terraform/enterprise/run/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.
     - **Completed**: A run completed successfully.
     - **Errored**: A run terminated early due to error or cancellation.
1. Click **Create a notification**.

## Enabling and Verifying a Configuration

To enable or disable a configuration, toggle the **Enabled/Disabled** switch on its detail page. HCP Terraform attempts to verify the configuration for generic and Slack webhooks by sending a test message, and enables the notification configuration if the test succeeds.

For a verification to be successful, the destination must respond with a `2xx` HTTP status code. If verification fails, HCP Terraform displays the error message and the configuration remains disabled.

For both successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. You can also send additional test messages with the **Send a Test** link.
## Notification Payloads

### Slack

Notifications to Slack contain the following information:

- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID (as a link)
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred

### Microsoft Teams

Notifications to Microsoft Teams contain the following information:

- The run's workspace (as a link)
- The HCP Terraform username and avatar of the person that created the run
- The run ID
- A link to view the run
- The reason the run was queued (usually a commit message or a custom message)
- The time the run was created
- The event that triggered the notification and the time that event occurred

### Email

Email notifications contain the following information:

- The run's workspace (as a link)
- The run ID (as a link)
- The event that triggered the notification, and whether the run needs to be acted upon

### Generic

A generic notification contains information about a run and its state at the time of the triggering event. The complete generic notification payload is described in the [API documentation][generic-payload].

[generic-payload]: /terraform/cloud-docs/api-docs/notification-configurations#notification-payload

Some of the values in the payload can be used to retrieve additional information through the API, such as:

- The [run ID](/terraform/cloud-docs/api-docs/run#get-run-details)
- The [workspace ID](/terraform/cloud-docs/api-docs/workspaces#list-workspaces)
- The [organization name](/terraform/cloud-docs/api-docs/organizations#show-an-organization)

## Notification Authenticity

[inpage-hmac]: #notification-authenticity

Slack notifications use Slack's own protocols for verifying HCP Terraform's webhook requests. Generic notifications can include a signature for verifying the request.
For notification configurations that include a secret token, HCP Terraform's webhook requests include an `X-TFE-Notification-Signature` header, which contains an HMAC signature computed from the token using the SHA-512 digest algorithm. The receiving service is responsible for validating the signature. More information, as well as an example of how to validate the signature, can be found in the [API documentation](/terraform/cloud-docs/api-docs/notification-configurations#notification-authenticity).
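As an illustrative sketch (not HashiCorp's reference implementation), a receiving service can validate the signature by recomputing the HMAC-SHA512 of the raw request body with the shared token and comparing it to the header value in constant time:

```python
import hashlib
import hmac

def verify_notification(raw_body: bytes, signature_header: str, token: str) -> bool:
    """Validate an X-TFE-Notification-Signature header value.

    Assumes the header carries the hex-encoded HMAC-SHA512 of the raw
    request body, keyed with the notification configuration's token.
    """
    expected = hmac.new(token.encode(), raw_body, hashlib.sha512).hexdigest()
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(expected, signature_header)

# Simulate a webhook delivery signed with the shared token.
body = b'{"run_id": "run-example"}'
sig = hmac.new(b"my-token", body, hashlib.sha512).hexdigest()
print(verify_notification(body, sig, "my-token"))     # True
print(verify_notification(body, sig, "wrong-token"))  # False
```

A real receiver would read the raw request body before any JSON parsing, since re-serializing the payload may not reproduce the exact bytes that were signed.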
# Run triggers

> **Hands-on:** Try the [Connect Workspaces with Run Triggers](/terraform/tutorials/cloud/cloud-run-triggers?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial.

HCP Terraform provides a way to connect your workspace to one or more workspaces within your organization, known as "source workspaces". These connections, called run triggers, allow runs to queue automatically in your workspace on successful apply of runs in any of the source workspaces. You can connect each workspace to up to 20 source workspaces.

Run triggers are designed for workspaces that rely on information or infrastructure produced by other workspaces. If a Terraform configuration uses [data sources](/terraform/language/data-sources) to read values that might be changed by another workspace, run triggers let you explicitly specify that external dependency.

-> **API:** See the [Run Triggers APIs](/terraform/cloud-docs/api-docs/run-triggers).

## Viewing and Managing Run Triggers

To add or delete a run trigger, navigate to the desired workspace and choose "Run Triggers" from the "Settings" menu. This takes you to the run triggers settings page, which shows any existing run triggers.

Configuring run triggers requires admin access to the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions)) Admins can delete any of their workspace's run triggers from this page.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

## Creating a Run Trigger

Creating run triggers requires admin access to the workspace. You must also have permission to read runs for the source workspace you wish to connect to. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

Under the "Source Workspaces" section, select the workspace you would like to connect as your source and click "Add workspace".
You now have a run trigger established with your source workspace. Any run from that source workspace that applies successfully now causes a new run to be queued in your workspace.

## Run Triggers Auto-Apply Setting

Runs initiated by a run trigger do not auto-apply unless you enable the **Auto-apply run triggers** setting. This setting operates independently of the primary workspace [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply) setting.

## Interacting with Run Triggers

Runs queued in your workspace through a run trigger include extra information in their run details section, including links to the source workspace and the successfully applied run that activated the run trigger. The source workspace includes a message in the [plan](/terraform/docs/glossary#plan-noun-1-) and [apply](/terraform/docs/glossary#apply-noun-) run details that specifies the workspaces where HCP Terraform automatically starts a run.

## Using a Remote State Data Source

A common way to share information between workspaces is the [`terraform_remote_state` data source](/terraform/language/state/remote-state-data), which allows a Terraform configuration to access a source workspace's root-level [outputs](/terraform/language/values/outputs).

Before other workspaces can read the outputs of a workspace, it must be configured to allow access. For more information about cross-workspace state access in HCP Terraform, see [Terraform State in HCP Terraform](/terraform/cloud-docs/workspaces/state).

~> **Important:** We recommend using the [`tfe_outputs` data source](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs) in the [HCP Terraform/Enterprise Provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs) to access remote state outputs in HCP Terraform or Terraform Enterprise.
The `tfe_outputs` data source is more secure because it does not require full access to workspace state to fetch outputs.
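For illustration, a minimal sketch of reading a source workspace's outputs with the `tfe_outputs` data source; the organization, workspace, and output names here are placeholders, not values from this document:

```hcl
data "tfe_outputs" "network" {
  organization = "example-org"  # placeholder organization name
  workspace    = "network-prod" # placeholder source workspace
}

# `values` is a sensitive map of the source workspace's root-level outputs,
# so consumers must mark derived values sensitive (or use nonsensitive()).
output "subnet_id" {
  value     = data.tfe_outputs.network.values.subnet_id
  sensitive = true
}
```

Unlike `terraform_remote_state`, this reads only the outputs, so the consuming workspace does not need permission to read the source workspace's full state.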
# Configure workspace VCS connections

You can connect any HCP Terraform [workspace](/terraform/cloud-docs/workspaces) to a version control system (VCS) repository that contains a Terraform configuration. This page explains the workspace VCS connection settings in the HCP Terraform UI.

Refer to [Terraform Configurations in HCP Terraform Workspaces](/terraform/cloud-docs/workspaces/configurations) for details on handling configuration versions and connected repositories. Refer to [Connecting VCS Providers](/terraform/cloud-docs/vcs) for a list of supported VCS providers and details about configuring VCS access, viewing VCS events, and more.

## API

You can use the [Update a Workspace endpoint](/terraform/cloud-docs/api-docs/workspaces#update-a-workspace) in the Workspaces API to change one or more VCS settings. We also recommend using this endpoint to automate changing VCS connections for many workspaces at once, for example when you move a VCS server or remove a deprecated API version.

## Version Control Settings

To change a workspace's VCS settings:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and find the workspace you want to update.
1. Choose **Settings** from the sidebar, then **Version Control**.
1. Choose the settings you want, then click **Update VCS settings**.

You can update the following types of VCS settings for the workspace.

### VCS Connection

You can take one of the following actions:

- To add a new VCS connection, click **Connect to version control**. Select **Version control workflow** and follow the steps to [select a VCS provider and repository](/terraform/cloud-docs/workspaces/create#create-a-workspace).
- To edit an existing VCS connection, click **Change source**. Choose the **Version control workflow** and follow the steps to [select a VCS provider and repository](/terraform/cloud-docs/workspaces/create#create-a-workspace).
- To remove the VCS connection, click **Change source**.
  Select either the **CLI-driven workflow** or the **API-driven workflow**, and click **Update VCS settings**. The workspace is no longer connected to VCS.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

### Terraform Working Directory

Specify the directory where Terraform will execute runs. This defaults to the root directory in your repository, but you may want to specify another directory if you have directories for multiple different Terraform configurations within the same repository, for example one `staging` directory and one `production` directory. A working directory is required when you use [trigger prefixes](#automatic-run-triggering).

### Apply Method

Choose a workflow for Terraform runs.

- **Auto apply:** Terraform will apply changes from successful plans without prompting for approval. A push to the default branch of your repository will trigger a plan and apply cycle. You may want to do this in non-interactive environments, like continuous deployment workflows.

  !> **Warning:** If you choose auto apply, make sure that no one can change your infrastructure outside of your automated build pipeline. This reduces the risk of configuration drift and unexpected changes.

- **Manual apply:** Terraform will ask for approval before applying changes from a successful plan. A push to the default branch of your repository will trigger a plan, and then Terraform will wait for confirmation.

### Automatic Run Triggering

HCP Terraform uses your VCS provider's API to retrieve the changed files in your repository. You can choose one of the following options to specify which changes trigger Terraform runs.

#### Always trigger runs

This option instructs Terraform to begin a run when changes are pushed to any file within the repository. This can be useful for repositories that do not have multiple configurations but require a working directory for some other reason.
However, we do not recommend this approach for true monorepos, as it queues unnecessary runs and slows down your ability to provision infrastructure.
#### Only trigger runs when files in specified paths change

This option instructs Terraform to begin new runs only for changes that affect specified files and directories. This behavior also applies to [speculative plans](/terraform/cloud-docs/workspaces/run/remote-operations#speculative-plans) on pull requests.

You can use trigger patterns and trigger prefixes in the **Add path** field to specify groups of files and directories.

- **Trigger Patterns:** (Recommended) Use glob patterns to specify the files that should trigger a new run. For example, `/submodule/**/*.tf` specifies all files with the `.tf` extension that are nested below the `submodule` directory. You can also use more complex patterns like `/**/networking/**/*`, which specifies all files that have a `networking` folder in their file path (e.g., `/submodule/service-1/networking/private/main.tf`). Note that glob patterns match hidden files and directories (names starting with `.`). Refer to [Glob Patterns for Automatic Run Triggering](#glob-patterns-for-automatic-run-triggering) for details.
- **Trigger Prefixes:** HCP Terraform will queue runs for changes in any of the specified trigger directories matching the provided prefixes (including the working directory). For example, if you use a top-level `modules` directory to share Terraform code across multiple configurations, changes to the shared modules are relevant to every workspace that uses that repository. You can add `modules` as a trigger directory for each workspace to track changes to shared code.

-> **Note:** HCP Terraform triggers runs on all attached workspaces if it does not receive a list of changed files or if that list is too large to process. When this happens, HCP Terraform may show several runs with completed plans that do not result in infrastructure changes.
#### Trigger runs when a git tag is published

This option instructs Terraform to begin new runs only for tags that match a specific format. You can choose between the following formats:

- **Semantic Versioning:** Matches tags in the popular [SemVer format](https://semver.org/). For example, `0.4.2`.
- **Version contains a prefix:** Matches tags that have an additional prefix before the [SemVer format](https://semver.org/). For example, `version-0.4.2`.
- **Version contains a suffix:** Matches tags that have an additional suffix after the [SemVer format](https://semver.org/). For example, `0.4.2-alpha`.
- **Custom Regular Expression:** You can define your own regex for HCP Terraform to match against tags.

You must include an additional `\` to escape the regex pattern when you manage your workspace with the [hashicorp/tfe provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/workspace#tags_regex) and trigger runs through matching git tags. Refer to [Terraform escape sequences](/terraform/language/expressions/strings#escape-sequences) for more details.

| Tag Format                    | Regex Pattern   | Regex Pattern (Escaped) |
| ----------------------------- | --------------- | ----------------------- |
| **Semantic Versioning**       | `^\d+.\d+.\d+$` | `^\\d+.\\d+.\\d+$`      |
| **Version contains a prefix** | `\d+.\d+.\d+$`  | `\\d+.\\d+.\\d+$`       |
| **Version contains a suffix** | `^\d+.\d+.\d+`  | `^\\d+.\\d+.\\d+`       |

HCP Terraform triggers runs for all tags matching this pattern, regardless of the value in the [VCS Branch](#vcs-branch) setting.

### VCS Branch

This setting designates which branch of the repository HCP Terraform should use when the workspace is set to [Always trigger runs](#always-trigger-runs) or [Only trigger runs when files in specified paths change](#only-trigger-runs-when-files-in-specified-paths-change).
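As a quick illustrative check (not part of HCP Terraform), the table's patterns behave as follows when applied with a regex engine. Note that the unescaped `.` in these patterns matches any single character, not only a literal dot:

```python
import re

# Patterns from the table above.
semver = re.compile(r"^\d+.\d+.\d+$")    # anchored at both ends
prefixed = re.compile(r"\d+.\d+.\d+$")   # unanchored start: allows a prefix
suffixed = re.compile(r"^\d+.\d+.\d+")   # unanchored end: allows a suffix

print(bool(semver.search("0.4.2")))            # True
print(bool(semver.search("version-0.4.2")))    # False: no prefix allowed
print(bool(prefixed.search("version-0.4.2")))  # True
print(bool(suffixed.search("0.4.2-alpha")))    # True
```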
If you leave this setting blank, HCP Terraform uses the repository's default branch. If the workspace is set to trigger runs when a [git tag is published](#trigger-runs-when-a-git-tag-is-published), all matching tags will trigger runs, regardless of the branch specified in this setting.

### Automatic Speculative Plans

This setting controls whether to perform [speculative plans on pull requests](/terraform/cloud-docs/workspaces/run/ui#speculative-plans-on-pull-requests) to the connected repository, to assist in reviewing proposed changes. Automatic speculative plans are enabled by default, but you can disable them for any workspace.
### Include Submodules on Clone

Select **Include submodules on clone** to recursively clone all of the repository's Git submodules when HCP Terraform fetches a configuration.

-> **Note:** The [SSH key for cloning Git submodules](/terraform/cloud-docs/vcs#ssh-keys) is set in the VCS provider settings for the organization and is not related to the workspace's SSH key for Terraform modules.

## Glob Patterns for Automatic Run Triggering

We support glob patterns to describe a set of triggers for automatic runs. Refer to [trigger patterns](#only-trigger-runs-when-files-in-specified-paths-change) for details.

Supported wildcards:

- `*` Matches zero or more characters.
- `?` Matches one or more characters.
- `**` Matches directories recursively.

The following examples demonstrate how to use the supported wildcards:

- `/**/*` matches every file in every directory
- `/module/**/*` matches all files in any directory below the `module` directory
- `/**/networking/*` matches every file that is inside any `networking` directory
- `/**/networking/**/*` matches every file that has a `networking` directory on its path
- `/**/*.tf` matches every file in any directory that has the `.tf` extension
- `/submodule/*.???` matches every file inside the `submodule` directory that has a three-character extension
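The wildcard semantics above can be sketched as a small glob-to-regex translation. This is a rough illustration, not HCP Terraform's matcher; for simplicity it treats `?` as exactly one character, consistent with the `/submodule/*.???` example:

```python
import re

def glob_to_regex(pattern: str) -> re.Pattern:
    """Translate a trigger-pattern glob into an anchored regex.

    `**` matches across directory separators, `*` matches within a
    single path segment, and `?` matches one character (a simplifying
    assumption; see the wildcard list above).
    """
    out, i = [], 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")   # crosses "/" boundaries
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")  # stays within one segment
            i += 1
        elif pattern[i] == "?":
            out.append("[^/]")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

print(bool(glob_to_regex("/**/*.tf").match("/submodule/main.tf")))        # True
print(bool(glob_to_regex("/module/**/*").match("/module/a/b.tf")))        # True
print(bool(glob_to_regex("/**/networking/*").match("/a/networking/x")))   # True
print(bool(glob_to_regex("/submodule/*.???").match("/submodule/x.tfz")))  # True
```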
[entitlement]: /terraform/cloud-docs/api-docs#feature-entitlements

# Run tasks

HCP Terraform run tasks let you directly integrate third-party tools and services at certain stages in the HCP Terraform run lifecycle. Use run tasks to validate Terraform configuration files, analyze execution plans before applying them, scan for security vulnerabilities, or perform other custom actions.

Run tasks send data about a run to an external service at [specific run stages](#understanding-run-tasks-within-a-run). The external service processes the data, evaluates whether the run passes or fails, and sends a response to HCP Terraform. HCP Terraform then uses this response and the run task enforcement level to determine if a run can proceed. [Explore run tasks in the Terraform registry](https://registry.terraform.io/browse/run-tasks).

@include 'tfc-package-callouts/run-tasks.mdx'

You can manage run tasks through the HCP Terraform UI or the [Run Tasks API](/terraform/cloud-docs/api-docs/run-tasks/run-tasks).

> **Hands-on:** Try the [HCP Packer validation run task](/packer/tutorials/hcp/setup-hcp-terraform-run-task) tutorial.

## Requirements

**Terraform Version** - You can assign run tasks to workspaces that use Terraform version 1.1.9 or later. You can downgrade a workspace with existing runs to a prior Terraform version without causing an error. However, HCP Terraform no longer triggers the run tasks during plan and apply operations.

**Permissions** - To create a run task, you must have a user account with the [Manage Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-run-tasks). To associate run tasks with a workspace, you need the [Manage Workspace Run Tasks permission](/terraform/cloud-docs/users-teams-organizations/permissions/workspace) on that particular workspace.

## Creating a Run Task

Explore the full list of [run tasks in the Terraform Registry](https://registry.terraform.io/browse/run-tasks).
Run tasks send an API payload to an external service. The API payload contains run-related information, including a callback URL, which the service uses to return a pass or fail status to HCP Terraform. For example, the [HCP Packer integration](/terraform/cloud-docs/integrations/run-tasks#hcp-packer-run-task) checks image artifacts within a Terraform configuration for validity. If the configuration references images marked as unusable (revoked), then the run task fails and provides an error message.

To create a new run task:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace where you want to create a run task.
1. Navigate to **Organization Settings** and select **Run Tasks**.
1. Click **Create a new run task**. The **Run Tasks** page appears.
1. Enter the information about the run task to be configured:
   - **Enabled** (optional): Whether the run task will run across all associated workspaces. New tasks are enabled by default.
   - **Name** (required): A human-readable name for the run task. This will be displayed in workspace configuration pages and can contain letters, numbers, dashes, and underscores.
   - **Endpoint URL** (required): The URL for the external service. Run tasks will `POST` the [run tasks payload](/terraform/cloud-docs/integrations/run-tasks#integration-details) to this URL.
   - **Description** (optional): A human-readable description for the run task. This information can contain letters, numbers, spaces, and special characters.
   - **HMAC key** (optional): A secret key that may be required by the external service to verify request authenticity.
1. Select a **Source**:
   - **Managed:** HCP Terraform's infrastructure initiates run task requests. This is the default option.
   - **Agent:** HCP Terraform can initiate run task requests within your self-managed HCP Terraform agents to let run tasks communicate with isolated, private, or on-premises infrastructure.
     To use this option, an HCP Terraform agent in the agent pool must have [request forwarding](/terraform/cloud-docs/agents/request-forwarding) enabled, and you must be on the [HCP Terraform **Premium** edition](https://www.hashicorp.com/products/terraform/pricing).
1. Click **Create run task**. The run task is now available within the organization, and you can associate it with one or more workspaces.
### Global Run Tasks

When you create a new run task, you can choose to apply it globally to every workspace in an organization. Your organization must have the `global-run-task` [entitlement][] to use global run tasks.

1. Select the **Global** checkbox.
1. Choose when HCP Terraform should start the run task:
   - **Pre-plan**: Before Terraform creates the plan.
   - **Post-plan**: After Terraform creates the plan.
   - **Pre-apply**: Before Terraform applies a plan.
   - **Post-apply**: After Terraform applies a plan.
1. Choose an enforcement level:
   - **Advisory**: Run tasks cannot block a run from completing. If the task fails, the run proceeds with a warning in the user interface.
   - **Mandatory**: Failed run tasks can block a run from completing. If the task fails (including timeouts or unexpected remote errors), the run stops and errors with a warning in the user interface.

## Associating Run Tasks with a Workspace

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise, and choose **Workspaces** from the sidebar.
1. Select the workspace that you want to associate with a run task.
1. Open the **Settings** menu and select **Run Tasks**.
1. Click the **+** next to the task you want to add to the workspace.
1. Choose when HCP Terraform should start the run task:
   - **Pre-plan**: Before Terraform creates the plan.
   - **Post-plan**: After Terraform creates the plan.
   - **Pre-apply**: Before Terraform applies a plan.
   - **Post-apply**: After Terraform applies a plan.
1. Choose an enforcement level:
   - **Advisory**: Run tasks cannot block a run from completing. If the task fails, the run will proceed with a warning in the UI.
   - **Mandatory**: Run tasks can block a run from completing.
     If the task fails (including a timeout or unexpected remote error condition), the run will transition to an Errored state with a warning in the UI.
1. Click **Create**. Your run task is now configured.

## Understanding Run Tasks Within a Run

Run tasks perform actions before and after the [plan](/terraform/cloud-docs/workspaces/run/states#the-plan-stage) and [apply](/terraform/cloud-docs/workspaces/run/states#the-apply-stage) stages of a [Terraform run](/terraform/cloud-docs/workspaces/run/remote-operations). Once all run tasks complete, the run ends based on the most restrictive enforcement level among the associated run tasks. For example, if a mandatory task fails and an advisory task succeeds, the run fails. If an advisory task fails, but a mandatory task succeeds, the run succeeds and proceeds to the apply stage. Regardless of the exit status of a task, HCP Terraform displays the status and any related message data in the UI.

## Removing a Run Task from a Workspace

Removing a run task from a workspace does not delete it from the organization. To remove a run task from a specific workspace:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace where you want to remove a run task.
1. Choose **Settings** from the sidebar, then **Run Tasks**.
1. Click the ellipses (...) on the associated run task, and then click **Remove**. The run task will no longer be applied to runs within the workspace.

## Deleting a Run Task

You must remove a run task from all associated workspaces before you can delete it. To delete a run task:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the workspace associated with the run task you want to delete.
1. Choose **Settings** from the sidebar, then **Run Tasks**.
1. Click the ellipses (...) next to the run task you want to delete, and then click **Edit**.
1. Click **Delete run task**.

You cannot delete run tasks that are still associated with a workspace. If you attempt this, you will see a warning in the UI containing a list of all workspaces that are associated with the run task.
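The run-outcome rule described under "Understanding Run Tasks Within a Run" can be sketched as follows. This is a hypothetical illustration of the documented behavior, not HCP Terraform's implementation:

```python
# Each task result is a (enforcement_level, passed) pair. A run errors
# only when a mandatory task fails; advisory failures produce warnings
# but do not block the run, which matches the description above.
def run_outcome(task_results):
    if any(level == "mandatory" and not passed for level, passed in task_results):
        return "errored"
    return "passed"

print(run_outcome([("mandatory", False), ("advisory", True)]))  # errored
print(run_outcome([("advisory", False), ("mandatory", True)]))  # passed
```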
0.0004278031119611114,
0.0063525475561618805,
0.010957895778119564,
0.0018962026806548238,
0.03347879648208618,
-0.044016458094120026,
-0.006331958808004856,
-0.12837380170822144,
0.03223869949579239,
0.061370763927698135,
-0.0774676576256752,
-0.034968599677085876,
0.050503797829151154,
0... | 0.046866 |
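The enforcement behavior described above can be modeled with a short sketch. This is illustrative only, not HCP Terraform's implementation: a run fails when any mandatory task fails, while failed advisory tasks only surface warnings.

```python
# Illustrative model of run task enforcement-level resolution.
# A run proceeds to apply only if every *mandatory* task passed;
# failed *advisory* tasks merely produce warnings.

def resolve_run_outcome(task_results):
    """task_results: list of (enforcement_level, passed) tuples."""
    warnings = [
        "advisory task failed"
        for level, passed in task_results
        if level == "advisory" and not passed
    ]
    run_passed = all(
        passed for level, passed in task_results if level == "mandatory"
    )
    return run_passed, warnings

# A failed mandatory task fails the run even when advisory tasks pass:
print(resolve_run_outcome([("mandatory", False), ("advisory", True)])[0])  # False
# A failed advisory task does not block the run:
print(resolve_run_outcome([("advisory", False), ("mandatory", True)])[0])  # True
```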
# GCP resources included in cost estimation

HCP Terraform can estimate monthly costs for many GCP Terraform resources.

-> **Note:** Terraform Enterprise requires GCP credentials to support cost estimation. These credentials are configured at the instance level, not the organization level. See the [Application Administration docs](/terraform/enterprise/admin/application/integration) for more details.

## Supported Resources

Cost estimation supports the following resources. Not all possible values for attributes of each resource are supported; for example, new or custom machine types.

| Resource | Incurs Cost |
| ----------- | ----------- |
| `google_compute_disk` | X |
| `google_compute_instance` | X |
| `google_sql_database_instance` | X |
| `google_billing_account_iam_member` | |
| `google_compute_address` | |
| `google_compute_subnetwork_iam_member` | |
| `google_folder_iam_member` | |
| `google_folder_iam_policy` | |
| `google_kms_crypto_key_iam_member` | |
| `google_kms_key_ring_iam_member` | |
| `google_kms_key_ring_iam_policy` | |
| `google_organization_iam_member` | |
| `google_project` | |
| `google_project_iam_member` | |
| `google_project_iam_policy` | |
| `google_project_service` | |
| `google_pubsub_subscription_iam_member` | |
| `google_pubsub_subscription_iam_policy` | |
| `google_pubsub_topic_iam_member` | |
| `google_service_account` | |
| `google_service_account_iam_member` | |
| `google_service_account_key` | |
| `google_storage_bucket_iam_member` | |
| `google_storage_bucket_iam_policy` | |
# AWS resources included in cost estimation

HCP Terraform can estimate monthly costs for many AWS Terraform resources.

-> **Note:** Terraform Enterprise requires AWS credentials to support cost estimation. These credentials are configured at the instance level, not the organization level. See the [Application Administration docs](/terraform/enterprise/admin/application/integration) for more details.

## Supported Resources

Cost estimation supports the following resources. Not all possible values for attributes of each resource are supported; for example, newer instance types or EBS volume types.

| Resource | Incurs Cost |
| ----------- | ----------- |
| `aws_alb` | X |
| `aws_cloudhsm_v2_hsm` | X |
| `aws_cloudwatch_dashboard` | X |
| `aws_cloudwatch_metric_alarm` | X |
| `aws_db_instance` | X |
| `aws_dynamodb_table` | X |
| `aws_ebs_volume` | X |
| `aws_elasticache_cluster` | X |
| `aws_elasticsearch_domain` | X |
| `aws_elb` | X |
| `aws_instance` | X |
| `aws_kms_key` | X |
| `aws_lb` | X |
| `aws_rds_cluster_instance` | X |
| `aws_acm_certificate_validation` | |
| `aws_alb_listener` | |
| `aws_alb_listener_rule` | |
| `aws_alb_target_group` | |
| `aws_alb_target_group_attachment` | |
| `aws_api_gateway_api_key` | |
| `aws_api_gateway_deployment` | |
| `aws_api_gateway_integration` | |
| `aws_api_gateway_integration_response` | |
| `aws_api_gateway_method` | |
| `aws_api_gateway_method_response` | |
| `aws_api_gateway_resource` | |
| `aws_api_gateway_usage_plan_key` | |
| `aws_appautoscaling_policy` | |
| `aws_appautoscaling_target` | |
| `aws_autoscaling_group` | |
| `aws_autoscaling_lifecycle_hook` | |
| `aws_autoscaling_policy` | |
| `aws_cloudformation_stack` | |
| `aws_cloudfront_distribution` | |
| `aws_cloudfront_origin_access_identity` | |
| `aws_cloudwatch_event_rule` | |
| `aws_cloudwatch_event_target` | |
| `aws_cloudwatch_log_group` | |
| `aws_cloudwatch_log_metric_filter` | |
| `aws_cloudwatch_log_stream` | |
| `aws_cloudwatch_log_subscription_filter` | |
| `aws_codebuild_webhook` | |
| `aws_codedeploy_deployment_group` | |
| `aws_cognito_identity_provider` | |
| `aws_cognito_user_pool` | |
| `aws_cognito_user_pool_client` | |
| `aws_cognito_user_pool_domain` | |
| `aws_config_config_rule` | |
| `aws_customer_gateway` | |
| `aws_db_parameter_group` | |
| `aws_db_subnet_group` | |
| `aws_dynamodb_table_item` | |
| `aws_ecr_lifecycle_policy` | |
| `aws_ecr_repository_policy` | |
| `aws_ecs_cluster` | |
| `aws_ecs_task_definition` | |
| `aws_efs_mount_target` | |
| `aws_eip_association` | |
| `aws_elastic_beanstalk_application` | |
| `aws_elastic_beanstalk_application_version` | |
| `aws_elastic_beanstalk_environment` | |
| `aws_elasticache_parameter_group` | |
| `aws_elasticache_subnet_group` | |
| `aws_flow_log` | |
| `aws_iam_access_key` | |
| `aws_iam_account_alias` | |
| `aws_iam_account_password_policy` | |
| `aws_iam_group` | |
| `aws_iam_group_membership` | |
| `aws_iam_group_policy` | |
| `aws_iam_group_policy_attachment` | |
| `aws_iam_instance_profile` | |
| `aws_iam_policy` | |
| `aws_iam_policy_attachment` | |
| `aws_iam_role` | |
| `aws_iam_role_policy` | |
| `aws_iam_role_policy_attachment` | |
| `aws_iam_saml_provider` | |
| `aws_iam_service_linked_role` | |
| `aws_iam_user` | |
| `aws_iam_user_group_membership` | |
| `aws_iam_user_login_profile` | |
| `aws_iam_user_policy` | |
| `aws_iam_user_policy_attachment` | |
| `aws_iam_user_ssh_key` | |
| `aws_internet_gateway` | |
| `aws_key_pair` | |
| `aws_kms_alias` | |
| `aws_lambda_alias` | |
| `aws_lambda_event_source_mapping` | |
| `aws_lambda_function` | |
| `aws_lambda_layer_version` | |
| `aws_lambda_permission` | |
| `aws_launch_configuration` | |
| `aws_lb_listener` | |
| `aws_lb_listener_rule` | |
| `aws_lb_target_group` | |
| `aws_lb_target_group_attachment` | |
| `aws_network_acl` | |
| `aws_network_acl_rule` | |
| `aws_network_interface` | |
| `aws_placement_group` | |
| `aws_rds_cluster_parameter_group` | |
| `aws_route` | |
| `aws_route53_record` | |
| `aws_route53_zone_association` | |
| `aws_route_table` | |
| `aws_route_table_association` | |
| `aws_s3_bucket` | |
| `aws_s3_bucket_notification` | |
| `aws_s3_bucket_object` | |
| `aws_s3_bucket_policy` | |
| `aws_s3_bucket_public_access_block` | |
| `aws_security_group` | |
| `aws_security_group_rule` | |
| `aws_service_discovery_service` | |
| `aws_sfn_state_machine` | |
| `aws_sns_topic` | |
| `aws_sns_topic_subscription` | |
| `aws_sqs_queue` | |
| `aws_sqs_queue_policy` | |
| `aws_ssm_maintenance_window` | |
| `aws_ssm_maintenance_window_target` | |
| `aws_ssm_maintenance_window_task` | |
| `aws_ssm_parameter` | |
| `aws_subnet` | |
| `aws_volume_attachment` | |
| `aws_vpc` | |
| `aws_vpc_dhcp_options` | |
| `aws_vpc_dhcp_options_association` | |
| `aws_vpc_endpoint` | |
| `aws_vpc_endpoint_route_table_association` | |
| `aws_vpc_endpoint_service` | |
| `aws_vpc_ipv4_cidr_block_association` | |
| `aws_vpc_peering_connection_accepter` | |
| `aws_vpc_peering_connection_options` | |
| `aws_vpn_connection_route` | |
| `aws_waf_ipset` | |
| `aws_waf_rule` | |
| `aws_waf_web_acl` | |
# Cost estimation overview

HCP Terraform provides cost estimates for many resources found in your Terraform configuration. For each resource, an hourly and monthly cost is shown, along with the monthly delta. The total cost and delta of all estimable resources is also shown.

## Enabling Cost Estimation

HCP Terraform disables cost estimation by default. To enable cost estimation:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise, then navigate to the organization where you want to enable cost estimation.
1. Choose **Settings** from the sidebar, then **Cost Estimation**.
1. Toggle the **Enable cost estimation for all workspaces** setting.
1. Click **Update settings**.

## Viewing a Cost Estimate

When enabled, HCP Terraform performs a cost estimate for every run. Estimated costs appear in the run UI as an extra run phase, between the plan and apply. The estimate displays a total monthly cost by default; you can expand the estimate to see an itemized list of resource costs, as well as the list of unestimated resources. Note that this is just an estimate; some resources don't have cost information available or have unpredictable usage-based pricing. Supported resources are listed in this document's sub-pages.

## Verifying Costs in Policies

You can use a Sentinel policy to validate your configuration's cost estimates using the [`tfrun`](/terraform/cloud-docs/policy-enforcement/import-reference/tfrun) import. The example policy below checks that the new cost delta is no more than $100. A new `t3.nano` instance should be well below that. A `decimal` import is available for more accurate math when working with currency values.

Cost estimation is only available in policy checks, HCP Terraform's **Legacy** policy execution mode. Policy checks support Sentinel versions up to 0.40.x, but do not support newer Sentinel versions.

```sentinel
import "tfrun"
import "decimal"

delta_monthly_cost = decimal.new(tfrun.cost_estimate.delta_monthly_cost)

if delta_monthly_cost.greater_than(100) {
  print("This policy prevents a user from increasing their spending by more than $100 per month in a single run without a warning.")
}

main = rule {
  delta_monthly_cost.less_than_or_equals(100)
}
```

## Supported Resources

Cost estimation in HCP Terraform supports Terraform resources within three major cloud providers.

- [AWS](/terraform/cloud-docs/cost-estimation/aws)
- [GCP](/terraform/cloud-docs/cost-estimation/gcp)
- [Azure](/terraform/cloud-docs/cost-estimation/azure)
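For reference, the same $100 delta check can be expressed with Python's standard `decimal` module. This is a hand-written analogue of the Sentinel cost policy, not HCP Terraform code, and the `cost_estimate` dictionary below is a simplified stand-in for the `tfrun` data:

```python
from decimal import Decimal

# Simplified stand-in for tfrun.cost_estimate (hypothetical sample value).
cost_estimate = {"delta_monthly_cost": "72.45"}

def policy_passes(estimate, limit=Decimal("100")):
    # Parse the cost as Decimal to avoid binary floating-point
    # rounding errors when comparing currency values.
    delta = Decimal(estimate["delta_monthly_cost"])
    return delta <= limit

print(policy_passes(cost_estimate))                     # True
print(policy_passes({"delta_monthly_cost": "150.00"}))  # False
```

Like the Sentinel `decimal` import, `decimal.Decimal` keeps currency arithmetic exact, which is why both examples avoid plain floats.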
# Azure resources included in cost estimation

HCP Terraform can estimate monthly costs for many Azure Terraform resources.

-> **Note:** Terraform Enterprise requires Azure credentials to support cost estimation. These credentials are configured at the instance level, not the organization level. See the [Application Administration docs](/terraform/enterprise/admin/application/integration) for more details.

## Supported Resources

Cost estimation supports the following resources. Not all possible values for attributes of each resource are supported; for example, newer VM sizes or managed disk types.

| Resource | Incurs Cost |
| ----------- | ----------- |
| `azurerm_app_service_custom_hostname_binding` | X |
| `azurerm_app_service_environment` | X |
| `azurerm_app_service_plan` | X |
| `azurerm_app_service_virtual_network_swift_connection` | X |
| `azurerm_cosmosdb_sql_database` | X |
| `azurerm_databricks_workspace` | X |
| `azurerm_firewall` | X |
| `azurerm_hdinsight_hadoop_cluster` | X |
| `azurerm_hdinsight_hbase_cluster` | X |
| `azurerm_hdinsight_interactive_query_cluster` | X |
| `azurerm_hdinsight_kafka_cluster` | X |
| `azurerm_hdinsight_spark_cluster` | X |
| `azurerm_integration_service_environment` | X |
| `azurerm_linux_virtual_machine` | X |
| `azurerm_linux_virtual_machine_scale_set` | X |
| `azurerm_managed_disk` | X |
| `azurerm_mariadb_server` | X |
| `azurerm_mssql_elasticpool` | X |
| `azurerm_mysql_server` | X |
| `azurerm_postgresql_server` | X |
| `azurerm_sql_database` | X |
| `azurerm_virtual_machine` | X |
| `azurerm_virtual_machine_scale_set` | X |
| `azurerm_windows_virtual_machine` | X |
| `azurerm_windows_virtual_machine_scale_set` | X |
| `azurerm_app_service` | |
| `azurerm_cosmosdb_account` | |
| `azurerm_cosmosdb_sql_container` | |
| `azurerm_cosmosdb_table` | |
| `azurerm_mysql_database` | |
| `azurerm_network_security_group` | |
| `azurerm_postgresql_database` | |
| `azurerm_resource_group` | |
| `azurerm_sql_server` | |
| `azurerm_sql_virtual_network_rule` | |
| `azurerm_subnet` | |
| `azurerm_subnet_route_table_association` | |
| `azurerm_virtual_network` | |
# Change requests overview

@include 'beta/explorer.mdx'

Change requests are a way to create a backlog of action items recorded on a workspace, enabling administrators to notify teams directly if a workspace requires action. Workspace action items can include updating [deprecated or revoked module versions](/terraform/cloud-docs/registry/manage-module-versions), security updates, bugs, or compliance fixes. Change requests let you record tasks directly on the workspaces that need those changes. When someone addresses a change request, they can archive it to reflect its completion.

@include 'tfc-package-callouts/change-requests.mdx'

## Introduction

The [explorer for workspace visibility](/terraform/cloud-docs/workspaces/explorer) helps surface valuable information across your organization. While browsing data about workspaces in the explorer, you may find workspaces that need to be updated or fixed. You can keep your context and create a change request directly from the explorer to leave action items on workspaces. You can also [save a view in the explorer](/terraform/cloud-docs/workspaces/explorer#save-a-view) if you want to revisit workspaces for which you created a change request, or if you need to regularly create change requests for specific workspaces.

If a specific person or team owns a workspace, you can set up and configure a team notification to directly notify that person or team if one of their workspaces has a new change request. Refer to [Team notifications](/terraform/cloud-docs/users-teams-organizations/teams/notifications) to learn more.

## Primary workflow

Administrators can [create new change requests](/terraform/cloud-docs/workspaces/change-requests/manage#create-a-change-request) directly from explorer queries on workspace data. After selecting the workspaces to include in the request, they can write a message describing the details and goal of that change request. Workspaces manage and track their change requests directly, and team members can [archive a change request](/terraform/cloud-docs/workspaces/change-requests/manage#archive-existing-change-requests) once they've completed that request's task.

@include 'beta/explorer-limitations.mdx'
# Manage change requests

@include 'beta/explorer.mdx'

This topic describes how to create and manage change requests. Refer to [Change requests overview](/terraform/cloud-docs/workspaces/change-requests) for additional information about change requests.

@include 'tfc-package-callouts/change-requests.mdx'

## Requirements

To create a change request, you must be a member of a team with one of the following permissions:

* The [owners' team](/terraform/cloud-docs/users-teams-organizations/permissions/organization#organization-owners)
* A team with [**Manage all projects**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-projects)
* A team with [**Manage all workspaces**](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-all-workspaces)

To view change requests, you must have at least [**Read** permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#read-role) on the workspace associated with that change request. To archive change requests, you must have at least [**Write** permissions](/terraform/cloud-docs/users-teams-organizations/permissions/workspace#write-role) on the workspace associated with that change request.

## Create a change request

The first step in making a change request is to specify which workspaces the change request applies to. You can use the explorer to query workspace data, then select specific workspaces to assign to your change request. To create a change request, perform the following steps:

1. Open the [explorer and perform a query](/terraform/cloud-docs/workspaces/explorer) on your workspace data.
1. Choose the workspaces you want to include in your change request using one of the following methods:
   * Select or deselect individual workspace names.
   * Select the box in the **Name** column to choose all workspaces visible in the table.
   * Select the box in the **Name** column to choose all workspaces visible in the table, then click **Select all workspaces from query result** to choose all of the workspaces in this query.
1. Click **Create change request**.
1. Write a subject for your change request in the **Subject** field. We recommend using a standard subject format so that the people processing change requests can easily identify yours.
1. Write a description for your request in the **Description** field. You can use markdown to format the description, and you can preview your change request's text by opening the **Preview** tab.
1. Click **Create change request**.

After creating your change request, HCP Terraform notifies you if it has successfully received that request. Depending on the number of workspaces you selected, HCP Terraform may take a few minutes to create the change request.

## View change requests

Open a workspace and select **Change requests** from the side navigation to display a list of that workspace's active change requests. You can view the full description of a change request by clicking on that request. If someone creates a change request on multiple workspaces, each workspace has its own independent copy of that change request. The change request navigation displays a badge indicating how many unarchived change requests exist on a workspace.

The table for active change requests displays the following attributes for each request:

* Subject
* Description
* Creation date and time

Opening the **Archived** tab displays the archived change requests for this workspace. The table for archived change requests displays the following attributes for each request:

* Subject
* Description
* Date the request was archived

You can find who archived a particular change request by clicking that request and viewing the expanded details.

## Archive existing change requests

Once you have completed the requested actions on a workspace, you can archive a change request to reflect its completion. Each workspace has its own independent copy of a change request, so archiving a change request on one workspace does not affect the affiliated change request on others. To archive a change request in a workspace, perform the following steps:

1. Navigate to a workspace's **Change requests** page.
1. Click on the ellipsis menu next to a change request.
1. Click **Archive**.

You can also archive a change request by navigating to a workspace's **Change requests** page, opening an individual change request, and clicking **Archive**.

@include 'beta/explorer-limitations.mdx'
# HCP Terraform policy enforcement overview

This topic provides overview information about policies in HCP Terraform. Policies are rules for Terraform runs that let you validate that Terraform plans comply with security rules and best practices.

@include 'tfc-package-callouts/policies.mdx'

> **Hands-on:** Try the [Enforce Policy with Sentinel](/terraform/tutorials/policy) and [Detect Infrastructure Drift and Enforce OPA Policies](/terraform/tutorials/cloud/drift-and-policy) tutorials.

## Introduction

You can implement policies that check for any number of conditions, such as whether infrastructure configuration adheres to security standards or best practices. For example, you may want to write a policy to check whether Terraform plans to deploy production infrastructure to the correct region.

You can also use policies to enforce standards for your organization's workflows. For example, you could write a policy to prevent new infrastructure deployments on Fridays, reducing the risk of production incidents outside of your team's working hours.

## Workflow

The following workflow describes how to create and manage policies manually.

### Define policy

You can use either the Sentinel or OPA framework to create custom policies. You can also copy pre-written Sentinel policies created and maintained by HashiCorp.

### Create and apply policy sets

Policy sets are collections of policies you can apply globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces in your organization. For each run in the selected workspaces, HCP Terraform checks the Terraform plan against the policy set.

You can also exclude specific workspaces from global or project-scoped policy sets. HCP Terraform won't enforce a policy set's policies on any runs in an excluded workspace. For example, if you attach a policy set to a project and then exclude one of the project's workspaces from that policy set, HCP Terraform will not enforce the policy set on the excluded workspace.

You can create policy sets from the [user interface](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#create-policy-sets), the API, or by connecting HCP Terraform to your version control system. A policy set can only contain policies written in a single policy framework, but you can add Sentinel or OPA policy sets to the same workspace. Refer to [Managing Policy Sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets) for details.

### Review policy results

The HCP Terraform UI displays policy results for each policy set you apply to the workspace. Depending on their [enforcement level](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#policy-enforcement-levels), failed policies can stop the run. You can override failed policies with the right permissions. Refer to [Policy Results](/terraform/cloud-docs/policy-enforcement/view-results) for details.
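The scoping rules for policy sets (global or project-scoped, minus excluded workspaces) can be sketched as a small resolution function. The data model here is hypothetical and only illustrates the logic; it is not HCP Terraform's internal representation:

```python
def enforced_policy_sets(workspace, project, policy_sets):
    """Return the names of policy sets that apply to a workspace.

    policy_sets: list of dicts with 'name', 'scope' ('global' or 'project'),
    'projects' (for project scope), and 'excluded_workspaces'.
    """
    applied = []
    for ps in policy_sets:
        if workspace in ps.get("excluded_workspaces", []):
            continue  # an exclusion overrides any scope
        if ps["scope"] == "global":
            applied.append(ps["name"])
        elif ps["scope"] == "project" and project in ps.get("projects", []):
            applied.append(ps["name"])
    return applied

sets = [
    {"name": "org-baseline", "scope": "global",
     "excluded_workspaces": ["sandbox"]},
    {"name": "prod-only", "scope": "project", "projects": ["prod"],
     "excluded_workspaces": []},
]
print(enforced_policy_sets("web-prod", "prod", sets))  # ['org-baseline', 'prod-only']
print(enforced_policy_sets("sandbox", "prod", sets))   # ['prod-only']
```

The second call shows the exclusion rule: `sandbox` is excluded from the global set, so only the project-scoped set still applies.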
# Pre-written policy library reference

This topic provides reference information about the Sentinel policy libraries that HashiCorp authors and maintains. For instructions on how to run the policy libraries, refer to [Run pre-written Sentinel policies](/terraform/cloud-docs/policy-enforcement/prewritten-sentinel).

## AWS policies

HashiCorp publishes pre-written policies for the following AWS standards.

### Center for Internet Security (CIS)

The Center for Internet Security (CIS) is a non-profit organization that publishes prescriptive guidance for configuring secure cloud services. Refer to the [CIS website](https://www.cisecurity.org) for additional information. CIS refers to its standards as benchmarks. HashiCorp publishes pre-written policies that support the following CIS benchmarks for AWS:

- Amazon Web Services Foundations version 1.2. Refer to the [AWS documentation](https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html#cis1v2-standard) for additional information about this version.
- Amazon Web Services Foundations version 1.4. Refer to the [AWS documentation](https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html#cis1v4-standard) for additional information about this version.
- Amazon Web Services Foundations version 3.0. Refer to the [AWS documentation](https://docs.aws.amazon.com/securityhub/latest/userguide/cis-aws-foundations-benchmark.html#cis3v0-standard) for additional information about this version.

Refer to the [CIS policy set for AWS GitHub repository](https://github.com/hashicorp/policy-library-CIS-Policy-Set-for-AWS-Terraform) for details about these policies.

### Foundational Security Best Practices (FSBP)

The Foundational Security Best Practices (FSBP) standard enforces security best practices on AWS resources. HashiCorp publishes pre-written policies that support the following AWS FSBP standards:

- AWS Foundational Security Best Practices v1.0.0. Refer to the [AWS documentation](https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html) for additional information.

Refer to the [AWS FSBP policy set repository](https://github.com/hashicorp/policy-library-FSBP-Policy-Set-for-AWS-Terraform/) for details about these policies.

### PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is a set of rules for protecting payment data throughout the data's lifecycle. Compliance with PCI DSS is mandatory for organizations that handle credit card information. Refer to the [PCI DSS website](https://www.pcisecuritystandards.org/standards/) for more information about the standard. Refer to the [PCI DSS policy set repository](https://github.com/hashicorp/policy-library-pcidss-policy-set-for-aws-terraform) for details about the policies HashiCorp publishes and maintains.

### NIST SP 800-53 Revision 5

The NIST Special Publication 800-53 Revision 5 (NIST SP 800-53 Rev. 5) framework provides a catalog of security and privacy requirements for protecting the confidentiality, integrity, and availability of information systems and critical resources. Refer to the [AWS NIST SP 800-53 documentation](https://docs.aws.amazon.com/securityhub/latest/userguide/standards-reference-nist-800-53.html) for information about the AWS implementation. Refer to the [Pre-written Sentinel Policies for AWS NIST Foundations Benchmarking repository](https://github.com/hashicorp/policy-library-NIST-Policy-Set-for-AWS-Terraform) for details about these policies.
# Generate mock Sentinel data with Terraform We recommend that you test your Sentinel policies extensively before deploying them within HCP Terraform. An important part of this process is mocking the data that you wish your policies to operate on. @include 'tfc-package-callouts/policies.mdx' Due to the highly variable structure of data that can be produced by an individual Terraform configuration, HCP Terraform provides the ability to generate mock data from existing configurations. This can be used to create sample data for a new policy, or data to reproduce issues in an existing one. Testing policies is done using the [Sentinel CLI](/sentinel/docs/commands). More general information on testing Sentinel policies can be found in the [Testing section](/sentinel/docs/writing/testing) of the [Sentinel runtime documentation](https://docs.hashicorp.com/sentinel). ~> \*\*Be careful!\*\* Mock data generated by HCP Terraform directly exposes any and all data within the configuration, plan, and state. Terraform attempts to scrub sensitive data from these mocks, but we do not guarantee 100% accuracy. Treat this data with care, and avoid generating mocks with live sensitive data when possible. Access to this information requires [permission to download Sentinel mocks](/terraform/cloud-docs/users-teams-organizations/permissions) for the workspace where the data was generated. [permissions-citation]: #intentionally-unused---keep-for-maintainers ## Generating Mock Data Using the UI Mock data can be generated using the UI by expanding the plan status section of the run page, and clicking on the \*\*Download Sentinel mocks\*\* button.  For more information on creating a run, see the [Terraform Runs and Remote Operations](/terraform/cloud-docs/workspaces/run/remote-operations) section of the docs. If the button is not visible, then the plan is ineligible for mock generation or the user doesn't have the necessary permissions. 
See [Mock Data Availability](#mock-data-availability) for more details.

## Generating Mock Data Using the API

Mock data can also be created with the [Plan Export API](/terraform/cloud-docs/api-docs/plan-exports). Multiple steps are required for mock generation. The export process is asynchronous, so you must monitor the request to know when the data is generated and available for download.

1. Get the plan ID for the run that you want to generate the mock for by [getting the run details](/terraform/cloud-docs/api-docs/run#get-run-details). Look for the `id` of the `plan` object within the `relationships` section of the return data.
1. [Request a plan export](/terraform/cloud-docs/api-docs/plan-exports#create-a-plan-export) using the discovered plan ID. Supply the Sentinel export type `sentinel-mock-bundle-v0`.
1. Monitor the export request by [viewing the plan export](/terraform/cloud-docs/api-docs/plan-exports#show-a-plan-export). When the status is `finished`, the data is ready for download.
1. Finally, [download the export data](/terraform/cloud-docs/api-docs/plan-exports#download-exported-plan-data). You have up to an hour from the completion of the export request; after that, the mock data expires and must be re-generated.

## Using Mock Data

-> **Note:** The v2 mock files are only available on Terraform 0.12 and higher.

Mock data is supplied as a bundled tarball, containing the following files:

```
mock-tfconfig.sentinel    # tfconfig mock data
mock-tfconfig-v2.sentinel # tfconfig/v2 mock data
mock-tfplan.sentinel      # tfplan mock data
mock-tfplan-v2.sentinel   # tfplan/v2 mock data
mock-tfstate.sentinel     # tfstate mock data
mock-tfstate-v2.sentinel  # tfstate/v2 mock data
mock-tfrun.sentinel       # tfrun mock data
sentinel.hcl              # sample configuration file
```

The sample `sentinel.hcl` file contains mappings to the mocks so that you can get started testing with `sentinel apply` right away. For `sentinel test`, however, we recommend a more detailed layout.
We recommend placing the files for `sentinel test` in a subdirectory of the repository holding your policies, so they don't interfere
with the command's automatic policy detection. While the test data is Sentinel code, it's not a policy and will produce errors if evaluated like one.

```
.
├── foo.sentinel
├── sentinel.hcl
├── test
│   └── foo
│       ├── fail.hcl
│       └── pass.hcl
└── testdata
    ├── mock-tfconfig.sentinel
    ├── mock-tfconfig-v2.sentinel
    ├── mock-tfplan.sentinel
    ├── mock-tfplan-v2.sentinel
    ├── mock-tfstate.sentinel
    ├── mock-tfstate-v2.sentinel
    └── mock-tfrun.sentinel
```

Each configuration that needs access to the mock should reference the mock data files within the `mock` block in the Sentinel configuration file.

For `sentinel apply`, this path is relative to the working directory. Assuming you always run this command from the repository root, the `sentinel.hcl` configuration file would look like:

```hcl
mock "tfconfig" {
  module {
    source = "testdata/mock-tfconfig.sentinel"
  }
}

mock "tfconfig/v1" {
  module {
    source = "testdata/mock-tfconfig.sentinel"
  }
}

mock "tfconfig/v2" {
  module {
    source = "testdata/mock-tfconfig-v2.sentinel"
  }
}

mock "tfplan" {
  module {
    source = "testdata/mock-tfplan.sentinel"
  }
}

mock "tfplan/v1" {
  module {
    source = "testdata/mock-tfplan.sentinel"
  }
}

mock "tfplan/v2" {
  module {
    source = "testdata/mock-tfplan-v2.sentinel"
  }
}

mock "tfstate" {
  module {
    source = "testdata/mock-tfstate.sentinel"
  }
}

mock "tfstate/v1" {
  module {
    source = "testdata/mock-tfstate.sentinel"
  }
}

mock "tfstate/v2" {
  module {
    source = "testdata/mock-tfstate-v2.sentinel"
  }
}

mock "tfrun" {
  module {
    source = "testdata/mock-tfrun.sentinel"
  }
}
```

For `sentinel test`, the paths are relative to the specific test configuration file.
For example, the contents of `pass.hcl`, asserting that the result of the `main` rule was `true`, would be:

```
mock "tfconfig" {
  module {
    source = "../../testdata/mock-tfconfig.sentinel"
  }
}

mock "tfconfig/v1" {
  module {
    source = "../../testdata/mock-tfconfig.sentinel"
  }
}

mock "tfconfig/v2" {
  module {
    source = "../../testdata/mock-tfconfig-v2.sentinel"
  }
}

mock "tfplan" {
  module {
    source = "../../testdata/mock-tfplan.sentinel"
  }
}

mock "tfplan/v1" {
  module {
    source = "../../testdata/mock-tfplan.sentinel"
  }
}

mock "tfplan/v2" {
  module {
    source = "../../testdata/mock-tfplan-v2.sentinel"
  }
}

mock "tfstate" {
  module {
    source = "../../testdata/mock-tfstate.sentinel"
  }
}

mock "tfstate/v1" {
  module {
    source = "../../testdata/mock-tfstate.sentinel"
  }
}

mock "tfstate/v2" {
  module {
    source = "../../testdata/mock-tfstate-v2.sentinel"
  }
}

mock "tfrun" {
  module {
    source = "../../testdata/mock-tfrun.sentinel"
  }
}

test {
  rules = {
    main = true
  }
}
```

## Mock Data Availability

The following factors can prevent you from generating mock data:

* You do not have permission to download Sentinel mocks for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions)) Permission is required to protect the possibly sensitive data which can be produced via mock generation.
* The run has not progressed past the planning stage, or did not create a plan successfully.
* The run progressed past the planning stage prior to July 23, 2021. Prior to this date, HCP Terraform only kept JSON plans for 7 days.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

If a plan cannot have its mock data exported due to any of these reasons, the **Download Sentinel mocks** button within the plan status section of the UI will not be visible.

-> **Note:** Only a successful plan is required for mock generation. Sentinel can still generate the data if apply or policy checks fail.
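As a concrete sketch of step 2 in the API workflow described earlier, the plan export request sends a JSON:API payload that names the plan and the `sentinel-mock-bundle-v0` export type. The plan ID below is a hypothetical placeholder; refer to the [Plan Export API](/terraform/cloud-docs/api-docs/plan-exports) for the authoritative request format:

```json
{
  "data": {
    "type": "plan-exports",
    "attributes": {
      "data-type": "sentinel-mock-bundle-v0"
    },
    "relationships": {
      "plan": {
        "data": { "id": "plan-ExAmPLePLaNiD12", "type": "plans" }
      }
    }
  }
}
```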
# Run pre-written Sentinel policies

This topic describes how to run Sentinel policies created and maintained by HashiCorp. For instructions about how to create your own custom Sentinel policies, refer to [Define custom Sentinel policies](/terraform/cloud-docs/policy-enforcement/define-policies/custom-sentinel).

## Overview

Pre-written Sentinel policy libraries streamline your compliance processes and enhance security across your infrastructure. HashiCorp's ready-to-use policies can help you enforce best practices and security standards across your AWS environment.

Complete the following steps to implement pre-written Sentinel policies in your workspaces:

1. Obtain the policies you want to implement. Download policies directly into your repository or create a fork of the HashiCorp repositories.
1. Connect policies to your workspace. After you download policies or fork policy repositories, you must connect them to your HCP Terraform or Terraform Enterprise workspaces.

Refer to the [Sentinel documentation](/sentinel/docs) for information about the Sentinel language.

## Requirements

You must use one of the following Terraform applications:

- HCP Terraform
- Terraform Enterprise v202406-1 or newer

### Permissions

To create new policy sets and policies, your HCP Terraform or Terraform Enterprise user account must either be a member of the owners team or have the **Manage Policies** organization-level permissions enabled. Refer to the following topics for additional information:

- [Organization owners](/terraform/cloud-docs/users-teams-organizations/permissions/organization#organization-owners)
- [Manage policies](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policies)

### Version control system

You must have a GitHub account connected to HCP Terraform or Terraform Enterprise to manually connect policy sets to your workspaces. Refer to [Connecting VCS Providers](/terraform/cloud-docs/vcs) for instructions.
## Get policies

Refer to the [pre-written policy library reference](/terraform/cloud-docs/policy-enforcement/prewritten-library) for a complete list of available policy sets. Use one of the following methods to get pre-written policies:

- **Download policies from the registry**: Use this method if you want to assemble custom policy sets without customizing policies.
- **Fork the HashiCorp policy GitHub repository**: Use this method if you intend to customize the policies.

Complete the following steps to download policies from the registry and apply them directly to your workspaces:

1. Browse the policy libraries available in the [Terraform registry](https://registry.terraform.io/search/policies?q=Pre-written).
1. Click on a policy library and click **Choose policies**.
1. Select the policies you want to implement. The registry generates code in the **USAGE INSTRUCTIONS** box.
1. Click **Copy Code Snippet** to copy the code to your clipboard.
1. Create a GitHub repository to store the policies and the policy set configuration file.
1. Create a file called `sentinel.hcl` in the repository.
1. Paste the code from your clipboard into `sentinel.hcl` and commit your changes.
1. Complete the instructions for [connecting the policies to your workspace](#connect-policies-to-your-workspace).

Create a fork of the repository containing the policies you want to implement. Refer to the [GitHub documentation](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) for instructions on how to create a fork.
You can create forks of the following repositories:

- [`policy-library-CIS-Policy-Set-for-AWS-Terraform`](https://github.com/hashicorp/policy-library-CIS-Policy-Set-for-AWS-Terraform)
- [`policy-library-FSBP-Policy-Set-for-AWS-Terraform`](https://github.com/hashicorp/policy-library-FSBP-Policy-Set-for-AWS-Terraform/)
- [`policy-library-NIST-Policy-Set-for-AWS-Terraform`](https://github.com/hashicorp/policy-library-NIST-Policy-Set-for-AWS-Terraform)

Each repository contains a `sentinel.hcl` file that defines an example policy set using the policies included in the library. Modify the `sentinel.hcl` file to customize your policy set. Refer to [Sentinel Policy Set VCS Repositories](/terraform/cloud-docs/policy-enforcement/manage-policy-sets/sentinel-vcs) for additional information.

After forking the repository, complete the instructions for [connecting the policies to your workspace](#connect-policies-to-your-workspace).

## Connect policies to your workspace

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization with workspaces you want to connect policies to.
1. Choose **Settings** from the sidebar.
1. Click **Policy Sets** and click **Connect a new policy set**.
1. Click the **Version control provider (VCS)** tile.
1. Enable the **Sentinel** option as the policy framework.
1. Specify a name and description for the set.
1. Configure any additional options for the policy set and click **Next**.
1. Choose the GitHub connection type, then choose the repository you created in [Set up a repository for the policies](#set-up-a-repository-for-the-policies).
1. If the `sentinel.hcl` policy set file is stored in a subfolder, specify the path to the file in the **Policies path** field. The default is the root directory.
1. If you want to apply updated policy sets to the workspace from a specific branch, specify the name in the **VCS branch** field. The default is the default branch configured for the repository.
1. Click **Next** and specify any additional parameters you want to pass to the Sentinel runtime, then click **Connect policy set** to finish applying the policies to the workspace.

Run a plan in the workspace to trigger the connected policies. Refer to [Start a Terraform run](/terraform/cloud-docs/run/remote-operations#starting-runs) for additional information.

## Next steps

- Group your policies into sets and apply them to your workspaces. Refer to [Create policy sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#create-policy-sets) for additional information.
- View results and address Terraform runs that do not comply with your policies. Refer to [View results](/terraform/cloud-docs/policy-enforcement/view-results) for additional information.
- You can also view Sentinel policy results in JSON format. Refer to [View Sentinel JSON results](/terraform/cloud-docs/policy-enforcement/view-results/json) for additional information.
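For illustration, the code that the registry generates for the `sentinel.hcl` policy set file consists of `policy` blocks that pin each selected policy to a source and an enforcement level. The policy name and source URL below are hypothetical placeholders, not a real registry address; use the snippet copied from the **USAGE INSTRUCTIONS** box as-is:

```hcl
# Hypothetical sketch of a registry-generated policy set file.
# The policy name and source URL are placeholders.
policy "s3-block-public-access" {
  source            = "https://registry.terraform.io/v2/policies/hashicorp/example-policy-library/1.0.0/policy/s3-block-public-access.sentinel"
  enforcement_level = "advisory"
}
```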
# View and filter Sentinel JSON data

When using the HCP Terraform UI, Sentinel policy check results are available both in a human-readable log form, and in a more detailed, lower-level JSON form. While the logs may suppress some output that would make the logs harder to read, the JSON output exposes the lower-level output directly to you. Being able to parse this data in its entirety is especially important when working with [non-boolean rule data](/sentinel/docs/language/rules#non-boolean-values) in a policy designed to work with Sentinel 0.17.0 and higher.

@include 'tfc-package-callouts/policies.mdx'

-> The JSON data exposed is the same as you would see when using the [policy checks API](/terraform/cloud-docs/api-docs/policy-checks), with the data starting at the `sentinel` key.

## Viewing JSON Data

To view the JSON data, expand the policy check on the [runs page](/terraform/cloud-docs/workspaces/run/manage) if it is not already expanded. The logs are always displayed first, so click the **View JSON Data** button to view the JSON data. You can click the **View Logs** button to switch back to the log view.

## Filtering JSON Data

The JSON data is filterable using a [jq](https://stedolan.github.io/jq/)-subset filtering language. See the [JSON filtering](/terraform/cloud-docs/workspaces/json-filtering) page for more details on the filtering language.

Filters are entered by putting the filter in the aptly named **filter** box in the JSON viewer. After entering the filter, pressing **Apply** or the enter key on your keyboard applies the filter. The filtered results, if any, are displayed in the result box. Clearing the filter restores the original JSON data.

### Quick-Filtering `main` Rules

Clicking the **Filter "main" rules** button quickly applies a filter that shows you the results of the `main` rule for every policy in the policy set.
You can use this to quickly get the results of each policy in the set, without having to navigate through the rest of the policy result data.
# View policy enforcement results

When you add [policy sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets) to a workspace, HCP Terraform enforces those policy sets on every Terraform run. HCP Terraform displays the policy enforcement results in the UI for each run. Depending on each policy's [enforcement level](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#policy-enforcement-levels), policy failures can also stop the run and prevent Terraform from provisioning infrastructure.

@include 'tfc-package-callouts/policies.mdx'

## Policy Evaluation Run Stages

HCP Terraform only evaluates policies for successful plans. HCP Terraform evaluates Sentinel and OPA policy sets separately and at different points in the run.

- Sentinel policy checks occur after Terraform completes the plan and after both [run tasks](/terraform/cloud-docs/workspaces/settings/run-tasks) and [cost estimation](/terraform/cloud-docs/cost-estimation). This order lets you write Sentinel policies to restrict costs based on the data in the cost estimates.
- Sentinel policy evaluations occur after Terraform completes the plan and after any run tasks. HCP Terraform runs Sentinel policy evaluations immediately before cost estimation.
- OPA policy evaluations occur after Terraform completes the plan and after any run tasks. HCP Terraform evaluates OPA policies immediately before cost estimation.

Refer to [Run States and Stages](/terraform/cloud-docs/workspaces/run/states) for more details.

## View Policy Results

To view the policy results for both Sentinel and OPA policies:

1. Go to your workspace and navigate to the **Runs** page.
1. Click a run to view its details.

HCP Terraform displays a timeline of the run's events. For workspaces with both Sentinel and OPA policy sets, the run details page displays two separate run events: **OPA policies** for OPA policy sets and **Policy check** for Sentinel policy sets.
Click a policy evaluation event to view policy results and details about any failed policies.

-> **Note:** For Sentinel, the Terraform CLI also prints policy results for [CLI-driven runs](/terraform/cloud-docs/workspaces/run/cli). CLI support for policy results is not available for OPA.

## Override Policies

You need [manage policy overrides](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policy-overrides) permissions to override failed Sentinel and OPA policies. Sentinel and OPA have different policy enforcement levels that determine when you need to override failed policies to allow a run to continue.

To override failed policies, go to the run details page and click **Override and Continue** at the bottom.

For Sentinel only, you can also override `soft-mandatory` policies with the Terraform CLI. Run the `terraform apply` command and then enter `override` when prompted.

-> **Note:** HCP Terraform does not allow policy overrides for [no-operation plans containing no infrastructure changes](/terraform/cloud-docs/workspaces/run/modes-and-options#allow-empty-apply), unless you choose the **Allow empty apply** option when starting the run.

### Sentinel

#### Policy checks

Policies with an `advisory` enforcement level never stop runs. If they fail, HCP Terraform displays a warning in the policy results and the run continues.

You can override `soft-mandatory` policies to allow the run to continue. Overriding failed policies on a run does not affect policy evaluations on future runs in that workspace.

You cannot override `hard-mandatory` policies, and all of these policies must pass for the run to continue.

#### Policy evaluations

Policies with an `advisory` enforcement level never stop runs. If they fail, HCP Terraform displays a warning in the policy results and the run continues.
When running Sentinel policies as policy evaluations, the `soft-mandatory` and `hard-mandatory` enforcement levels are internally converted to the `mandatory` enforcement level. You can override `mandatory` policies to allow the run to continue.

### OPA

Policies with an `advisory` enforcement level never stop runs. If they fail, HCP Terraform displays a warning in the policy results and the run continues.

You can override `mandatory` policies to allow the run to continue.
# Manage policies and policy sets in HCP Terraform

Policies are rules that HCP Terraform enforces on Terraform runs. You can define policies using either the [Sentinel](/terraform/cloud-docs/policy-enforcement/define-policies/custom-sentinel) or [Open Policy Agent (OPA)](/terraform/cloud-docs/policy-enforcement/opa) policy-as-code frameworks.

@include 'tfc-package-callouts/policies.mdx'

Policy sets are collections of policies you can apply globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces in your organization. For each run in the applicable workspaces, HCP Terraform checks the Terraform plan against the policy set. Depending on the [enforcement level](#policy-enforcement-levels), failed policies can stop a run in a workspace. If you do not want to enforce a policy set on a specific workspace, you can exclude the workspace from that set.

## Permissions

To view and manage policies and policy sets, you must have [manage policy permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policies) for your organization.

## Policy checks versus policy evaluations

Policy checks and evaluations can access different types of data and enable slightly different workflows.

### Policy checks

Only Sentinel policies can run as policy checks. Checks can access cost estimation data but can only use the latest version of Sentinel.

@include 'deprecation/policy-checks.mdx'

### Policy evaluations

OPA policy sets can only run as policy evaluations, and you can enable policy evaluations for Sentinel policy sets by selecting the `Agent` policy set type.

HCP Terraform runs a workspace's policy evaluation in your self-managed agent pool if you meet the following requirements:

- You are on the HCP Terraform **Premium** edition.
- You configure the workspace to run Terraform operations in your self-managed agent pool.
Refer to [Configure Workspaces to use the Agent](/terraform/cloud-docs/agents/agent-pools#configure-workspaces-to-use-the-agent) for more information.
- You configure at least one agent in the agent pool to accept `policy` jobs. Refer to the [HCP Terraform agent reference](/terraform/cloud-docs/agents/agents#accept) for more information.

If you do not meet all of the above requirements, then policy evaluations run within HCP Terraform's infrastructure.

For Sentinel policy sets, using policy evaluations lets you:

- Enable overrides for soft-mandatory and hard-mandatory policies, letting any user with [Manage Policy Overrides permission](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policy-overrides) proceed with a run in the event of policy failure.
- Select a specific Sentinel runtime version for the policy set.

Policy evaluations **cannot** access cost estimation data, so use policy checks if your policies rely on cost estimates.

Sentinel runtime version pinning is supported by Sentinel v0.23.1 and above and HCP Terraform agent versions v1.13.1 and above.

## Policy enforcement levels

You can set an enforcement level for each policy that determines what happens when a Terraform plan does not pass the policy rule. Sentinel and OPA policies have different enforcement levels available.

### Sentinel

You can enable one of the following options to set the enforcement level when creating a Sentinel policy:

- **Advisory:** Failed policies never interrupt the run. They provide information about policy check failures in the UI.
- **Soft mandatory:** Failed policies stop the run, but any user with [Manage Policy Overrides permission](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policy-overrides) can override these failures and allow the run to complete.
- **Hard mandatory:** Failed policies stop the run.
Unless the set containing the policy is configured to [allow overrides](#allow-policy-level-overrides), Terraform does not apply runs until a user fixes the issue that caused the failure.

#### Allow policy level overrides

When adding policies to a policy set, you can enable the **This policy set can be overridden in the event of mandatory failures** option. Enabling this option lets users
with the appropriate permissions, such as admins or team owners, override any failed policy checks in that set, even policies set to **Hard mandatory**. This override setting takes precedence over the individual policy's enforcement level.

### OPA

OPA provides two policy enforcement levels:

- **advisory:** Failed policies never interrupt the run. They provide information about policy failures in the UI.
- **mandatory:** Failed policies stop the run, but any user with [Manage Policy Overrides permission](/terraform/cloud-docs/users-teams-organizations/permissions/organization#manage-policy-overrides) can override these failures and allow the run to complete.

## Policy publishing workflows

You can create policies and policy sets for your HCP Terraform organization in one of three ways:

- **HCP Terraform web UI:** Add individually-managed policies manually in the HCP Terraform UI, and store your policy code in HCP Terraform. This workflow is ideal for initial experimentation with policy enforcement, but we do not recommend it for organizations with large numbers of policies.
- **Version control:** Connect HCP Terraform to a version control repository containing a policy set. When you push changes to the repository, HCP Terraform automatically uses the updated policy set.
- **Automated:** Push versions of policy sets to HCP Terraform with the [HCP Terraform Policy Sets API](/terraform/cloud-docs/api-docs/policy-sets#create-a-policy-set-version) or the `tfe` provider [`tfe_policy_set`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/policy_set) resource. This workflow is ideal for automated Continuous Integration and Deployment (CI/CD) pipelines.

### Manage individual policies in the web UI

You can add policies directly to HCP Terraform using the web UI. This process requires you to paste completed, valid Sentinel or Rego code into the UI. We recommend validating your policy code before adding it to HCP Terraform.
#### Add managed policies

To add an individually managed policy:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to add policies to.
1. Choose **Settings** from the sidebar, then **Policies**. A list of managed policies in HCP Terraform appears. Each policy designates its policy framework (Sentinel or OPA) and associated policy sets.
1. Click **Create a new policy**.
1. Choose the **Policy framework** you want to use. You can only create a policy set from policies written using the same framework. You cannot change the framework type after you create the policy.
1. Complete the following fields to define the policy:
   - **Policy Name:** Add a name containing letters, numbers, `-`, and `_`. HCP Terraform displays this name in the UI. The name must be unique within your organization.
   - **Description:** Describe the policy's purpose. The description supports Markdown rendering, and HCP Terraform displays this text in the UI.
   - **Enforcement mode:** Choose whether this policy can stop Terraform runs and whether users can override it. Refer to [policy enforcement levels](#policy-enforcement-levels) for more details.
   - **(OPA Only) Query:** Write a query to identify a specific policy rule within your rego code. HCP Terraform uses this query to determine the result of the policy. The query is typically a combination of the policy package name and rule name, such as `terraform.deny`. The result of this query must be an array. The policy passes when the array is empty.
   - **Policy code:** Paste the code for the policy: either Sentinel code or Rego code for OPA policies. The UI provides syntax highlighting for the policy language.
   - **(Optional) Policy sets:** Select one or more existing managed policy sets where you want to add the new policy. You can only select policy sets compatible with the chosen policy set framework.
If there are no policy sets available, you can [create a new one](#create-policy-sets). The policy is now available in the HCP Terraform UI for you to edit and add to one or
-0.02802932634949684,
0.04929151013493538,
0.08425625413656235,
-0.009834525175392628,
-0.01637345179915428,
-0.04563075676560402,
-0.042596615850925446,
-0.06376740336418152,
-0.04652126133441925,
0.10066396743059158,
-0.041321177035570145,
-0.01515243947505951,
0.042091697454452515,
0.00... | -0.015886 |
If there are no policy sets available, you can [create a new one](#create-policy-sets).

The policy is now available in the HCP Terraform UI for you to edit and add to one or more policy sets.

#### Edit managed policies

To edit a managed policy:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to edit policies for.
1. Choose **Settings** from the sidebar, then **Policies**.
1. Click the policy you want to edit to go to its details page.
1. Edit the policy's fields and then click **Update policy**.

#### Delete managed policies

~> **Warning:** Deleting a policy that applies to an active run causes that run's policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policies.

You cannot restore policies after deletion. You must manually re-add them to HCP Terraform. You may want to save the policy code in a separate location before you delete the policy.

To delete a managed policy:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to delete a policy in.
1. Choose **Settings** from the sidebar, then **Policies**.
1. Click the policy you want to delete to go to its details page.
1. Click **Delete policy** and then click **Yes, delete policy** to confirm.

The policy no longer appears in HCP Terraform or in any associated policy sets.

## Manage policy sets

Policy sets are collections of policies that you can apply globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces.

To view and manage policy sets, go to the **Policy Sets** section of your organization's settings. This page contains all of the policy sets available in the organization, including those added through the API.
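As an illustration of the OPA fields in the managed-policy form above, the `terraform.deny` query example corresponds to a Rego package named `terraform` with a `deny` rule that produces an array of messages, where an empty array means the policy passes. The rule body below is hypothetical, and the `input.plan` layout is assumed to follow Terraform's JSON plan format:

```rego
package terraform

# Hypothetical rule: collect a message for every resource the plan
# would delete. The policy passes when this array is empty.
deny[msg] {
    rc := input.plan.resource_changes[_]
    rc.change.actions[_] == "delete"
    msg := sprintf("%s is slated for deletion", [rc.address])
}
```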
The way you set up and configure a new policy set depends on your workflow and where you store policies.

- For [managed policies](#managed-policies), you use the UI to create a policy set and add managed policies.
- For policy sets in a version control system, you use the UI to create a policy set connected to that repository. HCP Terraform automatically refreshes the policy set when you change relevant files in that repository. Version control policy sets have specific organization and formatting requirements. Refer to [Sentinel VCS Repositories](/terraform/cloud-docs/policy-enforcement/sentinel/vcs) and [OPA VCS Repositories](/terraform/cloud-docs/policy-enforcement/opa/vcs) for details.
- For automated workflows like continuous deployment, you can use the UI to create an empty policy set and then use the [Policy Sets API](/terraform/cloud-docs/api-docs/policy-sets) to add policies. You can also use the API or the [`tfe` provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/policy_set) to add an entire, packaged policy set.

### Create policy sets

To create a policy set:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to create a policy set in.
1. Choose **Settings** from the sidebar, then **Policies**.
1. Click **Connect a new policy set**.
1. Choose your workflow.
   - For managed policies, click **create a policy set with individually managed policies**. HCP Terraform shows a form to create a policy set and add individually managed policies.
   - For version control policies, choose a version control provider and then select the repository with your policy set. HCP Terraform shows a form to create a policy set connected to that repository.
   - For automated workflows, click **No VCS Connection**. HCP Terraform shows a form to create an empty policy set. You can use the API to add policies to this empty policy set later.
1. Choose a **Policy framework** for the policies you want to add. A policy set can only contain policies that use the same framework (OPA or Sentinel). You cannot change a policy set's framework type after creation.
1. Choose a policy set scope:
   - **Policies enforced globally:** HCP Terraform automatically enforces this global policy set on all of an organization's existing and future workspaces.
   - **Policies enforced on selected projects and workspaces:** Use the text fields to find and select the workspaces and projects to enforce this policy set on. This affects all current and future workspaces for any chosen projects.
1. **(Optional)** Add **Policy exclusions** for this policy set. Specify any workspaces in the policy set's scope that HCP Terraform will not enforce this policy set on.
1. **(Sentinel Only)** Choose a policy set type:
   - **Standard:** This is the default workflow. A Sentinel policy set uses a [policy check](#policy-checks) in HCP Terraform and lets you access cost estimation data.
   - **Agent:** A Sentinel policy set uses a [policy evaluation](#policy-evaluations) in HCP Terraform. This lets you enable policy overrides and enforce a Sentinel runtime version.
1. **(OPA Only)** Select a **Runtime version** for this policy set.
1. **(OPA Only)** Allow **Overrides**, which enables users with override policy permissions to apply plans that have [mandatory policy](#policy-enforcement-levels) failures.
1. **(VCS Only)** Optionally specify the **VCS branch** within your VCS repository where HCP Terraform should import new versions of policies. If you do not set this field, HCP Terraform uses your selected VCS repository's default branch.
1. **(VCS Only)** Specify where your policy set files live using the **Policies path**. This lets you maintain multiple policy sets within a single repository. Use a relative path from your root directory to the directory that contains either the `sentinel.hcl` (Sentinel) or `policies.hcl` (OPA) configuration files. If you do not set this field, HCP Terraform uses the repository's root directory.
1. **(Managed Policies Only)** Select managed **Policies** to add to the policy set. You can only add policies written with the same policy framework you selected for this set.
1. Choose a descriptive and unique **Name** for the policy set. You can use any combination of letters, numbers, `-`, and `_`.
1. Write an optional **Description** that tells other users about the purpose of the policy set and what it contains.

Depending on the type of policy set you choose to create, you can then add policies to the set using the UI, a connected VCS repository, the API, or the `tfe` provider.

If you are creating an OPA policy set or a Sentinel policy set using agents, we recommend choosing a specific runtime version for your policy set to ensure consistent behavior.

### Edit policy sets

To edit a policy set:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to edit a policy set in.
1. Choose **Settings** from the sidebar, then **Policies**.
1. Click the policy set you want to edit to go to its settings page.
1. Adjust the settings and click **Update policy set**.

### Evaluate a policy runtime upgrade

For OPA and Sentinel policy sets using agents, we recommend choosing a specific runtime version for your policy set to ensure consistent behavior. HCP Terraform and Terraform Enterprise no longer support the `latest` tag for OPA policy sets.
By default, OPA policy sets using the `latest` tag are pinned to the most recently supported version. To upgrade to a newer OPA runtime version, specify a version in your policy set settings.

You can test a new runtime version for a policy set to ensure your policies work as expected before upgrading to that version. To test a new policy set runtime version, perform the following steps:

1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to your organization.
1. Choose **Settings** from the sidebar, then **Policies** in your organization's settings.
1. Click the policy set you want to upgrade.
1. Click the **Evaluate** tab.
1. Select the **Runtime version** you want to upgrade to.
1. Select a **Workspace** to test the policy set and upgraded version against.
1. Click **Evaluate**.

HCP Terraform executes the policy set using the specified version and the latest plan data for the specified workspace, then displays the evaluation results. If the evaluation returns a `Failed` status, inspect the JSON output to determine if the issue is related to a non-compliant resource or a syntax issue. If the evaluation results in an error, check that the policy configuration is valid with your new runtime version.

### Delete policy sets

~> **Warning:** Deleting a policy set that applies to an active run causes that run's policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policy sets.

You cannot restore policy sets after deletion. You must manually re-add them to HCP Terraform.

To delete a policy set:
1. Sign in to [HCP Terraform](https://app.terraform.io/) or Terraform Enterprise and navigate to the organization you want to delete a policy set in.
1. Choose **Settings** from the sidebar, then **Policies** in your organization's settings.
1. Click the policy set you want to delete to go to its details page.
1. Click **Delete policy** and then click **Yes, delete policy set** to confirm.

The policy set no longer appears in the UI, and HCP Terraform no longer applies it to any workspaces. For managed policy sets, all of the individual policies are still available in HCP Terraform. You must delete each policy individually to remove it from your organization.

### (Sentinel only) Sentinel parameters

[Sentinel parameters](/sentinel/docs/language/parameters) are a list of key/value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces. If the value parses as JSON, HCP Terraform sends it to Sentinel as the corresponding type (string, boolean, integer, map, or list). If the value fails JSON validation, HCP Terraform sends it as a string.

You can set Sentinel parameters when you [edit a policy set](#edit-policy-sets).
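The JSON-parsing behavior described above can be sketched in a few lines of Python. This is an approximation for illustration, not HCP Terraform's actual implementation:

```python
import json

def coerce_sentinel_param(raw: str):
    """Approximate how HCP Terraform interprets a Sentinel parameter:
    a value that parses as JSON is sent as the corresponding type
    (string, boolean, integer, map, or list); anything that fails
    JSON validation is sent as a plain string."""
    try:
        return json.loads(raw)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return raw

# "true" is valid JSON (a boolean); "us-east-1" is not, so it stays a string.
```

For example, a parameter value of `["us-east-1", "eu-west-1"]` reaches your policy as a list, while `us-east-1` reaches it as a string.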
# Configure a Sentinel policy set with a VCS repository

To enable policy enforcement, you must group Sentinel policies into policy sets. You can then apply those policy sets globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces.

@include 'tfc-package-callouts/policies.mdx'

One way to create policy sets is by connecting HCP Terraform to a version control repository. When you push changes to the repository, HCP Terraform automatically uses the updated policy set. Refer to [Managing Policy Sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets) for more details.

A Sentinel policy set repository contains a Sentinel configuration file, policy files, and module files.

## Configuration File

Your repository must contain a configuration file named `sentinel.hcl` that defines the following features of the policy set:

- Each policy included in the set. The policy name must match the names of individual [policy code files](#policy-code-files) exactly. HCP Terraform ignores policy files in the repository that are not listed in the configuration file. For each policy, the configuration file must designate the policy's [enforcement level](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#policy-enforcement-levels) and [source](#policy-source).
- [Terraform modules](#modules) that policies in the set need to access.

The following example shows a portion of a `sentinel.hcl` configuration file that defines a policy named `terraform-maintenance-windows`. The policy has a `hard-mandatory` enforcement level, meaning that it can block Terraform runs when it fails and users cannot override it.

```hcl
policy "terraform-maintenance-windows" {
  source            = "./terraform-maintenance-windows.sentinel"
  enforcement_level = "hard-mandatory"
}
```

To configure a module, add a `module` entry to your `sentinel.hcl` file. The following example adds a module called `timezone`.
```hcl
module "timezone" {
  source = "./modules/timezone.sentinel"
}
```

The repositories for [policy libraries on the Terraform Registry](https://registry.terraform.io/browse/policies) contain more examples.

## Policy Code Files

Define each Sentinel policy in a separate file within your repository. All local policy files must reside in the same directory as the `sentinel.hcl` configuration file and end with the `.sentinel` suffix.

### Policy Source

A policy's `source` field can either reference a file within the policy repository, or it can reference a remote source. For example, the configuration could reference a policy from HashiCorp's [foundational policies library](https://github.com/hashicorp/terraform-foundational-policies-library). Sentinel only supports HTTP and HTTPS remote sources.

To specify a local source, prefix the `source` with a `./` or `../`. The following example shows how to reference a local source policy called `terraform-maintenance-windows.sentinel`.

```hcl
policy "terraform-maintenance-windows" {
  source            = "./terraform-maintenance-windows.sentinel"
  enforcement_level = "hard-mandatory"
}
```

To specify a remote source, supply the URL as the `source`. The following example references a policy from HashiCorp's foundational policies library.

```hcl
policy "deny-public-ssh-nsg-rules" {
  source            = "https://registry.terraform.io/v2/policies/hashicorp/azure-networking-terraform/1.0.2/policy/deny-public-ssh-nsg-rules.sentinel?checksum=sha256:75c95bf1d6eb48153cb31f15c49c237bf7829549beebe20effa07bcdd3f3cb74"
  enforcement_level = "advisory"
}
```

For GitHub, you must use the URL of the raw policy content. Other URL types cause HCP Terraform to error when checking the policy. For example, do not use `https://github.com/hashicorp/policy-library-azure-networking-terraform/blob/main/policies/deny-public-ssh-nsg-rules/deny-public-ssh-nsg-rules.sentinel`.
To access the raw URL, open the Sentinel file in your GitHub repository, right-click **Raw** on the top right of the page, and save the link address.

### Example Policy

The following example policy uses the `time` and `tfrun` imports and a custom `timezone` module to do the following tasks:

1. Load the time when the Terraform run occurred
1. Convert the loaded time with the correct offset using the [Timezone API](https://timezoneapi.io/)
1. Verify that the provisioning operation occurs only on a specific day

The example policy also uses a [rule expression](/sentinel/docs/language/spec#rule-expressions) with the `when` predicate. If the value of `tfrun.workspace.auto_apply` is false, the rule is not evaluated and returns true.

Finally, the example uses parameters to facilitate module reuse within Terraform. Refer to the [Sentinel parameter documentation](/sentinel/docs/language/parameters) for details.
```hcl
import "time"
import "tfrun"
import "timezone"

param token default "WbNKULOBheqV"
param maintenance_days default ["Friday", "Saturday", "Sunday"]
param timezone_id default "America/Los_Angeles"

tfrun_created_at = time.load(tfrun.created_at)

supported_maintenance_day = rule when tfrun.workspace.auto_apply is true {
  tfrun_created_at.add(time.hour * timezone.offset(timezone_id, token)).weekday_name in maintenance_days
}

main = rule {
  supported_maintenance_day
}
```

To expand the policy, you could use the [time.hour](/sentinel/docs/imports/time#time-hour) function to also restrict provisioning to specific times of day.

## Modules

HCP Terraform supports [Sentinel modules](/sentinel/docs/extending/modules). Modules let you write reusable policy code that you can import and use within several policies at once. You can store modules locally or retrieve them from a remote HTTP or HTTPS source.

-> **Note:** We recommend reviewing the [Sentinel runtime's modules documentation](/sentinel/docs/extending/modules) to learn how to use modules within Sentinel. However, the configuration examples in the runtime documentation are relevant to the Sentinel CLI and not HCP Terraform.

The following example module loads the code at `./modules/timezone.sentinel` relative to the policy set working directory. Other modules can access this code with the statement `import "timezone"`.

```hcl
import "http"
import "json"
import "decimal"

httpGet = func(id, token) {
  uri = "https://timezoneapi.io/api/timezone/?" + id + "&token=" + token
  request = http.get(uri)
  return json.unmarshal(request.body)
}

offset = func(id, token) {
  tz = httpGet(id, token)
  offset = decimal.new(tz.data.datetime.offset_hours).int
  return offset
}
```
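Putting the pieces together, a complete `sentinel.hcl` for this example repository would declare both the policy and the module shown above:

```hcl
# sentinel.hcl at the repository root (or the configured policies path)
policy "terraform-maintenance-windows" {
  source            = "./terraform-maintenance-windows.sentinel"
  enforcement_level = "hard-mandatory"
}

module "timezone" {
  source = "./modules/timezone.sentinel"
}
```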
# Configure an OPA policy set with a VCS repository

To enable policy enforcement, you must group OPA policies into policy sets and apply those policy sets globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces.

> **Hands-on:** Try the [Detect Infrastructure Drift and Enforce OPA Policies](/terraform/tutorials/cloud/drift-and-policy) tutorial.

One way to create policy sets is by connecting HCP Terraform to a version control repository. When you push changes to the repository, HCP Terraform automatically uses the updated policy set. Refer to [Managing Policy Sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets) for more details.

An OPA policy set repository contains a HashiCorp Configuration Language (HCL) configuration file and policy files.

@include 'tfc-package-callouts/policies.mdx'

## Configuration File

The root directory of your policy set repository must contain a configuration file named either `policies.hcl` or `policies.json`. Policy enforcement supports both HCL and the JSON variant of HCL syntax.

The configuration file contains one or more `policy` blocks that define each policy in the policy set. Unlike Sentinel, OPA policies do not need to be in separate files. You use an [OPA query](/terraform/cloud-docs/policy-enforcement/opa#opa-query) to identify each policy rule.

The following example uses a query to define a policy named `policy1`. This query may evaluate across multiple files, or a single file.

```hcl
policy "policy1" {
  query = "data.terraform.policy1.deny"
}
```

Optionally, you can also provide a `description` and an `enforcement_level` property. If you do not specify an enforcement level, HCP Terraform uses `advisory`, meaning policy failures produce warnings but do not block Terraform runs. Refer to [Policy Enforcement Levels](/terraform/cloud-docs/policy-enforcement/manage-policy-sets#policy-enforcement-levels) for more details.
```hcl
policy "policy1" {
  query             = "data.terraform.policy1.deny"
  enforcement_level = "mandatory"
  description       = "policy1 description"
}
```

## Policy Code Files

All Rego policy files must end with `.rego` and exist in the local GitHub repository for the policy set. You can store them in separate directories from the configuration file.
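To make the layout concrete, a repository for the `policy1` example above might pair `policies.hcl` with a Rego file like the following sketch. The file path and rule body are hypothetical; only the package and rule name must line up with the `data.terraform.policy1.deny` query, and the `input.plan` layout is assumed to follow Terraform's JSON plan format:

```rego
# policies/policy1.rego (illustrative path; .rego files may live in any directory)
package terraform.policy1

# "deny" is the rule that the query data.terraform.policy1.deny evaluates.
deny[msg] {
    rc := input.plan.resource_changes[_]
    rc.type == "aws_iam_user"
    msg := sprintf("%s: IAM users may not be managed in this repository", [rc.address])
}
```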
-> **Note:** This is documentation for the next version of the `tfstate` Sentinel import, designed specifically for Terraform 0.12. This import requires Terraform 0.12 or higher, and must currently be loaded by path, using an alias, for example: `import "tfstate/v2" as tfstate`.

# tfstate/v2 Sentinel import

The `tfstate/v2` import provides access to a Terraform state.

@include 'tfc-package-callouts/policies.mdx'

The _state_ is the data that Terraform has recorded about a workspace at a particular point in its lifecycle, usually after an apply. You can read more general information about how Terraform uses state [here](/terraform/language/state).

-> **NOTE:** Since HCP Terraform currently only supports policy checks at plan time, the usefulness of this import is somewhat limited, as it will usually give you the state _prior_ to the plan the policy check is currently being run for. Depending on your needs, you may find the [`planned_values`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2#the-planned_values-collection) collection in `tfplan/v2` more useful, which will give you a _predicted_ state by applying plan data to the data found here. The one exception to this rule is _data sources_, which will always give up-to-date data here, as long as the data source could be evaluated at plan time.

The data in the `tfstate/v2` import is sourced from the JSON configuration file that is generated by the [`terraform show -json`](/terraform/cli/commands/show#json-output) command. For more information on the file format, see the [JSON Output Format](/terraform/internals/json-format) page.

## Import Overview

The `tfstate/v2` import is currently structured as two _collections_, keyed by resource address and output name, respectively.
```
(tfstate/v2)
├── terraform_version (string)
├── resources
│   └── (indexed by address)
│       ├── address (string)
│       ├── module_address (string)
│       ├── mode (string)
│       ├── type (string)
│       ├── name (string)
│       ├── index (float (number) or string)
│       ├── provider_name (string)
│       ├── values (map)
│       ├── depends_on (list of strings)
│       ├── tainted (boolean)
│       └── deposed_key (string)
└── outputs
    └── (indexed by name)
        ├── name (string)
        ├── sensitive (boolean)
        └── value (value)
```

The collections are:

* [`resources`](#the-resources-collection) - The state of all resources across all modules in the state.
* [`outputs`](#the-outputs-collection) - The state of all outputs from the root module in the state.

These collections are specifically designed to be used with the [`filter`](/sentinel/docs/language/collection-operations#filter-expression) quantifier expression in Sentinel, so that one can collect a list of resources to perform policy checks on without having to write complex module traversal. As an example, the following code will return all `aws_instance` resource types within the state, regardless of what module they are in:

```
all_aws_instances = filter tfstate.resources as _, r {
  r.mode is "managed" and
  r.type is "aws_instance"
}
```

You can add specific attributes to the filter to narrow the search, such as the module address. The following code would return resources in a module named `foo` only:

```
all_aws_instances = filter tfstate.resources as _, r {
  r.module_address is "module.foo" and
  r.mode is "managed" and
  r.type is "aws_instance"
}
```

## The `terraform_version` Value

The top-level `terraform_version` value in this import gives the Terraform version that recorded the state. This can be used to do version validation.
```
import "tfstate/v2" as tfstate
import "strings"

v = strings.split(tfstate.terraform_version, ".")
version_major = int(v[1])
version_minor = int(v[2])

main = rule {
  version_major is 12 and
  version_minor >= 19
}
```

-> **NOTE:** The above example will give errors when working with pre-release versions (example: `0.12.0beta1`). Future versions of this import will include helpers to assist with processing versions that will account for these kinds of exceptions.
## The `resources` Collection

The `resources` collection is a collection representing all of the resources in the state, across all modules. This collection is indexed on the complete resource address as the key.

An element in the collection has the following values:

* `address` - The absolute resource address - also the key for the collection's index.
* `module_address` - The address portion of the absolute resource address.
* `mode` - The resource mode, either `managed` (resources) or `data` (data sources).
* `type` - The resource type, example: `aws_instance` for `aws_instance.foo`.
* `name` - The resource name, example: `foo` for `aws_instance.foo`.
* `index` - The resource index. Can be either a number or a string.
* `provider_name` - The name of the provider this resource belongs to. This allows the provider to be interpreted unambiguously in the unusual situation where a provider offers a resource type whose name does not start with its own name, such as the `googlebeta` provider offering `google_compute_instance`.

  -> **Note:** Starting with Terraform 0.13, the `provider_name` field contains the _full_ source address to the provider in the Terraform Registry. Example: `registry.terraform.io/hashicorp/null` for the null provider.

* `values` - An object (map) representation of the attribute values of the resource, whose structure depends on the resource type schema. When accessing proposed state through the [`planned_values`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2#the-planned_values-collection) collection of the `tfplan/v2` import, unknown values will be omitted.
* `depends_on` - The addresses of the resources that this resource depends on.
* `tainted` - `true` if the resource has been explicitly marked as [tainted](/terraform/cli/commands/taint) in the state.
* `deposed_key` - Set if the resource has been marked deposed and will be destroyed on the next apply. This matches the deposed field in the [`resource_changes`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2#the-resource_changes-collection) collection in the [`tfplan/v2`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2) import.

## The `outputs` Collection

The `outputs` collection is a collection of outputs from the root module of the state. Note that no child modules are included in this output set, and there is no way to fetch child module output values. This is to encourage the correct flow of outputs to the recommended root consumption level.

The collection is indexed on the output name, with the following fields:

* `name`: The name of the output, also the collection key.
* `sensitive`: Whether or not the value was marked as [sensitive](/terraform/language/values/outputs#sensitive-suppressing-values-in-cli-output) in configuration.
* `value`: The value of the output.
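The `outputs` collection can be filtered in the same way as `resources`. The following sketch (the naming convention is hypothetical) requires that any root output whose name suggests a credential is marked sensitive:

```
import "tfstate/v2" as tfstate

# Root outputs whose names suggest they carry secrets.
secret_outputs = filter tfstate.outputs as _, o {
    o.name matches "(?i).*(password|secret|token).*"
}

# Pass only when every such output is marked sensitive.
main = rule {
    all secret_outputs as _, o {
        o.sensitive
    }
}
```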
# tfrun Sentinel import reference

The `tfrun` import provides access to data associated with a [Terraform run][run-glossary].

@include 'tfc-package-callouts/policies.mdx'

This import currently consists of run attributes, as well as namespaces for the `organization`, `workspace`, and `cost-estimate`. Each namespace provides static data regarding the HCP Terraform application that can then be consumed by Sentinel during a policy evaluation.

```
tfrun
├── id (string)
├── created_at (string)
├── created_by (string)
├── message (string)
├── commit_sha (string)
├── is_destroy (boolean)
├── refresh (boolean)
├── refresh_only (boolean)
├── replace_addrs (array of strings)
├── speculative (boolean)
├── target_addrs (array of strings)
├── project
│   ├── id (string)
│   └── name (string)
├── variables (map of keys)
├── organization
│   └── name (string)
├── workspace
│   ├── id (string)
│   ├── name (string)
│   ├── created_at (string)
│   ├── description (string)
│   ├── execution_mode (string)
│   ├── auto_apply (bool)
│   ├── tags (array of strings)
│   ├── tag_bindings (array of objects)
│   ├── working_directory (string)
│   └── vcs_repo (map of keys)
└── cost_estimate
    ├── prior_monthly_cost (string)
    ├── proposed_monthly_cost (string)
    └── delta_monthly_cost (string)
```

-> **Note:** When writing policies using this import, keep in mind that workspace data is generally editable by users outside of the context of policy enforcement. For example, consider the case of omitting the enforcement of policy rules for development workspaces by the workspace name (allowing the policy to pass if the workspace ends in `-dev`). While this is useful for extremely granular exceptions, the workspace name could be edited by workspace admins, effectively bypassing the policy. In this case, where an extremely strict separation of policy managers vs.
workspace practitioners is required, using [policy sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets) to only enforce the policy on non-development workspaces is more appropriate.

[run-glossary]: /terraform/docs/glossary#run

[workspace-glossary]: /terraform/docs/glossary#workspace

## Namespace: root

The **root namespace** contains data associated with the current run.

### Value: `id`

* **Value Type:** String.

Specifies the ID that is associated with the current Terraform run.

### Value: `created_at`

* **Value Type:** String.

The `created_at` value within the [root namespace](#namespace-root) specifies the time that the run was created. The timestamp returned follows the format outlined in [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339). Users can use the `time` import to [load](/sentinel/docs/imports/time#time-load-timeish) a run timestamp and create a new timespace from the specified value. See the `time` import [documentation](/sentinel/docs/imports/time#import-time) for available actions that can be performed on timespaces.

### Value: `created_by`

* **Value Type:** String.

The `created_by` value within the [root namespace](#namespace-root) is a string that specifies the user name of the HCP Terraform user for the specific run.

### Value: `message`

* **Value Type:** String.

Specifies the message that is associated with the Terraform run. The default value is _"Queued manually via the Terraform Enterprise API"_.

### Value: `commit_sha`

* **Value Type:** String.

Specifies the checksum hash (SHA) that identifies the commit.

### Value: `is_destroy`

* **Value Type:** Boolean.

Specifies if the plan is a destroy plan, which will destroy all provisioned resources.

### Value: `refresh`

* **Value Type:** Boolean.

Specifies whether the state was refreshed prior to the plan.

### Value: `refresh_only`

* **Value Type:** Boolean.
Specifies whether the plan is in refresh-only mode, which ignores configuration changes and updates state with any changes made outside of Terraform.

### Value: `replace_addrs`

* **Value Type:** An array of strings representing [resource addresses](/terraform/cli/state/resource-addressing).

Provides the targets specified using the [`-replace`](/terraform/cli/commands/plan#resource-targeting) flag in the CLI or
the `replace-addrs` attribute in the API. Will be null if no resource targets are specified.

### Value: `speculative`

* **Value Type:** Boolean.

Specifies whether the plan associated with the run is a [speculative plan](/terraform/cloud-docs/workspaces/run/remote-operations#speculative-plans) only.

### Value: `target_addrs`

* **Value Type:** An array of strings representing [resource addresses](/terraform/cli/state/resource-addressing).

Provides the targets specified using the [`-target`](/terraform/cli/commands/plan#resource-targeting) flag in the CLI or the `target-addrs` attribute in the API. Will be null if no resource targets are specified.

To prohibit targeted runs altogether, make sure the `target_addrs` value is null or empty:

```
import "tfrun"

main = tfrun.target_addrs is null or tfrun.target_addrs is empty
```

### Value: `variables`

* **Value Type:** A string-keyed map of values.

Provides the names of the variables that are configured within the run and the [sensitivity](/terraform/cloud-docs/variables/managing-variables#sensitive-values) state of the value.

```
variables (map of keys)
├── name (string)
├── category (string)
└── sensitive (boolean)
```

## Namespace: project

The **project namespace** contains data associated with the current run's [projects](/terraform/cloud-docs/api-docs/projects).

### Value: `id`

* **Value Type:** String.

Specifies the ID that is associated with the current project.

### Value: `name`

* **Value Type:** String.

Specifies the name assigned to the HCP Terraform project.

## Namespace: organization

The **organization namespace** contains data associated with the current run's HCP Terraform [organization](/terraform/cloud-docs/users-teams-organizations/organizations).

### Value: `name`

* **Value Type:** String.

Specifies the name assigned to the HCP Terraform organization.

## Namespace: workspace

The **workspace namespace** contains data associated with the current run's workspace.
### Value: `id`

* **Value Type:** String.

Specifies the ID that is associated with the Terraform workspace.

### Value: `name`

* **Value Type:** String.

The name of the workspace, which can only include letters, numbers, `-`, and `_`.

As an example, in a workspace named `app-us-east-dev` the following policy would evaluate to `true`:

```python
# Enforces production rules on all non-development workspaces

import "tfrun"
import "strings"

# (Actual policy logic omitted)
evaluate_production_policy = rule { ... }

main = rule when strings.has_suffix(tfrun.workspace.name, "-dev") is false {
  evaluate_production_policy
}
```

### Value: `created_at`

* **Value Type:** String.

Specifies the time that the workspace was created. The timestamp returned follows the format outlined in [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339). Users can use the `time` import to [load](/sentinel/docs/imports/time#time-load-timeish) a workspace timestamp, and create a new timespace from the specified value. See the `time` import [documentation](/sentinel/docs/imports/time#import-time) for available actions that can be performed on timespaces.

### Value: `description`

* **Value Type:** String.

Contains the description for the workspace. This value can be `null`.

### Value: `auto_apply`

* **Value Type:** Boolean.

Contains the workspace's [auto-apply](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply) setting.

### Value: `tags`

* **Value Type:** Array of strings.

Contains the list of tag names for the workspace, as well as the keys from tag bindings.

### Value: `tag_bindings`

* **Value Type:** Array of objects.

Contains the complete list of tag bindings for the workspace, which includes inherited tag bindings, as well as the workspace key-only tags. Each binding has a string `key`, a nullable string `value`, and a boolean `inherited` property.
```
tag_bindings (array of objects)
├── key (string)
├── value (string or null)
└── inherited (boolean)
```

### Value: `working_directory`

* **Value Type:** String.

Contains the configured [Terraform working directory](/terraform/cloud-docs/workspaces/settings#terraform-working-directory) of the workspace. This value can be `null`.

### Value: `execution_mode`

* **Value Type:** String.

Contains the configured [Terraform execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) of the workspace. The default value is `remote`.
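As an illustration (not part of this reference), an organization could use this value to require that every run executes remotely. A minimal sketch:

```python
import "tfrun"

# Sketch: require remote execution for every workspace.
# Workspaces configured for "local" or "agent" execution would fail this rule.
main = rule { tfrun.workspace.execution_mode is "remote" }
```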
### Value: `vcs_repo`

* **Value Type:** A string-keyed map of values.

Contains data associated with a VCS repository connected to the workspace. Details regarding each attribute can be found in the documentation for the HCP Terraform [Workspaces API](/terraform/cloud-docs/api-docs/workspaces). This value can be `null`.

```
vcs_repo (map of keys)
├── identifier (string)
├── display_identifier (string)
├── branch (string)
└── ingress_submodules (bool)
```

## Namespace: cost_estimate

The **cost_estimate namespace** contains data associated with the current run's cost estimate. This namespace is only present if a cost estimate is available.

-> Cost estimation is disabled for runs using [resource targeting](/terraform/cli/commands/plan#resource-targeting), which may cause unexpected failures.

-> **Note:** Cost estimates are not available for Terraform 0.11.

### Value: `prior_monthly_cost`

* **Value Type:** String.

Contains the monthly cost estimate at the beginning of a plan. This value contains a positive decimal and can be `"0.0"`.

### Value: `proposed_monthly_cost`

* **Value Type:** String.

Contains the monthly cost estimate if the plan were to be applied. This value contains a positive decimal and can be `"0.0"`.

### Value: `delta_monthly_cost`

* **Value Type:** String.

Contains the difference between the prior and proposed monthly cost estimates. This value may contain a positive or negative decimal and can be `"0.0"`.
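Because these cost values are strings, a policy comparing them numerically needs to parse them first. The sketch below is an illustration, not part of this reference: the $100 limit is arbitrary, and it assumes Sentinel's `decimal` import is available for the conversion.

```python
import "tfrun"
import "decimal"

# Assumed limit for this sketch: a $100/month estimated increase.
limit = 100

# Pass automatically when no cost estimate is present for the run.
main = rule when tfrun.cost_estimate else null is not null {
  decimal.new(tfrun.cost_estimate.delta_monthly_cost).less_than(limit)
}
```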
# tfconfig Sentinel import

~> **Warning:** The `tfconfig` import is now deprecated and will be permanently removed in August 2025. We recommend that you start using the updated [tfconfig/v2](/terraform/cloud-docs/policy-enforcement/import-reference/tfconfig-v2) import as soon as possible to avoid disruptions. The `tfconfig/v2` import offers improved functionality and is designed to better support your policy enforcement needs.

The `tfconfig` import provides access to a Terraform configuration. The Terraform configuration is the set of `*.tf` files that are used to describe the desired infrastructure state. Policies using the `tfconfig` import can access all aspects of the configuration: providers, resources, data sources, modules, and variables.

@include 'tfc-package-callouts/policies.mdx'

Some use cases for `tfconfig` include:

* **Organizational naming conventions**: requiring that configuration elements are named in a way that conforms to some organization-wide standard.
* **Required inputs and outputs**: organizations may require a particular set of input variable names across all workspaces or may require a particular set of outputs for asset management purposes.
* **Enforcing particular modules**: organizations may provide a number of "building block" modules and require that each workspace be built only from combinations of these modules.
* **Enforcing particular providers or resources**: an organization may wish to require or prevent the use of providers and/or resources so that configuration authors cannot use alternative approaches to work around policy restrictions.

Note with these use cases that this import is concerned with object _names_ in the configuration. Since this is the configuration and not an invocation of Terraform, you can't see values for variables, the state, or the diff for a pending plan.
If you want to write policy around expressions used within configuration blocks, you likely want to use the [`tfplan`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2) import.

## Namespace Overview

The following is a tree view of the import namespace. For more detail on a particular part of the namespace, see below.

-> **Note:** The root-level alias keys shown here (`data`, `modules`, `providers`, `resources`, and `variables`) are shortcuts to a [module namespace](#namespace-module) scoped to the root module. For more details, see the section on [root namespace aliases](#root-namespace-aliases).

```
tfconfig
├── module() (function)
│   └── (module namespace)
│       ├── data
│       │   └── TYPE.NAME
│       │       ├── config (map of keys)
│       │       ├── references (map of keys) (TF 0.12 and later)
│       │       └── provisioners
│       │           └── NUMBER
│       │               ├── config (map of keys)
│       │               ├── references (map of keys) (TF 0.12 and later)
│       │               └── type (string)
│       ├── modules
│       │   └── NAME
│       │       ├── config (map of keys)
│       │       ├── references (map of keys) (TF 0.12 and later)
│       │       ├── source (string)
│       │       └── version (string)
│       ├── outputs
│       │   └── NAME
│       │       ├── depends_on (list of strings)
│       │       ├── description (string)
│       │       ├── sensitive (boolean)
│       │       ├── references (list of strings) (TF 0.12 and later)
│       │       └── value (value)
│       ├── providers
│       │   └── TYPE
│       │       ├── alias
│       │       │   └── ALIAS
│       │       │       ├── config (map of keys)
│       │       │       ├── references (map of keys) (TF 0.12 and later)
│       │       │       └── version (string)
│       │       ├── config (map of keys)
│       │       ├── references (map of keys) (TF 0.12 and later)
│       │       └── version (string)
│       ├── resources
│       │   └── TYPE.NAME
│       │       ├── config (map of keys)
│       │       ├── references (map of keys) (TF 0.12 and later)
│       │       └── provisioners
│       │           └── NUMBER
│       │               ├── config (map of keys)
│       │               ├── references (map of keys) (TF 0.12 and later)
│       │               └── type (string)
│       └── variables
│           └── NAME
│               ├── default (value)
│               └── description (string)
├── module_paths ([][]string)
│
├── data (root module alias)
├── modules (root module alias)
├── outputs (root module alias)
├── providers (root module alias)
├── resources (root module alias)
└── variables (root module alias)
```

### `references` with Terraform 0.12

**With Terraform 0.11 or earlier**, if a configuration value is defined as an expression (and not a static value), the value will be accessible in its raw, non-interpolated string (just as with a constant value). As an example, consider the following resource block:

```hcl
resource "local_file" "accounts" {
  content  = "some text"
  filename = "${var.subdomain}.${var.domain}/accounts.txt"
}
```

In this example, one might want to ensure `domain` and `subdomain` input variables are used within `filename` in this configuration.
With Terraform 0.11 or earlier, the following policy would evaluate to `true`:

```python
import "tfconfig"

# filename_value is the raw, non-interpolated string
filename_value = tfconfig.resources.local_file.accounts.config.filename

main = rule {
  filename_value contains "${var.domain}" and
  filename_value contains "${var.subdomain}"
}
```

**With Terraform 0.12 or later**, any non-static values (such as interpolated strings) are not present within the configuration value and `references` should be used instead:

```python
import "tfconfig"

# filename_references is a list of string values containing
# the references used in the expression
filename_references = tfconfig.resources.local_file.accounts.references.filename

main = rule {
  filename_references contains "var.domain" and
  filename_references contains "var.subdomain"
}
```

The `references` value is present in any namespace where non-constant configuration values can be expressed. This is essentially every namespace which has a `config` value as well as the `outputs` namespace.

-> **Note:** Remember, this import enforces policy around the literal Terraform configuration and not the final values as a result of invoking Terraform. If you want to write policy around the _result_ of expressions used within configuration blocks (for example, if you wanted to ensure the final value of `filename` above includes `accounts.txt`), you likely want to use the [`tfplan`](/terraform/cloud-docs/policy-enforcement/import-reference/tfplan-v2) import.

## Namespace: Root

The root-level namespace consists of the values and functions documented below. In addition to this, the root-level `data`, `modules`, `providers`, `resources`, and `variables` keys all alias to their corresponding namespaces within the [module namespace](#namespace-module).

### Function: `module()`

```
module = func(ADDR)
```

* **Return Type:** A [module namespace](#namespace-module).
The `module()` function in the [root namespace](#namespace-root) returns the [module namespace](#namespace-module) for a particular module address. The address must be a list and is the module address, split on the period (`.`), excluding the root module. Hence, a module with an address of simply `foo` (or `root.foo`) would be `["foo"]`, and a module within that (so address `foo.bar`) would be read as `["foo", "bar"]`.

[`null`][ref-null] is returned if a module address is invalid, or if the module is not present in the configuration.

[ref-null]: /sentinel/docs/language/spec#null

As an example, given the following module block:

```hcl
module "foo" {
  # ...
}
```

If the module contained the following content:

```hcl
resource "null_resource" "foo" {
  triggers = {
    foo = "bar"
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfconfig"

main = rule { tfconfig.module(["foo"]).resources.null_resource.foo.config.triggers[0].foo is "bar" }
```

### Value: `module_paths`

* **Value Type:** List of a list of strings.

The `module_paths` value within the [root namespace](#namespace-root) is a list of all of the modules within the Terraform configuration. Modules not present in the configuration will not be present
here, even if they are present in the diff or state.

This data is represented as a list of a list of strings, with the inner list being the module address, split on the period (`.`). The root module is included in this list, represented as an empty inner list.

As an example, if the following module block was present within a Terraform configuration:

```hcl
module "foo" {
  # ...
}
```

The value of `module_paths` would be:

```
[
  [],
  ["foo"],
]
```

And the following policy would evaluate to `true`:

```python
import "tfconfig"

main = rule { tfconfig.module_paths contains ["foo"] }
```

#### Iterating Through Modules

Iterating through all modules to find particular resources can be useful. This [example][iterate-over-modules] shows how to use `module_paths` with the [`module()` function](#function-module-) to find all resources of a particular type from all modules using the `tfplan` import. By changing `tfplan` in this function to `tfconfig`, you could make a similar function find all resources of a specific type in the Terraform configuration.

[iterate-over-modules]: /terraform/cloud-docs/policy-enforcement/sentinel#sentinel-imports

## Namespace: Module

The **module namespace** can be loaded by calling [`module()`](#root-function-module) for a particular module. It can be used to load the following child namespaces:

* `data` - Loads the [resource namespace](#namespace-resources-data-sources), filtered against data sources.
* `modules` - Loads the [module configuration namespace](#namespace-module-configuration).
* `outputs` - Loads the [output namespace](#namespace-outputs).
* `providers` - Loads the [provider namespace](#namespace-providers).
* `resources` - Loads the [resource namespace](#namespace-resources-data-sources), filtered against resources.
* `variables` - Loads the [variable namespace](#namespace-variables).

### Root Namespace Aliases

The root-level `data`, `modules`, `providers`, `resources`, and `variables` keys all alias to their corresponding namespaces within the module namespace, loaded for the root module. They are the equivalent of running `module([]).KEY`.

## Namespace: Resources/Data Sources

The **resource namespace** is a namespace _type_ that applies to both resources (accessed by using the `resources` namespace key) and data sources (accessed using the `data` namespace key).

Accessing an individual resource or data source within each respective namespace can be accomplished by specifying the type and name, in the syntax `[resources|data].TYPE.NAME`.

In addition, each of these namespace levels is a map, allowing you to filter based on type and name. Some examples of multi-level access are below:

* To fetch all `aws_instance` resources within the root module, you can specify `tfconfig.resources.aws_instance`. This would give you a map of resource namespaces indexed from the names of each resource (`foo`, `bar`, and so on).
* To fetch all resources within the root module, irrespective of type, use `tfconfig.resources`. This is indexed by type, as shown above with `tfconfig.resources.aws_instance`, with names being the next level down.

As an example, perhaps you wish to deny use of the `local_file` resource in your configuration. Consider the following resource block:

```hcl
resource "local_file" "foo" {
  content  = "foo!"
  filename = "${path.module}/foo.bar"
}
```

The following policy would fail:

```python
import "tfconfig"

main = rule { tfconfig.resources not contains "local_file" }
```

Further explanation of the namespace will be in the context of resources.
As mentioned, when operating on data sources, use the same syntax, except with `data` in place of `resources`.

### Value: `config`

* **Value Type:** A string-keyed map of values.

The `config` value within the [resource namespace](#namespace-resources-data-sources) is a map of key-value pairs that directly map to Terraform config keys and values.

-> **With Terraform 0.11
or earlier**, if the config value is defined as an expression (and not a static value), the value will be in its raw, non-interpolated string. **With Terraform 0.12 or later**, any non-static values (such as interpolated strings) are not present and [`references`](#resources-value-references) should be used instead.

As an example, consider the following resource block:

```hcl
resource "local_file" "accounts" {
  content  = "some text"
  filename = "accounts.txt"
}
```

In this example, one might want to access `filename` to validate that the correct file name is used. Given the above example, the following policy would evaluate to `true`:

```python
import "tfconfig"

main = rule { tfconfig.resources.local_file.accounts.config.filename is "accounts.txt" }
```

### Value: `references`

* **Value Type:** A string-keyed map of list values containing strings.

-> **Note:** This value is only present when using Terraform 0.12 or later.

The `references` value within the [resource namespace](#namespace-resources-data-sources) contains the identifiers within non-constant expressions found in [`config`](#resources-value-config). See the [documentation on `references`](#references-with-terraform-0-12) for more information.

### Value: `provisioners`

* **Value Type:** List of [provisioner namespaces](#namespace-provisioners).

The `provisioners` value within the [resource namespace](#namespace-resources) represents the [provisioners][ref-tf-provisioners] within a specific resource. Provisioners are listed in the order they were provided in the configuration file.
While the `provisioners` value will be present within data sources, it will always be an empty map (in Terraform 0.11) or `null` (in Terraform 0.12) since data sources cannot actually have provisioners.

The data within a provisioner can be inspected via the returned [provisioner namespace](#namespace-provisioners).

[ref-tf-provisioners]: /terraform/language/resources/provisioners/syntax

## Namespace: Provisioners

The **provisioner namespace** represents the configuration for a particular [provisioner][ref-tf-provisioners] within a specific resource.

### Value: `config`

* **Value Type:** A string-keyed map of values.

The `config` value within the [provisioner namespace](#namespace-provisioners) represents the values of the keys within the provisioner.

-> **With Terraform 0.11 or earlier**, if the config value is defined as an expression (and not a static value), the value will be in its raw, non-interpolated string. **With Terraform 0.12 or later**, any non-static values (such as interpolated strings) are not present and [`references`](#provisioners-value-references) should be used instead.

As an example, given the following resource block:

```hcl
resource "null_resource" "foo" {
  # ...

  provisioner "local-exec" {
    command = "echo ${self.private_ip} > file.txt"
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfconfig"

main = rule { tfconfig.resources.null_resource.foo.provisioners[0].config.command is "echo ${self.private_ip} > file.txt" }
```

### Value: `references`

* **Value Type:** A string-keyed map of list values containing strings.

-> **Note:** This value is only present when using Terraform 0.12 or later.

The `references` value within the [provisioner namespace](#namespace-provisioners) contains the identifiers within non-constant expressions found in [`config`](#provisioners-value-config). See the [documentation on `references`](#references-with-terraform-0-12) for more information.
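As an illustration, a Terraform 0.12 variant of the `config.command` check above could inspect `references` instead. This sketch assumes the interpolated identifier (here `self.private_ip`) appears in the `references.command` list, following the same pattern as the resource-level `references` examples:

```python
import "tfconfig"

# With Terraform 0.12+, ${self.private_ip} is not present in
# config.command; look for the identifier in references instead.
main = rule {
  tfconfig.resources.null_resource.foo.provisioners[0].references.command contains "self.private_ip"
}
```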
### Value: `type`

* **Value Type:** String.

The `type` value within the [provisioner namespace](#namespace-provisioners) represents the type of the specific provisioner.

As an example, in the following resource block:

```hcl
resource "null_resource" "foo" {
  # ...

  provisioner "local-exec" {
    command = "echo ${self.private_ip} > file.txt"
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfconfig"

main = rule { tfconfig.resources.null_resource.foo.provisioners[0].type is "local-exec" }
```

## Namespace: Module Configuration

The **module configuration** namespace displays data on _module configuration_ as it is given within a `module` block. This means that the namespace concerns itself with the contents of the declaration block (example: the `source` parameter and variable assignment keys), not the