Dataset columns: question (string, lengths 11 to 28.2k), answer (string, lengths 26 to 27.7k), tag (string, 130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
The following code snippet is from my Terraform configuration that creates an Azure SignalR Service: output "signalrserviceconnstring" { value = azurerm_signalr_service.mysignalrservice.primary_connection_string description = "signalR service's primary connection string" sensitive = true } I got an error when sensitive = true was not included, but with it set I still do not see the output value on the console. What's the solution or workaround for this problem?
The entire point of sensitive = true is to prevent the values from being displayed on the console every time you run terraform apply. You have to output the sensitive value explicitly, like this: terraform output signalrserviceconnstring I highly suggest reading the documentation.
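If you need the bare connection string without quotes (for example to pipe into another command), newer Terraform versions (0.15 and later) also accept a -raw flag; a small sketch:

```sh
# prints the sensitive output unquoted; requires Terraform 0.15+
terraform output -raw signalrserviceconnstring
```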
Terraform
67,650,019
58
Does Terraform support conditional attributes? I only want to set an attribute depending on a variable's value. Example: resource "aws_ebs_volume" "my_volume" { availability_zone = "xyz" size = 30 if ${var.staging_mode} == true: snapshot_id = "a_specific_snapshot_id" endif } The above if statement enclosing the attribute snapshot_id is what I'm looking for. Does Terraform support such attribute inclusion based on a variable's value?
Terraform 0.12 (yet to be released) will also bring support for HCL2 which allows you to use nullable arguments with something like this: resource "aws_ebs_volume" "my_volume" { availability_zone = "xyz" size = 30 snapshot_id = var.staging_mode ? local.a_specific_snapshot_id : null } Nullable arguments are covered in this 0.12 preview guide. For versions of Terraform before 0.12, Markus's answer is probably your best bet, although I'd be more explicit with the count with something like this: resource "aws_ebs_volume" "staging_volume" { count = "${var.staging_mode ? 1 : 0}" availability_zone = "xyz" size = 30 snapshot_id = "a_specific_snapshot_id" } resource "aws_ebs_volume" "non_staging_volume" { count = "${var.staging_mode ? 0 : 1}" availability_zone = "xyz" size = 30 } Note that the resource names must be unique or Terraform will complain. This then causes issues if you need to refer to the EBS volume, such as with an aws_volume_attachment, because in pre-0.12 Terraform the ternary expression is not lazy, so something like this doesn't work: resource "aws_volume_attachment" "ebs_att" { device_name = "/dev/sdh" volume_id = "${var.staging_mode ? aws_ebs_volume.staging_volume.id : aws_ebs_volume.non_staging_volume.id}" instance_id = "${aws_instance.web.id}" } This is because it will attempt to evaluate both sides of the ternary even though only one can be valid at any point. In Terraform 0.12 this will no longer be the case, but obviously you could solve it more easily with the nullable arguments.
Terraform
51,496,445
56
I have a Terraform 0.11 project with 30-40 different resources. I would like to delete all of them except a few - and those few are logically related to each other. I was looking for something close to terraform destroy --except=resource-id but that of course doesn't exist. Is there a way to achieve that without too much scripting (Terraform admins have various OSs)? Would using modules make that process easier perhaps?
There is currently no --except feature in the terraform destroy command. If you really want to do that, and you know what you are doing, here is the workaround. # list all resources terraform state list # remove the resource(s) you don't want to destroy # you can exclude more than one if required terraform state rm <resource_to_be_excluded> # destroy the whole stack except the excluded resource(s) terraform destroy So why do these commands achieve what you want? The state (*.tfstate) is used by Terraform to map real-world resources to your configuration and to keep track of metadata. terraform state rm only removes a record (resource) from the state file (*.tfstate); it doesn't destroy the real resource. Since you don't run terraform apply or terraform refresh after terraform state rm, Terraform no longer knows the excluded resource was created at all. When you run terraform destroy, it has no record of that excluded resource and will not destroy it; it will destroy the rest. Later you still have the chance to import the resource back with the terraform import command if you want.
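A concrete illustration of that workflow (the resource address and bucket name below are hypothetical placeholders, not taken from the question):

```sh
# keep this S3 bucket: drop it from state so destroy ignores it
terraform state rm aws_s3_bucket.keep_me

# destroy everything still tracked in state
terraform destroy

# later, bring the surviving bucket back under Terraform management
terraform import aws_s3_bucket.keep_me my-kept-bucket-name
```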
Terraform
55,265,203
55
I defined an aws_cloudwatch_event_target in Terraform to fire an event from CloudWatch to Lambda. The input field holds the event parameter, for example: resource "aws_cloudwatch_event_target" "data" { rule = "${aws_cloudwatch_event_rule.scheduler.name}" target_id = "finance_producer_cloudwatch" arn = "${aws_lambda_function.finance_data_producer.arn}" input = "{\"test\": [\"111\"]}" } I wonder how I can load the input JSON data from an external file.
The answer here depends on a few different questions: Is this file a static part of your configuration, checked in to version control alongside your .tf files, or is it dynamically generated as part of the apply process? Do you want to use the file contents literally, or do you need to substitute values into it from elsewhere in the Terraform configuration? These two questions form a matrix of four different answers:

|              | Literal Content          | Include Values from Elsewhere |
|--------------|--------------------------|-------------------------------|
| Static File  | file(...) function       | templatefile(...) function    |
| Dynamic File | local_file data source   | template_file data source     |

I'll describe each of these four options in more detail below. A common theme in all of these examples will be references to path.module, which evaluates to the path where the current module is loaded from. Another way to think about that is that it is the directory containing the current .tf file. Accessing files in other directories is allowed, but in most cases it's appropriate to keep things self-contained in your module by keeping the data files and the configuration files together. Terraform strings are sequences of unicode characters, so Terraform can only read files containing valid UTF-8 encoded text. For JSON that's no problem, but worth keeping in mind for other file formats that might not conventionally be UTF-8 encoded. The file function The file function reads the literal content of a file from disk as part of the initial evaluation of the configuration. The content of the file is treated as if it were a literal string value for validation purposes, and so the file must exist on disk (and usually, in your version control) as a static part of your configuration, as opposed to being generated dynamically during terraform apply. resource "aws_cloudwatch_event_target" "data" { rule = aws_cloudwatch_event_rule.scheduler.name target_id = "finance_producer_cloudwatch" arn = aws_lambda_function.finance_data_producer.arn input = file("${path.module}/input.json") } This is the most common and simplest option. If the file function is sufficient for your needs then it's the best option to use as a default choice. The templatefile function The templatefile function is similar to the file function, but rather than just returning the file contents literally it instead parses the file contents as a string template and then evaluates it using a set of local variables given in its second argument. This is useful if you need to pass some data from elsewhere in the Terraform configuration, as in this example: resource "aws_cloudwatch_event_target" "data" { rule = aws_cloudwatch_event_rule.scheduler.name target_id = "finance_producer_cloudwatch" arn = aws_lambda_function.finance_data_producer.arn input = templatefile("${path.module}/input.json.tmpl", { instance_id = aws_instance.example.id }) } In input.json.tmpl you can use the Terraform template syntax to substitute that variable value: {"instance_id":${jsonencode(instance_id)}} In cases like this where the whole result is JSON, I'd suggest just generating the whole result using jsonencode, since then you can let Terraform worry about the JSON escaping etc and just write the data structure in Terraform's object syntax: ${jsonencode({ instance_id = instance_id })} As with file, because templatefile is a function it gets evaluated during initial decoding of the configuration and its result is validated as a literal value.
The template file must therefore also be a static file that is distributed as part of the configuration, rather than a dynamically-generated file. The local_file data source Data sources are special resource types that read an existing object or compute a result, rather than creating and managing a new object. Because they are resources, they can participate in the dependency graph and can thus make use of objects (including local files) that are created by other resources in the same Terraform configuration during terraform apply. The local_file data source belongs to the hashicorp/local provider and is essentially the data source equivalent of the file function. In the following example, I'm using var.input_file as a placeholder for any reference to a file path that is created by some other resource in the same configuration. In a real example, that is most likely to be a direct reference to an attribute of a resource. data "local_file" "input" { filename = var.input_file } resource "aws_cloudwatch_event_target" "data" { rule = aws_cloudwatch_event_rule.scheduler.name target_id = "finance_producer_cloudwatch" arn = aws_lambda_function.finance_data_producer.arn input = data.local_file.input.content } The template_file data source NOTE: Since I originally wrote this answer, the provider where template_file was implemented has been declared obsolete and no longer maintained, and there is no replacement. In particular, the provider was archived prior to the release of Apple Silicon and so there is no available port for macOS on that architecture. The Terraform team does not recommend rendering of dynamically-loaded templates, because it pushes various errors that could normally be detected at plan time to be detected only during apply time instead. I've retained this content as I originally wrote it in case it's useful, but I would suggest treating this option as a last resort. The template_file data source is the data source equivalent of the templatefile function. It's similar in usage to local_file though in this case we populate the template itself by reading it as a static file, using either the file function or local_file as described above depending on whether the template is in a static file or a dynamically-generated one, though if it were a static file we'd prefer to use the templatefile function and so we'll use the local_file data source here: data "local_file" "input_template" { filename = var.input_template_file } data "template_file" "input" { template = data.local_file.input_template.content vars = { instance_id = aws_instance.example.id } } resource "aws_cloudwatch_event_target" "data" { rule = aws_cloudwatch_event_rule.scheduler.name target_id = "finance_producer_cloudwatch" arn = aws_lambda_function.finance_data_producer.arn input = data.template_file.input.rendered } The templatefile function was added in Terraform 0.12.0, so you may see examples elsewhere of using the template_file data source to render static template files. That is an old pattern, now deprecated in Terraform 0.12, because the templatefile function makes for a more direct and readable configuration in most cases. One quirk of the template_file data source as opposed to the templatefile function is that the data source belongs to the hashicorp/template provider rather than to Terraform Core, and so which template features are available in it will depend on which version of the provider is installed rather than which version of Terraform CLI is installed. 
The template provider is likely to lag behind Terraform Core in terms of which template language features are available, which is another reason to prefer the templatefile function where possible. Other Possibilities This question was specifically about reading data from a file, but for completeness I also want to note that for small JSON payloads it can sometimes be preferable to inline them directly in the configuration as a Terraform data structure and convert to JSON using jsonencode, like this: resource "aws_cloudwatch_event_target" "data" { rule = aws_cloudwatch_event_rule.scheduler.name target_id = "finance_producer_cloudwatch" arn = aws_lambda_function.finance_data_producer.arn input = jsonencode({ instance_id = aws_instance.example.id }) } Writing the data structure inline as a Terraform expression means that a future reader can see directly what will be sent without needing to refer to a separate file. However, if the data structure is very large and complicated then it can hurt overall readability to include it inline because it could overwhelm the other configuration in the same file. Which option to choose will therefore depend a lot on the specific circumstances, but always worth considering whether the indirection of a separate file is the best choice for readability. Terraform also has a yamlencode function which can do similarly for YAML-formatted data structures, either directly inside a .tf file or in an interpolation sequence in an external template.
Terraform
57,454,591
53
I have a problem with Terraform on macOS Ventura 13.3.1. When I try to initialize with terragrunt init, I get a message saying "Terraform will damage your computer". My colleague is using an M1 and Terraform version 1.0.11 and he doesn't have this problem. I tried the latest version and also 1.0.11, but I still get this error. I installed Terraform via tfenv.
Looks like the rotated signing key was the issue for me too (and probably for every Mac user of Terraform). Reinstalling with brew solved this for me. I keep previous versions in case I need them, so I have a symlink to the current version; remove that first: which terraform # remove my symlink so brew can replace it # rm '/usr/local/bin/terraform' Then update with brew (as recommended by HashiCorp): brew tap hashicorp/tap brew install hashicorp/tap/terraform
Terraform
76,129,509
53
I have two environment variables. One is TF_VAR_UN and another is TF_VAR_PW. Then I have a terraform file that looks like this. resource "google_container_cluster" "primary" { name = "marcellus-wallace" zone = "us-central1-a" initial_node_count = 3 master_auth { username = ${env.TF_VAR_UN} password = ${env.TF_VAR_PW} } node_config { oauth_scopes = [ "https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring" ] } } The two values I'd like to replace with the environment variables TF_VAR_UN and TF_VAR_PW are the values username and password. I tried what is shown above, with no success, and I've toyed around with a few other things but always get syntax issues.
I would try something more like this, which seems closer to the documentation. variable "UN" { type = string } variable "PW" { type = string } resource "google_container_cluster" "primary" { name = "marcellus-wallace" zone = "us-central1-a" initial_node_count = 3 master_auth { username = var.UN password = var.PW } node_config { oauth_scopes = [ "https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring" ] } } With the CLI command being the below. TF_VAR_UN=foo TF_VAR_PW=bar terraform apply
Terraform
36,629,367
52
In Terraform, I'm trying to create a module that includes a map with variable keys. I'm not sure if this is possible but I've tried the following without success. resource "aws_instance" "web" { ami = "${var.base_ami}" availability_zone = "${var.region_a}" instance_type = "${var.ec2_instance_size}" security_groups = ["sec1"] count = "${var.ec2_instance_count}" tags { Name = "${var.role} ${var_env}" role = "${var.app_role}" ${var.app_role} = "${var_env}" } } and this: tags { Name = "${var.role} ${var_env}" } tags."${var.role}" = "${var.env}" Any ideas? Is this not possible with Terraform currently?
There's (now) a lookup function supported in the terraform interpolation syntax, that allows you to lookup dynamic keys in a map. Using this, I can now do stuff like: output "image_bucket_name" { value = "${lookup(var.image_bucket_names, var.environment, "No way this should happen")}" } where: variable "image_bucket_names" { type = "map" default = { development = "bucket-dev" staging = "bucket-for-staging" preprod = "bucket-name-for-preprod" production = "bucket-for-production" } } and environment is a simple string variable.
Terraform
35,491,987
51
Is there a way to use something like this in Terraform? count = "${var.I_am_true}"&&"${var.I_am_false}"
This is more appropriate in the current version (0.12.x). The supported operators are: Equality: == and != Numerical comparison: >, <, >=, <= Boolean logic: &&, ||, unary ! https://www.terraform.io/docs/configuration/interpolation.html#conditionals condition_one AND condition_two: count = var.condition_one && var.condition_two ? 1 : 0 condition_one AND NOT condition_two: count = var.condition_one && !var.condition_two ? 1 : 0 condition_one OR condition_two: count = var.condition_one || var.condition_two ? 1 : 0
Terraform
39,479,849
51
Using Terraform modules with a git branch as a source, I am referring to: git::ssh://private_server:myport/kbf/my_repository.git//ecs-cluster?ref=v0.0.1 In my module source parameter, this works great and provides me with my module at tag v0.0.1 on master. However I'd like to specify a branch, not a tag, but am not sure how to do this.
As mentioned in the Terraform documentation on module sources, the ref query parameter accepts a branch name as well as a tag: git::ssh://private_server:myport/kbf/my_repository.git//ecs-cluster?ref=myBranch
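For context, this is how that source string would sit inside a module block (the module name here is only illustrative):

```hcl
module "ecs_cluster" {
  # same address as before, but ref now points at a branch instead of a tag
  source = "git::ssh://private_server:myport/kbf/my_repository.git//ecs-cluster?ref=myBranch"
}
```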
Terraform
52,538,920
50
I'm trying to do a rather simple task in Terraform and it's not working: tfvars: hosted_zone = "example.com" domain = "my.${var.hosted_zone}" route_53_record: resource "aws_route53_record" "regional" { zone_id = "${data.aws_route53_zone.selected.zone_id}" name = "${var.domain}" type = "A" ttl = "300" records = ["4.4.4.4"] } When I run terraform plan I'm getting this: + aws_route53_record.regional id: <computed> allow_overwrite: "true" fqdn: <computed> name: "my.${var.hosted_zone}" records.#: "1" records.3178571330: "4.4.4.4" ttl: "300" type: "A" zone_id: "REDACTED" domain should be my.example.com. How do I join the variable hosted_zone and a string to form domain?
You can't use interpolation in a tfvars file. Instead you could either join it directly in your Terraform like this: terraform.tfvars hosted_zone = "example.com" domain = "my" main.tf resource "aws_route53_record" "regional" { zone_id = data.aws_route53_zone.selected.zone_id name = "${var.domain}.${var.hosted_zone}" type = "A" ttl = "300" records = ["4.4.4.4"] } Or, if you always need to compose these things together you could use a local: locals { domain = "${var.domain}.${var.hosted_zone}" } resource "aws_route53_record" "regional" { zone_id = data.aws_route53_zone.selected.zone_id name = local.domain type = "A" ttl = "300" records = ["4.4.4.4"] }
Terraform
54,752,049
50
Is there a way to write a conditional statement with multiple branches in terraform? I'm setting up a terraform module to create an Amazon Aurora cluster. I need to have an option for cross region replication so I need to decide the region of the replica in relation to the source region.
This is one way using the coalesce() function: locals{ prod = "${var.environment == "PROD" ? "east" : ""}" prod2 = "${var.environment == "PROD2" ? "west2" : ""}" nonprod = "${var.environment != "PROD" && var.environment != "PROD2" ? "west" : ""}" region = "${coalesce(local.prod,local.prod2, local.nonprod)}" }
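If you are on Terraform 0.12 or later, the same approach reads a little cleaner without the interpolation wrappers; a sketch of the equivalent:

```hcl
locals {
  prod    = var.environment == "PROD" ? "east" : ""
  prod2   = var.environment == "PROD2" ? "west2" : ""
  nonprod = var.environment != "PROD" && var.environment != "PROD2" ? "west" : ""

  # coalesce() returns the first non-empty string, so exactly one branch wins
  region = coalesce(local.prod, local.prod2, local.nonprod)
}
```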
Terraform
55,555,963
50
I'm just getting to grips with Terraform (so apologies if this is a stupid question). I'm setting up an azure vnet with a set of subnets, each subnet has a routing table that sends traffic via a firewall. It looks like the subnet and route table combination would make a good re-useable module. By convention, I'd like to create the subnet and the route table in the same resource group and location as the parent vnet. If I provide all the values needed in the module as individual values, the module works ok :) What I'd rather do is effectively pass the resource that represents the parent vnet into the module as a "parameter" and have the module read things like the resource group name, location and vnet name directly from the vnet resource, so that: - I type less (I create the module instance with just the vnet, rather than seperate values for vnet name, resource group name and location) - it removes the opportunity for error when setting the location and resource group names on the route table and subnet (if I read the values from the parent vnet in the module then the values will be the same as the parent vnet) Currently the module variables are defined as: variable "parent-vnet-name" {} variable "parent-vnet-resource-group-name" {} variable "parent-vnet-location" {} variable "subnet-name" { type = "string" } variable "subnet-address-prefix" {} variable "firewall-ip-private" {} and the module as: resource "azurerm_route_table" "rg-mgt-network__testtesttest" { name = "${var.subnet-name}" location = "${var.parent-vnet-location}" resource_group_name = "${var.parent-vnet-resource-group-name}" route { name = "Default" address_prefix = "0.0.0.0/0" next_hop_type = "VirtualAppliance" next_hop_in_ip_address = "${var.firewall-ip-private}" } } Really what I'd like to do is more like variables: variable "parent-vnet" {} variable "subnet-name" { type = "string" } variable "subnet-address-prefix" {} variable "firewall-ip-private" {} with module doing something like: resource "azurerm_route_table" "rg-mgt-network__testtesttest" { name = "${var.subnet-name}" location = "${var.parent-vnet.location}" resource_group_name = "${var.parent-vnet.resource-group-name}" route { name = "Default" address_prefix = "0.0.0.0/0" next_hop_type = "VirtualAppliance" next_hop_in_ip_address = "${var.firewall-ip-private}" } } I've played around with a variety of things (such as trying to pass the vnet without specifying an attribute name (validation failure) or using a data source to pull back the vnet (the data sources doesn't have the resource group info) ) but haven't got anywhere so I'm wondering if I've missed something? Cheers, Andy
This is possible with Terraform > 0.12. You can use the object type, but you have to explicitly list the fields that you use within your module. # module's variables.tf variable "parent_vnet" { # List each field of the parent vnet that your module will access type = object({ name = string location = string resource_group_name = string }) } # caller resource "azurerm_virtual_network" "my_parent_vnet" { # ... } module "my-module" { source = "./modules/my-module" # illustrative path to this module parent_vnet = azurerm_virtual_network.my_parent_vnet }
Terraform
50,740,412
48
I have the following list-of-objects variable: variable "objects" { type = "list" description = "list of objects" default = [ { id = "name1" attribute = "a" }, { id = "name2" attribute = "a,b" }, { id = "name3" attribute = "d" } ] } How do I get the element with id = "name2"?
You get the map with id="name2" with the following expression: var.objects[index(var.objects.*.id, "name2")] For a quick test, run the following one-liner in terraform console: [{id = "name1", attribute = "a"}, {id = "name2", attribute = "a,b"}, {id = "name3", attribute = "d"}][index([{id = "name1", attribute = "a"}, {id = "name2", attribute = "a,b"}, {id = "name3", attribute = "d"}].*.id, "name2")]
Terraform
52,119,400
48
While using Terraform to deploy a fairly large infrastructure in AWS, our remote tfstate got corrupted and was deleted. From the documentation, I gather that terraform refresh should query AWS to get the real state of the infrastructure and update the tfstate accordingly, but that does not happen: my tfstate is untouched and plan + apply give a lot of Already existing errors. What does terraform refresh really do?
terraform refresh attempts to find any resources held in the state file and update with any drift that has happened in the provider outside of Terraform since it was last ran. For example, lets say your state file contains 3 EC2 instances with instance ids of i-abc123, i-abc124, i-abc125 and then you delete i-abc124 outside of Terraform. After running terraform refresh, a plan would show that it needs to create the second instance while a destroy plan would show that it only needs to destroy the first and third instances (and not fail to destroy the missing second instance). Terraform makes a very specific decision to not interfere with things that aren't being managed by Terraform. That means if the resource doesn't exist in its state file then it absolutely will not touch it in any way. This enables you to run Terraform alongside other tools as well as making manual changes in the AWS console. It also means that you can run Terraform in different contexts simply by providing a different state file to use, allowing you to split your infrastructure up into multiple state files and save yourself from catastrophic state file corruption. To get yourself out of your current hole I suggest you use terraform import liberally to get things back into your state file or, if possible, manually destroy everything outside of Terraform and start from scratch. In future I would suggest both splitting out state files to apply for more granular contexts and also to store your remote state in an S3 bucket with versioning enabled. You could also look towards tools like Terragrunt to lock your state file to help avoid corruption or wait for the native state file locking in the upcoming 0.9 release of Terraform.
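For reference, terraform import takes a resource address from your configuration plus the provider-specific ID of the existing object; the address and instance ID below are placeholders only:

```sh
# re-attach an existing EC2 instance to the aws_instance.web resource in your configuration
terraform import aws_instance.web i-abc123
```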
Terraform
42,628,660
47
How do you check if a terraform string contains another string? For example, I want to treat terraform workspaces with "tmp" in the name specially (e.g. allowing rds instances to be deleted without a snapshot), so something like this: locals { is_tmp = "${"tmp" in terraform.workspace}" } As far as I can tell, the substr interpolation function doesn't accomplish this.
For Terraform 0.12.xx you are apparently supposed to use regexall to do this. From the manual for Terraform 0.12.xx (regexall() documentation): regexall can also be used to test whether a particular string matches a given pattern, by testing whether the length of the resulting list of matches is greater than zero. Example from the manual: > length(regexall("[a-z]+", "1234abcd5678efgh9")) 2 > length(regexall("[a-z]+", "123456789")) > 0 false Applied to your case, Terraform 0.12.xx syntax should be something like: locals { is_tmp = length(regexall(".*tmp.*", terraform.workspace)) > 0 } The manual also specifically says not to use regex for this but to use regexall instead: If the given pattern does not match at all, the regex raises an error. To test whether a given pattern matches a string, use regexall and test that the result has length greater than zero. As stated above, this is because you will actually get an error at plan time when regex fails to match in the later 0.12.xx versions that are out now. This is how I found this out and why I posted this new answer here.
Terraform
47,243,474
47
Just today, whenever I run terraform apply, I see an error something like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. It was working yesterday. Following is the command I run: terraform init && terraform apply Following is the list of initialized provider plugins: - Finding latest version of hashicorp/archive... - Finding latest version of hashicorp/aws... - Finding latest version of hashicorp/null... - Installing hashicorp/null v3.1.0... - Installed hashicorp/null v3.1.0 (signed by HashiCorp) - Installing hashicorp/archive v2.2.0... - Installed hashicorp/archive v2.2.0 (signed by HashiCorp) - Installing hashicorp/aws v4.0.0... - Installed hashicorp/aws v4.0.0 (signed by HashiCorp) Following are the errors: Acquiring state lock. This may take a few moments... Releasing state lock. This may take a few moments... ╷ │ Error: Value for unconfigurable attribute │ │ with module.ssm-parameter-store-backup.aws_s3_bucket.this, │ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this": │ 1: resource "aws_s3_bucket" "this" { │ │ Can't configure a value for "lifecycle_rule": its value will be decided │ automatically based on the result of applying this configuration. ╵ ╷ │ Error: Value for unconfigurable attribute │ │ with module.ssm-parameter-store-backup.aws_s3_bucket.this, │ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this": │ 1: resource "aws_s3_bucket" "this" { │ │ Can't configure a value for "server_side_encryption_configuration": its │ value will be decided automatically based on the result of applying this │ configuration. ╵ ╷ │ Error: Value for unconfigurable attribute │ │ with module.ssm-parameter-store-backup.aws_s3_bucket.this, │ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 3, in resource "aws_s3_bucket" "this": │ 3: acl = "private" │ │ Can't configure a value for "acl": its value will be decided automatically │ based on the result of applying this configuration. ╵ ERRO[0012] 1 error occurred: * exit status 1 My code is as follows: resource "aws_s3_bucket" "this" { bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket" acl = "private" server_side_encryption_configuration { rule { apply_server_side_encryption_by_default { kms_master_key_id = data.aws_kms_key.s3.arn sse_algorithm = "aws:kms" } } } lifecycle_rule { id = "backups" enabled = true prefix = "backups/" transition { days = 90 storage_class = "GLACIER_IR" } transition { days = 180 storage_class = "DEEP_ARCHIVE" } expiration { days = 365 } } tags = { Name = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket" Environment = var.environment } }
Terraform AWS Provider is upgraded to version 4.0.0 which is published on 10 February 2022. Major changes in the release include: Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource. Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details. Version 4.0.0 and 4.x.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12-0.15. The reason for this change by Terraform is as follows: To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource. Once updated, new aws_s3_bucket_* resources should be imported into Terraform state. So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor The new working code looks like this: resource "aws_s3_bucket" "this" { bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket" tags = { Name = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket" Environment = var.environment } } resource "aws_s3_bucket_acl" "this" { bucket = aws_s3_bucket.this.id acl = "private" } resource "aws_s3_bucket_server_side_encryption_configuration" "this" { bucket = aws_s3_bucket.this.id rule { apply_server_side_encryption_by_default { kms_master_key_id = data.aws_kms_key.s3.arn sse_algorithm = "aws:kms" } } } resource "aws_s3_bucket_lifecycle_configuration" "this" { bucket = aws_s3_bucket.this.id rule { id = "backups" status = "Enabled" filter { prefix = "backups/" } transition { days = 90 storage_class = "GLACIER_IR" } transition { days = 180 storage_class = "DEEP_ARCHIVE" } expiration { days = 365 } } } If you don't want to upgrade your Terraform AWS Provider version to 4.0.0, you can use the existing or older version by specifying it explicitly in the code as below: terraform { required_version = "~> 1.0.11" required_providers { aws = "~> 3.73.0" } }
Terraform
71,078,462
47
What exactly does this AWS role do? The most relevant bits seem to be: "Action": "sts:AssumeRole", and "Service": "ec2.amazonaws.com" The full role is here: resource "aws_iam_role" "test_role" { name = "test_role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "ec2.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } From: https://www.terraform.io/docs/providers/aws/r/iam_role.html
To understand the meaning of this it is necessary to understand some details of how IAM Roles work. An IAM role is similar to a user in its structure, but rather than it being accessed by a fixed set of credentials it is instead used by assuming the role, which means to request and obtain temporary API credentials that allow taking action with the privileges that are granted to the role. The sts:AssumeRole action is the means by which such temporary credentials are obtained. To use it, a user or application calls this API using some already-obtained credentials, such as a user's fixed access key, and it returns (if permitted) a new set of credentials to act as the role. This is the mechanism by which AWS services can call into other AWS services on your behalf, by which IAM Instance Profiles work in EC2, and by which a user can temporarily switch access level or accounts within the AWS console. The assume role policy determines which principals (users, other roles, AWS services) are permitted to call sts:AssumeRole for this role. In this example, the EC2 service itself is given access, which means that EC2 is able to take actions on your behalf using this role. This role resource alone is not useful, since it doesn't have any IAM policies associated and thus does not grant any access. Thus an aws_iam_role resource will always be accompanied by at least one other resource to specify its access permissions. There are several ways to do this: Use aws_iam_role_policy to attach a policy directly to the role. In this case, the policy will describe a set of AWS actions the role is permitted to execute, and optionally other constraints. Use aws_iam_policy to create a standalone policy, and then use aws_iam_policy_attachment to associate that policy with one or more roles, users, and groups. This approach is useful if you wish to attach a single policy to multiple roles and/or users. Use service-specific mechanisms to attach policies at the service level. This is a different way to approach the problem, where rather than attaching the policy to the role, it is instead attached to the object whose access is being controlled. The mechanism for doing this varies by service, but for example the policy attribute on aws_s3_bucket sets bucket-specific policies; the Principal element in the policy document can be used to specify which principals (e.g. roles) can take certain actions. IAM is a flexible system that supports several different approaches to access control. Which approach is right for you will depend largely on how your organization approaches security and access control concerns: managing policies from the role perspective, with aws_iam_role_policy and aws_iam_policy_attachment, is usually appropriate for organizations that have a centralized security team that oversees access throughout an account, while service-specific policies delegate the access control decisions to the person or team responsible for each separate object. Both approaches can be combined, as part of a defense in depth strategy, such as using role- and user-level policies for "border" access controls (controlling access from outside) and service-level policies for internal access controls (controlling interactions between objects within your account). More details on roles can be found in the AWS IAM guide IAM Roles. See also Access Management, which covers the general concepts of access control within IAM.
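For example, the first option (attaching a policy directly to the role with aws_iam_role_policy) could look something like the sketch below; the S3 read-only permissions are just a stand-in for whatever access your EC2 instances actually need:

```hcl
resource "aws_iam_role_policy" "test_policy" {
  name = "test_policy"
  role = "${aws_iam_role.test_role.id}"

  # hypothetical permissions: allow instances assuming this role to read from S3
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "*"
    }
  ]
}
EOF
}
```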
Terraform
44,623,056
46
I have some code in the general form: variable "foo" { type = "list" default = [ 1,2,3 ] } resource "bar_type" "bar" { bar_field = "${var.foo}" } I want to append an additional value to bar_field without modifying foo. How can I do this? I don't see any sort of concatenating or appending functions in their docs. This is Terraform 0.11.x.
You can use the concat function for this. Expanding upon the example in your question: variable "foo" { type = "list" default = [ 1,2,3 ] } # assume a value of 4 of type number is the additional value to be appended resource "bar_type" "bar" { bar_field = "${concat(var.foo, [4])}" } which appends to the value assigned to bar_field while ensuring var.foo remains unchanged.
Terraform
55,957,697
46
When running terraform plan or terraform apply with a list provided to for_each an error occurs saying Error: Invalid for_each argument on main.tf line 2, in resource "aws_ssm_parameter" "foo": 2: for_each = ["a", "b"] The given "for_each" argument value is unsuitable: the "for_each" argument must be a map, or set of strings, and you have provided a value of type tuple. A minimum example to reproduce this error is this: resource "aws_ssm_parameter" "foo" { for_each = ["a", "b"] name = "foo-${each.value}" type = "String" value = "bar-${each.value}" }
Explanation This error is often caused by passing a list to for_each, but for_each only works with unordered data types, i.e. with sets and maps. Solution The resolution depends on the situation. List of strings If the list is just a list of strings, the easiest fix is to add a toset() call to transform the list into a set that can be handled by for_each, like this: resource "aws_ssm_parameter" "foo" { for_each = toset(["a", "b"]) name = "foo-${each.value}" type = "String" value = "bar-${each.value}" } List that can be rearranged into a map If the input is a list but can easily be rearranged into a map, that is usually the best way. Say we have a list like this: locals { animals = [ { name = "Bello" age = 3 type = "dog" }, { name = "Minga" age = 4 type = "cat" }, ] } Then an appropriate re-structuring might be this: locals { animals = { Bello : { age = 3 type = "dog" }, Minga : { age = 4 type = "cat" } } } which then allows you to define: resource "aws_ssm_parameter" "foo" { for_each = local.animals name = each.key type = "String" value = "This is a ${each.value.type}, ${each.value.age} years old." } List that you do not want to rearrange Sometimes it is natural to have a list, e.g. coming from an output of a module that one does not control or from a resource that is defined with count. In such a situation, one can either work with count like this: resource "aws_ssm_parameter" "foo" { count = length(local.my_list) name = local.my_list[count.index].name type = "String" value = local.my_list[count.index].value } which works for a list of maps containing name and value as keys. Oftentimes, though, it is more appropriate to transform the list into a map instead, like this: resource "aws_ssm_parameter" "foo" { for_each = { for x in local.my_list : x.id => x } name = each.value.name type = "String" value = each.value.value } Here one should choose anything appropriate in place of x.id. If my_list is a list of objects, there is usually some common field like a name or key that can be used. The advantage of this approach over using count as above is that it behaves better when inserting or removing elements from the list. count will not notice the insertion or deletion as such and will hence update all resources following the place where the insertion took place, while for_each really only adds or removes the resource with the new or deleted id.
Terraform
62,264,013
46
I want to identify the public IP of the Terraform execution environment and add it to an AWS security group's inbound rules, to prevent access from other environments. Currently, I am manually editing the values in the variables.tf file. variables.tf variable public_ip_address { default = "xx" } I would like to execute the "curl ifconfig.co" command on the local host and automatically set the security group based on the result. Is there a way to do such a thing? I could do it by putting the result of local-exec in some variable, but I don't know how to do that. Thank you for reading my question.
There's an easier way to do that without any scripts. The trick is using a website such as icanhazip.com which returns your IP, so query it in your Terraform file as a data source: data "http" "myip" { url = "https://ipv4.icanhazip.com" } And whenever you want to use your IP, just reference data.http.myip.response_body (called body in older versions of the http provider), for example: ingress { from_port = 5432 to_port = 5432 protocol = "tcp" cidr_blocks = ["${chomp(data.http.myip.response_body)}/32"] } Note I used Terraform's chomp() function to remove any trailing whitespace or newline that comes with the body. You can use your IPv6 address with http://ipv6.icanhazip.com. Take care when using plain http://icanhazip.com because it can return either IPv4 or IPv6.
Terraform
46,763,287
45
I'm using the AWS VPC Terraform module to create a VPC. Additionally, I want to create and attach an Internet Gateway to this VPC using the aws_internet_gateway resource. Here is my code: module "vpc" "vpc_default" { source = "terraform-aws-modules/vpc/aws" name = "${var.env_name}-vpc-default" cidr = "10.0.0.0/16" enable_dns_hostnames = true } resource "aws_internet_gateway" "vpc_default_igw" { vpc_id = "${vpc.vpc_default.id}" tags { Name = "${var.env_name}-vpc-igw-vpcDefault" } } When I run terraform init, I get the following error: Initializing modules... - module.vpc Error: resource 'aws_internet_gateway.vpc_default_igw' config: unknown resource 'vpc.vpc_default' referenced in variable vpc.vpc_default.id How can I reference a resource created by a Terraform module?
Since you're using a module, you need to change the format of the reference slightly. Module outputs use the form ${module.<module name>.<output name>}. It's also important to note that you can only reference values the module exposes as outputs. In your specific case, this would become ${module.vpc.vpc_id}, based on the VPC module's outputs.
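Applied to the configuration in the question, the internet gateway would then reference the module output like this (a sketch keeping the 0.11-style syntax from the question):

```hcl
resource "aws_internet_gateway" "vpc_default_igw" {
  # vpc_id comes from the VPC module's output rather than a resource reference
  vpc_id = "${module.vpc.vpc_id}"

  tags {
    Name = "${var.env_name}-vpc-igw-vpcDefault"
  }
}
```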
Terraform
52,804,543
45
I upgraded to Terraform v0.12.16 and now I am getting a lot of messages that look like this: Warning: Interpolation-only expressions are deprecated on ../modules/test-notifier/test_notifier.tf line 27, in resource "aws_sns_topic_policy" "default": 27: arn = "${aws_sns_topic.default.arn}" Terraform 0.11 and earlier required all non-constant expressions to be provided via interpolation syntax, but this pattern is now deprecated. To silence this warning, remove the "${ sequence from the start and the }" sequence from the end of this expression, leaving just the inner expression. Template interpolation syntax is still used to construct strings from expressions when the template includes multiple interpolation sequences or a mixture of literal strings and interpolations. This deprecation applies only to templates that consist entirely of a single interpolation sequence. There are hundreds of these messages. Is there an automated way to fix them?
Warning: Interpolation-only expressions are deprecated on main.tf line 3, in provider "aws": 3: region = "${var.region}" I also got the above warning, which is due to the changed syntax for referencing variables in Terraform. See the example below: Old syntax: region = "${var.region}" # you will get the interpolation-only warning New syntax: region = var.region # no warning Check the syntax and correct it using any code editor.
Terraform
59,038,537
45
I have my S3 resource in Terraform with this configuration: locals { bucket_count = "${length(var.s3_config["bucket_names"])}" } resource "aws_s3_bucket" "s3_bucket" { count = "${local.bucket_count}" bucket = "${format("%s-%s", element(var.s3_config["bucket_names"], count.index), var.region)}" acl = "private" region = "${var.region}" tags { Name = "${format("%s-%s", element(var.s3_config["bucket_names"], count.index), var.region)}" } } and I want to set an output variable for all created buckets, so I created a file named outputs.tf with the content: output "buckets" { value = "${aws_s3_bucket.s3_bucket.*.bucket}" } output "buckets_arns" { value = "${aws_s3_bucket.s3_bucket.*.arn}" } When I apply the configuration it works and I see the outputs in the terraform.tfstate file, but when I call terraform output I am told that there is no output or that the output is empty. What am I doing wrong?
Try this: output "buckets" { value = ["${aws_s3_bucket.s3_bucket.*.bucket}"] } output "buckets_arns" { value = ["${aws_s3_bucket.s3_bucket.*.arn}"] }
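If you later move to Terraform 0.12 or newer, the same outputs can be written without the interpolation quoting and extra brackets; a sketch:

```hcl
output "buckets" {
  value = aws_s3_bucket.s3_bucket[*].bucket
}

output "buckets_arns" {
  value = aws_s3_bucket.s3_bucket[*].arn
}
```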
Terraform
52,040,798
44
I am using the Terraform Snowflake provider. I want to use the ${terraform.workspace} variable inside the terraform block. terraform { required_providers { snowflake = { source = "chanzuckerberg/snowflake" version = "0.20.0" } } backend "s3" { bucket = "data-pf-terraform-backend-${terraform.workspace}" key = "backend/singlife/landing" region = "ap-southeast-1" dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}" } } But I got this error. Are variables not available in this scope? Error: Variables not allowed on provider.tf line 9, in terraform: 9: bucket = "data-pf-terraform-backend-${terraform.workspace}" Variables may not be used here. Error: Variables not allowed on provider.tf line 12, in terraform: 12: dynamodb_table = "data-pf-snowflake-terraform-state-lock-${terraform.workspace}" Variables may not be used here.
Set backend.tf terraform { backend "azurerm" {} } Create a file backend.conf storage_account_name = "deploymanager" container_name = "terraform" key = "production.terraform.tfstate" Run: terraform init -backend-config=backend.conf
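The same partial-configuration pattern applies to the S3 backend from the question; a sketch where the workspace-specific values (here assuming an example workspace named dev, and an illustrative file name backend-dev.conf) move into a per-environment config file:

```hcl
# backend.tf: leave the backend block empty
terraform {
  backend "s3" {}
}
```

```hcl
# backend-dev.conf: values that would otherwise need interpolation
bucket         = "data-pf-terraform-backend-dev"
key            = "backend/singlife/landing"
region         = "ap-southeast-1"
dynamodb_table = "data-pf-snowflake-terraform-state-lock-dev"
```

Then run: terraform init -backend-config=backend-dev.conf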
Terraform
65,838,989
43
We are looking into Terraform as a way of managing our infrastructure and it looks very interesting. However, currently our corporate proxy/firewall is causing terraform apply to fail due to security restrictions. While we wait for these network issues to be resolved, is there any way that I can experiment with Terraform locally without needing to connect to Azure or AWS? Perhaps with VirtualBox?
Terraform supports a bunch of providers, but the vast majority of them are public cloud based. However, you could set up a local VMware vSphere cluster and use the vSphere provider to interact with that to get you going. There's also a provider for OpenStack if you want to set up an OpenStack cluster. Alternatively you could try using something like HPE's Eucalyptus which provides API compatibility with AWS, but on-premises. That said, unless you already have a datacenter running VMware, all of those options are pretty awful and will take a lot of effort to get setup so you may be best waiting for your firewall to be opened up instead. There isn't unfortunately a nice frictionless first party implementation of a VirtualBox provider but you could try this third-party VirtualBox provider.
Terraform
39,211,000
42
I'm looking at using the new conditionals in Terraform v0.11 to basically turn a config block on or off depending on the environment. Here's the block that I'd like to make conditional, if, for example, I have a variable that turns it on for production. access_logs { bucket = "my-bucket" prefix = "${var.environment_name}-alb" } I think I have the logic for checking the environment conditional, but I don't know how to stick the above configuration into the logic. "${var.environment_name == "production" ? 1 : 0 }" Is it possible to turn the access_logs block on and off via the environment_name variable? If this is not possible, is there a workaround?
One way to achieve this with TF 0.12 onwards is to use dynamic blocks: dynamic "access_logs" { for_each = var.environment_name == "production" ? [var.environment_name] : [] content { bucket = "my-bucket" prefix = "${var.environment_name}-alb" } } This will create one or zero access_logs blocks depending on the value of var.environment_name.
Terraform
42,461,753
42
Is there a way of abstracting the provider for all the modules defined in a project. for example, I have this project ├── modules │   ├── RDS │   └── VPC └── stacks ├── production │   └── main.tf └── staging └── main.tf and it works fine... the problem is with the definition of modules ├── RDS │ ├── README.md │ ├── main.tf │ ├── providers.tf │ └── variables.tf └── VPC ├── README.md ├── main.tf ├── providers.tf └── variables.tf the provider in both of these modules are exactly the same # providers.tf provider "aws" { region = "${var.region}" version = "~> 1.26" } and the variables in each module are different but they all have the region variable. # variables.tf variable "region" { default = "eu-central-1" description = "AWS region." } # other module dependent variables... is there a way to define those bits of information on the modules level so that I end up with something roughly like this ├── modules │ ├── providers.tf <<< include the *shared* provider definition block │ ├── variables.tf <<< include the *shared* region vaiable definition block │   ├── RDS │ │ ├── README.md │ │ ├── main.tf │ │ └── variables.tf │   └── VPC │   ├── README.md │   ├── main.tf │   └── variables.tf one last thing, the modules definitions most of the time have a resource attribute (pulling a module from the terraform registry... therefore I don't know if it's feasible to inherit both the source from the registry and a base module)
Right now it's not possible to achieve that. There have been previous discussions on GitHub about the same topic in the following issues: https://github.com/hashicorp/terraform/issues/5480 https://github.com/hashicorp/terraform/issues/4585 https://github.com/hashicorp/terraform/issues/2714 https://github.com/hashicorp/terraform/issues/1742 https://github.com/hashicorp/terraform/issues/1478 TL;DR: sharing variables between modules goes against Terraform core's clarity/explicitness principles. Workaround A workaround is to keep the *shared* files in the parent directory and use symlinks to add them to the modules.
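A minimal sketch of that symlink workaround, assuming the layout from the question (the link file names are illustrative):

```sh
# run from the modules/ directory, where the shared providers.tf and variables.tf live
ln -s ../providers.tf RDS/providers.tf
ln -s ../variables.tf RDS/shared_variables.tf
ln -s ../providers.tf VPC/providers.tf
ln -s ../variables.tf VPC/shared_variables.tf
```

Terraform loads every .tf file in a module directory, so each module picks up the shared definitions through the links while keeping its own module-specific variables.tf.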
Terraform
51,213,871
41
I need to ship my cloudwatch logs to a log analysis service. I've followed along with these articles here and here and got it working by hand, no worries. Now I'm trying to automate all this with Terraform (roles/policies, security groups, cloudwatch log group, lambda, and triggering the lambda from the log group). But I can't figure out how to use TF to configure AWS to trigger the lambda from the cloudwatch logs. I can link the two TF resources together by hand by doing the following (in the Lambda web console UI): go into the lambda function's "Triggers" section click "Add Trigger" select "cloudwatch logs" from the list of trigger types select the log group I want to trigger the lambda enter a filter name leave the filter pattern empty (implying trigger on all log streams) make sure "enable trigger" is selected click the submit button Once that's done, the lambda shows up on the cloudwatch logs console in the subscriptions column - displays as "Lambda (cloudwatch-sumologic-lambda)". I tried to create the subscription with the following TF resource: resource "aws_cloudwatch_log_subscription_filter" "cloudwatch-sumologic-lambda-subscription" { name = "cloudwatch-sumologic-lambda-subscription" role_arn = "${aws_iam_role.jordi-waf-cloudwatch-lambda-role.arn}" log_group_name = "${aws_cloudwatch_log_group.jordi-waf-int-app-loggroup.name}" filter_pattern = "logtype test" destination_arn = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}" } But it fails with: aws_cloudwatch_log_subscription_filter.cloudwatch-sumologic-lambda-subscription: InvalidParameterException: destinationArn for vendor lambda cannot be used with roleArn I found this answer about setting up a similar thing for a scheduled event, but that doesn't seem to be equivalent to what the console actions I described above do (the console UI method doesn't create an event/rule that I can see). Can someone give me a pointer on what I'm doing wrong please?
I had the aws_cloudwatch_log_subscription_filter resource defined incorrectly - you should not provide the role_arn argument in this situation. You also need to add an aws_lambda_permission resource (with a depends_on relationship defined on the filter or TF may do it in the wrong order). Note that the AWS lambda console UI adds the lambda permission for you invisibly, so beware that the aws_cloudwatch_log_subscription_filter will work without the permission resource if you happen to have done the same action before in the console UI. The necessary TF config looks like this (the last two resources are the relevant ones for configuring the actual cloudwatch->lambda trigger): // intended for application logs (access logs, modsec, etc.) resource "aws_cloudwatch_log_group" "test-app-loggroup" { name = "test-app" retention_in_days = 90 } resource "aws_security_group" "cloudwatch-sumologic-lambda-sg" { name = "cloudwatch-sumologic-lambda-sg" tags { Name = "cloudwatch-sumologic-lambda-sg" } description = "Security group for lambda to move logs from CWL to SumoLogic" vpc_id = "${aws_vpc.dev-vpc.id}" } resource "aws_security_group_rule" "https-egress-cloudwatch-sumologic-to-internet" { type = "egress" from_port = 443 to_port = 443 protocol = "tcp" security_group_id = "${aws_security_group.cloudwatch-sumologic-lambda-sg.id}" cidr_blocks = ["0.0.0.0/0"] } resource "aws_iam_role" "test-cloudwatch-lambda-role" { name = "test-cloudwatch-lambda-role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "lambda.amazonaws.com" }, "Effect": "Allow" } ] } EOF } resource "aws_iam_role_policy" "test-cloudwatch-lambda-policy" { name = "test-cloudwatch-lambda-policy" role = "${aws_iam_role.test-cloudwatch-lambda-role.id}" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "CopiedFromTemplateAWSLambdaVPCAccessExecutionRole1", "Effect": "Allow", "Action": [ "ec2:CreateNetworkInterface" ], "Resource": "*" }, { "Sid": "CopiedFromTemplateAWSLambdaVPCAccessExecutionRole2", "Effect": "Allow", "Action": [ "ec2:DescribeNetworkInterfaces", "ec2:DeleteNetworkInterface" ], "Resource": "arn:aws:ec2:ap-southeast-2:${var.dev_vpc_account_id}:network-interface/*" }, { "Sid": "CopiedFromTemplateAWSLambdaBasicExecutionRole1", "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:ap-southeast-2:${var.dev_vpc_account_id}:*" }, { "Sid": "CopiedFromTemplateAWSLambdaBasicExecutionRole2", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:ap-southeast-2:${var.dev_vpc_account_id}:log-group:/aws/lambda/*" ] }, { "Sid": "CopiedFromTemplateAWSLambdaAMIExecutionRole", "Effect": "Allow", "Action": [ "ec2:DescribeImages" ], "Resource": "*" } ] } EOF } resource "aws_lambda_function" "cloudwatch-sumologic-lambda" { function_name = "cloudwatch-sumologic-lambda" filename = "${var.lambda_dir}/cloudwatchSumologicLambda.zip" source_code_hash = "${base64sha256(file("${var.lambda_dir}/cloudwatchSumologicLambda.zip"))}" handler = "cloudwatchSumologic.handler" role = "${aws_iam_role.test-cloudwatch-lambda-role.arn}" memory_size = "128" runtime = "nodejs4.3" // set low because I'm concerned about cost-blowout in the case of mis-configuration timeout = "15" vpc_config = { subnet_ids = ["${aws_subnet.dev-private-subnet.id}"] security_group_ids = ["${aws_security_group.cloudwatch-sumologic-lambda-sg.id}"] } } resource "aws_lambda_permission" "test-app-allow-cloudwatch" { statement_id = 
"test-app-allow-cloudwatch" action = "lambda:InvokeFunction" function_name = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}" principal = "logs.ap-southeast-2.amazonaws.com" source_arn = "${aws_cloudwatch_log_group.test-app-loggroup.arn}" } resource "aws_cloudwatch_log_subscription_filter" "test-app-cloudwatch-sumologic-lambda-subscription" { depends_on = ["aws_lambda_permission.test-app-allow-cloudwatch"] name = "cloudwatch-sumologic-lambda-subscription" log_group_name = "${aws_cloudwatch_log_group.test-app-loggroup.name}" filter_pattern = "" destination_arn = "${aws_lambda_function.cloudwatch-sumologic-lambda.arn}" } EDIT: Please note that the above TF code was written years ago, using version 0.11.x - it should still work but there may be better ways of doing things. Specifically, don't use an inline policy like this unless needed, use an aws_iam_policy_document instead - they're just way easier to maintain over time.
Terraform
38,407,660
40
I'm using Terraform to create a few services in AWS. One of those services is an ECS task definition. I followed the docs and I keep getting the following error: aws_ecs_task_definition.github-backup: ClientException: Fargate requires task definition to have execution role ARN to support ECR images. status code: 400, request id: 84df70ec-94b4-11e8-b116-97f92c6f483f First of all the task_role_arn is optional and I can see that a new role was created. I also tried creating a role myself with the permissions required by task definition. Here's what I have: Task Definition: resource "aws_ecs_task_definition" "github-backup" { family = "${var.task_name}" requires_compatibilities = ["FARGATE"] network_mode = "awsvpc" cpu = "${var.fargate_cpu}" memory = "${var.fargate_memory}" task_role_arn = "${aws_iam_role.github-role.arn}" container_definitions = <<DEFINITION [ { "cpu": ${var.fargate_cpu}, "image": "${var.image}", "memory": ${var.fargate_memory}, "name": "github-backup", "networkMode": "awsvpc" } ] DEFINITION } IAM policy: resource "aws_iam_policy" "access_policy" { name = "github_policy" policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1532966429082", "Action": [ "s3:PutObject", "s3:PutObjectTagging", "s3:PutObjectVersionTagging" ], "Effect": "Allow", "Resource": "arn:aws:s3:::zego-github-backup11" }, { "Sid": "Stmt1532967608746", "Action": "lambda:*", "Effect": "Allow", "Resource": "*" }, { "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" } ] } EOF } IAM role: resource "aws_iam_role" "github-role" { name = "github-backup" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": [ "s3.amazonaws.com", "lambda.amazonaws.com", "ecs.amazonaws.com" ] }, "Effect": "Allow", "Sid": "" } ] } EOF } IAM policy attachment: resource "aws_iam_role_policy_attachment" "test-attach" { role = "${aws_iam_role.github-role.name}" policy_arn = "${aws_iam_policy.access_policy.arn}" } Terraform plan doesn't show me any error. Only when running Terraform apply do I get this error. I am providing a role with required permissions to task definition and I still get this. What is wrong with this?
As mentioned in the AWS ECS User Guide Fargate tasks require the execution role to be specified as part of the task definition. EC2 launch type tasks don't require this because the EC2 instances themselves should have an IAM role that allows them to pull the container image and optionally push logs to Cloudwatch. Because this is optional for EC2 launch types then Terraform needs to make this optional otherwise it breaks those. Strictly speaking Terraform doesn't have a way to do cross field validation at plan time so it's unable to tell you in the plan that because you have a Fargate launch type task then you need to specify the execution_role_arn. There are workarounds for this using the CustomizeDiff in the provider source but it's hacky as all hell and only used in a couple of places right now. Note that the execution role is what is needed to launch the task, not the role that the task has that allows the task to do things. So you should remove the ECS related permissions from your IAM policy as the task should not be interacting with S3 at all. Instead just add a role with the appropriate permissions as the execution role. To use the AWS managed ECS task execution role you would do something like this: data "aws_iam_role" "ecs_task_execution_role" { name = "ecsTaskExecutionRole" } resource "aws_ecs_task_definition" "github-backup" { family = "${var.task_name}" requires_compatibilities = ["FARGATE"] network_mode = "awsvpc" cpu = "${var.fargate_cpu}" memory = "${var.fargate_memory}" task_role_arn = "${aws_iam_role.github-role.arn}" execution_role_arn = "${data.aws_iam_role.ecs_task_execution_role.arn}" container_definitions = <<DEFINITION [ { "cpu": ${var.fargate_cpu}, "image": "${var.image}", "memory": ${var.fargate_memory}, "name": "github-backup", "networkMode": "awsvpc" } ] DEFINITION }
Terraform
51,612,556
40
I'm new at terraform and I created a custom azure policies on module structure. each policy represents a custom module. One of the modules that I have created is enabling diagnostics logs for any new azure resource created. but, I need a storage account for that. (before enabling the diagnostics settings how can I implement "depends_on"? or any other methods? I want to create first the storage account and then the module of diagnostics settings. on the main.tf (where calling all the other modules) or inside the resource (module)? Thanks for the help!! :) this below code represents the main.tf file: //calling the create storage account name module "createstorageaccount" { source = "./modules/module_create_storage_account" depends_on = [ "module_enable_diagnostics_logs" ] } this one represents the create storage account module resource "azurerm_resource_group" "management" { name = "management-rg" location = "West Europe" } resource "azurerm_storage_account" "test" { name = "diagnostics${azurerm_resource_group.management.name}" resource_group_name = "${azurerm_resource_group.management.name}" location = "${azurerm_resource_group.management.location}" account_tier = "Standard" account_replication_type = "LRS" tags = { environment = "diagnostics" } } depends_on = [ "module_enable_diagnostics_logs" ]
In most cases, the necessary dependencies just occur automatically as a result of your references. If the configuration for one resource refers directly or indirectly to another, Terraform automatically infers the dependency between them without the need for explicit depends_on. This works because module variables and outputs are also nodes in the dependency graph: if a child module resource refers to var.foo then it indirectly depends on anything that the value of that variable depends on. For the rare situation where automatic dependency detection is insufficient, you can still exploit the fact that module variables and outputs are nodes in the dependency graph to create indirect explicit dependencies, like this: variable "storage_account_depends_on" { # the value doesn't matter; we're just using this variable # to propagate dependencies. type = any default = [] } resource "azurerm_storage_account" "test" { name = "diagnostics${azurerm_resource_group.management.name}" resource_group_name = "${azurerm_resource_group.management.name}" location = "${azurerm_resource_group.management.location}" account_tier = "Standard" account_replication_type = "LRS" tags = { environment = "diagnostics" } # This resource depends on whatever the variable # depends on, indirectly. This is the same # as using var.storage_account_depends_on in # an expression above, but for situations where # we don't actually need the value. depends_on = [var.storage_account_depends_on] } When you call this module, you can set storage_account_depends_on to any expression that includes the objects you want to ensure are created before the storage account: module "diagnostic_logs" { source = "./modules/diagnostic_logs" } module "storage_account" { source = "./modules/storage_account" storage_account_depends_on = [module.diagnostic_logs.logging] } Then in your diagnostic_logs module you can configure indirect dependencies for the logging output to complete the dependency links between the modules: output "logging" { # Again, the value is not important because we're just # using this for its dependencies. value = {} # Anything that refers to this output must wait until # the actions for azurerm_monitor_diagnostic_setting.example # to have completed first. depends_on = [azurerm_monitor_diagnostic_setting.example] } If your relationships can be expressed by passing actual values around, such as by having an output that includes the id, I'd recommend preferring that approach because it leads to a configuration that is easier to follow. But in rare situations where there are relationships between resources that cannot be modeled as data flow, you can use outputs and variables to propagate explicit dependencies between modules too.
Terraform
58,275,233
40
I am running Terraform using Terragrunt so I am not actually certain about the path that the terraform is invoked from. So I am trying to get the current working directory as follows: resource null_resource "pwd" { triggers { always_run = "${uuid()}" } provisioner "local-exec" { command = "echo $pwd >> somefile.txt" } } However the resulting file is empty. Any suggestions?
Terraform has a built-in object path that contains attributes for various paths Terraform knows about: path.module is the directory containing the module where the path.module expression is placed. path.root is the directory containing the root module. path.cwd is the current working directory. When writing Terraform modules, we most commonly want to resolve paths relative to the module itself, so that it is self-contained and doesn't make any assumptions about or cause impacts to any other modules in the configuration. Therefore path.module is most often the right choice, making a module agnostic to where it's being instantiated from. It's very uncommon to use the current working directory because that would make a Terraform configuration sensitive to where it is applied, and likely cause unnecessary resource changes if you later apply the same configuration from a different directory or on a different machine altogether. However, in the rare situations where such a thing is needed, path.cwd will do it. path.module and path.root are both relative paths from path.cwd, because that minimizes the risk of inadvertently introducing details about the system where Terraform is being run into the configuration. However, if you do need an absolute module path for some reason, you can use abspath, like abspath(path.module).
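As a quick, hypothetical illustration (the file and output names below are made up), a module would typically resolve files against its own directory and only fall back to abspath when an external tool really needs an absolute path:

# Read a file that ships alongside this module's .tf files,
# independent of where terraform was invoked from.
locals {
  init_script = file("${path.module}/templates/init.sh")
}

# Rarely needed: expose the module directory as an absolute path.
output "module_abs_path" {
  value = abspath(path.module)
}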
Terraform
60,302,694
40
I am very new to GCP with Terraform and I want to deploy all my modules using centralized tools. Is there any way to remove the step of enabling Google APIs every time so that deployment is not interrupted?
There is a Terraform resource definition called "google_project_service" that allows one to enable a service (API). This is documented at google_project_service. An example of usage appears to be: resource "google_project_service" "project" { project = "your-project-id" service = "iam.googleapis.com" }
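If several APIs have to be enabled, a common pattern is to loop over a list with for_each; the sketch below assumes a project ID and service list of your own choosing:

variable "gcp_services" {
  type = list(string)
  default = [
    "iam.googleapis.com",
    "compute.googleapis.com",
  ]
}

resource "google_project_service" "enabled" {
  for_each = toset(var.gcp_services)

  project = "your-project-id"
  service = each.value

  # Optional: leave the API enabled even if this resource is destroyed.
  disable_on_destroy = false
}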
Terraform
59,055,395
39
I use a CI system to compile terraform providers and bundle them into an image, but every time I run terraform init, I am getting the following error/failure. │ Error: Failed to install provider │ │ Error while installing rancher/rancher2 v1.13.0: the current package for │ registry.terraform.io/rancher/rancher2 1.13.0 doesn't match any of the │ checksums previously recorded in the dependency lock file This message is repeated for all of the providers listed in my provider file, which looks like this: terraform { required_version = ">= 0.13" required_providers { azurerm = { source = "hashicorp/azurerm" version = "2.55.0" } github = { source = "integrations/github" version = "4.8.0" } } ...snip... } The terraform hcl lock file is stored in the repo and it's only when the lock file exists in the repo that these errors appear and terraform init fails. What could be the cause?
The issue is that my local workstation is a Mac, which uses the darwin platform, so all of the providers are downloaded for darwin and their hashes are recorded in the lock file for that platform only. When the CI system, which runs on Linux, attempts to retrieve the providers listed in the lock file, the checksums don't match because it uses a different platform. The solution is to run the following command locally to regenerate the dependency lock file with hashes for all of the required platforms; systems running on other platforms will then be able to honor the dependency lock file. terraform providers lock -platform=windows_amd64 -platform=darwin_amd64 -platform=linux_amd64
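After that command, the committed .terraform.lock.hcl carries an h1: package hash per requested platform alongside the registry checksums, so terraform init passes on macOS, Linux and Windows runners alike. An entry looks roughly like this (the hash values below are placeholders, not real checksums):

provider "registry.terraform.io/rancher/rancher2" {
  version     = "1.13.0"
  constraints = "1.13.0"
  hashes = [
    "h1:placeholder-darwin-package-hash=",
    "h1:placeholder-linux-package-hash=",
    "h1:placeholder-windows-package-hash=",
    "zh:0000000000000000000000000000000000000000000000000000000000000000",
  ]
}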
Terraform
67,204,811
38
I want to script Terraform for CI/CD purposes and I don't like CDing in scripts; I'd rather use specific paths. I tried terraform init c:\my\folder\containing\tf-file but running that puts the .terraform folder in my cwd.
I know this is an old thread but... The command you are looking for is: terraform -chdir=environments/production apply Please see this link for help with the global option -chdir=": Quote from the actual Terraform site: The usual way to run Terraform is to first switch to the directory containing the .tf files for your root module (for example, using the cd command), so that Terraform will find those files automatically without any extra arguments. In some cases though — particularly when wrapping Terraform in automation scripts — it can be convenient to run Terraform from a different directory than the root module directory. To allow that, Terraform supports a global option -chdir=... which you can include before the name of the subcommand you intend to run: terraform -chdir=environments/production apply The chdir option instructs Terraform to change its working directory to the given directory before running the given subcommand. This means that any files that Terraform would normally read or write in the current working directory will be read or written in the given directory instead.
Terraform
47,274,254
37
Currently I am working on a infrastructure in azure that comprises of the following: resource group application gateway app service etc everything I have is in one single main.tf file which I know was a mistake however I wanted to start from there. I am currently trying to move each section into its own sub folder in my repo. Which would look something like this: terraform-repo/ ├── applicationGateway/ │ ├── main.tf │ ├── vars.tf ├── appService/ │ ├── main.tf │ └── vars.tf ├── main.tf └── vars.tfvars However when I create this while trying to move over from the single file structure I get issues with my remote state where it wants to delete anything that isn't a part of the currently worked on sub folder. For example if I wanted to run terraform apply applicationGateway I will get the following: # azurerm_virtual_network.prd_vn will be destroyed Plan: 0 to add, 2 to change, 9 to destroy. What is the correct way to setup multiple logically organized sub folders in a terraform repo? Or do I have to destroy my current environment to get it to be setup like this ?
You are seeing this issue because terraform ignores subfolders, so those resources are not being included at all anymore. You would need to configure the subfolders to be Terraform Modules, and then include those modules in your root main.tf
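With the layout from the question, the root main.tf might look roughly like this; the input names are invented and should be replaced with whatever variables each module actually declares:

module "application_gateway" {
  source = "./applicationGateway"

  # example input, adjust to your module's variables
  environment = var.environment
}

module "app_service" {
  source = "./appService"

  environment = var.environment
}

terraform plan and terraform apply are then run once from the repository root and cover both modules, since Terraform only walks subdirectories that are referenced as module sources. For resources that already exist, terraform state mv can usually move them to their new module addresses (for example module.application_gateway.azurerm_...) so that Terraform does not plan to destroy and recreate them.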
Terraform
60,041,854
36
Terraform newbie here. I'd like to iterate a list using for_each, but it seems like the key and value are the same: provider "aws" { profile = "default" region = "us-east-1" } variable "vpc_cidrs" { default = ["10.0.0.0/16", "10.1.0.0/16"] } resource "aws_vpc" "vpc" { for_each = toset(var.vpc_cidrs) cidr_block = each.value enable_dns_hostnames = true tags = { Name = "Company0${each.key}" } } I'd like the tag Name to be "Name" = "Company01" and "Name" = "Company02" but according to terraform apply, I get: "Name" = "Company010.0.0.0/16" and "Name" = "Company010.1.0.0/16" What am I missing?
Found an easy solution using the index function, which returns the zero-based position of the element in the list (hence the + 1): tags = { Name = "Company0${index(var.vpc_cidrs, each.value) + 1}" }
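As an alternative sketch, driving for_each from a map instead of a list keeps each name independent of the CIDR's position in the list (the keys below are made up):

variable "vpcs" {
  default = {
    "Company01" = "10.0.0.0/16"
    "Company02" = "10.1.0.0/16"
  }
}

resource "aws_vpc" "vpc" {
  for_each = var.vpcs

  cidr_block           = each.value
  enable_dns_hostnames = true

  tags = {
    Name = each.key
  }
}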
Terraform
61,343,796
36
In terraform, is there any way to conditionally use a data source? For example: data "aws_ami" "application" { most_recent = true filter { name = "tag:environment" values = ["${var.environment}"] } owners = ["self"] } I'm hoping to be able to pass in an environment variable via the command line, and based on that, determine whether or not to fetch this data source. I know with resources you can use the count property, but it doesn't seem you can use that with data sources. I would consider tucking this code away in a module, but modules also can't use the count parameter. Lastly, another option would be to provide a "Default" value for the data source, if it returned null, but I don't think that's doable either. Are there any other potential solutions for this?
You can use a conditional on data sources the same as you can with resources and also from Terraform 0.13+ on modules as well: variable "lookup_ami" { default = true } data "aws_ami" "application" { count = var.lookup_ami ? 1 : 0 most_recent = true filter { name = "tag:environment" values = [var.environment] } owners = ["self"] } One use case for this in Terraform 0.12+ is to utilise the lazy evaluation of ternary statements like with the following: variable "internal" { default = true } data "aws_route53_zone" "private_zone" { count = var.internal ? 1 : 0 name = var.domain vpc_id = var.vpc_id private_zone = var.internal } data "aws_route53_zone" "public_zone" { count = var.internal ? 0 : 1 name = var.domain private_zone = var.internal } resource "aws_route53_record" "www" { zone_id = var.internal ? data.aws_route53_zone.private_zone.zone_id : data.aws_route53_zone.public_zone.zone_id name = "www.${var.domain}" type = "A" alias { name = aws_elb.lb.dns_name zone_id = aws_elb.lb.zone_id evaluate_target_health = false } } This would create a record in the private zone when var.internal is true and instead create a record in the public zone when var.internal is false. For this specific use case you could also use Terraform 0.12+'s null to rewrite this more simply: variable "internal" { default = true } data "aws_route53_zone" "zone" { name = var.domain vpc_id = var.internal ? var.vpc_id : null private_zone = var.internal } resource "aws_route53_record" "www" { zone_id = data.aws_route53_zone.zone.zone_id name = "www.${data.aws_route53_zone.zone.name}" type = "A" alias { name = aws_elb.lb.dns_name zone_id = aws_elb.lb.zone_id evaluate_target_health = false } } This would only pass the vpc_id parameter to the aws_route53_zone data source if var.internal is set to true as you can't set vpc_id when private_zone is false. Old Terraform 0.11 and earlier answer: You can in fact use a conditional on the count of data sources but I've yet to manage to work out a good use case for it when I've tried. As an example I successfully had this working: data "aws_route53_zone" "private_zone" { count = "${var.internal == "true" ? 1 : 0}" name = "${var.domain}" vpc_id = "${var.vpc_id}" private_zone = "true" } data "aws_route53_zone" "public_zone" { count = "${var.internal == "true" ? 0 : 1}" name = "${var.domain}" private_zone = "false" } But then had issues in how to then select the output of it because Terraform will evaluate any variables in the ternary conditional before deciding which side of the ternary to use (instead of lazy evaluation). So something like this doesn't work: resource "aws_route53_record" "www" { zone_id = "${var.internal ? data.aws_route53_zone.private_zone.zone_id : data.aws_route53_zone.public_zone.zone_id}" name = "www.example.com" type = "A" alias { name = "${aws_elb.lb.dns_name}" zone_id = "${aws_elb.lb.zone_id }" evaluate_target_health = "false" } } Because if internal is true then you get the private_zone data source but not the public_zone data source and so the second half of the ternary fails to evaluate because data.aws_route53_zone.public_zone.zone_id isn't defined and equally with the other way around too. 
In your case you probably just want to conditionally use the data source so might be able to do something like this: variable "dynamic_ami" { default = "true" } variable "default_ami" { default = "ami-123456" } data "aws_ami" "application" { most_recent = true filter { name = "tag:environment" values = ["${var.environment}"] } owners = ["self"] } resource "aws_instance" "app" { ami = "${var.dynamic_ami == "true" ? data.aws_ami.application.id : var.default_ami}" instance_type = "t2.micro" }
Terraform
41,858,630
35
I am configuring S3 backend through terraform for AWS. terraform { backend "s3" {} } On providing the values for (S3 backend) bucket name, key & region on running "terraform init" command, getting following error "Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider Please update the configuration in your Terraform files to fix this error then run this command again." I have declared access & secret keys as variables in providers.tf. While running "terraform init" command it didn't prompt any access key or secret key. How to resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys). So your command should look like: terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
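To keep the secrets off the command line, the same values can also live in a partial backend configuration file (the filename here is arbitrary) that is passed at init time, or come from the usual AWS environment variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY), which the S3 backend also honours:

# backend.hcl - keep this file out of version control
access_key = "<your access key>"
secret_key = "<your secret key>"

Then run:

terraform init -backend-config=backend.hcl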
Terraform
55,449,909
35
So I have a terraform script that creates instances in Google Cloud Platform, I want to be able to have my terraform script also add my ssh key to the instances I create so that I can provision them through ssh. Here is my current terraform script. #PROVIDER INFO provider "google" { credentials = "${file("account.json")}" project = "myProject" region = "us-central1" } #MAKING CONSUL SERVERS resource "google_compute_instance" "default" { count = 3 name = "a-consul${count.index}" machine_type = "n1-standard-1" zone = "us-central1-a" disk { image = "ubuntu-1404-trusty-v20160627" } # Local SSD disk disk { type = "local-ssd" scratch = true } network_interface { network = "myNetwork" access_config {} } } What do I have to add to this to have my terraform script add my ssh key /Users/myUsername/.ssh/id_rsa.pub?
I think something like this should work: metadata = { ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}" } https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys describes the metadata mechanism, and I found this example at https://github.com/hashicorp/terraform/issues/6678
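Wired into the instance from the question, a sketch could look like the following; the variable names and key path are assumptions, not required names:

variable "gce_ssh_user" {
  default = "myUsername"
}

variable "gce_ssh_pub_key_file" {
  default = "/Users/myUsername/.ssh/id_rsa.pub"
}

resource "google_compute_instance" "default" {
  count        = 3
  name         = "a-consul${count.index}"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  # disk and network_interface blocks unchanged from the question

  metadata = {
    ssh-keys = "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
  }
}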
Terraform
38,645,002
34
I've been looking for a way to be able to deploy to multiple AWS accounts simultaneously in Terraform and coming up dry. AWS has the concept of doing this with Stacks but I'm not sure if there is a way to do this in Terraform? If so what would be some solutions? You can read more about the Cloudformation solution here.
You can define multiple provider aliases which can be used to run actions in different regions or even different AWS accounts. So to perform some actions in your default region (or be prompted for it if not defined in environment variables or ~/.aws/config) and also in US East 1 you'd have something like this: provider "aws" { # ... } # Cloudfront ACM certs must exist in US-East-1 provider "aws" { alias = "cloudfront-acm-certs" region = "us-east-1" } You'd then refer to them like so: data "aws_acm_certificate" "ssl_certificate" { provider = aws.cloudfront-acm-certs ... } resource "aws_cloudfront_distribution" "cloudfront" { ... viewer_certificate { acm_certificate_arn = data.aws_acm_certificate.ssl_certificate.arn ... } } So if you want to do things across multiple accounts at the same time then you could assume a role in the other account with something like this: provider "aws" { # ... } # Assume a role in the DNS account so we can add records in the zone that lives there provider "aws" { alias = "dns" assume_role { role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME" session_name = "SESSION_NAME" external_id = "EXTERNAL_ID" } } And refer to it like so: data "aws_route53_zone" "selected" { provider = aws.dns name = "test.com." } resource "aws_route53_record" "www" { provider = aws.dns zone_id = data.aws_route53_zone.selected.zone_id name = "www.${data.aws_route53_zone.selected.name" ... } Alternatively you can provide credentials for different AWS accounts in a number of other ways such as hardcoding them in the provider or using different Terraform variables, AWS SDK specific environment variables or by using a configured profile.
Terraform
52,206,436
34
Every Terraform guide on the web provides a partial solution that is almost always not the real picture. I get that, not everyone has the same infrastructure needs, but what worries me that the common scenario with: multiple environments (dev, stage) remote backend (s3) some basic resources (bucket or ec2 instance) isn't presented anywhere on a real example project. I'm looking for just that, and in the meantime, I have researched and concluded that apart from those needs I also want: to utilize modules to NOT use workspaces, but rather a distinct directory-per-environment approach to NOT use terragrunt wrapper My current structure, which does not utilize modules - only root module: infra/ ------------------------------ 'terraform init', 'terraform apply' inside here* main.tf ------------------------ Sets up aws provider, backend, backend bucket, dynamodb table terraform.tfvars variables.tf ----------------- Holds few variables such as aws_region, project_name... My desired structure folder tree (for a simple dev & staging simulation of a single bucket resource) is I think something like this: infra/ dev/ s3/ modules.tf ------ References s3 module from local/remote folder with dev inputs stage/ s3/ modules.tf ------ References s3 module from local/remote folder with stage inputs But what about the files from my previous root module? I still want to have a remote backend in the same way as before, just now I want to have two state files (dev.tfstate and stage.tfstate) in the same backend bucket? How would the backend.tf files look like in each subdirectory and where would they be? In s3/ folder or dev/ folder? It's kind of confusing since I'm transitioning from root module 'terraform init' approach, to specific subdirectory 'terraform init', and it's not clear to me whether I should still have a root-module or another folder for example called global/ which I should consider my prerequisite which I should init at the beginning of the project and is basically leave alone from that point on since it created the buckets which dev/ and staging/ can reference? One more question is: what if I have s3/ ec2/ ecr/ subdirectories inside each environment, where do I execute 'terraform plan' command? Does it traverse all subdirectories? When I have the answers and a clear picture of this above, it would be great to improve it by DRYing it up, but for now, I value a more practical solution through example rather than just a theoretic DRY explanation. Thanks!
I work with terraform 5 years. I did a lot of mistakes with in my career with modules and environments. Below text is just share of my knowledge and experience. They may be bad. Real example project may is hard to find because terraform is not used to create opensource projects. It's often unsafe to share terraform files because you are showing all vulnerabilities from your intrastructure Module purpose and size You should create module that has single purpose, but your module should be generic. Example module You can create bastion host module, but better idea is to create a module for generic server. This module may have some logic dedicated to your business problem like, CW Log group, some generic security group rules, etc. Application module Sometimes it is worth to create more specific module. Let's say you have application, that requires Lambda, ECS service, CloudWatch alarms, RDS, EBS etc. All of that elements are strongly connected. You have 2 options: Create separated modules for each above items - But then your application uses 5 modules. Create one big module and then you can deploy your app with single module Mix above solutions - I prefer that Everything depends on details and some circumstances. But I will show you how I use terraform in my productions in different companies. Separated definitions for separated resurces This is project, where you have environment as directories. For each application, networking, data resoruces you have separated state. I keep mutable data in separated directory(like RDS, EBS, EFS, S3, etc) so all apps, networking, etc can be destroyed and recreated, because they are stateless. No one can destroy statefull items because data can be lost. This is what i was doing for last few years. project/ ├─ packer/ ├─ ansible/ ├─ terraform/ │ ├─ environments/ │ │ ├─ production/ │ │ │ ├─ apps/ │ │ │ │ ├─ blog/ │ │ │ │ ├─ ecommerce/ │ │ │ ├─ data/ │ │ │ │ ├─ efs-ecommerce/ │ │ │ │ ├─ rds-ecommerce/ │ │ │ │ ├─ s3-blog/ │ │ │ ├─ general/ │ │ │ │ ├─ main.tf │ │ │ ├─ network/ │ │ │ │ ├─ main.tf │ │ │ │ ├─ terraform.tfvars │ │ │ │ ├─ variables.tf │ │ ├─ staging/ │ │ │ ├─ apps/ │ │ │ │ ├─ ecommerce/ │ │ │ │ ├─ blog/ │ │ │ ├─ data/ │ │ │ │ ├─ efs-ecommerce/ │ │ │ │ ├─ rds-ecommerce/ │ │ │ │ ├─ s3-blog/ │ │ │ ├─ network/ │ │ ├─ test/ │ │ │ ├─ apps/ │ │ │ │ ├─ blog/ │ │ │ ├─ data/ │ │ │ │ ├─ s3-blog/ │ │ │ ├─ network/ │ ├─ modules/ │ │ ├─ apps/ │ │ │ ├─ blog/ │ │ │ ├─ ecommerce/ │ │ ├─ common/ │ │ │ ├─ acm/ │ │ │ ├─ user/ │ │ ├─ computing/ │ │ │ ├─ server/ │ │ ├─ data/ │ │ │ ├─ efs/ │ │ │ ├─ rds/ │ │ │ ├─ s3/ │ │ ├─ networking/ │ │ │ ├─ alb/ │ │ │ ├─ front-proxy/ │ │ │ ├─ vpc/ │ │ │ ├─ vpc-pairing/ ├─ tools/ To apply single application, You need to do: cd ./project/terraform/environments/<ENVIRONMENT>/apps/blog; terraform apply; You can see there is a lot of directories in all environments. As I can see there are pros and cons of that tools. Cons: It is hard to check if all modules are in sync Complicated CI Complicated directory structure especially for new people in the team, but it is logic There may be a lot of dependencies, but this is not a problem when you think about it from the beginning. You need to take care, to keep exactly the same environments. There is a lot of initialization required and refactors are hard to do. Pros: Quick apply after small changes Separated applications and resources. 
It is easy to modify small module or small deployment for it without knowledge about overall system It is easier to clean up when you remove something It's easy to tell what module need to be fixed. I use some tools I wrote to analyze status of particular parts of infrastructure and I can send email to particular developer, that his infrastructure needs resync for some reasons. You can have different environments easier than in the monolith. You can destroy single app if you do not need it in environemnt Monolith infrastructure Last time I started working with new company. They keep infrastructure definition in few huge repositories(or folders), and when you do terraform apply, you create all applications at the same time. project/ ├─ modules/ │ ├─ acm/ │ ├─ app-blog/ │ ├─ app-ecommerce/ │ ├─ server/ │ ├─ vpc/ ├─ vars/ │ ├─ user/ │ ├─ prod.tfvars │ ├─ staging.tfvars │ ├─ test.tfvars ├─ applications.tf ├─ providers.tf ├─ proxy.tf ├─ s3.tf ├─ users.tf ├─ variables.tf ├─ vpc.tf Here you prepare different input values for each environment. So for example you want to apply changes to prod: terraform apply -var-file=vars/prod.tfvars -lock-timeout=300s Apply staging: terraform apply -var-file=vars/staging.tfvars -lock-timeout=300s Cons: You have no dependency, but sometimes you need to prepare some environment element like domains, elastic IP, etc manually, or you need to have them created before terraform plan/apply. Then you have problem Its hard to do cleanup as you have hundreds resources and modules at the same time Extremely long terraform execution. Here it takes around 45 minutes to plan/apply single environment It's hard to understand entire environment. Usually you need to have 2/3 repositories if you keep that structure to separate networking,apps,dns etc... You need to do much more work to deal with different environments. You need to use count etc... Pros: It's easy to check if your infrastructure is up to date There is no complicated directory structure... All your environments are exactly the same. Refactoring may be easier, because you have all resources in very few places. Small number of initialization is required. Summary As you can see this is more architectural problem, the only way to learn it, is to get more experience or read some posts from another people... I am still trying to figure out the most optimal way and I would probably experiment with first way. Do not take my advantages as sure thing. This post is just my experience, maybe not the best. References I will post some references that helped me a lot: https://www.terraform-best-practices.com/ https://github.com/antonbabenko/terraform-best-practices-workshop
Terraform
66,024,950
34
(Please note: after receiving initial answers, this issue seems to not be just an issue with passing the variables, but with modularizing my configurations, note at the bottom where I hardcode the values yet the UI prompts me to provide the values) Code example here I've got a project I've broken into the following directory structure master.tf variables.tfvars - providers/ -- digital_ocean/ --- digital_ocean.tf --- variables.tf -- cloud_flare/ --- cloud_flare.tf --- variables.tf - management/ -- jenkins/ --- jenkins-master.tf I'm attempting to pass my Digital Ocean and Cloudflare tokens as variables, to their respective modules. Everything below the root directory is loaded into master.tf as a module. I have the following in my varaibles.tfvars file: cloudflare_email ="service@email.com" cloudflare_token ="TOKEN_STRING" do_token ="${DO_PAT}" The following lines appear in my master.tf variable "do_token" {} module "digital_ocean" { source = "./providers/digital_ocean" token = "${var.do_token}" } variable "cloudflare_email" {} variable "cloudflare_token" {} module "cloud_flare" { source = "./providers/cloud_flare" email = "${var.cloudflare_email}" token = "${var.cloudflare_token}" } My digital_ocean module looks like variable "token" {} provider "digitalocean" { token = "${var.token}" } and the cloudflare provider looks like variable "email" {} variable "token" {} provider "CloudFlare" { email = "${var.email}" token = "${var.token}" } Setting up my jenkins master server on DO resource "digitalocean_droplet" "jenkins-master" { ... } From the command line I'm running terraform apply -var-file="variables.tfvars" or I've also tried passing them via the CLI like so.. terraform apply \ -var "cloudflare_email=service@email.com" \ -var "cloudflare_token=TOKEN_STRING" \ -var "do_token=${DO_PAT}" With the above declarations, it will send me into UI mode and prompt me for those variables rather than reading them automatically. I've replicated this behavior on both Terraform v0.9.8 and v0.9.10. Before I started breaking everything out into separate modules, passing in variables presented no issues. I've tried pulling the provider declarations into master.tf to see if there were any weird behaviors with modularizing them, with the same behavior. I also tried hard coding the values into the provider declarations and am experiencing the same behaviors.
Your variables.tfvars file should be named terraform.tfvars. Per the docs: If a terraform.tfvars file is present in the current directory, Terraform automatically loads it to populate variables. If the file is named something else, you can use the -var-file flag directly to specify a file. These files are the same syntax as Terraform configuration files. And like Terraform configuration files, these files can also be JSON. If you want to use your own filenaming convention, you can set an alternative tfvars file with the -var-file flag like this (per the linked docs): $ terraform plan \ -var-file="secret.tfvars" \ -var-file="production.tfvars" For the CLI, you should quote only the value of the variable, like so: terraform apply \ -var cloudflare_email="service@email.com" \ -var cloudflare_token="TOKEN_STRING" \ -var do_token="${DO_PAT}"
Terraform
44,878,553
33
I want to create a new alb and a route53 record that points to it. I see I have the DNS name: ${aws_lb.MYALB.dns_name} Is it possible to create a cname to the public DNS name with aws_route53_record resource?
See the Terraform Route53 Record docs You can add a basic CNAME entry with the following: resource "aws_route53_record" "cname_route53_record" { zone_id = aws_route53_zone.primary.zone_id # Replace with your zone ID name = "www.example.com" # Replace with your subdomain, Note: not valid with "apex" domains, e.g. example.com type = "CNAME" ttl = "60" records = [aws_lb.MYALB.dns_name] } Or if you're are using an "apex" domain (e.g. example.com) consider using an Alias (AWS Alias Docs): resource "aws_route53_record" "alias_route53_record" { zone_id = aws_route53_zone.primary.zone_id # Replace with your zone ID name = "example.com" # Replace with your name/domain/subdomain type = "A" alias { name = aws_lb.MYALB.dns_name zone_id = aws_lb.MYALB.zone_id evaluate_target_health = true } }
Terraform
48,919,317
33
I am trying to implement nested for loops using Terraform 0.12's new features in order to loop through AWS IAM users, each of which can have one or more policies attached. The variable used to represent this list is of type map(list(string)) and looks something like this: { "user 1" = [ "policy1", "policy2" ], "user 2" = [ "policy1" ] } Getting the list of users to create is easy enough via keys(), but since there is currently no mechanism for nesting looped resource creation in Terraform, the policy attachments have to happen as a singular loop independent of each user. So, I am attempting to construct a list of user:policy associations from the map input that would look something like this based on the example above: [ [ "user1", "policy1" ], [ "user1", "policy2" ], [ "user2", "policy1" ] ] I am attempting construct that list and store it in a local variable like so, where var.iam-user-policy-map is the input map: locals { ... association-list = [ for user in keys(var.iam-user-policy-map): [ for policy in var.iam-user-policy-map[user]: [user, policy] ] ] ... } However, I am getting errors when attempting to access the values in that nested list. I am trying to access the user portion of the association with the reference local.association-list[count.index][0] and the policy with local.association-list[count.index][1], but on running terraform plan it errors out: Error: Incorrect attribute value type on main.tf line 27, in resource "aws_iam_user_policy_attachment" "test-attach": 27: user = local.association-list[count.index][0] Inappropriate value for attribute "user": string required. Error: Incorrect attribute value type on main.tf line 27, in resource "aws_iam_user_policy_attachment" "test-attach": 27: user = local.association-list[count.index][0] Inappropriate value for attribute "user": string required. Error: Invalid index on main.tf line 28, in resource "aws_iam_user_policy_attachment" "test-attach": 28: policy_arn = "arn:aws-us-gov:iam::aws:policy/${local.association-list[count.index][1]}" |---------------- | count.index is 0 | local.association-list is tuple with 2 elements The given key does not identify an element in this collection value. Error: Invalid template interpolation value on main.tf line 28, in resource "aws_iam_user_policy_attachment" "test-attach": 28: policy_arn = "arn:aws-us-gov:iam::aws:policy/${local.association-list[count.index][1]}" |---------------- | count.index is 1 | local.association-list is tuple with 2 elements Cannot include the given value in a string template: string required. What am I doing wrong?
The for expression in your local value association-list is producing a list of list of lists of strings, but your references to it are treating it as a list of lists of strings. To get the flattened representation you wanted, you can use the flatten function, but because it would otherwise group everything into a single flat list I'd recommend making the innermost value an object instead. (That will also make the references to it clearer.) locals { association-list = flatten([ for user in keys(var.iam-user-policy-map) : [ for policy in var.iam-user-policy-map[user] : { user = user policy = policy } ] ]) } The result of this expression will have the following shape: [ { user = "user1", policy = "policy1" }, { user = "user1", policy = "policy2" }, { user = "user2", policy = "policy2" }, ] Your references to it can then be in the following form: user = local.association-list[count.index].user policy_arn = "arn:aws-us-gov:iam::aws:policy/${local.association-list[count.index].policy}"
Terraform
56,047,306
33
I have two subscriptions in Azure. Let's call them sub-dev and sub-prod. Under sub-dev I have resources for development (in a resource group rg-dev) and under sub-prod resources for production (in a resource group rg-prod). Now, I would like to have only one state-file for both dev and prod. I can do this as I am using Terraform workspaces (dev and prod). There is a Storage Account under sub-dev (rg-dev) named tfsate. It has a container etc. The Azure backend is configured like this: terraform { backend "azurerm" { resource_group_name = "rg-dev" storage_account_name = "tfstate" container_name = "tfcontainer" key = "terraform.tfstate" } } If I want to apply to the dev environment I have to switch Az Cli to the sub-dev. Similarly, for production, I would have to use sub-prod. I switch the default subscription with az cli: az account set -s sub-prod Problem is that the state's storage account is under sub-dev and not sub-prod. I will get access errors when trying to terraform init (or apply) when the default subscription is set to sub-prod. Error: Failed to get existing workspaces: Error retrieving keys for Storage Account "tfstate": storage.AccountsClient#ListKeys: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'user@example.com' with object id '<redacted>' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/listKeys/action' over scope '/subscriptions/sub-prod/resourceGroups/rg-dev/providers/Microsoft.Storage/storageAccounts/tfstate' or the scope is invalid. If access was recently granted, please refresh your credentials." I have tried couple of things: I added subscription_id = "sub-dev" I generated a SAS token for the tfstate storage account and added the sas_token config value (removed resource_group_name) but in vain and getting the same error. I tried to az logout but terraform requires me to login first. Do I have to tune the permissions in the Azure end somehow (this is hard as the Azure environment is configured by a 3rd party) or does Terraform support this kind of having your state file under different subscription setup at all?
For better or worse (I haven't experimented much with other methods of organising terraform) we use terraform in the exact way you are describing. A state file, in a remote backend, in a different subscription to my resources. Workspaces are created to handle environments for the deployment. Our state files are specified like this: terraform { required_version = ">= 0.12.6" backend "azurerm" { subscription_id = "<subscription GUID storage account is in>" resource_group_name = "terraform-rg" storage_account_name = "myterraform" container_name = "tfstate" key = "root.terraform.tfstate" } } We keep our terraform storage account in a completely different subscription to our deployments but this isn't necessary. When configuring your state file like so, it authenticates to the remote backend via az CLI, using the context of the person interacting with the CLI. This person needs to have the "Reader & Data Access" role to the storage account in order to dynamically retrieve the storage account keys at runtime. With the above state file configured, executing Terraform would be az login az account set -s "<name of subscription where you want to create resources>" terraform init terraform plan terraform apply
Terraform
57,289,924
33
join works BUT I want to keep the double quotes. join gives me [ben,linda,john] BUT I want ["ben", "linda", "john"]. This is getting crazy; I have spent over 2 hours trying to fix this. I want to pass in a list as a string variable. Why can't Terraform just take in my list as a string? Why is this so difficult? So I have name = ["ben", "linda", "john"] and I want to pass this to a variable used in Terraform, var.name. Why can't Terraform take this as is? I get an error saying it expected a string and I cannot find a solution online after searching everywhere. I have been able to get [ ben,linda,john ] using join(",", var.name) but I want ["ben", "linda", "john"] $ terraform --version Terraform v0.12.18 + provider.aws v2.42.0 + provider.template v2.1.2
Conversion from list to string always requires an explicit decision about how the result will be formatted: which character (if any) will delimit the individual items, which delimiters (if any) will mark each item, which markers will be included at the start and end (if any) to explicitly mark the result as a list. The syntax example you showed looks like JSON. If that is your goal then the easiest answer is to use jsonencode to convert the list directly to JSON syntax: jsonencode(var.names) This function produces compact JSON, so the result would be the following: ["ben","linda","john"] Terraform provides a ready-to-use function for JSON because its a common need. If you need more control over the above decisions then you'd need to use more complex techniques to describe to Terraform what you need. For example, to produce a string where each input string is in quotes, the items are separated by commas, and the entire result is delimited by [ and ] markers, there are three steps: Transform the list to add the quotes: [for s in var.names : format("%q", s)] Join that result using , as the delimiter: join(", ", [for s in var.names : format("%q", s)]) Add the leading and trailing markers: "[ ${join(",", [for s in var.names : format("%q", s)])} ]" The above makes the same decisions as the JSON encoding of a list, so there's no real reason to do exactly what I've shown above, but I'm showing the individual steps here as an example so that those who want to produce a different list serialization have a starting point to work from. For example, if the spaces after the commas were important then you could adjust the first argument to join in the above to include a space: "[ ${join(", ", [for s in var.names : format("%q", s)])} ]"
Terraform
59,381,410
33
I am using terraform version 0.11.13, and this afternoon I am getting the following error in terraform init step Does it mean I've to upgrade the terraform version, is there a deprecation for this version for aws provider? Full logs: Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes. [1mInitializing provider plugins... - Checking for available provider plugins on https://releases.hashicorp.com... Error installing provider "aws": openpgp: signature made by unknown entity. Terraform analyses the configuration and state and automatically downloads plugins for the providers used. However, when attempting to download this plugin an unexpected error occured. This may be caused if for some reason Terraform is unable to reach the plugin repository. The repository may be unreachable if access is blocked by a firewall. If automatic installation is not possible or desirable in your environment, you may alternatively manually install plugins by downloading a suitable distribution package and placing the plugin's executable file in the following directory: terraform.d/plugins/linux_amd64
HashiCorp has rotated its release signing key as a part of HCSEC-2021-12 For example, for terraform 0.11.x, you can set the aws version to v2.70.0 provider "aws" { region = "us-east-1" version = "v2.70.0" } For other versions, you can check: https://registry.terraform.io/providers/hashicorp/aws/latest/docs
Terraform
67,368,339
33
One team has already written a cloudformation template as a .yml file that provisions a stack of resources. Is it possible to leverage this file by executing it from within Terraform? Or does it have to be rewritten? I'm new to terraform and just getting started. If I were using the AWS CLI I would execute a command like this, aws cloudformation create-stack --stack-name my-new-stack --template-body file://mystack.yml --parameters ParameterKey=AmiId I'd like to include the equivalent of this command in my terraform configuration. If it is possible and you can point me to an example, I would really appreciate that. Thanks!
The aws_cloudformation_stack resource serves as a bridge from Terraform into CloudFormation, which can be used either as an aid for migration from CloudFormation to Terraform (as you're apparently doing here) or to make use of some of CloudFormation's features that Terraform doesn't currently handle, such as rolling deployments of new instances into an ASG. resource "aws_cloudformation_stack" "example" { name = "example" parameters = { VpcId = var.vpc_id } template_body = file("${path.module}/example.yml") } The parameters argument allows passing data from Terraform into the Cloudformation stack. It's also possible to use the outputs attribute to make use of the results of the CloudFormation stack elsewhere in Terraform, for a two-way integration: resource "aws_route_53_record" "example" { name = "service.example.com" type = "CNAME" ttl = 300 records = [ aws_cloudformation_stack.example.outputs["ElbHostname"], ] } If you have a pre-existing CloudFormation stack that isn't managed by Terraform, you can still make use of its outputs using the aws_cloudformation_stack data source: data "aws_cloudformation_stack" "example" { name = "example" } resource "aws_route_53_record" "example" { name = "service.example.com" type = "CNAME" ttl = 300 records = [ data.aws_cloudformation_stack.example.outputs["ElbHostname"], ] } These features together allow you to effectively mix CloudFormation and Terraform in a single system in different combinations, whether it's as a temporary measure while migrating or permanently in situations where a hybrid solution is desired.
Terraform
43,266,506
32
I'm a beginner in Terraform. I have a directory which contains 2 .tf files. Now I want to run Terraform Apply on a selected .tf file & neglect the other one. Can I do that? If yes, how? If no, why & what is the best practice?
You can't selectively apply one file and then the other. Two ways of (maybe) achieving what you're going for: Use the -target flag to target resource(s) in one file and then the other. Put each file (or more broadly, group of resources, which might be multiple files) in separate "modules" (folders). You can then apply them separately.
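As a rough illustration of option 1, note that -target takes resource or module addresses rather than file names, so you target whatever the chosen file happens to define (the addresses below are hypothetical):

# apply only selected resources
terraform apply -target=aws_instance.web -target=aws_s3_bucket.assets

# or, with option 2, apply everything inside one module
terraform apply -target=module.networking

Keep in mind that -target is intended for exceptional situations; splitting the configuration into modules (or separate root configurations) is the more maintainable long-term approach.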
Terraform
47,708,338
32
I have a AWS CodePipeline configured in a terraform file, like this: resource { name = "Cool Pipeline" ... stage { name = "Source" ... action { name = "Source" ... configuration { Owner = "Me" Repo = "<git-repo-uri>" Branch = develop OAuthToken = "b3287d649a28374e9283c749cc283ad74" } } } lifecycle { ignore_changes = "OAuthToken" } } The reason for ignoring the token, is that the AWS API doesn't show that token to terraform, instead AWS API outputs this with aws codepipeline get-pipeline <name>: "pipeline": { "stages": { "name": "Source", "actions": { "configuration": { "OAuthToken": "****" } } } } Result is, when I perform the terraform planit shows me it wants to update that token, like so: module.modulename.aws_codepipeline.codepipeline stage.0.action.0.configuration.%: "3" => "4" stage.0.action.0.configuration.OAuthToken: "" => "b3287d649a28374e9283c749cc283ad74" My question is, how can I get the ignore_changes to take effect? I've tried this without any success: ignore_changes = ["OAuthToken"] ignore_changes = ["oauthtoken"] ignore_changes = ["stage.action.configuration.OAuthToken"] All examples I've found googling just shows how to ignore on the same block level. (The token is this text is fake.)
This syntax, as hinted by terraform plan output, solved the problem: ignore_changes = [ "stage.0.action.0.configuration.OAuthToken", "stage.0.action.0.configuration.%" ] Another way to solve it is to add the GITHUB_TOKEN system environment variable, with the token as the value. This way you do not need the ignore_changes directive in the tf files.
Terraform
48,243,968
32
I am launching an aws_launch_configuration instance using terraform. I'm using a shell script for the user_data variable, like so: resource "aws_launch_configuration" "launch_config" { ... user_data = "${file("router-init.sh")}" ... } Within this router-init.sh, one of the things I would like to do is to have access to the IP addresses for other instances I am launching via terraform. I know that I can use a splat to access all the IP addresses of that instance, for instance: output ip_address { value = ${aws_instance.myAWSInstance.*.private_ip}" } Is there a way to pass/access these IP addresses within the router-init.sh script?
You can do this using a template_file data source: data "template_file" "init" { template = "${file("router-init.sh.tpl")}" vars = { some_address = "${aws_instance.some.private_ip}" } } Then reference it inside the template like: #!/bin/bash echo "SOME_ADDRESS = ${some_address}" > /tmp/ Then use that for the user_data: user_data = "${data.template_file.init.rendered}"
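On Terraform 0.12 and later the same result is usually simpler with the built-in templatefile function, no data source required; this sketch reuses the template and instance names from above:

resource "aws_launch_configuration" "launch_config" {
  # ...
  user_data = templatefile("${path.module}/router-init.sh.tpl", {
    some_address = aws_instance.some.private_ip
  })
  # ...
}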
Terraform
50,835,636
32
I've got a variable declared in my variables.tf like this: variable "MyAmi" { type = map(string) } but when I do: terraform plan -var 'MyAmi=xxxx' I get: Error: Variables not allowed on <value for var.MyAmi> line 1: (source code not available) Variables may not be used here. Minimal code example: test.tf provider "aws" { } # S3 module "my-s3" { source = "terraform-aws-modules/s3-bucket/aws" bucket = "${var.MyAmi}-bucket" } variables.tf variable "MyAmi" { type = map(string) } terraform plan -var 'MyAmi=test' Error: Variables not allowed on <value for var.MyAmi> line 1: (source code not available) Variables may not be used here. Any suggestions?
This error can also occur when trying to set up a variable's value from a dynamic resource (e.g. an output from a child module): variable "some_arn" { description = "Some description" default = module.some_module.some_output # <--- Error: Variables not allowed } Using a locals block instead of the variable will solve this issue: locals { some_arn = module.some_module.some_output }
Terraform
58,712,999
32
I'm trying to create a module in Terraform that can be instantiated multiple times with different variable inputs. Within the module, how do I reference resources when their names depend on an input variable? I'm trying to do it via the bracket syntax ("${aws_ecs_task_definition[var.name].arn}") but I just guessed at that. (Caveat: I might be going about this in completely the wrong way) Here's my module's (simplified) main.tf file: variable "name" {} resource "aws_ecs_service" "${var.name}" { name = "${var.name}_service" cluster = "" task_definition = "${aws_ecs_task_definition[var.name].arn}" desired_count = 1 } resource "aws_ecs_task_definition" "${var.name}" { family = "ecs-family-${var.name}" container_definitions = "${template_file[var.name].rendered}" } resource "template_file" "${var.name}_task" { template = "${file("task-definition.json")}" vars { name = "${var.name}" } } I'm getting the following error: Error loading Terraform: Error downloading modules: module foo: Error loading .terraform/modules/af13a92c4edda294822b341862422ba5/main.tf: Error reading config for aws_ecs_service[${var.name}]: parse error: syntax error
I was fundamentally misunderstanding how modules worked. Terraform does not support interpolation in resource names (see the relevant issues), but that doesn't matter in my case, because the resources of each instance of a module are in the instance's namespace. I was worried about resource names colliding, but the module system already handles that.
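For anyone landing here later, a minimal sketch of that pattern (illustrative names, current HCL syntax): the resource names inside the module stay fixed, and each module block gets its own namespace such as module.router.aws_ecs_service.this.

# modules/service/main.tf
variable "name" {}

resource "aws_ecs_task_definition" "this" {
  family                = "ecs-family-${var.name}"
  container_definitions = templatefile("${path.module}/task-definition.json", { name = var.name })
}

resource "aws_ecs_service" "this" {
  name            = "${var.name}_service"
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = 1
}

# root module: instantiate it as many times as needed
module "router" {
  source = "./modules/service"
  name   = "router"
}

module "worker" {
  source = "./modules/service"
  name   = "worker"
}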
Terraform
38,619,691
31
I would like to use the same terraform template for several dev and production environments. My approach: As I understand it, the resource name needs to be unique, and terraform stores the state of the resource internally. I therefore tried to use variables for the resource names - but it seems to be not supported. I get an error message: $ terraform plan var.env1 Enter a value: abc Error asking for user input: Error parsing address 'aws_sqs_queue.SqsIntegrationOrderIn${var.env1}': invalid resource address "aws_sqs_queue.SqsIntegrationOrderIn${var.env1}" My terraform template: variable "env1" {} provider "aws" { region = "ap-southeast-2" } resource "aws_sqs_queue" "SqsIntegrationOrderIn${var.env1}" { name = "Integration_Order_In__${var.env1}" message_retention_seconds = 86400 receive_wait_time_seconds = 5 } I think, either my approach is wrong, or the syntax. Any ideas?
You can't interpolate inside the resource name. What you should do instead, as @BMW mentioned in the comments, is make a Terraform module that contains the SqsIntegrationOrderIn resource and takes an env variable. Then you can use the module twice, and the two instances simply won't clash. You can also have a look at a similar question I answered.
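A rough sketch of that module approach, with made-up paths and names:

# modules/sqs_queue/main.tf
variable "env" {}

resource "aws_sqs_queue" "integration_order_in" {
  name                      = "Integration_Order_In__${var.env}"
  message_retention_seconds = 86400
  receive_wait_time_seconds = 5
}

# root configuration: one module call per environment, no name clash
module "sqs_dev" {
  source = "./modules/sqs_queue"
  env    = "dev"
}

module "sqs_prod" {
  source = "./modules/sqs_queue"
  env    = "prod"
}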
Terraform
46,353,686
31
I have an infrastructure I'm deploying using Terraform in AWS. This infrastructure can be deployed to different environments, for which I'm using workspaces. Most of the components in the deployment should be created separately for each workspace, but I have several key components that I wish to be shared between them, primarily: IAM roles and permissions They should use the same API Gateway, but each workspace should deploy to different paths and methods For example: resource "aws_iam_role" "lambda_iam_role" { name = "LambdaGeneralRole" policy = <...> } resource "aws_lambda_function" "my_lambda" { function_name = "lambda-${terraform.workspace}" role = "${aws_iam_role.lambda_iam_role.arn}" } The first resource is a IAM role that should be shared across all instances of that Lambda, and shouldn't be recreated more than once. The second resource is a Lambda function whose name depends on the current workspace, so each workspace will deploy and keep track of the state of a different Lambda. How can I share resources, and their state, between different Terraform workspaces?
For the shared resources, I create them in a separate template and then refer to them using terraform_remote_state in the template where I need information about them. What follows is how I implement this, there are probably other ways to implement it. YMMV In the shared services template (where you would put your IAM role) I use Terraform backend to store the output data for the shared services template in Consul. You also need to output any information you want to use in other templates. shared_services template terraform { backend "consul" { address = "consul.aa.example.com:8500" path = "terraform/shared_services" } } resource "aws_iam_role" "lambda_iam_role" { name = "LambdaGeneralRole" policy = <...> } output "lambda_iam_role_arn" { value = "${aws_iam_role.lambda_iam_role.arn}" } A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc. In the individual template you invoke the backend as a data source using terraform_remote_state and can use the data in that template. terraform_remote_state: Retrieves state meta data from a remote backend individual template data "terraform_remote_state" "shared_services" { backend = "consul" config { address = "consul.aa.example.com:8500" path = "terraform/shared_services" } } # This is where you use the terraform_remote_state data source resource "aws_lambda_function" "my_lambda" { function_name = "lambda-${terraform.workspace}" role = "${data.terraform_remote_state.shared_services.lambda_iam_role_arn}" } References: https://www.terraform.io/docs/state/remote.html https://www.terraform.io/docs/backends/ https://www.terraform.io/docs/providers/terraform/d/remote_state.html
Terraform
52,606,011
31
My simple terraform file is: provider "aws" { region = "region" access_key = "key" secret_key = "secret_key" } terraform { backend "s3" { # Replace this with your bucket name! bucket = "great-name-terraform-state-2" key = "global/s3/terraform.tfstate" region = "eu-central-1" # Replace this with your DynamoDB table name! dynamodb_table = "great-name-locks-2" encrypt = true } } resource "aws_s3_bucket" "terraform_state" { bucket = "great-name-terraform-state-2" # Enable versioning so we can see the full revision history of our # state files versioning { enabled = true } server_side_encryption_configuration { rule { apply_server_side_encryption_by_default { sse_algorithm = "AES256" } } } } resource "aws_dynamodb_table" "terraform_locks" { name = "great-name-locks-2" billing_mode = "PAY_PER_REQUEST" hash_key = "LockID" attribute { name = "LockID" type = "S" } } All I am trying to do is replace my local backend with one stored in S3. I am doing the following: terraform init (when the terraform {} block is commented out) terraform apply - I can see in my AWS account that the bucket was created and the DynamoDB table as well. Now I am uncommenting the terraform block and running terraform init again, and I get the following error: Error loading state: AccessDenied: Access Denied status code: 403, request id: xxx, host id: xxxx My IAM user has administrator access. I am using Terraform v0.12.24. As one can observe, I am directly writing my AWS key and secret in the file. What am I doing wrong? I appreciate any help!
I encountered this before. The following steps will help you overcome that error: Delete the .terraform directory. Place the access_key and secret_key under the backend block, as in the code below. Run terraform init. backend "s3" { bucket = "great-name-terraform-state-2" key = "global/s3/terraform.tfstate" region = "eu-central-1" access_key = "<access-key>" secret_key = "<secret-key>" } } The error should be gone.
Terraform
61,851,903
31
We have a cron job and a shell script which we want to copy or upload to an AWS EC2 instance while creating the instance using Terraform. We tried the file provisioner, but it's not working, and I read that this option does not work with all Terraform versions: provisioner "file" { source = "abc.sh" destination = "/home/ec2-user/basic2.sh" } We tried the data template file option: data "template_file" "userdata_line" { template = <<EOF #!/bin/bash mkdir /home/ec2-user/files2 cd /home/ec2-user/files2 sudo touch basic2.sh sudo chmod 777 basic2.sh base64 basic.sh |base64 -d >basic2.sh EOF } We tried all of these options but none of them are working. Could you please help or advise? I am new to Terraform and have been struggling with this for a long time.
When starting from an AMI that has cloud-init installed (which is common in many official Linux distributions), we can use cloud-init's write_files module to place arbitrary files into the filesystem, as long as they are small enough to fit within the constraints of the user_data argument along with all of the other cloud-init data. As with all cloud-init modules, we configure write_files using cloud-init's YAML-based configuration format, which begins with the special marker string #cloud-config on a line of its own, followed by a YAML data structure. Because JSON is a subset of YAML, we can use Terraform's jsonencode to produce a valid value[1]. locals { cloud_config_config = <<-END #cloud-config ${jsonencode({ write_files = [ { path = "/etc/example.txt" permissions = "0644" owner = "root:root" encoding = "b64" content = filebase64("${path.module}/example.txt") }, ] })} END } The write_files module can accept data in base64 format when we set encoding = "b64", so we use that in conjunction with Terraform's filebase64 function to include the contents of an external file. Other approaches are possible here, such as producing a string dynamically using Terraform templates and using base64encode to encode it as the file contents. If you can express everything you want cloud-init to do in a single configuration file like the above, then you can assign local.cloud_config_config directly as your instance user_data, and cloud-init should recognize and process it on system boot: user_data = local.cloud_config_config If you instead need to combine creating the file with some other actions, like running a shell script, you can use cloud-init's multipart archive format to encode multiple "files" for cloud-init to process. Terraform has a cloudinit provider that contains a data source for easily constructing a multipart archive for cloud-init: data "cloudinit_config" "examplecfg" { gzip = false base64_encode = false part { content_type = "text/cloud-config" filename = "cloud-config.yaml" content = local.cloud_config_config } part { content_type = "text/x-shellscript" filename = "example.sh" content = <<-EOF #!/bin/bash echo "Hello World" EOF } } This data source will produce a single string at data.cloudinit_config.examplecfg.rendered which is a multipart archive suitable for use as user_data for cloud-init: user_data = data.cloudinit_config.examplecfg.rendered EC2 imposes a maximum user-data size of 64 kilobytes, so all of the encoded data together must fit within that limit. If you need to place a large file that comes close to or exceeds that limit, it would probably be best to use an intermediate system to transfer that file, such as having Terraform write the file into an Amazon S3 bucket and having the software in your instance retrieve that data using instance profile credentials. That shouldn't be necessary for small data files used for system configuration, though. It's important to note that from the perspective of Terraform and EC2 the content of user_data is just an arbitrary string. Any issues in processing the string must be debugged within the target operating system itself, by reading the cloud-init logs to see how it interpreted the configuration and what happened when it tried to take those actions. [1]: We could also potentially use yamlencode, but at the time I write this that function has a warning that its exact formatting may change in future Terraform versions, and that's undesirable for user_data because it would cause the instance to be replaced.
If you are reading this in the future and that warning is no longer present in the yamlencode documentation, consider using yamlencode instead.
Terraform
62,101,009
31
When I had a single hosted zone it was easy for me to create the zone and then create the NS records for the zone in the delegating account by referencing the hosted zone by name. Edit To try to avoid confusion this is what I wanted to achieve but for multiple hosted zones and the owner of the domain is a management account: https://dev.to/arswaw/create-a-subdomain-in-amazon-route53-in-2-minutes-3hf0 Now I need to create multiple hosted zones and pass the nameserver records back to the parent account and I am not sure if its possible (or how if it is) to reference multiple resources. Reading this probably doesn't make a whole lot of sense, so the code below is as far as I have got. I now have a for_each loop which will loop over a list of strings and create a hosted zone for each string, and I want to then create corresponding NS records in another account, notice that I am using a separate provider provider = aws.management_account to connect to the management account and this works fine for a single hosted zone. I do not know how to reference the hosted zones, is there some syntax for this or is my approach wrong? resource "aws_route53_zone" "public_hosted_zone" { for_each = local.aws_zones name = "${each.value}.${var.domain}" } resource "aws_route53_record" "ns_records" { for_each = local.aws_zones provider = aws.management_account allow_overwrite = true name = "${each.value}.${var.domain}" ttl = 30 type = "NS" zone_id = data.aws_ssm_parameter.public_hosted_zone_id.value records = [ aws_route53_zone.public_hosted_zone.name_servers[0], # Here is my old code which works for a single hosted zone but I cannot work out how to reference multiples created above aws_route53_zone.public_hosted_zone.name_servers[1], aws_route53_zone.public_hosted_zone.name_servers[2], aws_route53_zone.public_hosted_zone.name_servers[3] ] }
Since your local.aws_zones is set to ["dev", "test", "qa"], your aws_route53_zone.public_hosted_zone will be a map with the keys "dev", "test", and "qa". Therefore, to use it in your aws_route53_record, you can try: resource "aws_route53_record" "ns_records" { for_each = local.aws_zones # other attributes records = aws_route53_zone.public_hosted_zone[each.key].name_servers }
Terraform
63,641,187
31
I am trying to pass a variable from the root module to a child module with the following syntax: main.tf: provider "aws" { version = "~> 1.11" access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" } module "iam" { account_id = "${var.account_id}" source = "./modules/iam" } * The account_id value is stored in variables.tf in the root folder /modules/iam/iam.tf resource "aws_iam_policy_attachment" "myName" { name = "myName" policy_arn = "arn:aws:iam::${var.account_id}:policy/myName" <-- doesn't work groups = [] users = [] roles = [] } When I try to access account_id within the module, an error is thrown.
You need to declare any variables that a module uses at the module level itself: variable "account_id" { } resource "aws_iam_policy_attachment" "myName" { name = "myName" policy_arn = "arn:aws:iam::${var.account_id}:policy/myName" groups = [] users = [] roles = [] }
Terraform
49,535,315
30
I am having an issue using Terraform (v0.9.2) adding services to an ELB (I'm using: https://github.com/segmentio/stack/blob/master/s3-logs/main.tf). When I run terraform apply I get this error: * module.solr.module.elb.aws_elb.main: 1 error(s) occurred: * aws_elb.main: Failure configuring ELB attributes: InvalidConfigurationRequest: Access Denied for bucket: my-service- logs. Please check S3bucket permission status code: 409, request id: xxxxxxxxxx-xxxx-xxxx-xxxxxxxxx My service looks like this: module "solr" { source = "github.com/segmentio/stack/service" name = "${var.prefix}-${terraform.env}-solr" environment = "${terraform.env}" image = "123456789876.dkr.ecr.eu-west-2.amazonaws.com/my-docker-image" subnet_ids = "${element(split(",", module.vpc_subnets.private_subnets_id), 3)}" security_groups = "${module.security.apache_solr_group}" port = "8983" cluster = "${module.ecs-cluster.name}" log_bucket = "${module.s3_logs.id}" iam_role = "${aws_iam_instance_profile.ecs.id}" dns_name = "" zone_id = "${var.route53_zone_id}" } My s3-logs bucket looks like this: module "s3_logs" { source = "github.com/segmentio/stack/s3-logs" name = "${var.prefix}" environment = "${terraform.env}" account_id = "123456789876" } I checked in S3 and the bucket policy looks like this: { "Version": "2012-10-17", "Id": "log-bucket-policy", "Statement": [ { "Sid": "log-bucket-policy", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123456789876:root" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::my-service-logs/*" } ] } As far as I can see ELB should have access to the S3 bucket to store the logs (it's running in the same AWS account). The bucket and the ELB are all in eu-west-2. Any ideas on what the problem could be would be much appreciated.
The docs for ELB access logs say that you want to allow a specific Amazon account to be able to write to S3, not your account. As such you want something like: { "Id": "Policy1429136655940", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1429136633762", "Action": [ "s3:PutObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::my-loadbalancer-logs/my-app/AWSLogs/123456789012/*", "Principal": { "AWS": [ "652711504416" ] } } ] } In Terraform you can use the aws_elb_service_account data source to automatically fetch the account ID used for writing logs as can be seen in the example in the docs: data "aws_elb_service_account" "main" {} resource "aws_s3_bucket" "elb_logs" { bucket = "my-elb-tf-test-bucket" acl = "private" policy = <<POLICY { "Id": "Policy", "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:PutObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::my-elb-tf-test-bucket/AWSLogs/*", "Principal": { "AWS": [ "${data.aws_elb_service_account.main.arn}" ] } } ] } POLICY } resource "aws_elb" "bar" { name = "my-foobar-terraform-elb" availability_zones = ["us-west-2a"] access_logs { bucket = "${aws_s3_bucket.elb_logs.bucket}" interval = 5 } listener { instance_port = 8000 instance_protocol = "http" lb_port = 80 lb_protocol = "http" } }
Terraform
43,366,038
29
I have an AWS Lambda deployed successfully with Terraform: resource "aws_lambda_function" "lambda" { filename = "dist/subscriber-lambda.zip" function_name = "test_get-code" role = <my_role> handler = "main.handler" timeout = 14 reserved_concurrent_executions = 50 memory_size = 128 runtime = "python3.6" tags = <my map of tags> source_code_hash = "${base64sha256(file("../modules/lambda/lambda-code/main.py"))}" kms_key_arn = <my_kms_arn> vpc_config { subnet_ids = <my_list_of_private_subnets> security_group_ids = <my_list_of_security_groups> } environment { variables = { environment = "dev" } } } Now, when I run terraform plan command it says my lambda resource needs to be updated because the source_code_hash has changed, but I didn't update lambda Python codebase (which is versioned in a folder of the same repo): ~ module.app.module.lambda.aws_lambda_function.lambda last_modified: "2018-10-05T07:10:35.323+0000" => <computed> source_code_hash: "jd6U44lfe4124vR0VtyGiz45HFzDHCH7+yTBjvr400s=" => "JJIv/AQoPvpGIg01Ze/YRsteErqR0S6JsqKDNShz1w78" I suppose it is because it compresses my Python sources each time and the source changes. How can I avoid that if there are no changes in the Python code? Is my hypothesis coherent if I didn't change the Python codebase (I mean, why then the hash changes)?
This works for me and also doesn't trigger an update on the Lambda function when the code hasn't changed data "archive_file" "lambda_zip" { type = "zip" source_dir = "../dist/go" output_path = "../dist/lambda_package.zip" } resource "aws_lambda_function" "aggregator_func" { description = "MyFunction" function_name = "my-func-${local.env}" filename = data.archive_file.lambda_zip.output_path runtime = "go1.x" handler = "main" source_code_hash = data.archive_file.lambda_zip.output_base64sha256 role = aws_iam_role.function_role.arn timeout = 120 publish = true tags = { environment = local.env } }
Terraform
52,662,244
29
Can you create views in Amazon Athena? outlines how to create a view using the User Interface. I'd like to create an AWS Athena View programatically, ideally using Terraform (which calls CloudFormation). I followed the steps outlined here: https://ujjwalbhardwaj.me/post/create-virtual-views-with-aws-glue-and-query-them-using-athena, however I run into an issue with this in that the view goes stale quickly. ...._view' is stale; it must be re-created. The terraform code looks like this: resource "aws_glue_catalog_table" "adobe_session_view" { database_name = "${var.database_name}" name = "session_view" table_type = "VIRTUAL_VIEW" view_original_text = "/* Presto View: ${base64encode(data.template_file.query_file.rendered)} */" view_expanded_text = "/* Presto View */" parameters = { presto_view = "true" comment = "Presto View" } storage_descriptor { ser_de_info { name = "ParquetHiveSerDe" serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe" } columns { name = "first_column" type = "string" } columns { name = "second_column" type = "int" } ... columns { name = "nth_column" type = "string" } } An alternative I'd be happy to use is the AWS CLI, however aws athena [option] provides no option for this. I've tried: create-named-query which I have not been able to get working for a statement such as CREATE OR REPLACE VIEW as this doesn't seem to be the intended use case for this command. start-query-execution which asks for an output location, which suggests that this is meant for querying the data and outputting the results, as opposed to making stateful changes/creations. It also seems to be paired with stop-query-execution.
Creating views programmatically in Athena is not documented, and unsupported, but possible. What happens behind the scenes when you create a view using StartQueryExecution is that Athena lets Presto create the view and then extracts Presto's internal representation and puts it in the Glue catalog. The staleness problem usually comes from the columns in the Presto metadata and the Glue metadata being out of sync. An Athena view really contains three descriptions of the view: the view SQL, the columns and their types in Glue format, and the columns and types in Presto format. If either of these get out of sync you will get the "… is stale; it must be re-created." error. These are the requirements on a Glue table to work as an Athena view: TableType must be VIRTUAL_VIEW Parameters must contain presto_view: true TableInput.ViewOriginalText must contain an encoded Presto view (see below) StorageDescriptor.SerdeInfo must be an empty map StorageDescriptor.Columns must contain all the columns that the view defines, with their types The tricky part is the encoded Presto view. That structure is created by this code: https://github.com/prestosql/presto/blob/27a1b0e304be841055b461e2c00490dae4e30a4e/presto-hive/src/main/java/io/prestosql/plugin/hive/HiveUtil.java#L597-L600, and this is more or less what it does: Adds a prefix /* Presto View: (with a space after :) Adds a base 64 encoded JSON string that contains the view SQL, the columns and their types, and some catalog metadata (see below) Adds a suffix */ (with a space before *) The JSON that describes the view looks like this: A catalog property that must have the value awsdatacatalog. A schema property that must be the name of the database where the view is created (i.e. it must match the DatabaseName property of the surrounding Glue structure. A list of columns, each with a name and type A originalSql property with the actual view SQL (not including CREATE VIEW …, it should start with SELECT … or WITH …) Here's an example: { "catalog": "awsdatacatalog", "schema": "some_database", "columns": [ {"name": "col1", "type": "varchar"}, {"name": "col2", "type": "bigint"} ], "originalSql": "SELECT col1, col2 FROM some_other_table" } One caveat here is that the types of the columns are almost, but not quite, the same as the names in Glue. If Athena/Glue would have string the value in this JSON must be varchar. If the Athena/Glue uses array<string> the value in this JSON must be array(varchar), and struct<foo:int> becomes row(foo int). This is pretty messy, and putting it all together requires some fiddling and testing. The easiest way to get it working is to create a few views and decoding working the instructions above backwards to see how they look, and then try doing it yourself.
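For reference, here is a minimal sketch in Python of the steps described above, using boto3's Glue client. The database name, view name, columns and SQL are placeholders, and the Presto/Glue type mapping is only illustrative; the table layout follows the requirements listed above.

import base64
import json
import boto3

# Placeholder inputs -- replace with your own database, view name, columns and SQL.
database = "some_database"
view_name = "example_view"
view_sql = "SELECT col1, col2 FROM some_other_table"
columns = [("col1", "varchar", "string"), ("col2", "bigint", "bigint")]  # (name, Presto type, Glue type)

# Build the Presto view definition and base64-encode it for ViewOriginalText.
presto_view = {
    "catalog": "awsdatacatalog",
    "schema": database,
    "columns": [{"name": name, "type": presto_type} for name, presto_type, _ in columns],
    "originalSql": view_sql,
}
encoded = base64.b64encode(json.dumps(presto_view).encode("utf-8")).decode("utf-8")

boto3.client("glue").create_table(
    DatabaseName=database,
    TableInput={
        "Name": view_name,
        "TableType": "VIRTUAL_VIEW",
        "Parameters": {"presto_view": "true", "comment": "Presto View"},
        "ViewOriginalText": "/* Presto View: {} */".format(encoded),
        "ViewExpandedText": "/* Presto View */",
        "StorageDescriptor": {
            "SerdeInfo": {},  # must be an empty map
            "Columns": [{"Name": name, "Type": glue_type} for name, _, glue_type in columns],
        },
    },
)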
Terraform
56,289,272
29
I am creating Secrets in AWS using Terraform code. My Jenkins pipeline creates the infrastructure every 2 hours and then destroys it. When the infrastructure is re-created after 2 hours, AWS Secrets Manager does not allow me to re-create the secret and throws the error below. Please suggest. Error: error creating Secrets Manager Secret: InvalidRequestException: You can't create this secret because a secret with this name is already scheduled for deletion. status code: 400, request id: e4f8cc85-29a4-46ff-911d-c5115716adc5 TF code: resource "aws_secretsmanager_secret" "secret" { description = "${var.environment}" kms_key_id = "${data.aws_kms_key.sm.arn}" name = "${var.environment}-airflow-secret" } resource "random_string" "rds_password" { length = 16 special = true } resource "aws_secretsmanager_secret_version" "secret" { secret_id = "${aws_secretsmanager_secret.secret.id}" secret_string = <<EOF { "rds_password": "${random_string.rds_password.result}" } EOF } TF plan output: # module.aws_af_aws_secretsmanager_secret.secret will be created + resource "aws_secretsmanager_secret" "secret" { + arn = (known after apply) + description = "dev-airflow-secret" + id = (known after apply) + kms_key_id = "arn:aws:kms:eu-central-1" + name = "dev-airflow-secret" + name_prefix = (known after apply) + recovery_window_in_days = 30 + rotation_enabled = (known after apply) } # module.aws_af.aws_secretsmanager_secret_version.secret will be created + resource "aws_secretsmanager_secret_version" "secret" { + arn = (known after apply) + id = (known after apply) + secret_id = (known after apply) + secret_string = (sensitive value) + version_id = (known after apply) + version_stages = (known after apply) }
You need to set the recovery window to 0 for immediate deletion of secrets. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret#recovery_window_in_days recovery_window_in_days - (Optional) Specifies the number of days that AWS Secrets Manager waits before it can delete the secret. This value can be 0 to force deletion without recovery or range from 7 to 30 days. The default value is 30.
Terraform
57,431,731
29
When attempting to run terraform init as a task in an Azure Pipeline, it errors stating spawn C:\hostedtoolcache\windows\terraform\0.12.7\x64\terraform.exe ENOENT The installation appears fine, as basic functionality is verified during the install step (terraform version) Relevant Pipeline Tasks ... - task: TerraformInstaller@0 displayName: 'Install Terraform 0.12.7' inputs: terraformVersion: 0.12.7 - task: TerraformTaskV1@0 displayName: 'Terraform : init' inputs: command: 'init' workingDirectory: '$(System.DefaultWorkingDirectory)/Terraform/terraform' ... Install Terraform 0.12.7 ... Verifying Terraform installation... C:\hostedtoolcache\windows\terraform\0.12.7\x64\terraform.exe version Terraform v0.12.7 Your version of Terraform is out of date! The latest version is 0.12.19. You can update by downloading from www.terraform.io/downloads.html Finishing: Install Terraform 0.12.7 Terraform : init ... C:\hostedtoolcache\windows\terraform\0.12.7\x64\terraform.exe validate ##[error]Error: There was an error when attempting to execute the process 'C:\hostedtoolcache\windows\terraform\0.12.7\x64\terraform.exe'. This may indicate the process failed to start. Error: spawn C:\hostedtoolcache\windows\terraform\0.12.7\x64\terraform.exe ENOENT Finishing: Terraform : validate Many other users reported success fixing this by adding a checkout step, but the pipeline automatically does this (presumably previous versions did not), and manually adding it had no effect (actually took 2s longer due to different options).
Turns out the working directory path was incorrect, as the directory structure had been changed. Changing all the named working directories from Terraform/terraform to just terraform corrected the issue. Presumably both in this and cases where checkout was not performed, Terraform simply cannot locate main.tf, but the error is missing or lost.
Terraform
59,794,909
29
I am declaring a google_logging_metric resource in Terraform (using version 0.11.14) I have the following declaration resource "google_logging_metric" "my_metric" { description = "Check for logs of some cron job\t" name = "mycj-logs" filter = "resource.type=\"k8s_container\" AND resource.labels.cluster_name=\"${local.k8s_name}\" AND resource.labels.namespace_name=\"workable\" AND resource.labels.container_name=\"mycontainer-cronjob\" \nresource.labels.pod_name:\"my-pod\"" project = "${data.terraform_remote_state.gke_k8s_env.project_id}" metric_descriptor { metric_kind = "DELTA" value_type = "INT64" } } Is there a way to make the filter field multiline? The existence of the local variable "${local.k8s_name} makes it a bit challenging.
From the docs String values are simple and represent a basic key to value mapping where the key is the variable name. An example is: variable "key" { type = "string" default = "value" } A multi-line string value can be provided using heredoc syntax. variable "long_key" { type = "string" default = <<EOF This is a long key. Running over several lines. EOF }
Terraform
60,722,012
29
After deleting kubernetes cluster with "terraform destroy" I can't create it again anymore. "terraform apply" returns the following error message: Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable Here is the terraform configuration: terraform { backend "s3" { bucket = "skyglass-msur" key = "terraform/backend" region = "us-east-1" } } locals { env_name = "staging" aws_region = "us-east-1" k8s_cluster_name = "ms-cluster" } variable "mysql_password" { type = string description = "Expected to be retrieved from environment variable TF_VAR_mysql_password" } provider "aws" { region = local.aws_region } data "aws_eks_cluster" "msur" { name = module.aws-kubernetes-cluster.eks_cluster_id } module "aws-network" { source = "github.com/skyglass-microservices/module-aws-network" env_name = local.env_name vpc_name = "msur-VPC" cluster_name = local.k8s_cluster_name aws_region = local.aws_region main_vpc_cidr = "10.10.0.0/16" public_subnet_a_cidr = "10.10.0.0/18" public_subnet_b_cidr = "10.10.64.0/18" private_subnet_a_cidr = "10.10.128.0/18" private_subnet_b_cidr = "10.10.192.0/18" } module "aws-kubernetes-cluster" { source = "github.com/skyglass-microservices/module-aws-kubernetes" ms_namespace = "microservices" env_name = local.env_name aws_region = local.aws_region cluster_name = local.k8s_cluster_name vpc_id = module.aws-network.vpc_id cluster_subnet_ids = module.aws-network.subnet_ids nodegroup_subnet_ids = module.aws-network.private_subnet_ids nodegroup_disk_size = "20" nodegroup_instance_types = ["t3.medium"] nodegroup_desired_size = 1 nodegroup_min_size = 1 nodegroup_max_size = 5 } # Create namespace # Use kubernetes provider to work with the kubernetes cluster API provider "kubernetes" { # load_config_file = false cluster_ca_certificate = base64decode(data.aws_eks_cluster.msur.certificate_authority.0.data) host = data.aws_eks_cluster.msur.endpoint exec { api_version = "client.authentication.k8s.io/v1alpha1" command = "aws-iam-authenticator" args = ["token", "-i", "${data.aws_eks_cluster.msur.name}"] } } # Create a namespace for microservice pods resource "kubernetes_namespace" "ms-namespace" { metadata { name = "microservices" } } P.S. There seems to be the issue with terraform kubernetes provider for 0.14.7 I couldn't use "load_config_file" = false in this version, so I had to comment it, which seems to be the reason of this issue. P.P.S. It could also be the issue with outdated cluster_ca_certificate, which terraform tries to use: deleting this certificate could be enough, although I'm not sure, where it is stored.
Before doing something radical like manipulating the state directly, try setting the KUBE_CONFIG_PATH variable: export KUBE_CONFIG_PATH=/path/to/.kube/config After this rerun the plan or apply command. This has fixed the issue for me.
Terraform
66,427,129
29
I need to execute a Terraform template to provision infrastructure for an AWS account which I can access by assuming a role. The problem I have now is I do not have an IAM user in that AWS account so I do not have an aws_access_key_id or an aws_secret_access_key to set up another named profile in my ~/.aws/credentials. When I run command terraform apply, the template creates the infrastructure for my account, not the other account. How to run Terraform template using your account which has a role to access services of another AWS account? Here's my Terraform file: # Input variables variable "aws_region" { type = "string" default = "us-east-1" } variable "pipeline_name" { type = "string" default = "static-website-terraform" } variable "github_username" { type = "string" default = "COMPANY" } variable "github_token" { type = "string" } variable "github_repo" { type = "string" } provider "aws" { region = "${var.aws_region}" assume_role { role_arn = "arn:aws:iam::<AWS-ACCOUNT-ID>:role/admin" profile = "default" } } # CodePipeline resources resource "aws_s3_bucket" "build_artifact_bucket" { bucket = "${var.pipeline_name}-artifact-bucket" acl = "private" } data "aws_iam_policy_document" "codepipeline_assume_policy" { statement { effect = "Allow" actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["codepipeline.amazonaws.com"] } } } resource "aws_iam_role" "codepipeline_role" { name = "${var.pipeline_name}-codepipeline-role" assume_role_policy = "${data.aws_iam_policy_document.codepipeline_assume_policy.json}" } # CodePipeline policy needed to use CodeCommit and CodeBuild resource "aws_iam_role_policy" "attach_codepipeline_policy" { name = "${var.pipeline_name}-codepipeline-policy" role = "${aws_iam_role.codepipeline_role.id}" policy = <<EOF { "Statement": [ { "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning", "s3:PutObject" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "cloudwatch:*", "sns:*", "sqs:*", "iam:PassRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "codebuild:BatchGetBuilds", "codebuild:StartBuild" ], "Resource": "*", "Effect": "Allow" } ], "Version": "2012-10-17" } EOF } # CodeBuild IAM Permissions resource "aws_iam_role" "codebuild_assume_role" { name = "${var.pipeline_name}-codebuild-role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "codebuild.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } resource "aws_iam_role_policy" "codebuild_policy" { name = "${var.pipeline_name}-codebuild-policy" role = "${aws_iam_role.codebuild_assume_role.id}" policy = <<POLICY { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:PutObject", "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning" ], "Resource": "*", "Effect": "Allow" }, { "Effect": "Allow", "Resource": [ "${aws_codebuild_project.build_project.id}" ], "Action": [ "codebuild:*" ] }, { "Effect": "Allow", "Resource": [ "*" ], "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ] } ] } POLICY } # CodeBuild Section for the Package stage resource "aws_codebuild_project" "build_project" { name = "${var.pipeline_name}-build" description = "The CodeBuild project for ${var.pipeline_name}" service_role = "${aws_iam_role.codebuild_assume_role.arn}" build_timeout = "60" artifacts { type = "CODEPIPELINE" } environment { compute_type = "BUILD_GENERAL1_SMALL" image = "aws/codebuild/nodejs:6.3.1" type = "LINUX_CONTAINER" } source { type = "CODEPIPELINE" buildspec = 
"buildspec.yml" } } # Full CodePipeline resource "aws_codepipeline" "codepipeline" { name = "${var.pipeline_name}-codepipeline" role_arn = "${aws_iam_role.codepipeline_role.arn}" artifact_store = { location = "${aws_s3_bucket.build_artifact_bucket.bucket}" type = "S3" } stage { name = "Source" action { name = "Source" category = "Source" owner = "ThirdParty" provider = "GitHub" version = "1" output_artifacts = ["SourceArtifact"] configuration { Owner = "${var.github_username}" OAuthToken = "${var.github_token}" Repo = "${var.github_repo}" Branch = "master" PollForSourceChanges = "true" } } } stage { name = "Deploy" action { name = "DeployToS3" category = "Test" owner = "AWS" provider = "CodeBuild" input_artifacts = ["SourceArtifact"] output_artifacts = ["OutputArtifact"] version = "1" configuration { ProjectName = "${aws_codebuild_project.build_project.name}" } } } } Update: Following Darren's answer (it makes a lot of sense) below, I added: provider "aws" { region = "us-east-1" shared_credentials_file = "${pathexpand("~/.aws/credentials")}" profile = "default" assume_role { role_arn = "arn:aws:iam::<OTHER-ACCOUNT>:role/<ROLE-NAME>" } } However, I ran into this error: provider.aws: The role "arn:aws:iam:::role/" cannot be assumed. There are a number of possible causes of this - the most common are: The credentials used in order to assume the role are invalid The credentials do not have appropriate permission to assume the role The role ARN is not valid I've checked the role in the other account and I can switch to that role using the AWS Console from my account. I've also checked AWS guide here So: that role ARN is valid, I do have credentials to assume the role and all the permissions I need to run the stack. Update I've also tried with a new role that has all access to services. However, I ran into this error: Error: Error refreshing state: 2 error(s) occurred: * aws_codebuild_project.build_project: 1 error(s) occurred: * aws_codebuild_project.build_project: aws_codebuild_project.build_project: Error retreiving Projects: "InvalidInputException: Invalid project ARN: account ID does not match caller's account\n\tstatus code: 400, request id: ..." * aws_s3_bucket.build_artifact_bucket: 1 error(s) occurred: * aws_s3_bucket.build_artifact_bucket: aws_s3_bucket.build_artifact_bucket: error getting S3 Bucket CORS configuration: AccessDenied: Access Denied status code: 403, request id: ..., host id: ... ===== UPDATE 29 Apr 2019: Following @Rolando's suggestion, I've added this policy to the user of the MAIN ACCOUNT that I'm trying to use to assume the role of the OTHER ACCOUNT where I'm planning to execute terraform apply. { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::<OTHER-ACCOUNT-ID>:role/admin" } } This is the Trust Relationship of the role admin belongs to the OTHER ACCOUNT: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<MAIN_ACCOUNT_ID>:root" }, "Action": "sts:AssumeRole", "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } } } ] } However, when I ran this command: aws sts assume-role --role-arn arn:aws:iam::<OTHER-ACCOUNT-ID>:role/admin --role-session-name "RoleSession1" --profile default > assume-role-output.txt I have this error: An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied
I have a bulletproof solution anytime you want to run commands as a specific role (including other accounts). I assume you have the AWS CLI tools installed. You will also have to install jq (an easy tool to parse and extract data from JSON), although you can parse the data any way you wish. aws_credentials=$(aws sts assume-role --role-arn arn:aws:iam::1234567890:role/nameOfMyrole --role-session-name "RoleSession1" --output json) export AWS_ACCESS_KEY_ID=$(echo $aws_credentials|jq '.Credentials.AccessKeyId'|tr -d '"') export AWS_SECRET_ACCESS_KEY=$(echo $aws_credentials|jq '.Credentials.SecretAccessKey'|tr -d '"') export AWS_SESSION_TOKEN=$(echo $aws_credentials|jq '.Credentials.SessionToken'|tr -d '"') The first line captures the response from the aws sts command and puts it in a variable. The last 3 lines select the values from that response and assign them to the environment variables that the AWS CLI uses. Considerations: If you create a bash script, add your terraform commands there as well. You can also just create a bash script with the lines above, and run it with a '.' in front (i.e. . ./get-creds.sh). This will create the variables in your current bash shell. Roles expire: keep in mind that assumed roles usually have an expiration of an hour. Your shell will now have the three variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN. This means that they will override your ~/.aws/credentials. The easiest way to clear this is to just start a new bash session. I used this article as my source to figure this out: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
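If you would rather not depend on jq, the same assume-role flow can be sketched in Python with boto3 (the role ARN and session name below are placeholders); it prints export lines that you can load into your shell with eval "$(python assume_role.py)":

import boto3

sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/nameOfMyrole",  # placeholder ARN
    RoleSessionName="RoleSession1",
)
credentials = response["Credentials"]

# Print shell export lines so the caller can eval them into the current shell.
print("export AWS_ACCESS_KEY_ID={}".format(credentials["AccessKeyId"]))
print("export AWS_SECRET_ACCESS_KEY={}".format(credentials["SecretAccessKey"]))
print("export AWS_SESSION_TOKEN={}".format(credentials["SessionToken"]))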
Terraform
55,128,348
28
Can you conditionally apply lifecycle blocks to resources in Terraform 0.12. For example if I wanted to add this block to an AWS ASG resource based of a parameter passed to the module. lifecycle { ignore_changes = [ target_group_arns, ] }
No, you can't. From the The lifecycle Meta-Argument documentation: The lifecycle settings all affect how Terraform constructs and traverses the dependency graph. As a result, only literal values can be used because the processing happens too early for arbitrary expression evaluation. While that doesn't explicitly forbid for_each or other dynamic use which would achieve your goal, such constructs are not determinable until later in execution. The best current workaround is two separate copies of the resource, one with this block and one without: lifecycle { ignore_changes = [ target_group_arns, ] } Hopefully, a future version of Terraform will support dynamic lifecycle blocks and non-constant expressions within them.
Terraform
62,427,931
28
When I run terraform init for my Google Cloud Platform project on my Apple Silicon macbook pro I get this error. Provider registry.terraform.io/hashicorp/google v3.57.0 does not have a package available for your current platform, darwin_arm64. How can I work around this? I thought that the Rosetta2 emulator would check this box, but alas...
Install Terraform with the tfenv package, which can install a build for a specific platform architecture. I ran the following to install a version that works under my M1 MacBook (version 1.3.3 in this case, using the amd64 build, which runs under Rosetta): brew uninstall terraform brew install tfenv TFENV_ARCH=amd64 tfenv install 1.3.3 tfenv use 1.3.3
Terraform
66,281,882
28
I have a list in terraform that looks something like: array = ["a","b","c"] Within this terraform file there are two variables called age and gender, and I want to make it so that the list called array has an extra element called "d" if age is equal to 12 and gender is equal to male (i.e. if var.age == 12 && var.gender == 'male' then array should be ["a","b","c","d"], else array should be ["a","b","c"]). Would the following be along the right path, or would I need to use another method? array = ["a","b","c", var.age == 12 && var.gender == 'male' ? "d" : null]
There is another way to do that using flatten: variable = flatten(["a", "b", "c", var.age == 12 ? ["d"] : []])
Terraform
67,902,785
28
I'm a little confused about what I'm reading in the terraform documentation. Here's what it says about modules: https://www.terraform.io/docs/language/modules/index.html Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory. Here's what it says about providers: https://www.terraform.io/docs/language/providers/requirements.html Requiring Providers Each Terraform module must declare which providers it requires, so that Terraform can install and use them. Provider requirements are declared in a required_providers block. A provider requirement consists of a local name, a source location, and a version constraint: terraform { required_providers { mycloud = { source = "mycorp/mycloud" version = "~> 1.0" } } } I'm confused about this because I never have specified required_providers in any of my modules, even though I'm using providers and it says I must do so. I didn't even know the documentation said this until today. So am I misinterpreting the documentation, or is the documentation wrong? Does each of my modules need required_providers or not? My terraform configuration definitely works without them, so are they defaulting to something? If yes, how and where?
For backward compatibility with earlier versions of Terraform, Terraform v0.13 and later treat any use of a provider short name that isn't declared in required_providers as an implicit declaration of a requirement for a provider in the hashicorp namespace. For example, we can consider a resource like this: resource "aws_instance" "example" { # ... } If you haven't declared what provider you mean by aws then Terraform will assume that you mean to write something like this: terraform { required_providers { aws = { source = "hashicorp/aws" } } } This behavior is primarily intended to allow existing modules written against the HashiCorp-distributed providers (previously the only auto-installable providers) to continue working without any modification. You can rely on that backward-compatibility behavior if you wish, but the intention (reflected in the docs) is that all modern Terraform modules should be explicit about which specific providers they are using so that, over time as there are more providers belonging to other namespaces, a reader of the module doesn't need to be aware of this special backward-compatibility rule in order to understand its meaning. The terraform 0.13upgrade command included in Terraform v0.13 will automatically generate a suitable source address for each provider your module is using, by referring to a table which maps provider names as understood by Terraform v0.12 and earlier to the fully-qualified provider source addresses expected by Terraform v0.13 and later. Only the ones that are maintained by HashiCorp (as opposed to being maintained by a third-party but previously distributed by HashiCorp) are in the hashicorp namespace, and so using that tool will ensure that you'll specify addresses that correspond to the providers that Terraform v0.12 would've installed for the same configuration.
Terraform
68,216,074
28
How can I easily generate random numbers following a normal distribution in C or C++? I don't want any use of Boost. I know that Knuth talks about this at length but I don't have his books at hand right now.
There are many methods to generate Gaussian-distributed numbers from a regular RNG. The Box-Muller transform is commonly used. It correctly produces values with a normal distribution. The math is easy. You generate two (uniform) random numbers, and by applying a formula to them, you get two normally distributed random numbers. Return one, and save the other for the next request for a random number.
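To make the transform concrete, here is a small sketch of that formula (written in Python for brevity; the question asks for C or C++, and the same few lines of arithmetic translate directly using <cmath> and any uniform RNG):

import math
import random

def gaussian_pair(mu=0.0, sigma=1.0):
    # Box-Muller: two independent uniform variates -> two independent normal variates.
    u1 = 1.0 - random.random()  # shift from [0, 1) to (0, 1] so log() never sees zero
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return mu + sigma * z0, mu + sigma * z1

Return one value of the pair and cache the other for the next call, exactly as described above.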
Distribution
2,325,472
130
I want to know if the JavaScript function Math.random uses a normal (vs. uniform) distribution or not. If not, how can I get numbers which use a normal distribution? I haven't found a clear answer on the Internet, for an algorithm to create random normally-distributed numbers. I want to rebuild a Schmidt-machine (German physicist). The machine produces random numbers of 0 or 1, and they have to be normally-distributed so that I can draw them as a Gaussian bell curve. For example, the random function produces 120 numbers (0 or 1) and the average (mean) of these summed values has to be near 60.
Since this is the first Google result for "js gaussian random" in my experience, I feel an obligation to give an actual answer to that query. The Box-Muller transform converts two independent uniform variates on (0, 1) into two standard Gaussian variates (mean 0, variance 1). This probably isn't very performant because of the sqrt, log, and cos calls, but this method is superior to the central limit theorem approaches (summing N uniform variates) because it doesn't restrict the output to the bounded range (-N/2, N/2). It's also really simple: // Standard Normal variate using Box-Muller transform. function gaussianRandom(mean=0, stdev=1) { const u = 1 - Math.random(); // Converting [0,1) to (0,1] const v = Math.random(); const z = Math.sqrt( -2.0 * Math.log( u ) ) * Math.cos( 2.0 * Math.PI * v ); // Transform to the desired mean and standard deviation: return z * stdev + mean; }
Distribution
25,582,882
114
I can't figure out how to do a Two-sample KS test in Scipy. After reading the documentation of scipy kstest, I can see how to test whether a distribution is identical to standard normal distribution from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) test_stat = kstest(x, 'norm') #>>> test_stat #(0.021080234718821145, 0.76584491300591395) Which means that at p-value of 0.76 we cannot reject the null hypothesis that the two distributions are identical. However, I want to compare two distributions and see if I can reject the null hypothesis that they are identical, something like: from scipy.stats import kstest import numpy as np x = np.random.normal(0,1,1000) z = np.random.normal(1.1,0.9, 1000) and test whether x and z are identical. I tried the naive: test_stat = kstest(x, z) and got the following error: TypeError: 'numpy.ndarray' object is not callable Is there a way to do a two-sample KS test in Python? If so, how should I do it?
You are using the one-sample KS test. You probably want the two-sample test ks_2samp: >>> from scipy.stats import ks_2samp >>> import numpy as np >>> >>> np.random.seed(12345678) >>> x = np.random.normal(0, 1, 1000) >>> y = np.random.normal(0, 1, 1000) >>> z = np.random.normal(1.1, 0.9, 1000) >>> >>> ks_2samp(x, y) Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647) >>> ks_2samp(x, z) Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77) The results can be interpreted as follows: You can either compare the statistic value given by Python to the KS-test critical value table for your sample size; when the statistic value is higher than the critical value, the two distributions are different. Or you can compare the p-value to a significance level a, usually a=0.05 or 0.01 (you decide; the lower a is, the more significant). If the p-value is lower than a, then it is very likely that the two distributions are different.
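To make the interpretation concrete, here is a small sketch of the usual decision rule (alpha = 0.05 is just a conventional choice):

from scipy.stats import ks_2samp
import numpy as np

np.random.seed(12345678)
x = np.random.normal(0, 1, 1000)
z = np.random.normal(1.1, 0.9, 1000)

result = ks_2samp(x, z)
alpha = 0.05  # chosen significance level
if result.pvalue < alpha:
    print("p = {:.3g} < {}: reject the null hypothesis; the samples come from different distributions".format(result.pvalue, alpha))
else:
    print("p = {:.3g} >= {}: cannot reject the null hypothesis".format(result.pvalue, alpha))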
Distribution
10,884,668
110
I was wondering if there were statistics functions built into math libraries that are part of the standard C++ libraries like cmath. If not, can you guys recommend a good stats library that would have a cumulative normal distribution function? More specifically, I am looking to use/create a cumulative distribution function.
There is no dedicated function for this. But since the Gaussian error function and its complementary function are related to the normal cumulative distribution function (see here, or here), we can use the implemented C function erfc (complementary error function): double normalCDF(double value) { return 0.5 * erfc(-value * M_SQRT1_2); } This uses the relation erfc(x) = 1-erf(x) with M_SQRT1_2 = √0.5. I use it for statistical calculations and it works great. No need for using coefficients.
Distribution
2,328,258
62
Visualizing scipy.stats distributions A histogram can be made of the scipy.stats normal random variable to see what the distribution looks like. % matplotlib inline import pandas as pd import scipy.stats as stats d = stats.norm() rv = d.rvs(100000) pd.Series(rv).hist(bins=32, normed=True) What do the other distributions look like?
Visualizing all scipy.stats distributions Based on the list of scipy.stats distributions, plotted below are the histograms and PDFs of each continuous random variable. The code used to generate each distribution is at the bottom. Note: The shape constants were taken from the examples on the scipy.stats distribution documentation pages. alpha(a=3.57, loc=0.00, scale=1.00) anglit(loc=0.00, scale=1.00) arcsine(loc=0.00, scale=1.00) beta(a=2.31, loc=0.00, scale=1.00, b=0.63) betaprime(a=5.00, loc=0.00, scale=1.00, b=6.00) bradford(loc=0.00, c=0.30, scale=1.00) burr(loc=0.00, c=10.50, scale=1.00, d=4.30) cauchy(loc=0.00, scale=1.00) chi(df=78.00, loc=0.00, scale=1.00) chi2(df=55.00, loc=0.00, scale=1.00) cosine(loc=0.00, scale=1.00) dgamma(a=1.10, loc=0.00, scale=1.00) dweibull(loc=0.00, c=2.07, scale=1.00) erlang(a=2.00, loc=0.00, scale=1.00) expon(loc=0.00, scale=1.00) exponnorm(loc=0.00, K=1.50, scale=1.00) exponpow(loc=0.00, scale=1.00, b=2.70) exponweib(a=2.89, loc=0.00, c=1.95, scale=1.00) f(loc=0.00, dfn=29.00, scale=1.00, dfd=18.00) fatiguelife(loc=0.00, c=29.00, scale=1.00) fisk(loc=0.00, c=3.09, scale=1.00) foldcauchy(loc=0.00, c=4.72, scale=1.00) foldnorm(loc=0.00, c=1.95, scale=1.00) frechet_l(loc=0.00, c=3.63, scale=1.00) frechet_r(loc=0.00, c=1.89, scale=1.00) gamma(a=1.99, loc=0.00, scale=1.00) gausshyper(a=13.80, loc=0.00, c=2.51, scale=1.00, b=3.12, z=5.18) genexpon(a=9.13, loc=0.00, c=3.28, scale=1.00, b=16.20) genextreme(loc=0.00, c=-0.10, scale=1.00) gengamma(a=4.42, loc=0.00, c=-3.12, scale=1.00) genhalflogistic(loc=0.00, c=0.77, scale=1.00) genlogistic(loc=0.00, c=0.41, scale=1.00) gennorm(loc=0.00, beta=1.30, scale=1.00) genpareto(loc=0.00, c=0.10, scale=1.00) gilbrat(loc=0.00, scale=1.00) gompertz(loc=0.00, c=0.95, scale=1.00) gumbel_l(loc=0.00, scale=1.00) gumbel_r(loc=0.00, scale=1.00) halfcauchy(loc=0.00, scale=1.00) halfgennorm(loc=0.00, beta=0.68, scale=1.00) halflogistic(loc=0.00, scale=1.00) halfnorm(loc=0.00, scale=1.00) hypsecant(loc=0.00, scale=1.00) invgamma(a=4.07, loc=0.00, scale=1.00) invgauss(mu=0.14, loc=0.00, scale=1.00) invweibull(loc=0.00, c=10.60, scale=1.00) johnsonsb(a=4.32, loc=0.00, scale=1.00, b=3.18) johnsonsu(a=2.55, loc=0.00, scale=1.00, b=2.25) ksone(loc=0.00, scale=1.00, n=1000.00) kstwobign(loc=0.00, scale=1.00) laplace(loc=0.00, scale=1.00) levy(loc=0.00, scale=1.00) levy_l(loc=0.00, scale=1.00) loggamma(loc=0.00, c=0.41, scale=1.00) logistic(loc=0.00, scale=1.00) loglaplace(loc=0.00, c=3.25, scale=1.00) lognorm(loc=0.00, s=0.95, scale=1.00) lomax(loc=0.00, c=1.88, scale=1.00) maxwell(loc=0.00, scale=1.00) mielke(loc=0.00, s=3.60, scale=1.00, k=10.40) nakagami(loc=0.00, scale=1.00, nu=4.97) ncf(loc=0.00, dfn=27.00, nc=0.42, dfd=27.00, scale=1.00) nct(df=14.00, loc=0.00, scale=1.00, nc=0.24) ncx2(df=21.00, loc=0.00, scale=1.00, nc=1.06) norm(loc=0.00, scale=1.00) pareto(loc=0.00, scale=1.00, b=2.62) pearson3(loc=0.00, skew=0.10, scale=1.00) powerlaw(a=1.66, loc=0.00, scale=1.00) powerlognorm(loc=0.00, s=0.45, scale=1.00, c=2.14) powernorm(loc=0.00, c=4.45, scale=1.00) rayleigh(loc=0.00, scale=1.00) rdist(loc=0.00, c=0.90, scale=1.00) recipinvgauss(mu=0.63, loc=0.00, scale=1.00) reciprocal(a=0.01, loc=0.00, scale=1.00, b=1.01) rice(loc=0.00, scale=1.00, b=0.78) semicircular(loc=0.00, scale=1.00) t(df=2.74, loc=0.00, scale=1.00) triang(loc=0.00, c=0.16, scale=1.00) truncexpon(loc=0.00, scale=1.00, b=4.69) truncnorm(a=0.10, loc=0.00, scale=1.00, b=2.00) tukeylambda(loc=0.00, scale=1.00, lam=3.13) uniform(loc=0.00, scale=1.00) 
vonmises(loc=0.00, scale=1.00, kappa=3.99) vonmises_line(loc=0.00, scale=1.00, kappa=3.99) wald(loc=0.00, scale=1.00) weibull_max(loc=0.00, c=2.87, scale=1.00) weibull_min(loc=0.00, c=1.79, scale=1.00) wrapcauchy(loc=0.00, c=0.03, scale=1.00) Generation Code Here is the Jupyter Notebook used to generate the plots. %matplotlib inline import io import numpy as np import pandas as pd import scipy.stats as stats import matplotlib import matplotlib.pyplot as plt matplotlib.rcParams['figure.figsize'] = (16.0, 14.0) matplotlib.style.use('ggplot') # Distributions to check, shape constants were taken from the examples on the scipy.stats distribution documentation pages. DISTRIBUTIONS = [ stats.alpha(a=3.57, loc=0.0, scale=1.0), stats.anglit(loc=0.0, scale=1.0), stats.arcsine(loc=0.0, scale=1.0), stats.beta(a=2.31, b=0.627, loc=0.0, scale=1.0), stats.betaprime(a=5, b=6, loc=0.0, scale=1.0), stats.bradford(c=0.299, loc=0.0, scale=1.0), stats.burr(c=10.5, d=4.3, loc=0.0, scale=1.0), stats.cauchy(loc=0.0, scale=1.0), stats.chi(df=78, loc=0.0, scale=1.0), stats.chi2(df=55, loc=0.0, scale=1.0), stats.cosine(loc=0.0, scale=1.0), stats.dgamma(a=1.1, loc=0.0, scale=1.0), stats.dweibull(c=2.07, loc=0.0, scale=1.0), stats.erlang(a=2, loc=0.0, scale=1.0), stats.expon(loc=0.0, scale=1.0), stats.exponnorm(K=1.5, loc=0.0, scale=1.0), stats.exponweib(a=2.89, c=1.95, loc=0.0, scale=1.0), stats.exponpow(b=2.7, loc=0.0, scale=1.0), stats.f(dfn=29, dfd=18, loc=0.0, scale=1.0), stats.fatiguelife(c=29, loc=0.0, scale=1.0), stats.fisk(c=3.09, loc=0.0, scale=1.0), stats.foldcauchy(c=4.72, loc=0.0, scale=1.0), stats.foldnorm(c=1.95, loc=0.0, scale=1.0), stats.frechet_r(c=1.89, loc=0.0, scale=1.0), stats.frechet_l(c=3.63, loc=0.0, scale=1.0), stats.genlogistic(c=0.412, loc=0.0, scale=1.0), stats.genpareto(c=0.1, loc=0.0, scale=1.0), stats.gennorm(beta=1.3, loc=0.0, scale=1.0), stats.genexpon(a=9.13, b=16.2, c=3.28, loc=0.0, scale=1.0), stats.genextreme(c=-0.1, loc=0.0, scale=1.0), stats.gausshyper(a=13.8, b=3.12, c=2.51, z=5.18, loc=0.0, scale=1.0), stats.gamma(a=1.99, loc=0.0, scale=1.0), stats.gengamma(a=4.42, c=-3.12, loc=0.0, scale=1.0), stats.genhalflogistic(c=0.773, loc=0.0, scale=1.0), stats.gilbrat(loc=0.0, scale=1.0), stats.gompertz(c=0.947, loc=0.0, scale=1.0), stats.gumbel_r(loc=0.0, scale=1.0), stats.gumbel_l(loc=0.0, scale=1.0), stats.halfcauchy(loc=0.0, scale=1.0), stats.halflogistic(loc=0.0, scale=1.0), stats.halfnorm(loc=0.0, scale=1.0), stats.halfgennorm(beta=0.675, loc=0.0, scale=1.0), stats.hypsecant(loc=0.0, scale=1.0), stats.invgamma(a=4.07, loc=0.0, scale=1.0), stats.invgauss(mu=0.145, loc=0.0, scale=1.0), stats.invweibull(c=10.6, loc=0.0, scale=1.0), stats.johnsonsb(a=4.32, b=3.18, loc=0.0, scale=1.0), stats.johnsonsu(a=2.55, b=2.25, loc=0.0, scale=1.0), stats.ksone(n=1e+03, loc=0.0, scale=1.0), stats.kstwobign(loc=0.0, scale=1.0), stats.laplace(loc=0.0, scale=1.0), stats.levy(loc=0.0, scale=1.0), stats.levy_l(loc=0.0, scale=1.0), stats.levy_stable(alpha=0.357, beta=-0.675, loc=0.0, scale=1.0), stats.logistic(loc=0.0, scale=1.0), stats.loggamma(c=0.414, loc=0.0, scale=1.0), stats.loglaplace(c=3.25, loc=0.0, scale=1.0), stats.lognorm(s=0.954, loc=0.0, scale=1.0), stats.lomax(c=1.88, loc=0.0, scale=1.0), stats.maxwell(loc=0.0, scale=1.0), stats.mielke(k=10.4, s=3.6, loc=0.0, scale=1.0), stats.nakagami(nu=4.97, loc=0.0, scale=1.0), stats.ncx2(df=21, nc=1.06, loc=0.0, scale=1.0), stats.ncf(dfn=27, dfd=27, nc=0.416, loc=0.0, scale=1.0), stats.nct(df=14, nc=0.24, loc=0.0, scale=1.0), stats.norm(loc=0.0, 
scale=1.0), stats.pareto(b=2.62, loc=0.0, scale=1.0), stats.pearson3(skew=0.1, loc=0.0, scale=1.0), stats.powerlaw(a=1.66, loc=0.0, scale=1.0), stats.powerlognorm(c=2.14, s=0.446, loc=0.0, scale=1.0), stats.powernorm(c=4.45, loc=0.0, scale=1.0), stats.rdist(c=0.9, loc=0.0, scale=1.0), stats.reciprocal(a=0.00623, b=1.01, loc=0.0, scale=1.0), stats.rayleigh(loc=0.0, scale=1.0), stats.rice(b=0.775, loc=0.0, scale=1.0), stats.recipinvgauss(mu=0.63, loc=0.0, scale=1.0), stats.semicircular(loc=0.0, scale=1.0), stats.t(df=2.74, loc=0.0, scale=1.0), stats.triang(c=0.158, loc=0.0, scale=1.0), stats.truncexpon(b=4.69, loc=0.0, scale=1.0), stats.truncnorm(a=0.1, b=2, loc=0.0, scale=1.0), stats.tukeylambda(lam=3.13, loc=0.0, scale=1.0), stats.uniform(loc=0.0, scale=1.0), stats.vonmises(kappa=3.99, loc=0.0, scale=1.0), stats.vonmises_line(kappa=3.99, loc=0.0, scale=1.0), stats.wald(loc=0.0, scale=1.0), stats.weibull_min(c=1.79, loc=0.0, scale=1.0), stats.weibull_max(c=2.87, loc=0.0, scale=1.0), stats.wrapcauchy(c=0.0311, loc=0.0, scale=1.0) ] bins = 32 size = 16384 plotData = [] for distribution in DISTRIBUTIONS: try: # Create random data rv = pd.Series(distribution.rvs(size=size)) # Get sane start and end points of distribution start = distribution.ppf(0.01) end = distribution.ppf(0.99) # Build PDF and turn into pandas Series x = np.linspace(start, end, size) y = distribution.pdf(x) pdf = pd.Series(y, x) # Get histogram of random data b = np.linspace(start, end, bins+1) y, x = np.histogram(rv, bins=b, normed=True) x = [(a+x[i+1])/2.0 for i,a in enumerate(x[0:-1])] hist = pd.Series(y, x) # Create distribution name and parameter string title = '{}({})'.format(distribution.dist.name, ', '.join(['{}={:0.2f}'.format(k,v) for k,v in distribution.kwds.items()])) # Store data for later plotData.append({ 'pdf': pdf, 'hist': hist, 'title': title }) except Exception: print 'could not create data', distribution.dist.name plotMax = len(plotData) for i, data in enumerate(plotData): w = abs(abs(data['hist'].index[0]) - abs(data['hist'].index[1])) # Display plt.figure(figsize=(10, 6)) ax = data['pdf'].plot(kind='line', label='Model PDF', legend=True, lw=2) ax.bar(data['hist'].index, data['hist'].values, label='Random Sample', width=w, align='center', alpha=0.5) ax.set_title(data['title']) # Grab figure fig = matplotlib.pyplot.gcf() # Output 'file' fig.savefig('~/Desktop/dist/'+data['title']+'.png', format='png', bbox_inches='tight') matplotlib.pyplot.close()
Distribution
37,559,470
55
I am trying to recreate maximum likelihood distribution fitting, I can already do this in Matlab and R, but now I want to use scipy. In particular, I would like to estimate the Weibull distribution parameters for my data set. I have tried this: import scipy.stats as s import numpy as np import matplotlib.pyplot as plt def weib(x,n,a): return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) data = np.loadtxt("stack_data.csv") (loc, scale) = s.exponweib.fit_loc_scale(data, 1, 1) print loc, scale x = np.linspace(data.min(), data.max(), 1000) plt.plot(x, weib(x, loc, scale)) plt.hist(data, data.max(), density=True) plt.show() And get this: (2.5827280639441961, 3.4955032285727947) And a distribution that looks like this: I have been using the exponweib after reading this http://www.johndcook.com/distributions_scipy.html. I have also tried the other Weibull functions in scipy (just in case!). In Matlab (using the Distribution Fitting Tool - see screenshot) and in R (using both the MASS library function fitdistr and the GAMLSS package) I get a (loc) and b (scale) parameters more like 1.58463497 5.93030013. I believe all three methods use the maximum likelihood method for distribution fitting. I have posted my data here if you would like to have a go! And for completeness I am using Python 2.7.5, Scipy 0.12.0, R 2.15.2 and Matlab 2012b. Why am I getting a different result!?
My guess is that you want to estimate the shape parameter and the scale of the Weibull distribution while keeping the location fixed. Fixing loc assumes that the values of your data and of the distribution are positive with lower bound at zero. floc=0 keeps the location fixed at zero, f0=1 keeps the first shape parameter of the exponential weibull fixed at one. >>> stats.exponweib.fit(data, floc=0, f0=1) [1, 1.8553346917584836, 0, 6.8820748596850905] >>> stats.weibull_min.fit(data, floc=0) [1.8553346917584836, 0, 6.8820748596850549] The fit compared to the histogram looks ok, but not very good. The parameter estimates are a bit higher than the ones you mention are from R and matlab. Update The closest I can get to the plot that is now available is with unrestricted fit, but using starting values. The plot is still less peaked. Note values in fit that don't have an f in front are used as starting values. >>> from scipy import stats >>> import matplotlib.pyplot as plt >>> plt.plot(data, stats.exponweib.pdf(data, *stats.exponweib.fit(data, 1, 1, scale=02, loc=0))) >>> _ = plt.hist(data, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5); >>> plt.show()
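For reference, a minimal end-to-end sketch of the fixed-location fit, assuming the same stack_data.csv file from the question; the printed shape and scale should land near the 1.86 and 6.88 quoted above (exact numbers depend on the data file):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    data = np.loadtxt("stack_data.csv")  # same file as in the question

    # Two-parameter Weibull: pin the location at zero, estimate shape + scale by MLE
    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    print(shape, loc, scale)

    # Overlay the fitted PDF on a normalized histogram as a visual check
    x = np.linspace(data.min(), data.max(), 200)
    plt.hist(data, bins=30, density=True, alpha=0.5)
    plt.plot(x, stats.weibull_min.pdf(x, shape, loc, scale), lw=2)
    plt.show()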
Distribution
17,481,672
53
I have a dataset from sklearn and I plotted the distribution of the load_diabetes.target data (i.e. the values of the regression that the load_diabetes.data are used to predict). I used this because it has the fewest variables/attributes among the regression sklearn.datasets. Using Python 3, how can I get the distribution type and parameters of the distribution this most closely resembles? All I know is that the target values are all positive and skewed (positive/right skew). Is there a way in Python to provide a few distributions and then get the best fit for the target data/vector? Or, to actually suggest a fit based on the data that's given? That would be really useful for people who have theoretical statistical knowledge but little experience applying it to "real data". Bonus: Would it make sense to use this type of approach to figure out what your posterior distribution would be with "real data"? If not, why not? from sklearn.datasets import load_diabetes import matplotlib.pyplot as plt import seaborn as sns; sns.set() import pandas as pd #Get Data data = load_diabetes() X, y_ = data.data, data.target #Organize Data SR_y = pd.Series(y_, name="y_ (Target Vector Distribution)") #Plot Data fig, ax = plt.subplots() sns.distplot(SR_y, bins=25, color="g", ax=ax) plt.show()
Use this approach import scipy.stats as st def get_best_distribution(data): dist_names = ["norm", "exponweib", "weibull_max", "weibull_min", "pareto", "genextreme"] dist_results = [] params = {} for dist_name in dist_names: dist = getattr(st, dist_name) param = dist.fit(data) params[dist_name] = param # Applying the Kolmogorov-Smirnov test D, p = st.kstest(data, dist_name, args=param) print("p value for "+dist_name+" = "+str(p)) dist_results.append((dist_name, p)) # select the best fitted distribution best_dist, best_p = (max(dist_results, key=lambda item: item[1])) # store the name of the best fit and its p value print("Best fitting distribution: "+str(best_dist)) print("Best p value: "+ str(best_p)) print("Parameters for the best fit: "+ str(params[best_dist])) return best_dist, best_p, params[best_dist]
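A quick usage sketch against the diabetes target from the question (the import and variable names mirror the question's code). One caveat: kstest p-values computed with parameters estimated from the same sample are optimistic, so treat the ranking as a heuristic rather than a formal goodness-of-fit test.

    from sklearn.datasets import load_diabetes

    y_ = load_diabetes().target
    best_name, best_p, best_params = get_best_distribution(y_)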
Distribution
37,487,830
52
I am plotting Cumulative Distribution Functions, with a large number of data points. I am plotting a few lines on the same plot, which are identified with markers as it will be printed in black and white. What I would like are markers evenly spaced in the x-dimension. What I am getting is one marker per data point (and given the number of points, they all overlap) I'm not sure if it's my understanding of how to plot well, or just a lack of understanding matplotlib. I can't find a 'marker frequency' setting. An easy solution for one line would be to take every N'th value from the line, and use that as a separate line with linestyle='', but I would like the markers to be vertically aligned, and the different x arrays have different lengths. # in reality, many thousands of values x_example = [ 567, 460, 66, 1034, 275, 26, 628, 99, 287, 157, 705, 421, 1093, \ 139, 204, 14, 240, 179, 94, 139, 645, 670, 47, 520, 891, 450, 56, 964, \ 1728, 99, 277, 356, 1628, 745, 364, 88, 112, 810, 816, 523, 401, 89, \ 278, 917, 370, 53, 39, 90, 853, 356 ] x = sort(x_example) y = linspace(0,1,len(x)) ax = subplot(1,1,1) plots[w] = ax.plot(x,y, marker='o')
You can do plot(x,y,marker='o',markevery=5) to mark every fifth point, but I don't think there is any built-in support for setting marks at even intervals. You could decide on the x locations where you want the marks, use e.g. numpy.searchsorted to find which data points the locations fall between, and then interpolate between the neighboring points to find the y coordinates.
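Two concrete ways to do this, sketched against the x_example data from the question (assumed to be in scope). Recent matplotlib versions also accept a float for markevery, which spaces markers roughly evenly along the line rather than per data point; the second option is the interpolation approach described above, using numpy.interp so the markers sit exactly on the curve at evenly spaced x locations — reusing the same mark_x for every line keeps markers vertically aligned across curves.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.sort(np.asarray(x_example, dtype=float))
    y = np.linspace(0, 1, len(x))

    fig, ax = plt.subplots()

    # Option 1: let matplotlib space the markers along the line (newer versions)
    ax.plot(x, y, marker='o', markevery=0.1, label='markevery=0.1')

    # Option 2: markers at evenly spaced x positions, interpolated onto the CDF
    mark_x = np.linspace(x.min(), x.max(), 10)
    mark_y = np.interp(mark_x, x, y)
    ax.plot(x, y, label='CDF')
    ax.plot(mark_x, mark_y, 'o', linestyle='', label='even-x markers')

    ax.legend()
    plt.show()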
Distribution
2,040,306
45
Can we say that a truncated md5 hash is still uniformly distributed? To avoid misinterpretations: I'm aware the chance of collisions is much greater the moment you start to hack off parts from the md5 result; my use-case is actually interested in deliberate collisions. I'm also aware there are other hash methods that may be better suited to use-cases of a shorter hash (including, in fact, my own), and I'm definitely looking into those. But I'd also really like to know whether md5's uniform distribution also applies to chunks of it. (Consider it a burning curiosity.) Since mediawiki uses it (specifically, the left-most two hex-digits as characters of the result) to generate filepaths for images (e.g. /4/42/The-image-name-here.png) and they're probably also interested in an at least near-uniform distribution, I imagine the answer is 'yes', but I don't actually know.
Yes, not exhibiting any bias is a design requirement for a cryptographic hash. MD5 is broken from a cryptographic point of view however the distribution of the results was never in question. If you still need to be convinced, it's not a huge undertaking to hash a bunch of files, truncate the output and use ent ( http://www.fourmilab.ch/random/ ) to analyze the result.
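If you want a quick empirical sanity check (not a proof), hashing a batch of distinct inputs and bucketing on the first two hex digits — the same truncation MediaWiki uses — should give roughly equal counts across all 256 buckets:

    import hashlib
    from collections import Counter

    counts = Counter(
        hashlib.md5(str(i).encode()).hexdigest()[:2]
        for i in range(200000)
    )

    # With a uniform truncation we expect about 200000 / 256 ~= 781 per bucket
    print(len(counts), min(counts.values()), max(counts.values()))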
Distribution
8,184,941
39
So I have a Random object: typedef unsigned int uint32; class Random { public: Random() = default; Random(std::mt19937::result_type seed) : eng(seed) {} private: uint32 DrawNumber(); std::mt19937 eng{std::random_device{}()}; std::uniform_int_distribution<uint32> uniform_dist{0, UINT32_MAX}; }; uint32 Random::DrawNumber() { return uniform_dist(eng); } What's the best way I can vary (through another function or otherwise) the upper bound of the distribution? (I'm also willing to take advice on other style issues.)
Distribution objects are lightweight. Simply construct a new distribution when you need a random number. I use this approach in a game engine, and, after benchmarking, it's comparable to using good old rand(). Also, I've asked how to vary the range of distribution on GoingNative 2013 live stream, and Stephen T. Lavavej, a member of the standard committee, suggested to simply create new distributions, as it shouldn't be a performance issue. Here's how I would write your code: using uint32 = unsigned int; class Random { public: Random() = default; Random(std::mt19937::result_type seed) : eng(seed) {} uint32 DrawNumber(uint32 min, uint32 max); private: std::mt19937 eng{std::random_device{}()}; }; uint32 Random::DrawNumber(uint32 min, uint32 max) { return std::uniform_int_distribution<uint32>{min, max}(eng); }
Distribution
19,036,141
33
I have a module that sits in a namespace. Should tests, and the data the tests rely on, go in the namespace or at the top level where setup.py sits? ./company/__init__.py ./company/namespace/__init__.py ./company/namespace/useful.py ./company/namespace/test_useful.py ./company/namespace/test_data/useful_data.xml ./setup.py or ./company/__init__.py ./company/namespace/__init__.py ./company/namespace/useful.py ./test_useful.py ./test_data/useful_data.xml ./setup.py Does the question amount to whether tests should be installed or not?
The Sample Project stores the tests outside the module. The directory structure looks like this: ├── data │   └── data_file ├── MANIFEST.in ├── README.rst ├── sample │   ├── __init__.py │   └── package_data.dat ├── setup.cfg ├── setup.py └── tests ├── __init__.py └── test_simple.py Related: The Packing Guide: https://packaging.python.org/en/latest/ Hint: Don't follow the "The Hitchhiker's Guide to Packaging". It has not been updated since 2010! (do not confuse both pages. The "The Hitchhiker’s Guide to Python" is a very solid book)
Distribution
5,341,006
30
I'm working on a data mining algorithm where i want to pick a random direction from a particular point in the feature space. If I pick a random number for each of the n dimensions from [-1,1] and then normalize the vector to a length of 1 will I get an even distribution across all possible directions? I'm speaking only theoretically here since computer generated random numbers are not actually random.
One simple trick is to select each dimension from a gaussian distribution, then normalize: from random import gauss def make_rand_vector(dims): vec = [gauss(0, 1) for i in range(dims)] mag = sum(x**2 for x in vec) ** .5 return [x/mag for x in vec] For example, if you want a 7-dimensional random vector, select 7 random values (from a Gaussian distribution with mean 0 and standard deviation 1). Then, compute the magnitude of the resulting vector using the Pythagorean formula (square each value, add the squares, and take the square root of the result). Finally, divide each value by the magnitude to obtain a normalized random vector. If your number of dimensions is large then this has the strong benefit of always working immediately, while generating random vectors until you find one which happens to have magnitude less than one will cause your computer to simply hang at more than a dozen dimensions or so, because the probability of any of them qualifying becomes vanishingly small.
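Note that the cube-then-normalize approach from the question is not isotropic: points near the cube's corners map to an excess of diagonal directions, which is exactly why the Gaussian trick above is used. For completeness, a vectorized NumPy version of the same idea (assumes a reasonably recent NumPy for default_rng):

    import numpy as np

    def make_rand_vectors(n, dims, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Rows of i.i.d. standard normals, each row normalized to unit length
        v = rng.normal(size=(n, dims))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    vecs = make_rand_vectors(10000, 7)  # 10000 uniformly distributed 7-D directions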
Distribution
6,283,080
30
My requirement is to generate random bytes of data (not random numbers) aka uniformly distributed bits. As such I was wondering what are the correct/efficient ways of doing this using C++11/14 random facilities. I've had a look around at the examples, but they all seem to focus on number generation (ints, floats etc) Current solution I'm using is the following: #include <vector> #include <random> int main() { std::random_device rd; std::uniform_int_distribution<int> dist(0,255); std::vector<char> data(1000); for (char& d : data) { d = static_cast<char>(dist(rd) & 0xFF); } return 0; }
What you're looking for is the std::independent_bits_engine adaptor: #include <vector> #include <random> #include <climits> #include <algorithm> #include <functional> using random_bytes_engine = std::independent_bits_engine< std::default_random_engine, CHAR_BIT, unsigned char>; int main() { random_bytes_engine rbe; std::vector<unsigned char> data(1000); std::generate(begin(data), end(data), std::ref(rbe)); } Note that the accepted answer is not strictly correct in a general case – random engines produce unsigned values belonging to a range [min(), max()], which doesn't necessarily cover all possible values of the result type (for instance, std::minstd_rand0::min() == 1) and thus you may get random bytes that are not uniformly distributed if using an engine directly. However, for std::random_device the range is [std::numeric_limits<result_type>::min(), std::numeric_limits<result_type>::max()], so this particular engine would also work well without the adaptor.
Distribution
25,298,585
28
Does anyone know how to plot a skew normal distribution with scipy? I suppose the stats.norm class can be used, but I just can't figure out how. Furthermore, how can I estimate the parameters describing the skew normal distribution of a one-dimensional dataset?
From the Wikipedia description, from scipy import linspace from scipy import pi,sqrt,exp from scipy.special import erf from pylab import plot,show def pdf(x): return 1/sqrt(2*pi) * exp(-x**2/2) def cdf(x): return (1 + erf(x/sqrt(2))) / 2 def skew(x,e=0,w=1,a=0): t = (x-e) / w return 2 / w * pdf(t) * cdf(a*t) # You can of course use the scipy.stats.norm versions # return 2 * norm.pdf(t) * norm.cdf(a*t) n = 2**10 e = 1.0 # location w = 2.0 # scale x = linspace(-10,10,n) for a in range(-3,4): p = skew(x,e,w,a) plot(x,p) show() If you want to find the scale, location, and shape parameters from a dataset use scipy.optimize.leastsq, for example using e=1.0,w=2.0 and a=1.0, fzz = skew(x,e,w,a) + norm.rvs(0,0.04,size=n) # fuzzy data def optm(l,x): return skew(x,l[0],l[1],l[2]) - fzz print leastsq(optm,[0.5,0.5,0.5],(x,)) should give you something like, (array([ 1.05206154, 1.96929465, 0.94590444]), 1)
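Newer versions of SciPy ship this distribution directly as scipy.stats.skewnorm (shape parameter a, plus loc and scale), which covers both parts of the question — plotting the density and fitting the parameters to a one-dimensional sample by maximum likelihood. A short sketch:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import skewnorm

    a, loc, scale = 4.0, 1.0, 2.0          # shape (skewness), location, scale
    x = np.linspace(skewnorm.ppf(0.001, a, loc, scale),
                    skewnorm.ppf(0.999, a, loc, scale), 500)
    plt.plot(x, skewnorm.pdf(x, a, loc, scale))
    plt.show()

    # Parameter estimation from a one-dimensional dataset
    sample = skewnorm.rvs(a, loc=loc, scale=scale, size=5000)
    a_hat, loc_hat, scale_hat = skewnorm.fit(sample)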
Distribution
5,884,768
24
I'm developing a python framework that would have "addons" written as separate packages. I.e.: import myframework from myframework.addons import foo, bar Now, what I'm trying to arrange is so that these addons can be distributed separately from core framework and injected into myframework.addons namespace. Currently my best solution to this is the following. An add-on would be deployed (most likely into {python_version}/site-packages/ like so: fooext/ fooext/__init__.py fooext/myframework/ fooext/myframework/__init__.py fooext/myframework/addons/ fooext/myframework/addons/__init__.py fooext/myframework/addons/foo.py The fooext/myframework/addons/__init__.py would have the pkgutil path extension code: import pkgutil __path__ = pkgutil.extend_path(__path__, __name__) The problem is that for this to work, the PYTHONPATH needs to have fooext/ in it, however the only thing it would have is the parent install directory (most likely, the above-mentioned site-packages). The solution to this is to have extra code in myframework/addons/__init__.py which would tranverse sys.path and look for any modules with a myframework sub-package, in which case it adds it to sys.path and everything works. Another idea I had is to write the addon files directly to myframework/addons/ install location, but then it would make development and deployed namespace differ. Is there a better way to accomplish this or perhaps a different approach to the above distribution problem altogether?
See namespace packages: http://www.python.org/dev/peps/pep-0382/ or in setuptools: http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
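For a concrete picture, here is a heavily simplified setup.py for the add-on package, using setuptools-style namespace packages and the directory names from the question. Treat it as a sketch only: the exact mechanism differs between pkg_resources/pkgutil-style namespaces and the implicit PEP 420 namespaces available on Python 3.3+, where you simply omit the namespace-level __init__.py files.

    # fooext/setup.py -- illustrative only
    from setuptools import setup

    setup(
        name="myframework-foo",
        version="0.1",
        packages=["myframework", "myframework.addons"],
        namespace_packages=["myframework", "myframework.addons"],
    )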
Distribution
454,691
23
Are there any R packages for the calculation of Kendall's tau-b and tau-c, and their associated standard errors? My searches on Google and Rseek have turned up nothing, but surely someone has implemented these in R.
There are three Kendall tau statistics (tau-a, tau-b, and tau-c). They are not interchangeable, and none of the answers posted so far deal with the last two, which is the subject of the OP's question. I was unable to find functions to calculate tau-b or tau-c, either in the R Standard Library (stat et al.) or in any of the Packages available on CRAN or other repositories. I used the excellent R Package sos to search, so i believe results returned were reasonably thorough. So that's the short answer to the OP's Question: no built-in or Package function for tau-b or tau-c. But it's easy to roll your own. Writing R functions for the Kendall statistics is just a matter of translating these equations into code: Kendall_tau_a = (P - Q) / (n * (n - 1) / 2) Kendall_tau_b = (P - Q) / ( (P + Q + Y0) * (P + Q + X0) ) ^ 0.5 Kendall_tau_c = (P - Q) * ((2 * m) / n ^ 2 * (m - 1) ) tau-a: equal to concordant minus discordant pairs, divided by a factor to account for total number of pairs (sample size). tau-b: explicit accounting for ties--i.e., both members of the data pair have the same value; this value is equal to concordant minus discordant pairs divided by a term representing the geometric mean between the number of pairs not tied on x (X0) and the number not tied on y (Y0). tau-c: larger-table variant also optimized for non-square tables; equal to concordant minus discordant pairs multiplied by a factor that adjusts for table size). # Number of concordant pairs. P = function(t) { r_ndx = row(t) c_ndx = col(t) sum(t * mapply(function(r, c){sum(t[(r_ndx > r) & (c_ndx > c)])}, r = r_ndx, c = c_ndx)) } # Number of discordant pairs. Q = function(t) { r_ndx = row(t) c_ndx = col(t) sum(t * mapply( function(r, c){ sum(t[(r_ndx > r) & (c_ndx < c)]) }, r = r_ndx, c = c_ndx) ) } # Sample size (total number of pairs). n = n = sum(t) # The lesser of number of rows or columns. m = min(dim(t)) So these four parameters are all you need to calculate tau-a, tau-b, and tau-c: P Q m n (plus XO & Y0 for tau-b) For instance, the code for tau-c is: kendall_tau_c = function(t){ t = as.matrix(t) m = min(dim(t)) n = sum(t) ks_tauc = (m * 2 * (P(t) - Q(t))) / ((n ^ 2) * (m - 1)) } So how are Kendall's tau statistics related to the other statistical tests used in categorical data analysis? All three Kendall tau statistics, along with Goodman's and Kruskal's gamma are for correlation of ordinal and binary data. (The Kendall tau statistics are more sophisticated alternatives to the gamma statistic (just P-Q).) And so Kendalls's tau and the gamma are counterparts to the simple chi-square and Fisher's exact tests, both of which are (as far as I know) suitable only for nominal data. example: cpa_group = c(4, 2, 4, 3, 2, 2, 3, 2, 1, 5, 5, 1) revenue_per_customer_group = c(3, 3, 1, 3, 4, 4, 4, 3, 5, 3, 2, 2) weight = c(1, 3, 3, 2, 2, 4, 0, 4, 3, 0, 1, 1) dfx = data.frame(CPA=cpa_group, LCV=revenue_per_customer_group, freq=weight) # Reshape data frame so 1 row for each event # (predicate step to create contingency table). dfx2 = data.frame(lapply(dfx, function(x) { rep(x, dfx$freq)})) t = xtabs(~ revenue + cpa, dfx) kc = kendall_tau_c(t) # Returns -.35.
Distribution
2,557,863
23
I have a dataframe with a column that has numerical values. This column is not well-approximated by a normal distribution. Given another numerical value, not in this column, how can I calculate its percentile in the column? That is, if the value is greater than 80% of the values in the column but less than the other 20%, it would be in the 20th percentile.
To find the percentile of a value relative to an array (or in your case a dataframe column), use the scipy function stats.percentileofscore(). For example, if we have a value x (the other numerical value not in the dataframe), and a reference array, arr (the column from the dataframe), we can find the percentile of x by: from scipy import stats percentile = stats.percentileofscore(arr, x) Note that there is a third parameter to the stats.percentileofscore() function that has a significant impact on the resulting value of the percentile, viz. kind. You can choose from rank, weak, strict, and mean. See the docs for more information. For an example of the difference: >>> df a 0 1 1 2 2 3 3 4 4 5 >>> stats.percentileofscore(df['a'], 4, kind='rank') 80.0 >>> stats.percentileofscore(df['a'], 4, kind='weak') 80.0 >>> stats.percentileofscore(df['a'], 4, kind='strict') 60.0 >>> stats.percentileofscore(df['a'], 4, kind='mean') 70.0 As a final note, if you have a value that is greater than 80% of the other values in the column, it would be in the 80th percentile (see the example above for how the kind method affects this final score somewhat) not the 20th percentile. See this Wikipedia article for more information.
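If you'd rather stay inside pandas, the same kinds of scores can be computed directly from a boolean comparison; using the example column above and x = 4:

    import pandas as pd

    s = pd.Series([1, 2, 3, 4, 5])
    x = 4

    pct_weak = (s <= x).mean() * 100        # 80.0, matches kind='weak'
    pct_strict = (s < x).mean() * 100       # 60.0, matches kind='strict'
    pct_mean = (pct_weak + pct_strict) / 2  # 70.0, matches kind='mean'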
Distribution
44,824,927
23
I want to generate random numbers according to some distributions. How can I do this?
The standard random number generator you've got (rand() in C after a simple transformation, equivalents in many languages) is a fairly good approximation to a uniform distribution over the range [0,1]. If that's what you need, you're done. It's also trivial to convert that to a random number generated over a somewhat larger integer range. Conversion of a Uniform distribution to a Normal distribution has already been covered on SO, as has going to the Exponential distribution. [EDIT]: For the triangular distribution, converting a uniform variable is relatively simple (in something C-like): double triangular(double a,double b,double c) { double U = rand() / (double) RAND_MAX; double F = (c - a) / (b - a); if (U <= F) return a + sqrt(U * (b - a) * (c - a)); else return b - sqrt((1 - U) * (b - a) * (b - c)); } That's just converting the formula given on the Wikipedia page. If you want others, that's the place to start looking; in general, you use the uniform variable to pick a point on the vertical axis of the cumulative density function of the distribution you want (assuming it's continuous), and invert the CDF to get the random value with the desired distribution.
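The same inverse-CDF idea in Python, using the exponential distribution as an example because its CDF inverts in closed form (F(x) = 1 - exp(-lambda*x), so F^-1(u) = -ln(1 - u) / lambda):

    import math
    import random

    def exponential(lam):
        # Inverse transform sampling: map a uniform draw through the inverse CDF
        u = random.random()          # uniform on [0, 1)
        return -math.log(1.0 - u) / lam

    samples = [exponential(0.5) for _ in range(10000)]  # sample mean should be near 1/0.5 = 2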
Distribution
3,510,475
22
I have the following code to generate bimodal distribution but when I graph the histogram. I don't see the 2 modes. I am wondering if there's something wrong with my code. mu1 <- log(1) mu2 <- log(10) sig1 <- log(3) sig2 <- log(3) cpct <- 0.4 bimodalDistFunc <- function (n,cpct, mu1, mu2, sig1, sig2) { y0 <- rlnorm(n,mean=mu1, sd = sig1) y1 <- rlnorm(n,mean=mu2, sd = sig2) flag <- rbinom(n,size=1,prob=cpct) y <- y0*(1 - flag) + y1*flag } bimodalData <- bimodalDistFunc(n=100,cpct,mu1,mu2, sig1,sig2) hist(log(bimodalData))
The problem seems to be just too small n and too small difference between mu1 and mu2, taking mu1=log(1), mu2=log(50) and n=10000 gives this:
Distribution
11,530,010
22
On Stackoverflow there are many questions about generating uniformly distributed integers from a-priory unknown ranges. E.g. C++11 Generating random numbers from frequently changing range Vary range of uniform_int_distribution The typical solution is something like: inline std::mt19937 &engine() { thread_local std::mt19937 eng; return eng; } int get_int_from_range(int from, int to) { std::uniform_int_distribution<int> dist(from, to); return dist(engine()); } Given that a distribution should be a lightweight object and there aren't performance concerns recreating it multiple times, it seems that even simple distribution may very well and usually will have some internal state. So I was wondering if interfering with how the distribution works by constantly resetting it (i.e. recreating the distribution at every call of get_int_from_range) I get properly distributed results. There's a long discussion between Pete Becker and Steve Jessop but without a final word. In another question (Should I keep the random distribution object instance or can I always recreate it?) the "problem" of the internal state doesn't seem very important. Does the C++ standard make any guarantee regarding this topic? Is the following implementation (from N4316 - std::rand replacement) somewhat more reliable? int get_int_from_range(int from, int to) { using distribution_type = std::uniform_int_distribution<int>; using param_type = typename distribution_type::param_type; thread_local std::uniform_int_distribution<int> dist; return dist(engine(), param_type(from, to)); } EDIT This reuses a possible internal state of a distribution but it's complex and I'm not sure it does worth the trouble: int get_int_from_range(int from, int to) { using range_t = std::pair<int, int>; using map_t = std::map<range_t, std::uniform_int_distribution<int>>; thread_local map_t range_map; auto i = range_map.find(range_t(from, to)); if (i == std::end(range_map)) i = range_map.emplace( std::make_pair(from, to), std::uniform_int_distribution<int>{from, to}).first; return i->second(engine()); } (from https://stackoverflow.com/a/30097323/3235496)
Interesting question. So I was wondering if interfering with how the distribution works by constantly resetting it (i.e. recreating the distribution at every call of get_int_from_range) I get properly distributed results. I've written code to test this with uniform_int_distribution and poisson_distribution. It's easy enough to extend this to test another distribution if you wish. The answer seems to be yes. Boiler-plate code: #include <random> #include <memory> #include <chrono> #include <utility> typedef std::mt19937_64 engine_type; inline size_t get_seed() { return std::chrono::system_clock::now().time_since_epoch().count(); } engine_type& engine_singleton() { static std::unique_ptr<engine_type> ptr; if ( !ptr ) ptr.reset( new engine_type(get_seed()) ); return *ptr; } // ------------------------------------------------------------------------ #include <cmath> #include <cstdio> #include <vector> #include <string> #include <algorithm> void plot_distribution( const std::vector<double>& D, size_t mass = 200 ) { const size_t n = D.size(); for ( size_t i = 0; i < n; ++i ) { printf("%02ld: %s\n", i, std::string(static_cast<size_t>(D[i]*mass),'*').c_str() ); } } double maximum_difference( const std::vector<double>& x, const std::vector<double>& y ) { const size_t n = x.size(); double m = 0.0; for ( size_t i = 0; i < n; ++i ) m = std::max( m, std::abs(x[i]-y[i]) ); return m; } Code for the actual tests: #include <iostream> #include <vector> #include <cstdio> #include <random> #include <string> #include <cmath> void compare_uniform_distributions( int lo, int hi ) { const size_t sample_size = 1e5; // Initialize histograms std::vector<double> H1( hi-lo+1, 0.0 ), H2( hi-lo+1, 0.0 ); // Initialize distribution auto U = std::uniform_int_distribution<int>(lo,hi); // Count! for ( size_t i = 0; i < sample_size; ++i ) { engine_type E(get_seed()); H1[ U(engine_singleton())-lo ] += 1.0; H2[ U(E)-lo ] += 1.0; } // Normalize histograms to obtain "densities" for ( size_t i = 0; i < H1.size(); ++i ) { H1[i] /= sample_size; H2[i] /= sample_size; } printf("Engine singleton:\n"); plot_distribution(H1); printf("Engine creation :\n"); plot_distribution(H2); printf("Maximum difference: %.3f\n", maximum_difference(H1,H2) ); std::cout<< std::string(50,'-') << std::endl << std::endl; } void compare_poisson_distributions( double mean ) { const size_t sample_size = 1e5; const size_t nbins = static_cast<size_t>(std::ceil(2*mean)); // Initialize histograms std::vector<double> H1( nbins, 0.0 ), H2( nbins, 0.0 ); // Initialize distribution auto U = std::poisson_distribution<int>(mean); // Count! for ( size_t i = 0; i < sample_size; ++i ) { engine_type E(get_seed()); int u1 = U(engine_singleton()); int u2 = U(E); if (u1 < nbins) H1[u1] += 1.0; if (u2 < nbins) H2[u2] += 1.0; } // Normalize histograms to obtain "densities" for ( size_t i = 0; i < H1.size(); ++i ) { H1[i] /= sample_size; H2[i] /= sample_size; } printf("Engine singleton:\n"); plot_distribution(H1); printf("Engine creation :\n"); plot_distribution(H2); printf("Maximum difference: %.3f\n", maximum_difference(H1,H2) ); std::cout<< std::string(50,'-') << std::endl << std::endl; } // ------------------------------------------------------------------------ int main() { compare_uniform_distributions( 0, 25 ); compare_poisson_distributions( 12 ); } Run it here. Does the C++ standard make any guarantee regarding this topic? Not that I know of. 
However, I would say that the standard makes an implicit recommendation not to re-create the engine every time; for any distribution Distrib, the prototype of Distrib::operator() takes a reference URNG& and not a const reference. This is understandably required because the engine might need to update its internal state, but it also implies that code looking like this auto U = std::uniform_int_distribution(0,10); for ( <something here> ) U(engine_type()); does not compile, which to me is a clear incentive not to write code like this. I'm sure there are plenty of advice out there on how to properly use the random library. It does get complicated if you have to handle the possibility of using random_devices and allowing deterministic seeding for testing purposes, but I thought it might be useful to throw my own recommendation out there too: #include <random> #include <chrono> #include <utility> #include <functional> inline size_t get_seed() { return std::chrono::system_clock::now().time_since_epoch().count(); } template <class Distrib> using generator_type = std::function< typename Distrib::result_type () >; template <class Distrib, class Engine = std::mt19937_64, class... Args> inline generator_type<Distrib> get_generator( Args&&... args ) { return std::bind( Distrib( std::forward<Args>(args)... ), Engine(get_seed()) ); } // ------------------------------------------------------------------------ #include <iostream> int main() { auto U = get_generator<std::uniform_int_distribution<int>>(0,10); std::cout<< U() << std::endl; } Run it here. Hope this helps! EDIT My first recommendation was a mistake, and I apologise for that; we can't use a singleton engine like in the tests above, because this would mean that two uniform int distributions would produce the same random sequence. Instead I rely on the fact that std::bind copies the newly-created engine locally in std::function with its own seed, and this yields the expected behaviour; different generators with the same distribution produce different random sequences.
Distribution
30,103,356
22
I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself. I have checked out the random module; it does not seem to provide this. I have to make such choices many millions of times for 1000 different sets of objects each containing 2,455 objects. Each set will exchange objects among each other so the random chooser needs to be dynamic. With 1000 sets of 2,433 objects, that is 2,433 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU-time is limited. Thx Update: Ok, I tried to consider your suggestions wisely, but time is so limited... I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making more efficient: def windex(dict, sum, max): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: >>> x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' n = random.uniform(0, 1) sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n < weight: break n = n - weight return key I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys saves me so much time and effort, while increasing my effectiveness, it is crazy. Thx! Thx! Thx! Update2: I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now: def weightedChoices(dict, sum, max, choices=10): '''an attempt to make a random.choose() function that makes weighted choices accepts a dictionary with the item_key and certainty_value as a pair like: >>> x = [('one', 20), ('two', 2), ('three', 50)], the maximum certainty value (max) and the sum of all certainties.''' list = [random.uniform(0, 1) for i in range(choices)] (n, list) = relavate(list.sort()) keys = [] sum = max*len(list)-sum for key, certainty in dict.iteritems(): weight = float(max-certainty)/sum if n < weight: keys.append(key) if list: (n, list) = relavate(list) else: break n = n - weight return keys def relavate(list): min = list[0] new = [l - min for l in list[1:]] return (min, new) I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx! Update3: I have been working all day on a task-tailored version of Rex Logan's answer. Instead of a 2 arrays of objects and weights, it is actually a special dictionary class; which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). The basic principle is: the more a key is randomly generated often, the more unlikely it will be generated again: import random, time import psyco psyco.full() class ProbDict(): """ Modified version of Rex Logans RandomObject class. 
The more a key is randomly chosen, the more unlikely it will further be randomly chosen. """ def __init__(self,keys_weights_values={}): self._kw=keys_weights_values self._keys=self._kw.keys() self._len=len(self._keys) self._findSeniors() self._effort = 0.15 self._fails = 0 def __iter__(self): return self.next() def __getitem__(self, key): return self._kw[key] def __setitem__(self, key, value): self.append(key, value) def __len__(self): return self._len def next(self): key=self._key() while key: yield key key = self._key() def __contains__(self, key): return key in self._kw def items(self): return self._kw.items() def pop(self, key): try: (w, value) = self._kw.pop(key) self._len -=1 if w == self._seniorW: self._seniors -= 1 if not self._seniors: #costly but unlikely: self._findSeniors() return [w, value] except KeyError: return None def popitem(self): return self.pop(self._key()) def values(self): values = [] for key in self._keys: try: values.append(self._kw[key][1]) except KeyError: pass return values def weights(self): weights = [] for key in self._keys: try: weights.append(self._kw[key][0]) except KeyError: pass return weights def keys(self, imperfect=False): if imperfect: return self._keys return self._kw.keys() def append(self, key, value=None): if key not in self._kw: self._len +=1 self._kw[key] = [0, value] self._keys.append(key) else: self._kw[key][1]=value def _key(self): for i in range(int(self._effort*self._len)): ri=random.randint(0,self._len-1) #choose a random object rx=random.uniform(0,self._seniorW) rkey = self._keys[ri] try: w = self._kw[rkey][0] if rx >= w: # test to see if that is the value we want w += 1 self._warnSeniors(w) self._kw[rkey][0] = w return rkey except KeyError: self._keys.pop(ri) # if you do not find one after 100 tries then just get a random one self._fails += 1 #for confirming effectiveness only for key in self._keys: if key in self._kw: w = self._kw[key][0] + 1 self._warnSeniors(w) self._kw[key][0] = w return key return None def _findSeniors(self): '''this function finds the seniors, counts them and assess their age. It is costly but unlikely.''' seniorW = 0 seniors = 0 for w in self._kw.itervalues(): if w >= seniorW: if w == seniorW: seniors += 1 else: seniorsW = w seniors = 1 self._seniors = seniors self._seniorW = seniorW def _warnSeniors(self, w): #a weight can only be incremented...good if w >= self._seniorW: if w == self._seniorW: self._seniors+=1 else: self._seniors = 1 self._seniorW = w def test(): #test code iterations = 200000 size = 2500 nextkey = size pd = ProbDict(dict([(i,[0,i]) for i in xrange(size)])) start = time.clock() for i in xrange(iterations): key=pd._key() w=pd[key][0] if random.randint(0,1+pd._seniorW-w): #the heavier the object, the more unlikely it will be removed pd.pop(key) probAppend = float(500+(size-len(pd)))/1000 if random.uniform(0,1) < probAppend: nextkey+=1 pd.append(nextkey) print (time.clock()-start)*1000/iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations" weights = pd.weights() weights.sort() print "avg weight:", float(sum(weights))/pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights) print weights test() Any comments are still welcome. @Darius: your binary trees are too complex and complicated for me; and I do not think its leafs can be removed efficiently... Thx all
This activestate recipe gives an easy-to-follow approach, specifically the version in the comments that doesn't require you to pre-normalize your weights: import random def weighted_choice(items): """items is a list of tuples in the form (item, weight)""" weight_total = sum((item[1] for item in items)) n = random.uniform(0, weight_total) for item, weight in items: if n < weight: return item n = n - weight return item This will be slow if you have a large list of items. A binary search would probably be better in that case... but would also be more complicated to write, for little gain if you have a small sample size. Here's an example of the binary search approach in python if you want to follow that route. (I'd recommend doing some quick performance testing of both methods on your dataset. The performance of different approaches to this sort of algorithm is often a bit unintuitive.) Edit: I took my own advice, since I was curious, and did a few tests. I compared four approaches: The weighted_choice function above. A binary-search choice function like so: def weighted_choice_bisect(items): added_weights = [] last_sum = 0 for item, weight in items: last_sum += weight added_weights.append(last_sum) return items[bisect.bisect(added_weights, random.random() * last_sum)][0] A compiling version of 1: def weighted_choice_compile(items): """returns a function that fetches a random item from items items is a list of tuples in the form (item, weight)""" weight_total = sum((item[1] for item in items)) def choice(uniform = random.uniform): n = uniform(0, weight_total) for item, weight in items: if n < weight: return item n = n - weight return item return choice A compiling version of 2: def weighted_choice_bisect_compile(items): """Returns a function that makes a weighted random choice from items.""" added_weights = [] last_sum = 0 for item, weight in items: last_sum += weight added_weights.append(last_sum) def choice(rnd=random.random, bis=bisect.bisect): return items[bis(added_weights, rnd() * last_sum)][0] return choice I then built a big list of choices like so: choices = [(random.choice("abcdefg"), random.uniform(0,50)) for i in xrange(2500)] And an excessively simple profiling function: def profiler(f, n, *args, **kwargs): start = time.time() for i in xrange(n): f(*args, **kwargs) return time.time() - start The results: (Seconds taken for 1,000 calls to the function.) Simple uncompiled: 0.918624162674 Binary uncompiled: 1.01497793198 Simple compiled: 0.287325024605 Binary compiled: 0.00327413797379 The "compiled" results include the average time taken to compile the choice function once. (I timed 1,000 compiles, then divided that time by 1,000, and added the result to the choice function time.) So: if you have a list of items+weights which change very rarely, the binary compiled method is by far the fastest.
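On Python 3.6 and later the standard library does this for you: random.choices accepts per-item weights (or pre-accumulated cum_weights, which is essentially the bisect variant benchmarked above). For the original use case the weights change between draws, so you would simply pass the current weights each time:

    import random

    items = [('one', 20), ('two', 2), ('three', 50)]
    population = [item for item, weight in items]
    weights = [weight for item, weight in items]

    pick = random.choices(population, weights=weights, k=1)[0]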
Distribution
526,255
21
I need to create a method to generate a unit vector in three dimensions that points in a random direction using a random number generator. The distribution of direction MUST be isotropic. Here is how I am trying to generate a random unit vector: v = randn(1,3); v = v./sqrt(v*v'); But I don't know how to complete the isotropic part. Any ideas?
You're doing it right. A random normal distribution of coordinates gives you a uniform distribution of directions. To generate 10000 uniform points on the unit sphere, you run v = randn(10000,3); v = bsxfun(@rdivide,v,sqrt(sum(v.^2,2))); plot3(v(:,1),v(:,2),v(:,3),'.') axis equal
Distribution
9,750,908
21
People also often ask "How can I compile Perl?" while what they really want is to create an executable that can run on machines even if they don't have Perl installed. There are several solutions, I know of: perl2exe of IndigoStar It is commercial. I never tried. Its web site says it can cross compile Win32, Linux, and Solaris. Perl Dev Kit from ActiveState. It is commercial. I used it several years ago on Windows and it worked well for my needs. According to its web site it works on Windows, Mac OS X, Linux, Solaris, AIX and HP-UX. PAR or rather PAR::Packer that is free and open source. Based on the test reports it works on the Windows, Mac OS X, Linux, NetBSD and Solaris but theoretically it should work on other UNIX systems as well. Recently I have started to use PAR for packaging on Linux and will use it on Windows as well. Other recommended solutions?
In addition to the three tools listed in the question, there's another one called Cava Packager written by Mark Dootson, who has also contributed to PAR in the past. It only runs under Windows, has a nice Wx GUI and works differently from the typical three contenders in that it assembles all Perl dependencies in a source / lib directory instead of creating a single archive containing everything. There's a free version, but it's not Open Source. I haven't used this except for testing. As for PAR, it's really a toolkit. It comes with a packaging tool which does the dependency scanning and assembly of stand-alone executables, but it can also be used to generate and use so-called .par files, in analogy to Java's JARs. It also comes with client and server for automatically loading missing packages over the network, etc. The slides of my PAR talk at YAPC::EU 2008 go into more details on this. There's also an active mailing list: par at perl dot org.
Distribution
77,278
20
My client has an iOS app with In-app purchase, Game-kit and Push notifications enabled, it is currently on the app store. I would like to resign the application using an in-house enterprise distribution certificate, to test internally, but still be able to test services tied to the original provisioning profile. Is this possible?
I ended up doing this, which is a combination of :- Very tricky question about iPhone/iPad resigned builds behaviors and Re-sign IPA (iPhone) 1) Create Entitlements plist, prevent issues with the Keychain etc <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>application-identifier</key> <string>GBA9L2EABG.com.your.bundle.id.MyApp</string> <key>get-task-allow</key> <false/> </dict> 2) Unzip the IPA unzip Application.ipa 3) Remove the old code signature rm -r "Payload/Application.app/_CodeSignature" "Payload/Application.app/CodeResources" 2> /dev/null | true 4) Replace embedded mobile provisioning profile cp "MyEnterprise.mobileprovision" "Payload/Application.app/embedded.mobileprovision" 5) Resign /usr/bin/codesign -f -s "iPhone Distribution: Certificate Name" --resource-rules "Payload/Application.app/ResourceRules.plist" --entitlements Entitlements.plist "Payload/Application.app" 6) Re-package zip -qr "Application.resigned.ipa" Payload
Distribution
15,634,188
20
I'm a newbie to distutils and I have a problem that really has me stuck. I am compiling a package that requires an extension, so I make the extension thus: a_module = Extension( "amodule", ["initmodule.cpp"], library_dirs=libdirs, extra_objects = [ "unix/x86_64/lib/liba.so" "unix/x86_64/lib/lib.so", "unix/x86_64/lib/libc.so"], ) I then run the setup method: setup(name="apackage", version="7.2", package_dir = {'':instdir+'/a/b/python'}, packages=['apackage','package.tests'], ext_modules=[hoc_module] ) The package distribution is made properly and I can "python setup.py install" fine but when I try and import my package I get an error ImportError: liba.so.0: cannot open shared object file: No such file or directory I realise that when I add the location of liba.so.0 to my LD_LIBRARY_PATH the program runs fine. Unfortunately I haven't written these modules and don't have a good understanding of compilation. I've been trying to figure this out for several days to no avail. UPDATE:I tried passing the liba.a, libb.a etc files to extra_objects but this didn't work, generating the following errror: liba.a: could not read symbols: Bad value collect2: ld returned 1 exit status. What I'm trying to do is package a python module which requires a library to be compiled which itself depends on other libraries which I need to somehow include in the package .I suspect that my problem is very similar to this one: http://mail.python.org/pipermail/distutils-sig/2009-February/010960.html but that one was not resolved, I thought perhaps since it's two years old a resolution has been found? UPDATE 2: For now I have solved this by doing: data_files=[('/usr/local/lib', glob.glob('unix/x86_64/lib/*'))] That is to say, I am copying the libraries I need into /usr/local/lib. I'm not hugely happy with this solution however, not least because it requires my users to have root privileges and also because this may still not work Redhat distros. So if anyone can suggest something better than this fix do please let me know.
You can have the linker store paths to search in the output binary so LD_LIBRARY_PATH isn't necessary. Some examples: # Will link fine but at run-time LD_LIBRARY_PATH would be required gcc -o blah blah.o -lpcap -L/opt/csw/lib # Without LD_LIBRARY_PATH=/opt/csw/lib it will fail to link, but # it wouldn't be needed at run-time gcc -o blah blah.o -lpcap -Wl,-R/opt/csw/lib # LD_LIBRARY_PATH not needed at link or run-time gcc -o blah blah.o -lpcap -Wl,-{L,R}/opt/csw/lib # This makes it possible to use relative paths; run `readelf -d binary_name` # and you'll see '$ORIGIN/../lib/' in RPATH. This plus `-zorigin` make it look # relative to the binary for libraries at run-time gcc -o blah blah.o -lsomelib -L/whatever/path/floats/your/boat -Wl,-R'$ORIGIN/../lib/' -Wl,-zorigin .. where: paths given with -L are used at link-time paths given with -R are used at run-time
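These linker flags can be attached directly to the Extension from the question, so the rpath gets baked into the compiled module instead of relying on LD_LIBRARY_PATH. The snippet below is a sketch only — the library names and paths are illustrative, and depending on your toolchain the $ORIGIN token may need escaping (e.g. \$ORIGIN) on its way through the compiler driver:

    from distutils.core import Extension

    a_module = Extension(
        "amodule",
        ["initmodule.cpp"],
        library_dirs=["unix/x86_64/lib"],
        libraries=["a", "b", "c"],                  # link -la -lb -lc instead of extra_objects
        runtime_library_dirs=["unix/x86_64/lib"],   # emitted as -R / -Wl,-rpath at link time
        extra_link_args=["-Wl,-rpath,$ORIGIN/../lib"],
    )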
Distribution
9,795,793
19
Given the mean and variance of a Gaussian (normal) random variable, I would like to compute its probability density function (PDF). I referred to this post: Calculate probability in normal distribution given mean, std in Python, and also the scipy docs: scipy.stats.norm. But when I plot the PDF of the curve, the probability exceeds 1! Here is a minimal working example: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats x = np.linspace(0.3, 1.75, 1000) plt.plot(x, stats.norm.pdf(x, 1.075, 0.2)) plt.show() This is what I get: How is it even possible to have a 200% probability of getting the mean, 1.075? Am I misinterpreting anything here? Is there any way to correct this?
It's not a bug, and it's not an incorrect result either. A probability density function's value at a specific point does not give you a probability; it is a measure of how dense the distribution is around that value. For continuous random variables, the probability at any single point is equal to zero. Instead of p(X = x), we calculate probabilities between two points, p(x1 < X < x2), which equals the area under the probability density function between them. A probability density function's value can very well be above 1; it can even approach infinity.
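Two quick numerical checks make this concrete for the distribution in the question: the density integrates to 1 even though it peaks near 2, and actual probabilities come from differences of the CDF.

    import numpy as np
    import scipy.stats as stats

    dist = stats.norm(1.075, 0.2)

    # The PDF peaks near 2.0, but its integral over (almost) the whole real line is still ~1
    x = np.linspace(dist.ppf(1e-6), dist.ppf(1 - 1e-6), 100001)
    print(np.trapz(dist.pdf(x), x))

    # A genuine probability, e.g. P(1.0 < X < 1.15)
    print(dist.cdf(1.15) - dist.cdf(1.0))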
Distribution
38,141,951
19
I am developing a cross-platform Qt application. It is freeware, though not open-source, so I want to distribute it as a compiled binary. On Windows there is no problem: I pack my compiled exe along with MinGW's and Qt's DLLs and everything works great. But on Linux there is a problem, because the user may have shared libraries on his/her system that are very different from mine. The Qt deployment guide suggests two methods: static linking and using shared libraries. The first produces a huge executable and also requires static versions of many libraries that Qt depends on, i.e. I'd have to rebuild all of them from scratch. The second method is based on reconfiguring the dynamic linker right before application startup and seems a bit tricky to me. Can anyone share his/her experience in distributing Qt applications under Linux? What method should I use? What problems might I run into? Are there any other methods to get this job done?
Shared libraries is the way to go, but you can avoid using LD_LIBRARY_PATH (which involves running the application using a launcher shell script, etc) building your binary with the -rpath compiler flag, pointing to there you store your libraries. For example, I store my libraries either next to my binary or in a directory called "mylib" next to my binary. To use this on my QMake file, I add this line in the .pro file: QMAKE_LFLAGS += -Wl,-rpath,\\$\$ORIGIN/lib/:\\$\$ORIGIN/../mylib/ And I can run my binaries with my local libraries overriding any system library, and with no need for a launcher script.
Distribution
934,950
18
This question relates to the Apple iOS Developer Enterprise Program. I am trying to determine the limits and relationships between the following four entities: Apple Enterprise Program distribution licenses, DUNS numbers, distribution certificates, and apps. Here's the scenario: a client wants to develop iPad apps for in-house distribution. This client does not want to go begging to another department head every time he wants to update or release an app, so he wants control of the distribution process. Is it possible for him to have his "own" department-level enterprise license, or can he have a separate enterprise distribution certificate under the (presumably single) enterprise license? Further, is there any limit to the number of apps that can be distributed in-house under (a) an enterprise license, or (b) a distribution certificate? So this boils down to: Can an enterprise have more than one enterprise license? For example, could two departments each have their own enterprise developer license? Can a single enterprise license have more than one distribution certificate? Can a single enterprise distribution certificate apply to more than one app? Edit: you can skip the dialogue below; just go straight to the answer.
I posed these questions to Apple developer relations: Can an enterprise have more than one enterprise license? For example, could 2 departments each have their own enterprise license? Can a single enterprise license have more than one distribution certificate? Can a single enterprise distribution certificate apply to more than one app? I got this response: A single organization can enroll in up to five iOS Developer Enterprise Programs. Multiple Enterprise distribution provisioning profiles can be created. Each Enterprise distribution provisioning profile can only be associated with one App ID. Edit: and this response... Two enterprise distribution certificates can be created at a time. A single enterprise distribution certificate can apply to multiple apps. And then this response: Each iOS Developer Enterprise license is completely separate, with different distribution certificates. If a company enrolls in five enterprise programs, they will be able to create five different distribution certificates.
Distribution
6,034,495
18
heatmap.2 defaults to dist for calculating the distance matrix and hclust for clustering. Does anyone know how I can set dist to use the euclidean method and hclust to use the centroid method? I have provided a runnable code sample below. I tried distfun = dist(method = "euclidean"), but that doesn't work. Any ideas? library("gplots") library("RColorBrewer") test <- matrix(c(79,38.6,30.2,10.8,22, 81,37.7,28.4,9.7,19.9, 82,36.2,26.8,9.8,20.9, 74,29.9,17.2,6.1,13.9, 81,37.4,20.5,6.7,14.6),ncol=5,byrow=TRUE) colnames(test) <- c("18:0","18:1","18:2","18:3","20:0") rownames(test) <- c("Sample 1","Sample 2","Sample 3", "Sample 4","Sample 5") test <- as.table(test) mat=data.matrix(test) heatmap.2(mat, dendrogram="row", Rowv=TRUE, Colv=NULL, distfun = dist, hclustfun = hclust, xlab = "Lipid Species", ylab = NULL, colsep=c(1), sepcolor="black", key=TRUE, keysize=1, trace="none", density.info=c("none"), margins=c(8, 12), col=bluered )
Glancing at the code for heatmap.2 I'm fairly sure that the default is to use dist, and it's default is in turn to use euclidean distances. The reason your attempt at passing distfun = dist(method = 'euclidean') didn't work is that distfun (and hclustfun) are supposed to simply be name of functions. So if you want to alter defaults and pass arguments you need to write a wrapper function like this: heatmap.2(...,hclustfun = function(x) hclust(x,method = 'centroid'),...) As I mentioned, I'm fairly certain that heatmap.2 is using euclidean distances by default, but a similar solution can be used to alter the distance function used: heatmap.2(...,distfun = function(x) dist(x,method = 'euclidean'),...)
Distribution
6,806,762
18