Question | Description | Answer | Link
---|---|---|---|
How do I troubleshoot Amazon ECS tasks stopping or failing to start when my container exits? | "My Amazon Elastic Container Service (Amazon ECS) container exits unexpectedly, and tasks stop or fail to start. How can I resolve the issue?" | "My Amazon Elastic Container Service (Amazon ECS) container exits unexpectedly, and tasks stop or fail to start. How can I resolve the issue? Short description: Your containers can exit due to application issues, resource constraints, or other issues. For task failure due to image issues, see How do I resolve the "Image does not exist" error when my tasks fail to start in my Amazon ECS cluster? For AWS Fargate tasks that fail due to network configuration or resource constraint issues, see Stopped tasks error codes. Resolution: To identify why your tasks stopped, follow these troubleshooting steps: Check for diagnostic information in the service event log. Check stopped tasks for errors. Note: You can see stopped tasks in the returned results for at least one hour. If you already have a log driver configured, then check your application logs for application issues. Otherwise, use the log configuration options in your task definition to send logs to a custom log driver for the container. For example, you can send the logs to Amazon CloudWatch or use a supported log driver. Note the following information on logs, depending on your task's launch type: For ECS tasks other than Fargate: If you're using the default json-file logging driver with the Amazon Elastic Compute Cloud (Amazon EC2) launch type, run the docker logs yourContainerID command. This command checks the Docker logs of the container on your ECS container instance. For Fargate tasks: Captured logs show the command output that you would see in an interactive terminal if you ran the container locally, in the STDOUT and STDERR I/O streams. The awslogs log driver passes these logs from Docker to Amazon CloudWatch Logs. To address memory constraint issues, follow the instructions at How can I allocate memory to tasks in Amazon ECS? If the awslogs log driver is configured in your task definition, then check Viewing awslogs container logs in CloudWatch Logs." | https://repost.aws/knowledge-center/ecs-tasks-container-exit-issues
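A minimal AWS CLI sketch of the first two troubleshooting steps above, assuming a cluster named my-cluster and a placeholder task ARN; the stoppedReason field and per-container exit codes usually point to the failing container:

```
# List recently stopped tasks in the cluster
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED

# Show why a specific stopped task exited (stop reason plus per-container exit codes)
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123456789abcdef0 \
  --query 'tasks[].{stoppedReason:stoppedReason,containers:containers[].{name:name,exitCode:exitCode,reason:reason}}'
```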
How can I stop my Amazon ECS service from failing to stabilize in AWS CloudFormation? | My Amazon Elastic Container Service (Amazon ECS) service fails to stabilize in AWS CloudFormation. I get the following error: "Service arn:aws:ecs:us-east-accountID:service/ServiceName did not stabilize." | "My Amazon Elastic Container Service (Amazon ECS) service fails to stabilize in AWS CloudFormation. I get the following error: "Service arn:aws:ecs:us-east-accountID:service/ServiceName did not stabilize."Short descriptionA service created in Amazon ECS fails to stabilize if it isn't in the state specified by the AWS CloudFormation template. To confirm that a service launched the desired number of tasks with the desired task definition, AWS CloudFormation makes repeated DescribeService API calls. These calls check the status of the service until the desired state is met. The calling process can take up to three hours. Then, AWS CloudFormation times out, and returns the "Service ARN did not stabilize" message. While AWS CloudFormation checks the status of the service, the stack that contains the service remains in the CREATE_IN_PROGRESS or UPDATE_IN_PROGRESS state and can't be updated.If you can't fix the underlying issue with your Amazon ECS service tasks immediately and you don't want to wait for the DescribeService API calls to time out, then you can manually force the state of the Amazon ECS service resource in AWS CloudFormation into a CREATE_COMPLETE state. To do this, manually set the desired count of the service to zero in the Amazon ECS console to stop running tasks. AWS CloudFormation then considers the update as successful, because the number of tasks equals the desired count of zero.Important: Manually forcing AWS CloudFormation into a CREATE_COMPLETE state isn't a best practice for production services, because all tasks are stopped, and doing this can cause a production outage.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionVerify resource creation1. In your AWS CloudFormation template, create an AWS::ECS::Service resource. For example:Resources:ECSServiceA: Type: AWS::ECS::ServiceProperties: DesiredCount: 1 Cluster: awsExampleECSCluster LaunchType: EC2 ServiceName: "MyNginxService2" TaskDefinition: NginxTask:12. Open the AWS CloudFormation console, and then select your stack.3. Choose the Events tab, and then verify that your resource is being created.Update the desired count of the serviceYou can update the desired count of the service to your original value with either the AWS CLI or Amazon ECS console.Using the AWS CLI:1. To describe the service and list the service events, run the following command:aws ecs describe-services --cluster awsExampleECSCluster --services MyNginxService22. To update the desired count of the service, run the following command:aws ecs update-service --cluster awsExampleECSCluster --service MyNginxService2 --desired-count 03. Update --desired-count to your original value.Using the Amazon ECS console:1. Open the Amazon ECS console.2. In the navigation pane, choose Clusters, and then select the cluster that contains the Amazon ECS service that you created.3. On the Clusters page, choose the cluster that contains the Amazon ECS service that you created.4. On the page for the cluster that you selected, in the Service Name column, choose your service.5. Choose the Events tab, and then choose Update.6. 
On the Configure service page, for Number of tasks, enter 0.7. Choose Next step to step through to the end of the Update Service wizard, and then choose Update Service.The service now reaches a steady state and transitions the Amazon ECS service resource in AWS CloudFormation to CREATE_COMPLETE or UPDATE_COMPLETE.Important: To make your AWS CloudFormation stack sync with the Amazon ECS service properties after you fix the issue with the underlying tasks, you must manually change the desired count (DesiredCount) back to the original value from your template.Related informationUpdating a serviceupdate-serviceservices-stableFollow" | https://repost.aws/knowledge-center/cloudformation-ecs-service-stabilize |
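A hedged AWS CLI sketch that complements the steps above, reusing the example cluster and service names from the article. The first command surfaces the service events that usually explain why tasks aren't reaching a steady state; the second restores the desired count after the underlying issue is fixed (1 matches the DesiredCount in the example template).

```
# Show the five most recent service events (task placement failures, failing health checks, and so on)
aws ecs describe-services \
  --cluster awsExampleECSCluster \
  --services MyNginxService2 \
  --query 'services[0].events[0:5].message'

# After fixing the tasks, set the desired count back to the value from your template
aws ecs update-service \
  --cluster awsExampleECSCluster \
  --service MyNginxService2 \
  --desired-count 1
```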
How do I troubleshoot 504 errors returned while using a Classic Load Balancer? | "I see HTTP 504 errors in Classic Load Balancer access logs, Amazon CloudWatch metrics, or when connecting to my service through a Classic Load Balancer." | "I see HTTP 504 errors in Classic Load Balancer access logs, Amazon CloudWatch metrics, or when connecting to my service through a Classic Load Balancer.ResolutionAn HTTP 504 error is an HTTP status code that indicates a gateway or proxy has timed out. When troubleshooting, investigate the following:Check your load balancer’s idle timeout, and then modify if necessaryThe most common reason for an HTTP 504 error is that a corresponding instance didn't respond to the request within the configured idle timeout. By default, the idle timeout for Classic Load Balancer is 60 seconds.If CloudWatch metrics are turned on, check CloudWatch metrics for your load balancer. When latency data points are equal to your configured load balancer timeout value, and HTTPCode_ELB_5XX has data points, at least one request has timed out.To resolve this, you can do one of two things:Modify the idle timeout for your load balancer so that the HTTP request is completed within the idle timeout period.Tune your application to respond more quickly.Make sure that your backend instances keep connections openWhen the backend instance closes a TCP connection to the load balancer before it reaches its idle timeout value, an HTTP 504 error might display.To resolve this, activate keep-alive settings on your backend instances, and then set the keep-alive timeout to a value greater than the load balancer’s idle timeout.(Apache only) Turn off TCP_DEFER_ACCEPTWhen TCP_DEFER_ACCEPT is activated for Apache backend instances, the load balancer initiates a connection by sending a SYN to the backend instance. The backend instance then responds with a SYN-ACK, and the load balancer sends an empty ACK to the backend instance.Because the last ACK is empty, the backend doesn't accept the ACK, and instead resends a SYN-ACK to the load balancer. This can result in a subsequent SYN retry timeout. When FIN or RST aren’t sent before the backend instance closes the connection, the load balancer considers the connection established, but it's not. Then, when the load balancer sends requests through this TCP connection, the backend responds with an RST, generating a 504 error.To resolve this, set both AcceptFilter http and AcceptFilter https to none.(Apache only) Turn off the event MPM, and optimally configure the prefork and worker MPMsIt’s a best practice to turn off event MPM on backend instances that are registered to a load balancer. Because the Apache backend dynamically closes connections, these closed connections might result in HTTP 504 errors.For optimal performance when using the prefork and worker MPMs, and presuming the load balancer is configured with a 60-second idle timeout, use these values:mod_prefork MPMmod_worker MPMTimeout6565KeepAliveTimeout6565KeepAliveOnOnMaxKeepAliveRequests100000AcceptFilter httpnonenoneAcceptFilter httpsnonenoneRelated informationMonitor your Classic Load BalancerTroubleshoot a Classic Load Balancer: HTTP errorsWhat are the optimal settings for using Apache or NGINX as a backend server for ELB?Follow" | https://repost.aws/knowledge-center/504-error-classic |
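A minimal Apache httpd.conf excerpt illustrating the recommendations above for a load balancer with the default 60-second idle timeout; the values mirror the article's settings, and the file path is a typical default that can differ by distribution and Apache version.

```
# /etc/httpd/conf/httpd.conf (illustrative excerpt)
Timeout 65                  # longer than the 60-second ELB idle timeout
KeepAlive On
KeepAliveTimeout 65         # keep backend connections open longer than the load balancer does
AcceptFilter http none      # turn off TCP_DEFER_ACCEPT for HTTP listeners
AcceptFilter https none     # and for HTTPS listeners
```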
How do I initiate rolling updates when there are no changes to the launch configuration in CloudFormation? | "I want to initiate rolling updates in my Auto Scaling group on every AWS CloudFormation stack update, without modifying the launch configuration." | "I want to initiate rolling updates in my Auto Scaling group on every AWS CloudFormation stack update, without modifying the launch configuration.Short descriptionYou can initiate rolling updates for an Auto Scaling group only if you meet specific conditions of the UpdatePolicy attribute.To initiate rolling updates, you can create a toggle parameter in the launch configuration of your CloudFormation template. However, if you change the value of the toggle parameter during a stack update, then the UserData property is modified. Any modification to UserData requires replacement. CloudFormation detects the modification to UserData, and then replaces the LaunchConfiguration resource. This replacement initiates the Auto Scaling rolling update, as defined by the UpdatePolicy attribute.ResolutionThe following steps assume that your AutoScalingRollingUpdate policy is configured for your Auto Scaling group, and your Auto Scaling group is configured to reference LaunchConfiguration.Important: Make sure that you don't disrupt other elements in the UserData property when you add the toggle parameter to your template. Also, don't add the toggle parameter before cfn-signal.1. In your CloudFormation template, define Toggle as the parameter.JSON:"Parameters": { "Toggle": { "Type": "String", "AllowedValues": ["true","false"], "Default": "true" } }YAML:Parameters: Toggle: Type: String AllowedValues: - 'true' - 'false' Default: 'true'2. In the launch configuration of your template, reference the toggle parameter in the UserData property, and then launch your stack. See the following JSON and YAML examples.JSON:"LaunchConfig" : { "Type" : "AWS::AutoScaling::LaunchConfiguration", "Properties" : { "ImageId" : { "Ref" : "ImageID" }, "UserData" : { "Fn::Base64" : { "Ref" : "Toggle" } ... ... }, "InstanceType" : { "Ref" : "Type" } }}YAML:LaunchConfig: Type: 'AWS::AutoScaling::LaunchConfiguration' Properties: ImageId: !Ref ImageID UserData: 'Fn::Base64': !Ref Toggle ... ... InstanceType: !Ref TypeImportant: To initiate rolling updates when you update your stack, change the value of the parameter from true to false or false to true, depending on its initial setting.Note: You can use the toggle parameter solution on properties where an update requires replacement, such as LaunchConfigurationName, and for resources such as, AWS::EC2::LaunchTemplate.Follow" | https://repost.aws/knowledge-center/cloudformation-rolling-updates-launch |
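A hedged AWS CLI sketch of flipping the toggle during a stack update; the stack name is a placeholder, and the other parameters from the example template (ImageID, Type) keep their previous values.

```
aws cloudformation update-stack \
  --stack-name my-asg-stack \
  --use-previous-template \
  --parameters ParameterKey=Toggle,ParameterValue=false \
               ParameterKey=ImageID,UsePreviousValue=true \
               ParameterKey=Type,UsePreviousValue=true
```

Because the changed Toggle value flows into UserData, CloudFormation replaces the launch configuration, and the rolling update defined by the UpdatePolicy attribute begins.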
Can I safely delete the empty files with the _$folder$ suffix that appear in my Amazon S3 bucket when I use Amazon EMR with Amazon S3? | "When I use Amazon EMR to transform or move data into or out of Amazon Simple Storage Service (Amazon S3), several empty files with the "_$folder$" suffix appear in my S3 buckets. What are these files, and is it safe to delete them?" | "When I use Amazon EMR to transform or move data into or out of Amazon Simple Storage Service (Amazon S3), several empty files with the "_$folder$" suffix appear in my S3 buckets. What are these files, and is it safe to delete them?ResolutionThe "_$folder$" files are placeholders. Apache Hadoop creates these files when you use the -mkdir command to create a folder in an S3 bucket. Hadoop doesn't create the folder until you PUT the first object. If you delete the "_$folder$" files before you PUT at least one object, Hadoop can't create the folder. This results in a "No such file or directory" error.In general, it's a best practice not to delete the "_$folder$" files. Doing so might cause performance issues for the Amazon EMR job. The exception is if you manually delete the folder from Amazon S3 and then try to recreate the folder in an Amazon EMR job or with Hadoop commands. If you don't delete the "_$folder$" files before you try to recreate the folder, you get the "File exists" error.Related informationUpload data to Amazon S3Configure an output locationFollow" | https://repost.aws/knowledge-center/emr-s3-empty-files |
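For illustration, a short example of how the placeholder appears, run from the EMR primary node; the bucket name is a placeholder.

```
# Creating an empty "folder" in S3 through Hadoop writes a zero-byte placeholder object
hadoop fs -mkdir s3://amzn-s3-demo-bucket/reports
hadoop fs -ls s3://amzn-s3-demo-bucket/    # shows reports_$folder$ until an object is PUT under the prefix
```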
How can I troubleshoot problems using Amazon Data Lifecycle Manager? | "My Amazon Data Lifecycle Manager policy is in an error state, or does not act as expected regarding snapshots. How can I troubleshoot these issues?" | "My Amazon Data Lifecycle Manager policy is in an error state, or does not act as expected regarding snapshots. How can I troubleshoot these issues?Short descriptionThe following are common reasons that your lifecycle policy is in an error state, or fails to create or copy Amazon Elastic Block Store (Amazon EBS) snapshots:The lifecycle policy isn't turned on.There are incorrect permissions on the policy.You're using an AWS Identity and Access Management (IAM) role other than the default AWSDataLifecycleManagerDefaultRole, and there are issues with trust relationships.There are duplicate tags on the policy.A tag defined in the policy is already in use.Your resources are encrypted.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, then make sure that you’re using the most recent AWS CLI version.Snapshots aren't created as expectedIf snapshots aren't created, then verify that the lifecycle policy is turned on.1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console.2. Select Lifecycle Manager, and then verify that the policy State is set to ENABLED.3. If the policy isn't set to ENABLED, then choose Actions, Modify Snapshot Lifecycle Policy.Note: If the policy state is ERROR, see the following section, DLM policy is in the ERROR state.4. Select Enable policy, and then select Update policy.Note: It can take up to an hour after creating a lifecycle policy for snapshots to be created. After at least an hour has passed, open the Amazon EC2 console. Then, select Snapshots to verify that snapshots are being created.Unable to copy snapshots between RegionsIf the copied snapshot copied is encrypted, then the user must have access to the source and destination AWS Key Management Service (AWS KMS) key. For more information, see Determining access to an AWS KMS key.The lifecycle policy is in an error stateA lifecycle policy in the error state can be caused by one or more of these issues:There is a problem with your resource tags.The Amazon Data Lifecycle Manager permissions aren't correct.The IAM permissions aren't correct.In addition, if you're using a custom IAM role, a trust relationship might not be attached to the role.View information about what caused the error state by checking Amazon CloudWatch Events. The following are common errors and resolutions:Duplicate tag keyIf there are duplicate tags in your lifecycle policy, then a CloudTrail Event similar to the following appears. In the following example, the tag key Name is duplicated in the policy.CreateSnapshot @2018-12-24T20:25:58.000Z UTC"errorCode": "Client.InvalidParameterValue", "errorMessage": "Duplicate tag key 'Name' specified.", "requestParameters": { "volumeId": "vol-xxxxxxxxxxxx", "description": "Created for policy: policy-xxxxschedule: First Schedule",1. Open the Amazon EC2 console.2. Select Lifecycle Manager.3. Select your lifecycle policy, and then choose Actions, Modify Lifecycle Policy.4. In the Tag created EBS snapshots section, change the Key on the duplicated tag to a unique name.5. 
Select Update policy.Tag (Name) is already defined in resource id vol-xxxxxxxxxxxxIf a tag that's defined in your lifecycle policy is already in use in a different lifecycle policy, then you might have an issue if:The lifecycle policy is in the same account, andThe lifecycle policy is for the same resource.In this case, a CloudTrail Event similar to the following appears:CreateSnapshots--------------------------------------------------------------------------------- "eventVersion": "1.05", "userIdentity": { "type": "AssumedRole", "eventTime": "2020-01-xxxxxxxx", "eventSource": "ec2.amazonaws.com", "eventName": "CreateSnapshots", "awsRegion": "us-east-1", "sourceIPAddress": "dlm.amazonaws.com", "userAgent": "dlm.amazonaws.com", "errorCode": "Client.InvalidParameterCombination", "errorMessage": "Tag (Name) is already defined in resource id vol-xxxxxxxx.", "requestParameters": {"requestParameters": { "CreateSnapshotsRequest": { "Description": "Created for policy: policy-xxxxxxxschedule: Default Schedule", "InstanceSpecification": { "ExcludeBootVolume": false, "InstanceId": "i-xxxxxxx" },A volume or instance can have more than one policy associated with it, but tags can't be duplicated across policies. For more information, see Considerations for Amazon Data Lifecycle Manager.To correct this error, do the following:1. View your lifecycle policies to determine which tag is duplicated.2. Create a new lifecycle policy using a different tag, or edit your current lifecycle policy to use a different tag.Client.AuthFailureThe "Client.AuthFailure" error might occur if the custom lifecycle policy or the IAM user don't have permissions set correctly. The following is an example of a Client.AuthFailure caused by an inaccessible key:"Client.AuthFailure","errorMessage": "The specified keyIdarn:aws:kms:us-west-1:xxxxxxxxxxxxx:key/4ad6a1d7-53ac-45a3-8f08-e6eccc948fdd is not accessible",For instructions on setting permissions for Amazon Data Lifecycle Manager, see Permissions for Amazon Data Lifecycle Manager.For instructions on setting permissions for IAM users to use Amazon Data Lifecycle Manager, see Permissions for IAM users.Related informationAmazon Data Lifecycle Manager API referenceAWS CLI Command reference - dlmFollow" | https://repost.aws/knowledge-center/troubleshoot-data-lifecycle-manager-ebs |
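A minimal AWS CLI sketch for checking policy state and configuration before digging into CloudTrail; the policy ID is a placeholder.

```
# List lifecycle policies and their state (ENABLED, DISABLED, or ERROR)
aws dlm get-lifecycle-policies

# Inspect one policy's target tags, schedules, and IAM role
aws dlm get-lifecycle-policy --policy-id policy-0123456789abcdef0
```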
Why did my Auto Scaling group scale down? | My Auto Scaling group scaled down without my intervention. Why did this happen? | "My Auto Scaling group scaled down without my intervention. Why did this happen?Short descriptionScale-downs are user initiated, or triggered by configured scale-down policies and scheduled scaling. When a scale-down occurs, an instance is terminated according to the configured termination policy.ResolutionView your Auto Scaling group scaling history in the Amazon EC2 console, using the AWS Command Line Interface (AWS CLI), or using the AWS API.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Policy-based scale downIf your Auto Scaling group scales down because of a scale-down policy, a message similar to the following appears in the scaling history:At 2016-05-08T13:55:14Z a monitor alarm My-Scale-Down-Alarm in state ALARMtriggered policy Decrease Group Size changing the desired capacity from 4 to 3.You can adjust your Auto Scaling group scaling policy using the Amazon EC2 console or with the AWS CLI put-scaling-policy command. To configure when a scale-down occurs, adjust the associated Amazon CloudWatch alarm. Or, create a new alarm and then associate the new alarm with the Auto Scaling group scaling policy.User-initiated scale downA scale-down triggered by a user displays an event similar to the following in the scaling history:At 2016-05-13T15:03:47Z a user request update of AutoScalingGroup constraintsto min: 12, max: 20, desired: 13 changing the desired capacity from 14 to 13.You can determine the user that made the API call by viewing your AWS CloudTrail logs.Note: CloudTrail must be configured and enabled before you can begin recording API calls.Scheduled scalingA scale-down initiated by a scheduled scale down action displays an event similar to the following in the scaling history:At 2016-02-12T16:01:25Z a scheduled action update of AutoScalingGroup constraintsto min: 1, max: 5, desired: 2 changing the desired capacity from 3 to 2. At2016-02-12T16:01:25Z the scheduled action ScaleDown executed. Setting max sizefrom 1 to 5. Setting desired capacity from 3 to 2.To view scheduled scaling using the AWS CLI, run the following command. Replace MY-ASG-NAME with the name of your Auto Scaling group.aws autoscaling describe-scheduled-actions --auto-scaling-group-name MY-ASG-NAMEYou can also view and manage scheduled scaling using the Amazon EC2 console. For more information, see Create and manage scheduled actions (console).For more information about schedule-based Amazon EC2 Auto Scaling, see Scheduled scaling.Related informationTutorial: Set up a scaled and load-balanced applicationMonitoring CloudWatch metrics for your Auto Scaling groups and instancesGetting Amazon SNS notifications when your Auto Scaling group scalesLogging Amazon EC2 Auto Scaling API calls with AWS CloudTrailAWS CLI autoscaling commandsFollow" | https://repost.aws/knowledge-center/auto-scaling-scale-down |
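A hedged AWS CLI sketch for reviewing the scaling history described above; replace MY-ASG-NAME with your Auto Scaling group's name.

```
# Show recent scaling activities and their causes (alarm, user request, or scheduled action)
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name MY-ASG-NAME \
  --query 'Activities[].{Start:StartTime,Cause:Cause,Status:StatusCode}'

# If the cause was a user request, look up the API caller in CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateAutoScalingGroup
```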
How can I troubleshoot issues with EventBridge rules? | "An event occurred that matched my Amazon EventBridge rule. However, my rule isn't functioning correctly. How can I troubleshoot this?" | "An event occurred that matched my Amazon EventBridge rule. However, my rule isn't functioning correctly. How can I troubleshoot this? Short description: Determine whether the problem is that the rule isn't triggering or that the target wasn't invoked. After you determine the source of the problem, you can validate the incoming event, or you can validate the target. Resolution: Verify whether the rule isn't being triggered or the target isn't being invoked. To do this, use the corresponding EventBridge metrics to review the rule's performance. The TriggeredRules metric shows the number of times a rule matched an event or was executed. This metric is useful for confirming whether a scheduled rule ran or whether a rule matched a specific event. After the rule successfully triggers, EventBridge forwards the event to the target. An Invocations datapoint is generated when a rule invokes a target. EventBridge makes multiple attempts if it encounters difficulty delivering the event to the target. A FailedInvocations datapoint is issued when EventBridge permanently fails to invoke the target. FailedInvocations indicates problems with the target configuration. Review Amazon CloudWatch metrics for the EventBridge rule by doing the following: Open the CloudWatch console. Select All Metrics. Select the AWS/Events namespace. Select the TriggeredRules, Invocations, and FailedInvocations (if available) metrics for the rule in question. These metrics can be viewed with the SUM statistic. Validate the incoming event: For event-based rules, the event pattern must be configured to match the desired event. You can validate the event pattern using the EventBridge console during rule creation. EventBridge also provides the TestEventPattern API for event pattern validation. If the event in question is captured by AWS CloudTrail, then you can retrieve the event from CloudTrail and confirm that the provided event pattern is correct. Note that events from some AWS services are published only in the us-east-1 Region. For example, IAM API calls are published only in us-east-1, so the corresponding EventBridge rule must be created in that Region. Validate the target: When rules are created using the EventBridge console, the console automatically adds the required permissions to the following: the IAM role associated with the EventBridge rule, and the resource policy associated with the target. If the rule is deployed using an AWS SDK, the AWS CLI, or AWS CloudFormation, then you must explicitly configure the permissions. EventBridge must be given the appropriate access to invoke the target. Depending on the target, confirm that the appropriate IAM role or resource policy has the correct permissions. FailedInvocations datapoints are generated when target permissions are inadequate. If there are no FailedInvocations datapoints, then EventBridge delivered the event to the target successfully. However, the target might have encountered its own issues. For example, an AWS Lambda target might have encountered errors or throttling independent of EventBridge. For the timestamp when the target was invoked by the EventBridge rule, review the target's CloudWatch metrics and any relevant logs. An Amazon Simple Queue Service (Amazon SQS) dead-letter queue (DLQ) can be associated with the target. Any events that failed to be delivered to the target are sent to the dead-letter queue. 
This is useful to see more details on failed events. For example, incorrectly structuring the event with Input Transformer can result in input validation errors on the target.Related informationTroubleshooting Amazon EventBridgeWhy wasn't my Lambda function triggered by my CloudWatch Events rule?Follow" | https://repost.aws/knowledge-center/eventbridge-rules-troubleshoot |
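A hedged example of validating an event pattern from the AWS CLI with the TestEventPattern API mentioned above; the pattern and sample event are illustrative (an EC2 state-change event with placeholder IDs), and the command prints true when they match.

```
aws events test-event-pattern \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}' \
  --event '{"id":"7bf73129-1428-4cd3-a780-95db273d1602","detail-type":"EC2 Instance State-change Notification","source":"aws.ec2","account":"123456789012","time":"2023-01-01T00:00:00Z","region":"us-east-1","resources":[],"detail":{"state":"stopped"}}'
```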
How do I set up a NAT gateway for a private subnet in Amazon VPC? | I have Amazon Elastic Compute Cloud (Amazon EC2) instances in a private subnet of my Amazon Virtual Private Cloud (Amazon VPC). How can I configure these instances to communicate securely with the internet? | "I have Amazon Elastic Compute Cloud (Amazon EC2) instances in a private subnet of my Amazon Virtual Private Cloud (Amazon VPC). How can I configure these instances to communicate securely with the internet?Short descriptionA network address translation (NAT) gateway allows EC2 instances to establish outbound connections to resources on internet without allowing inbound connections to the EC2 instance. It's not possible to use the private IP addresses assigned to instances in a private VPC subnet over the internet. Instead, you must use NAT to map the private IP addresses to a public address for requests. Then, you must map the public IP address back to the private IP addresses for responses.ResolutionCreate a public VPC subnet to host your NAT gateway.Create and attach an internet gateway to your VPC.Create a custom route table for your public subnet with a route to the internet gateway.Verify that the network access control list (ACL) for your public VPC subnet allows inbound traffic from the private VPC subnet. For more information, see Work with network ACLs.Create a public NAT gateway then create and associate your new or existing Elastic IP address.Update the route table of your private VPC subnet to point internet traffic to your NAT gateway.Test your NAT gateway by pinging the internet from an instance in your private VPC subnet.Related informationMonitor NAT gateways with Amazon CloudWatchFollow" | https://repost.aws/knowledge-center/nat-gateway-vpc-private-subnet |
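A minimal AWS CLI sketch of creating the NAT gateway and pointing the private subnet's route table at it; all resource IDs are placeholders.

```
# Allocate an Elastic IP address and create a public NAT gateway in the public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
  --subnet-id subnet-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0

# Route the private subnet's internet-bound traffic through the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0
```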
How can I sign or share a compliance report that requires an NDA using AWS Artifact? | I need to sign or share a nondisclosure agreement (NDA) with Amazon Web Services (AWS) for security compliance or auditing purposes. | "I need to sign or share a nondisclosure agreement (NDA) with Amazon Web Services (AWS) for security compliance or auditing purposes.Short descriptionYou and your AWS Partner Network (APN) can use an AWS account to access reports requiring an NDA using AWS Artifact.ResolutionFollow the instructions for downloading reports in AWS Artifact.Reports requiring an NDA are accessible by signing the terms and conditions in AWS Artifact. You must accept the AWS Artifact NDA to access and download confidential documents in AWS Artifact. You must sign the NDA each time that you access confidential reports.If you have an existing NDA with Amazon covering the same confidential information as provided in AWS Artifact, then your existing NDA applies. You can use your existing NDA instead of the AWS Artifact NDA.For more information, see AWS Artifact Agreements.Note: Reports requiring approval from Amazon are reviewed and approved within 24 hours. For more information, see Getting Started with AWS Artifact.Related informationHow can I download and share AWS Artifact documents with regulators and auditors, or with others in my company?AWS Artifact FAQsFollow" | https://repost.aws/knowledge-center/artifact-nda-report |
How do I view traffic passing through an Amazon Route 53 Resolver outbound endpoint? | I want to view traffic passing through an Amazon Route 53 Resolver outbound endpoint. How can I do this? | "I want to view traffic passing through an Amazon Route 53 Resolver outbound endpoint. How can I do this? Short description: To view traffic passing through Route 53 Resolver endpoints, configure Amazon Virtual Private Cloud (Amazon VPC) Traffic Mirroring. Resolution: Configure network connectivity. Confirm that the target EC2 instance's security group and network access control list (network ACL) allow incoming traffic on UDP port 4789 from the outbound endpoint's elastic network interface. Confirm that the target EC2 instance has connectivity to the outbound endpoint's network interface subnet. Confirm that the outbound endpoint network interface's subnet is configured for outgoing traffic to the EC2 instance on UDP port 4789. The subnet configuration includes network ACLs, security groups, and routing tables. Set up Amazon VPC Traffic Mirroring: 1. Create a traffic mirror target using the network interface of the EC2 instance that you're using as the target. 2. Create a mirror filter to identify the DNS traffic from the outbound endpoint network interface to the EC2 mirror target. Example mirror filter for Route 53. Note: The example values represent the following: VPC A is associated with a Route 53 Resolver rule that forwards *.test.com domain DNS queries to the on-premises network, and the on-premises network hosts the domain *.test.com. Rule number: rule priority (inbound) / rule priority (outbound). Rule action: Accept / Accept. Protocol: UDP and TCP / UDP and TCP. Source port range: 53 / 1024-65535. Destination port range: 1024-65535 / 53. Source CIDR block: on-premises CIDR / VPC A CIDR. Destination CIDR block: VPC A CIDR / on-premises CIDR. 3. Create a mirror session for each outbound endpoint network interface to the mirror EC2 instance. Use the following values: Mirror source: outbound endpoint network interface. Mirror target: the traffic mirror target that you created previously. Session number: 1. Filter: the mirror filter that you created previously. View mirrored traffic. For Linux operating systems: 1. View the captured traffic by running the following command: sudo tcpdump -w <filename>.pcap -i <eth> port 4789. For filename, use the file name where you want to store the captured traffic. For eth, use the ethernet interface that you want to use on your EC2 instance. 2. Transfer the file from the EC2 instance to your local computer by running the following command: scp -i <keypair>.pem ec2-user@<ec2 instance's public/private DNS name or IP address>:<file path>/<filename>.pcap ~/Desktop/. For keypair, use the key pair that you used to log in to the instance. For filename, use the file name where you stored the captured traffic. 3. Open the capture file to view the DNS packets. For Windows operating systems: 1. Open the Wireshark tool. 2. Filter traffic using the IP address of the outbound resolver endpoint. 3. Open the capture file to view the DNS packets. Related information: Resolving Domain Name System (DNS) queries between VPCs and your network" | https://repost.aws/knowledge-center/route53-view-endpoint-traffic
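A hedged AWS CLI sketch of the Traffic Mirroring setup described above; the network interface, target, and filter IDs are placeholders, and the filter still needs inbound and outbound rules (added with create-traffic-mirror-filter-rule) that match the port and CIDR values in the example filter.

```
# Target: the monitoring EC2 instance's network interface
aws ec2 create-traffic-mirror-target --network-interface-id eni-0target0123456789

# Filter: add rules that match the DNS traffic between the endpoint and the on-premises network
aws ec2 create-traffic-mirror-filter --description "Route 53 outbound endpoint DNS"

# Session: one per outbound endpoint network interface
aws ec2 create-traffic-mirror-session \
  --network-interface-id eni-0endpoint012345678 \
  --traffic-mirror-target-id tmt-0123456789abcdef0 \
  --traffic-mirror-filter-id tmf-0123456789abcdef0 \
  --session-number 1
```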
How do I send my container logs to multiple destinations in Amazon ECS on AWS Fargate? | "I want to forward the logs of my application container that runs on AWS Fargate to multiple destinations. These destinations could be Amazon CloudWatch, Amazon Kinesis Data Firehose, or Splunk." | "I want to forward the logs of my application container that runs on AWS Fargate to multiple destinations. These destinations could be Amazon CloudWatch, Amazon Kinesis Data Firehose, or Splunk.Short descriptionAn Amazon Elastic Container Service (Amazon ECS) task definition allows you to specify only a single log configuration object for a given container, which means that you can forward logs to a single destination only. To forward logs to multiple destinations in Amazon ECS on Fargate, you can use FireLens.Note: FireLens works with both Fluent Bit and Fluentd log forwarders. The following resolution considers Fluent Bit because Fluent Bit is more resource-efficient than Fluentd.Consider the following:FireLens uses the key-value pairs specified as options in the logConfiguration object from the ECS task definition to generate the Fluent Bit output definition. The destination where the logs are routed is specified in the [OUTPUT] definition section of a Fluent Bit configuration file. For more information, see the Output section of Configuration File on the Fluent Bit website.FireLens creates a configuration file on your behalf, but you can also specify a custom configuration file. You can host this configuration file in either Amazon Simple Storage Service (Amazon S3), or create a custom Fluent Bit Docker image with the custom output configuration file added to it.If you're using Amazon ECS on Fargate, then you can't pull a configuration file from Amazon S3. Instead, you must create a custom Docker image with the configuration file.ResolutionCreate AWS Identity and Access Management (IAM) permissionsCreate IAM permissions to allow your task role to route your logs to different destinations. For example, if your destination is Kinesis Data Firehose, then you must give the task permission to call the firehose:PutRecordBatch API.Note: Fluent Bit supports several plugins as log destinations. Permissions required by destinations like CloudWatch and Kinesis streams include logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogStreams, logs:PutLogEvents,and kinesis:PutRecords.Create a Fluent Bit Docker image with a custom output configuration file.1. Create a custom Fluent Bit configuration file called logDestinations.conf with your choice of [OUTPUT] definitions defined in it. For example, the following configuration file includes configurations defined for CloudWatch, Kinesis Data Firehose, and Splunk.[OUTPUT] Name firehose Match YourContainerName* region us-west-2 delivery_stream nginx-stream [OUTPUT] Name cloudwatch Match YourContainerName* region us-east-1 log_group_name firelens-nginx-container log_stream_prefix from-fluent-bit auto_create_group true [OUTPUT] Name splunk Match <ContainerName>* Host 127.0.0.1 Splunk_Token xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx Splunk_Send_Raw OnNote: Different destinations require different fields to be specified in the [OUTPUT] definition. For examples, see Amazon ECS FireLens examples.2. Create a Docker image with a custom Fluent Bit output configuration file using the following Dockerfile example:FROM amazon/aws-for-fluent-bit:latestADD logDestinations.conf /logDestinations.confNote: For more information, see Dockerfile reference on the Docker website.3. 
To create the custom fluent-bit Docker image using the Dockerfile that you created in step 2, run the following command:docker build -t custom-fluent-bit:latest .Important: Be sure to run the docker build command in the same location as the Dockerfile.4. To confirm that the Docker image is available to Amazon ECS, push your Docker image to Amazon Elastic Container Registry (Amazon ECR) or your own Docker registry. For example, to push a local Docker image to Amazon ECR, run the following command:docker push aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latest5. In your task definition (TaskDefinition), update the options for your FireLens configuration (firelensConfiguration). For example:{ "containerDefinitions":[ { "essential":true, "image":"aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latest", "name":"log_router", "firelensConfiguration":{ "type":"fluentbit", "options":{ "config-file-type":"file", "config-file-value":"/logDestinations.conf" } } } ]}When you update the options for your FireLens configuration, consider the following:To specify a custom configuration file, you must include the config-file-type and config-file-value options in your FireLens configuration file. You can only set these options when you create a task definition with the AWS Command Line Interface (AWS CLI).Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You must modify the image property in the containerDefinition section of your configuration to reflect a valid Amazon ECR image location. You can specify images in Amazon ECR repositories by using the full registry/repository:tag naming convention. For example:aws_account_id.dkr.ecr.region.amazonaws.com/custom-fluent-bit:latestTo use other repositories, see the image property of the task definition.Follow" | https://repost.aws/knowledge-center/ecs-container-log-destinations-fargate |
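Two hedged notes on wiring this up: the custom configuration file can be referenced only when the task definition is registered through the AWS CLI (as the article states), and each application container whose logs should be routed through FireLens uses the awsfirelens log driver in its own logConfiguration. A minimal sketch, with a placeholder file name:

```
# Register the task definition that contains the log_router container and FireLens options
aws ecs register-task-definition --cli-input-json file://firelens-taskdef.json
```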
How can I troubleshoot access denied or unauthorized operation errors with an IAM policy? | I am trying to perform an action on an AWS resource and I received an "access denied" or "unauthorized operation" error. How can I troubleshoot these permission issues? | "I am trying to perform an action on an AWS resource and I received an "access denied" or "unauthorized operation" error. How can I troubleshoot these permission issues?Short descriptionTo troubleshoot issues with AWS Identity and Access Management (IAM) policies:Identify the API callerCheck the IAM policy permissionsEvaluate service control policies (SCPs)Review identity-based and resource-based policiesCheck for permission boundariesEvaluate session policiesMake sure that the condition keys in the policy are supported by the APIsReview the IAM policy errors and troubleshooting examplesResolutionIdentify the API caller and understand the error messagesImportant:Before you begin, be sure that you have installed and configured the AWS Command Line Interface (AWS CLI).If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Be sure that the API calls are made on behalf of the correct IAM entity before reviewing IAM policies. If the error message doesn't include the caller information, then follow these steps to identify the API caller:Open the AWS Management Console.In the upper-right corner of the page, choose the arrow next to the account information.If you're signed in as an IAM role, refer to "Currently active as" for the assumed role's name, and "Account ID" for account ID.If you're signed in as a federated user, refer to "Federated User" for the federation role name and role session name.-or-Use the AWS CLI command get-caller-identity to identify the API caller. You can also use the AWS CLI command with the --debug flag to identify the source of the credentials from the output similar to the following:2018-03-13 16:23:57,570 - MainThread - botocore.credentials - INFO - Found credentials in shared credentials file: ~/.aws/credentialsCheck the IAM policy permissionsVerify if the necessary permissions are granted to the API caller by checking the attached IAM policies. For more information, see Determining whether a request is allowed or denied within an account.Evaluate AWS Organizations SCPsIf the AWS account is a part of an AWS Organization, SCPs can be applied at the hierarchical level to allow or deny actions. The SCP permissions are inherited by all IAM entities in the AWS account. Make sure that the API caller isn't explicitly denied in the SCP.Make sure that the API being called isn't explicitly denied in an Organizational SCP policy that impacts the callerReview identity-based and resource-based policiesMake sure that there is an explicit allow statement in the IAM entities identity-based policy for the API caller. Then, make sure that the API supports resource-level permissions. If the API caller doesn't support resource-level permissions, make sure the wildcard "*" is specified in the resource element of the IAM policy statement.You can attach resource-based policies to a resource within the AWS service to provide access. 
For more information, see Identity-based policies and resource-based policies.To view the IAM policy summary:Open the IAM console.In the navigation pane, choose Policies.Choose the arrow next to the policy name to expand the policy details view.In the following example, the policy doesn't work because not all Amazon Elastic Compute Cloud (Amazon EC2) API actions support resource-level permissions:{ "Version": "2012-10-17", "Statement": [ { "Sid": "SorryThisIsNotGoingToWorkAsExpected", "Effect": "Allow", "Action": ["ec2:*"], "Resource": "arn:aws:ec2:us-east-1:accountid:instance/*" } ]}IAM users that try to launch an Amazon EC2 instance in the us-east-1 Region with the run-instances AWS CLI command receive an error message similar to the following:"An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message:…"To resolve this, change the resource to a wildcard "*". This is because Amazon EC2 only supports partial resource-level permissions.To decode the authorization failure message to get more details on the reason for this failure, use the DecodeAuthorizationMessage API action similar to the following:$ aws sts decode-authorization-message --encoded-message <encoded-message-from-the-error>Check for permission boundariesIf the IAM entity has a permission boundary attached, the boundary sets the maximum permissions that the entity has.Evaluate session policiesIf the API caller is an IAM role or federated user, session policies are passed for the duration of the session. The permissions for a session are the intersection of the identity-based policies for the IAM entity used to create the session and the session policies. Make sure that the API call exists in the IAM policy and entity.Make sure that the condition keys in the policy are supported by the APIsAWS condition keys can be used to compare elements in an API request made to AWS with key values specified in a IAM policy. The condition keys can either be a global condition key or defined by the AWS service. AWS service specific condition keys can only be used within that service (for example EC2 conditions on EC2 API actions).For more information, see Actions, resources, and condition context keys for AWS services.A condition element can contain multiple conditions, and within each condition block can contain multiple key-value pairs. 
For more information, see Creating a condition with multiple keys or values.In this example policy, the condition element is matched if an IAM API request is called by the IAM user admin and the source IP address is from 1.1.1.0/24 or 2.2.2.0/24.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "iam:*", "Resource": "*", "Condition": { "StringEquals": { "aws:username": "admin" }, "IpAddress": { "aws:SourceIp": [ "1.1.1.0/24", "2.2.2.0/24" ] } } } ]}IAM policy errors and troubleshooting examplesSee the following examples to identify the error message, the API caller, the API, and the resources being called:Example error messageAPI callerAPIResourcesWhenA: "An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation."unknownDescribeInstancesunknowntime of the error occursB: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::123456789012:user/test is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::123456789012:role/EC2-FullAccess"arn:aws:iam::123456789012:user/testAssumeRolearn:aws:iam::123456789012:role/EC2-FullAccesstime of the error occursC: "An error occurred (AccessDenied) when calling the GetSessionToken operation: Cannot call GetSessionToken with session credentials"unknownGetSessionTokenunknowntime of the error occursD: "An error occurred (UnauthorizedOperation) when calling the AssociateIamInstanceProfile operation: You are not authorized to perform this operation. Encoded authorization failure message: ...."unknownAssociateIamInstanceProfileunknowntime of the error occursUsing this evaluation method, you can identify the cause of the error messages you can receive for permission issues for different AWS services. For more details, see the following error messages and troubleshooting steps:Example error message A:This error message indicates that you don't have permission to call the DescribeInstances API.To resolve this error, follow these steps:Identify the API caller.Confirm that the ec2:DescribeInstances API action isn't included in any deny statements.Confirm that the ec2:DescribeInstances API action is included in the allow statements.Confirm that there's no resource specified for this API action. Note: This API action doesn't support resource-level permissions.Confirm that all IAM conditions specified in the allow statement are supported by the DescribeInstances action and that the conditions are matched.For more information, see DescribeInstanceStatus.Example error message B:This error message includes the API name, API caller, and target resource. Be sure that the IAM identity that called the API has the correct access to the resources. 
Review the IAM policies using the previous evaluation method.To resolve this error, follow these steps to confirm the trust policy of IAM role: EC2-FullAccess:Confirm arn:aws:iam::123456789012:user/test or arn:aws:iam::123456789012:root isn't included in any deny statement of the trust policy.Confirm arn:aws:iam::123456789012:user/test or arn:aws:iam::123456789012:root is included in the allow statement of the trust policy.Confirm all IAM conditions specified in that allow statement are supported by sts:AssumeRole API action and matched.Follow these steps to confirm the IAM policies attached to the API caller (arn:aws:iam::123456789012:user/test):Confirm arn:aws:iam::123456789012:role/EC2-FullAccess isn't included in any deny statement with sts:AssumeRole API action.If arn:aws:iam::123456789012:root is in the allow statement of the trust policy, then confirm arn:aws:iam::123456789012:role/EC2-FullAccess is included in the allow statement of the IAM policies with sts:AssumeRole API action.Confirm all IAM conditions specified in that allow statement are supported by sts:AssumeRole API action and match.Example error message C:This error message indicates that get-session-token isn't supported by temporary credentials. For more information, see Comparing the AWS STS API operations.Example error message D:This error message returns an encoded message that can provide details about the authorization failure. To decode the error message and get the details of the permission failure, see DecodeAuthorizationMessage. After decoding the error message, identify the API caller and review the resource-level permissions and conditions.To resolve this error, follow these steps to review the IAM policy permissions:If the error message indicates that the API is explicitly denied, then remove ec2:AssociateIamInstanceProfile or iam:PassRole API actions from the matched statement.Confirm that ec2:AssociateIamInstanceProfile and iam:PassRole are in the allow statement with supported and correct resource targets. For example, confirm that the resource targets of ec2:AssociateIamInstanceProfile API action are EC2 instances and the resource targets of iam:PassRole are IAM roles.If ec2:AssociateIamInstanceProfile and iam:PassRole API actions are in the same allow statement, confirm that all conditions are supported by ec2:AssociateIamInstanceProfile and iam:PassRole API action and that the conditions match.If ec2:AssociateIamInstanceProfile and iam:PassRole API actions are in separate allow statements, confirm that all conditions in each allow statement are supported by an action and that the conditions match.For more information, see Policy evaluation logic and Determining whether a request is allowed or denied within an account.Related informationTroubleshoot IAM policiesWhy did I receive an "AccessDenied" or "Invalid information" error trying to assume a cross-account IAM role?IAM JSON policy elements referenceFollow" | https://repost.aws/knowledge-center/troubleshoot-iam-policy-issues |
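In addition to decoding authorization failures, the IAM policy simulator can test whether a principal is allowed to call an API before you retry it. A hedged AWS CLI sketch, reusing the example user ARN and the actions from error message D above:

```
# Simulate the failing actions for the principal from the error message
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/test \
  --action-names ec2:AssociateIamInstanceProfile iam:PassRole \
  --query 'EvaluationResults[].{Action:EvalActionName,Decision:EvalDecision}'
```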
How do I customize my nginx configuration to modify the "client_max_body_size" in Elastic Beanstalk? | I want to upload large size files to my AWS Elastic Beanstalk environment without receiving the error message "413 Request Entity Too Large". | "I want to upload large size files to my AWS Elastic Beanstalk environment without receiving the error message "413 Request Entity Too Large".Short descriptionBy default, NGINX has a limit of 1MB on file uploads. If the size of a request exceeds the configured value, then the 413 Request Entity Too Large error is returned. To upload files larger than 1 MB, configure the client_max_body_size directive in the NGINX configuration files.Important: M and MB are equivalent expressions for "megabyte". For example, 2M is equal to 2 MB. However, use only M in your configuration file, as MB isn't valid in a configuration file.ResolutionTo configure the client_max_body_size in Amazon Linux 2 environments, do the following:1. To extend the Elastic Beanstalk default NGINX configuration, add the .conf configuration file client_max_body_size.conf that includes the following:client_max_body_size 50M;Note: In the preceding example, the value of the client_max_body_size is updated to 50M. Substitute any value in place of 50 as per your requirements.2. Copy the .conf configuration file client_max_body_size.conf to a folder named .platform/nginx/conf.d/ in your application source bundle. The Elastic Beanstalk NGINX configuration includes .conf files in this folder automatically. Make sure to create this path if it doesn't exist in your source bundle. The following example shows the structure of the .platform directory and .conf file in the application zip file:-- .ebextensions -- other non nginx server config files -- .platform -- nginx -- conf.d -- client_max_body_size.conf -- other application filesThe file client_max_body_size.conf has a path like this: my-app/.platform/nginx/conf.d/client_max_body_size.conf.3. Deploy your code and the new .platform/ directory together as a new application version in your Elastic Beanstalk environment.4. After the deployment is completed, log in to the instance running on the Elastic Beanstalk environment. After logging in, check that the settings to the NGINX server are applied. To do this, use the following command:$ sudo nginx -T | egrep -i "client_max_body_size"nginx: the configuration file /etc/nginx/nginx.conf syntax is oknginx: configuration file /etc/nginx/nginx.conf test is successfulclient_max_body_size 50M;Follow" | https://repost.aws/knowledge-center/elastic-beanstalk-nginx-configuration |
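A short shell sketch of steps 1 through 3, run from the root of the application source bundle; the bundle file name is a placeholder.

```
mkdir -p .platform/nginx/conf.d
echo 'client_max_body_size 50M;' > .platform/nginx/conf.d/client_max_body_size.conf

# Package the source bundle (including the .platform directory) and deploy it as a new application version
zip -r my-app-v2.zip . -x '*.git*'
```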
How do I troubleshoot BGP connection issues over VPN? | My BGP session can't establish a connection or is in an idle state over my VPN tunnel. How can I troubleshoot this? | "My BGP session can't establish a connection or is in an idle state over my VPN tunnel. How can I troubleshoot this?ResolutionTo troubleshoot BGP connection issues over VPN, check the following:Check the underlying VPN connectionFor BGP-based VPN connections, the BGP session can be established only if the VPN tunnel is up. If the VPN tunnel is down or flapping, then you experience issues with establishing the BGP session. Verify that the VPN is up and stable. If the VPN isn't coming up or it isn't stable, see the following:Why is IKE (phase 1 of my VPN tunnel) failing in Amazon VPC?I can’t establish my VPN tunnel: IPsec is failingHow do I troubleshoot VPN tunnel inactivity or instability or tunnel down on my customer gateway device?Check the BGP configuration on your customer gateway deviceThe IP addresses of the local and remote BGP peers must be configured with the downloaded VPN configuration file from the VPC console.The local and remote BGP Autonomous System Numbers (ASN) must be configured with the downloaded VPN configuration file from the VPC console.If the configuration settings are correct, then ping the remote BGP peer IP from your local BGP peer IP to verify the connectivity between BGP peers.Be sure that the BGP peers are directly connected to each other. External BGP (EBGP) multi-hop is turned off on AWS.Note: If your BGP session is flapping between active and connect states, verify that TCP port 179 and other relevant ephemeral ports are not blocked.Debugs and packet capturesIf the BGP configuration on the customer gateway is verified and the pings between the BGP peer IPs are working, then collect this information from the customer gateway device for further analysis:BGP and TCP debugsBGP logsPacket captures for traffic between the BGP peer IPsCheck if the BGP session is going from established to idle statesFor VPN on a VGW, if you see the BGP session going from established to idle state, then verify the number of routes that you are advertising over the BGP session. You can advertise up to 100 routes over the BGP session. If the number of routes advertised over the BGP session is more than 100, then the BGP session goes to the idle state.To resolve this, do one of the following:Advertise a default route to route to AWS, or summarize the routes so that the number of routes received is fewer than 100.-or-You can migrate your VPN connection to a transit gateway as transit gateway supports 1,000 routes advertised from a customer gateway.For more information, see Site-to-Site VPN quotas.Related informationHow can I troubleshoot BGP connection issues over Direct Connect?Amazon VPC FAQsFollow" | https://repost.aws/knowledge-center/troubleshoot-bgp-vpn |
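A hedged AWS CLI sketch for checking the tunnel and route status from the AWS side; the VPN connection ID is a placeholder, and AcceptedRouteCount helps confirm whether you are near the 100-route limit on a VGW-based VPN.

```
aws ec2 describe-vpn-connections \
  --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[].VgwTelemetry[].{Tunnel:OutsideIpAddress,Status:Status,Message:StatusMessage,Routes:AcceptedRouteCount}'
```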
How do I troubleshoot issues with IP-based routing in Route 53? | I see unexpected results when testing the DNS resolution on my Amazon Route 53 IP-based routing policy. | "I see unexpected results when testing the DNS resolution on my Amazon Route 53 IP-based routing policy.ResolutionExample scenarioYou have IP-based routing records for clients with the CIDR 172.32.0.0/16 and 172.33.0.0/16 pointing towards two Elastic Load Balancing (ELB) load balancers. One ELB load balancer is located in Virginia (us-east-1) and the other is in Ireland (eu-west-1). Clients with CIDR 172.32.0.0/16 resolve the domain to the load balancer in us-east-1. Clients with CIDR 172.33.0.0/16 resolve the domain to the load balancer in eu-west-1. However, clients aren't receiving the expected output.Note: IP-based routing isn't supported for private hosted zones.To troubleshoot IP-based routing, complete the following steps:1. Confirm that you correctly configured the IP-based resource records for your Route 53 hosted zone for your use case. Open the Route 53 console, and check the default location that's specified in your Route 53 hosted zone configuration.Note: If CIDR blocks aren't specified in the CIDR collection that matches the source IP address, then Route 53 answers with the default "*" location.Route 53 answers with NODATA if the following are true:A default “*” location isn't specified.The query originates from a CIDR block that doesn't match any specified CIDR blocks in the CIDR collection.Route 53 matches DNS queries that have a CIDR that's longer than your specified CIDR to the shorter specified CIDR in the CIDR collection. For example, if you specify 2001:db8::/32 as the CIDR in your CIDR collection and receive a query from 2001:0db8:1234::/48, then the CIDR matches. If, you specify 2001:db8:1234::/48 in your CIDR collection and receive a query from 2001:db8::/32, then it doesn't match. Route 53 answers with the record for the “*” location.For CIDR collection and CIDR block limit quotas, see Quotas on records.2. Check if the client resolves correctly.If the client is resolving within the virtual private cloud (VPC) and using VPC DNS .2 resolver, then IP-based routing doesn't work. This is because the .2 resolver doesn't support EDNS Client Subnet (ECS).If the client is resolving outside of the VPC, then run the following command to verify that your DNS resolver supports EDNS0:Linux or macOSdig TXT o-o.myaddr.google.com -4Windowsnslookup -type=txt o-o.myaddr.google.comThe following example output shows DNS resolvers that don't support EDNS0:$ dig TXT o-o.myaddr.google.com -4; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> TXT o-o.myaddr.google.com -4;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38328;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;o-o.myaddr.google.com. IN TXT;; ANSWER SECTION:o-o.myaddr.google.com. 60 IN TXT "3.209.83.70"The following example output shows DNS resolvers that do support EDNS0:$ dig TXT o-o.myaddr.google.com -4 @8.8.8.8; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> TXT o-o.myaddr.google.com -4 @8.8.8.8;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30624;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 512;; QUESTION SECTION:;o-o.myaddr.google.com. IN TXT;; ANSWER SECTION:o-o.myaddr.google.com. 
60 IN TXT "172.253.8.137"o-o.myaddr.google.com. 60 IN TXT "edns0-client-subnet 54.82.0.0/24"3. To check DNS resolution, use your hosted zone's authoritative name server to resolve the record. Run the following command. In the following example command, replace example.com with your domain.dig A example.com +subnet=172.3.0.0/16 @ns-1875.awsdns-42.co.ukOr, use dig or nslookup to verify that the DNS resolution works as expected. In the following example commands, replace example.com with your domain.dig example.com @8.8.8.8nslookup example.com 8.8.8.84. If you configure health check records for IP-based resource records, then use these records to verify the health status. If a health check fails for a record with a default (*) location, then Route 53 returns the default location as a DNS query response.Related informationIP-based routingFollow" | https://repost.aws/knowledge-center/route53-ip-based-routing |
"How can I find who stopped, rebooted, or terminated my EC2 Windows instance?" | "My Amazon Elastic Compute Cloud (Amazon EC2) Windows instance was unexpectedly stopped, rebooted, or terminated. How can I identify who stopped, restarted, or terminated the instance?" | "My Amazon Elastic Compute Cloud (Amazon EC2) Windows instance was unexpectedly stopped, rebooted, or terminated. How can I identify who stopped, restarted, or terminated the instance?ResolutionAn EC2 Windows instance can be stopped or rebooted either through AWS or through the Windows operating system. An EC2 Windows instance can be terminated only through AWS.If the instance was stopped, rebooted, or terminated through AWSYou can stop, reboot, or terminate your instance through AWS from:The AWS Management ConsoleThe AWS Command Line Interface (AWS CLI)AWS Tools for PowerShellAWS APIsAWS SDKIf the event occurred in the last 90 days, then you can get more information about the event using AWS CloudTrail logs. To view the event on CloudTrail, follow these steps:Open the CloudTrail console.In the navigation pane, choose Event history.In the Lookup attributes dropdown menu, select Event name.For Enter an event name, enter StopInstances if your instance was stopped. Enter RebootInstances if your instance was rebooted. Enter TerminateInstances if your instance was terminated.To see more information about an event, choose the event name. On the StopInstances, RebootInstances, or TerminateInstances details page, you can see the user of the AWS Identity and Access Management (IAM) that initiated the event.If the instance was stopped or rebooted within the Windows OSIf the instance wasn't stopped or rebooted through AWS, then the event was likely initiated within the Windows OS. To find more information about this event within the Windows OS, follow these steps while logged in to the instance:Open Event Viewer.On the navigation pane, expand Windows Logs and then choose System.On the Actions pane, choose Filter Current Log.In the All Event IDs field, enter 1074 or 1076.The event log indicates which user initiated the event in the Source field.Note: An EC2 Windows stop or reboot can occur at the Windows OS level when:A user is logged into the instance and a Windows update reboots the OS.An unexpected hardware failure occurs.An AWS planned maintenance event stops or restarts the instance.A third-party tool issued the command.AWS sends notifications about planned instance retirements and unexpected hardware failure over email or on your Personal Health Dashboard.Follow" | https://repost.aws/knowledge-center/ec2-windows-identify-stop-reboot |
How can I resolve issues accessing an encrypted AWS Secrets Manager secret? | "I want to retrieve or access an AWS Secrets Manager secret, but I receive an error." | "I want to retrieve or access an AWS Secrets Manager secret, but I receive an error.ResolutionIf you can't retrieve or access a Secrets Manager secret, then you might see one of the following errors: "You can't access a secret from a different AWS account if you encrypt the secret with the default KMS service key.""Access to KMS is not allowed""InternalFailure""An unknown error occurred""Access to KMS is not allowed. This version of secret is not encrypted with the current KMS key."To troubleshoot any of these errors, complete the following steps.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Verify that the secret isn't encrypted with an AWS KMS managed key when accessing a secret in another account.AWS managed key policies can't be edited because they're read-only. However, you can view AWS Key Management Service (AWS KMS) managed key and customer managed key policies. Because AWS KMS managed key policies can't be edited, cross-account permissions can't be granted for these key policies. Secrets Manager secrets that are encrypted using an AWS KMS managed key can't be accessed by other AWS accounts.For cross accounts, verify that the identity-based policy and resource-based policy allow the principal to access the AWS KMS key.The identity policy should allow the principal to access the AWS KMS key, similar to the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "arn:aws:kms:Region:AccountID:key/EncryptionKey" } ]}The resource-based policy should allow the principal to access the AWS KMS key, similar to the following:{ "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::AccountID:user/UserName", "arn:aws:iam::AccountID:role/RoleName" ] }, "Action": [ "kms:Decrypt", "kms:DescribeKey" ], "Resource": "*"}After the AWS KMS key is updated, verify that the secret is encrypted with the new AWS KMS key.Updating the AWS KMS key associated with a Secrets Manager secret using the AWS CLI doesn't re-encrypt current or previous versions of the secret with the new KMS key. This means that external accounts, also called cross-accounts, can't access the secret because the secret hasn't been re-encrypted with the new AWS KMS key. You must re-encrypt the secret using the new AWS KMS key to retrieve the secret value from the cross-account.Note: Using the Secrets Manager console to change the AWS KMS key associated with a secret by default creates a new version of the secret and encrypts it with the new AWS KMS key. For more information, see Secret encryption and decryption in AWS Secrets Manager.Re-encrypt the secret with the new AWS KMS key.Follow these steps to re-encrypt the secret with the new AWS KMS key using the AWS Management Console or the AWS CLI.AWS Management Console1. Open the Secrets Manager console.2. In Secret name, choose your secret.3. Choose Actions. Then, from the dropdown list, select the AWS KMS key, select the check box for Create new version of secret with new encryption key, and then choose Save.AWS CLIFollow these steps from the source account where the secret resides.1. 
Run the AWS CLI command get-secret-value similar to the following:$ aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:cross-account --query SecretString --output text {"CrossAccount":"DefaultEncryption"}2. Create a file named creds.txt.$ cat creds.txt {"CrossAccount":"DefaultEncryption"}3. Run the AWS CLI update-secret command to re-encrypt the secret with the new AWS KMS key, similar to the following. Note: If you use a customer managed key, you must also have kms:GenerateDataKey and kms:Decrypt permissions on the key.$ aws secretsmanager update-secret --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:cross-account --secret-string file://creds.txt { "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:cross-account", "Name": "cross-account", "VersionId": "f68246e8-1cfb-4c3b-952b-17c9298d3462" }4. Run the AWS CLI command get-secret-value from the cross-account similar to the following:$ aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:cross-account --version-stage AWSCURRENT --profile {"CrossAccount":"DefaultEncryption"}Related informationHow to use resource-based policies in the AWS Secrets Manager console to securely access secrets across AWS accountsHow do I share AWS Secrets Manager secrets between AWS accounts?Permissions to AWS Secrets Manager secrets for users in a different accountFollow" | https://repost.aws/knowledge-center/secrets-manager-cross-account-key
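After the re-encryption steps above, the key association and version stages can be confirmed from the source account. A minimal sketch that reuses the example ARN from this entry:
$ aws secretsmanager describe-secret --secret-id arn:aws:secretsmanager:us-east-1:123456789012:secret:cross-account \
    --query '{KmsKeyId:KmsKeyId,Versions:VersionIdsToStages}'
# The AWSCURRENT version should be the one created by update-secret, and KmsKeyId should be the customer managed key.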
How do I resolve data incompatibility errors in Redshift Spectrum? | "I'm trying to use an external schema, object, or file format in Amazon Redshift Spectrum. However, I receive an error. How do I troubleshoot this error?" | "I'm trying to use an external schema, object, or file format in Amazon Redshift Spectrum. However, I receive an error. How do I troubleshoot this error?ResolutionIncompatible data format errorTo resolve your incompatible data format error in Redshift Spectrum, perform the following steps:1. Retrieve the complete error message from the SVL_S3LOG system view:select * from SVL_S3LOG where query = '<query_ID_of_the_Spectrum_query>';A mismatch in incompatible Parquet schema produces the following error message:File 'https://s3bucket/location/file has an incompatible Parquet schema for column ‘s3://s3bucket/location.col1'. Column type: VARCHAR, Par...2. Check the Message column to view the error description. The error description explains the data incompatibility between Redshift Spectrum and the external file.3. Check the schema of your external file, and then compare it with the column definition in the CREATE EXTERNAL TABLE definition.4. (Optional) If the column definition in the Apache Parquet file differs from the external table, modify the column definition in the external table. The column definition must match the columnar file format of the Apache Parquet file.5. Run the following query for the SVV_EXTERNAL_COLUMNS view:select * from SVV_EXTERNAL_COLUMNS where schemaname = '<ext_schema_name>' and tablename = '<ext_table_name>';This query checks the data type of the column in the CREATE EXTERNAL TABLE definition.Note: For columnar file formats such as Apache Parquet, the column type is embedded with the data. The column type in the CREATE EXTERNAL TABLE definition must match the column type of the data file. Mismatched column definitions result in a data incompatibility error.Invalid type length errorIf you select a Redshift Spectrum table with columns that are of DECIMAL data type, you might encounter the following error:S3 Query Exception (Fetch). Task failed due to an internal error. File ‘https://s3.amazonaws.com/…/<file_name>’ has an incompatible Parquet schema for column ‘<column_name>’column ‘<column_name>’ has an invalid type lengthTo resolve the invalid type length error in Redshift Spectrum, use an external table definition. The table definition must match the "precision" and "scale" values defined in the external file. For example:create external table ext_schema.tablename (c1 int, c2 decimal (6,2)) stored as PARQUET location 's3://.../.../';In this example, the updated values (in the c2 decimal column) for "precision" and "scale" values are set to 6 and 2, respectively. Therefore, the CREATE EXTERNAL TABLE definition values listed in the c2 column must match the values defined in the Apache Parquet file.Internal errorIf you select an external schema from an Amazon Athena catalog, you might receive the following error in Redshift Spectrum:Task failed due to an internal error. File 'https://s3...snappy.parquet has an incompatible Parquet schema for column 's3://.../tbl.a'. Column type: BOOLEAN, Parquet schema:\noptional int32 b [i:26 d:1 r:0]In Redshift Spectrum, the column ordering in the CREATE EXTERNAL TABLE must match the ordering of the fields in the Parquet file. For Apache Parquet files, all files must have the same field orderings as in the external table definition. 
If you skip this ordering or rearrange any data type column, you receive an internal error.Note: Although you can import Amazon Athena data catalogs into Redshift Spectrum, running a query might not work in Redshift Spectrum. In Redshift Spectrum, column names are matched to Apache Parquet file fields. Meanwhile, Amazon Athena uses the names of columns to map to fields in the Apache Parquet file.To resolve the internal error, specify the following column names in the SELECT statement:select col_1, col_2, col_3, .... col_n from athena_schema.tablename;Also, be sure that the AWS Identity and Access Management (IAM) role allows access to Amazon Athena. For more information, see IAM policies for Amazon Redshift Spectrum.Invalid column type errorIf you use Redshift Spectrum to query VARCHAR data type columns from an AWS Glue Data Catalog table, you might receive the following error:<column_name> - Invalid column type for column <column_name>. Type: varchar"Both AWS Glue and Redshift Spectrum support the VARCHAR data type. However, the VARCHAR data type defined by AWS Glue Catalog doesn't include a size parameter (such as VARCHAR (256)). When Redshift Spectrum queries a VARCHAR column defined without a size parameter, the result is an invalid column type error.To resolve the invalid column type error, perform the following steps:1. Run the following AWS Command Line Interface (AWS CLI) syntax to retrieve and store the AWS Glue table data in a local file:aws glue get-table --region us-east-1 --database gluedb --name click_data_json > click-data-table.jsonNote: If you receive an error while running an AWS CLI command, be sure to use the most recent version of the AWS CLI.2. Open the click-data-table.json file using any text editor and remove the outer {"Table": ...} envelope. For example, the updated configuration should now read like this:{"Name": "my-table", ...}3. Remove any fields that aren't allowed in the UpdateTable action. For example, you can remove the following fields:DatabaseNameOwnerCreateTimeUpdateTimeLastAccessTimeCreatedByIsRegisteredWithLakeFormation.4. Modify the STRING column types to "varchar" with the desired size parameter. For example:"Type": "varchar(1000)"5. Use the following command syntax to update your AWS Glue table:aws glue update-table --region us-east-1 --database gluedb --table-input "$(cat click-data-table.json)"6. Check your table definition in AWS Glue and verify that the data types have been modified.7. Query the AWS Glue table for the external schema in Amazon Redshift. For example:create external schema glue_schema from data catalog database ‘gluedb’ iam_role 'arn:aws:iam::111111111111:role/myRedshiftRole' region 'us-east-1';8. Run the following query for click_data_json:select * from glue_schema.click_data_json;Invalid range errorRedshift Spectrum expects that files in Amazon Simple Storage Service (Amazon S3) that belong to an external table aren't overwritten during a query. If this happens, it can result in an error similar to the following:Error: Spectrum Scan Error. Error: HTTP response error code: 416 Message: InvalidRange The requested range is not satisfiableTo prevent the preceding error, be sure Amazon S3 files aren't overwritten while they're queried with Redshift Spectrum.Invalid Parquet version numberRedshift Spectrum verifies the metadata of each Apache Parquet file that it accesses. 
If the verification fails, it might result in an error similar to the following:File 'https://s3.region.amazonaws.com/s3bucket/location/file has an invalid version numberThe verification can fail for the following reasons:The Parquet file was overwritten during the queryThe Parquet file is corruptRelated informationTroubleshooting queries in Amazon Redshift SpectrumFollow" | https://repost.aws/knowledge-center/redshift-spectrum-data-errors |
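For the column-mismatch cases above, it is often quickest to pull the external table definition from the AWS Glue Data Catalog and compare it against the Parquet file's own schema. A sketch with placeholder database and table names:
$ aws glue get-table --database-name spectrum_db --name my_external_table \
    --query 'Table.StorageDescriptor.Columns[].{Name:Name,Type:Type}' --output table
# Compare the listed types and column order with the file schema (for example, from a tool such as parquet-tools).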
How do I use AWS Cloud Map to set up cross-account service discovery for ECS services? | I want to use AWS Cloud Map to set up cross-account service discovery for my Amazon Elastic Container Service (Amazon ECS) services. | "I want to use AWS Cloud Map to set up cross-account service discovery for my Amazon Elastic Container Service (Amazon ECS) services.Short descriptionCreating a private namespace in AWS Cloud Map also creates an Amazon Route 53 hosted zone. Because the Route 53 hosted zone is associated only with the Amazon Virtual Private Cloud (Amazon VPC) where the namespace is created, DNS records are discoverable only from within that VPC. So, other AWS accounts and Amazon VPCs can't discover the Amazon ECS service through DNS.Prerequisites:Two Amazon VPCs in the same or different accounts with the required subnets, security groups, and non-overlapping CIDR.Both Amazon VPCs using AmazonProvidedDNS, with the attributes enableDnsHostnames and enableDnsSupport turned on.One Amazon ECS cluster.The latest version of AWS Command Line Interface (AWS CLI) installed and configured with the appropriate permissions.Note: If you receive errors when running AWS CLI, confirm that you're running a recent version of the AWS CLI.ResolutionImportant: The following steps use examples for the target Amazon VPC, the source Amazon VPC, and the Amazon ECS cluster. In the AWS CLI, replace the example values with your values.The example Amazon ECS cluster example-cluster is in AWS account 1.The example target Amazon VPC example-target-vpc hosts the Amazon ECS task and is in AWS account 1.The example source Amazon VPC example-source-vpc performs the DNS query and is in AWS account 2.Create a namespace and an AWS Cloud Map service1. Configure your AWS CLI with the credentials of account 1.2. Create a private AWS Cloud Map service discovery namespace in account 1:$ aws servicediscovery create-private-dns-namespace --name example-namespace --vpc example-target-vpcNote: The preceding command creates a namespace with name example-namespace and returns OperationID as the output in JSON.3. Use the OperationID to check the status of the namespace and namespace ID:$ aws servicediscovery get-operation --operation-id <example-OperationId> --query 'Operation.{Status: Status, NamespaceID: Targets.NAMESPACE}'4. Locate the hosted zone ID that's associated with the namespace:$ aws servicediscovery get-namespace --id <example-NamespaceID> --query 'Namespace.Properties.DnsProperties.{HoztedZoneId: HostedZoneId}'5. Use the namespace ID to create an AWS Cloud Map service:$ aws servicediscovery create-service \ --name myservice \ --namespace-id <example-NamespaceID> \ --dns-config "NamespaceId=<example-NamespaceID>,RoutingPolicy=MULTIVALUE,DnsRecords=[{Type=A,TTL=60}]"Note: The preceding command creates an AWS Cloud Map service with the name myservice and returns a service ARN as the output.Register a task definition that uses the awsvpc network modeRegister a task definition that's compatible with AWS Fargate and uses the awsvpc network mode:1. 
Create a file named fargate-task.json with the following task definition contents:{ "family": "tutorial-task-def", "networkMode": "awsvpc", "containerDefinitions": [ { "name": "sample-app", "image": "httpd:2.4", "portMappings": [ { "containerPort": 80, "hostPort": 80, "protocol": "tcp" } ], "essential": true, "entryPoint": [ "sh", "-c" ], "command": [ "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' > /usr/local/apache2/htdocs/index.html && httpd-foreground\"" ] } ], "requiresCompatibilities": [ "FARGATE" ], "cpu": "256", "memory": "512"}2. Use the fargate-task.json file to register the task definition:$ aws ecs register-task-definition --cli-input-json file://fargate-task.jsonCreate an Amazon ECS service1. Create a file named ecs-service-discovery.json, and include the contents of the Amazon ECS service that you're creating:Specify launchType as FARGATE.Specify platformVersion as the latest version.Make sure that the securityGroups and subnets parameters belong to the example-target-vpc. You can get the security group and subnet IDs from your Amazon VPC console.Note: Because the task definition uses awsvpc network mode, an awsvpcConfiguration is required.Example JSON file:{ "cluster": "example-cluster", "serviceName": "ecs-service-discovery", "taskDefinition": "tutorial-task-def", "serviceRegistries": [ { "registryArn": "<Cloudmap service ARN>" } ], "launchType": "FARGATE", "platformVersion": "example-latest-version", "networkConfiguration": { "awsvpcConfiguration": { "assignPublicIp": "ENABLED", "securityGroups": [ "example-target-vpc-sg" ], "subnets": [ "example-target-vpc-subnet" ] } }, "desiredCount": 1}2. Use the ecs-service-discovery.json file to create your Amazon ECS service:$ aws ecs create-service --cli-input-json file://ecs-service-discovery.json3. Confirm whether the service task is registered as an instance in the AWS Cloud Map service:$ aws servicediscovery list-instances --service-id <example-cloud-map-service-id>Note: Your Amazon ECS service is now discoverable within example-target-vpc.Associate the source Amazon VPC to the Route 53 hosted zone1. If the Amazon VPCs are in different accounts, then submit an Amazon VPC association authorization request to account 2:$ aws route53 create-vpc-association-authorization --hosted-zone-id <example-HoztedZoneId> --vpc VPCRegion=<example_VPC_region>,VPCId=<example-source-vpc>2. Configure awscli with the credentials of account 2, and associate the example-source-vpc in account 2 with the hosted zone in account 1:$ aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <example-HoztedZoneId> --vpc VPCRegion=<example_VPC_region>,VPCId=<example-source-vpc>3. Check whether the example-source-vpc is added to the hosted zone:aws route53 get-hosted-zone --id <example-HoztedZoneId> --query 'VPCs'4. Check that the Amazon ECS service is discoverable through DNS in example-source-vpc. Query the DNS from an Amazon Elastic Compute Cloud (Amazon EC2) instance with example-source-vpc:$ dig +short example-service.example-namespaceNote: In the preceding command example-service.example-namespace is the DNS name. Replace it with your AWS Cloud Map service and namespace.Set up Amazon VPC peering 1. 
Use Amazon VPC peering to connect example-target-vpc with example-source-vpc.2. Update the route tables.3. Update the security groups.Note: After you set up Amazon VPC peering with the route tables and security groups, resources in example-source-vpc can connect to Amazon ECS tasks in example-target-vpc.If you experience issues when setting up Amazon VPC peering, then see the following AWS Knowledge Center articles:Issues when creating Amazon VPC peering: Why can’t I create an Amazon VPC peering connection with a VPC in another AWS account?Connectivity issues between Amazon VPCs: How do I troubleshoot problems establishing communication over VPC peering?Follow" | https://repost.aws/knowledge-center/fargate-service-discovery-cloud-map |
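If the dig query above still fails from the source VPC, it can help to confirm that the hosted zone association was authorized and completed. A sketch that reuses the placeholder IDs from this entry; run the first command from account 1 and the second from account 2:
$ aws route53 list-vpc-association-authorizations --hosted-zone-id <example-HoztedZoneId>
$ aws route53 list-hosted-zones-by-vpc --vpc-id <example-source-vpc> --vpc-region <example_VPC_region>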
How do I protect my data against accidental EC2 instance termination? | What are some options that AWS offers that can help me protect my data against accidental Amazon Elastic Compute Cloud (Amazon EC2) instance termination? How can I troubleshoot and collect more information about probable termination causes and behaviors? | "What are some options that AWS offers that can help me protect my data against accidental Amazon Elastic Compute Cloud (Amazon EC2) instance termination? How can I troubleshoot and collect more information about probable termination causes and behaviors?ResolutionTo help protect against data loss caused by accidental termination of an Amazon EC2 instance, consider the following options when you configure EC2 infrastructure:Enable termination protection. Termination protection prevents an instance from accidental termination. By default, this option is disabled for EC2 instances. Enable this option to protect your instance from any unintentional termination. For more information, see Enable termination protection.Regularly back up your data. Back up your instance by doing one or more of the following:Create an Amazon Machine Image (AMI). An AMI can capture the data on all the EBS volumes attached to an instance. You can use the AMI to launch a new instance.Schedule regular Amazon Elastic Block Store (Amazon EBS) snapshots.Use AWS Backup.Output data to another AWS service or source. Consider using one of the following services to store the workflows you run on your instance:Amazon Simple Storage Service (Amazon S3)Amazon Relational Database Service (Amazon RDS)Amazon DynamoDBRecreate your instance or restore data from a terminated instance. If you backed up your instances, then you can use the backup to restore the terminated instance. For more information, see How do I recreate a terminated EC2 instance?Troubleshoot termination behaviors to identify the causes of termination. Several issue can cause your instance to terminate immediately on start-up. Or, your instance configuration might lead to instance termination. For more information, see Troubleshoot instance termination (shutting down) and Why did Amazon EC2 terminate my instance?Related informationBest practices for Amazon EC2Best practices for Windows on Amazon EC2How can I automate Amazon EBS snapshot management using Amazon Data Lifecycle Manager?Follow" | https://repost.aws/knowledge-center/accidental-termination |
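Termination protection from the list above can be turned on and verified with the AWS CLI. A minimal sketch with a placeholder instance ID:
$ aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --disable-api-termination
$ aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute disableApiTermination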
How do I troubleshoot CORS errors from my API Gateway API? | I get the error "No 'Access-Control-Allow-Origin' header is present on the requested resource" when I try to invoke my Amazon API Gateway API. How do I troubleshoot this error and other CORS errors from API Gateway? | "I get the error "No 'Access-Control-Allow-Origin' header is present on the requested resource" when I try to invoke my Amazon API Gateway API. How do I troubleshoot this error and other CORS errors from API Gateway?Short descriptionCross-Origin Resource Sharing (CORS) errors occur when a server doesn’t return the HTTP headers required by the CORS standard. To resolve a CORS error from an API Gateway REST API or HTTP API, you must reconfigure the API to meet the CORS standard.For more information on configuring CORS for REST APIs, see Turning on CORS for a REST API resource. For HTTP APIs, see Configuring CORS for an HTTP API.Note: CORS must be configured at the resource level and can be handled using API Gateway configurations or backend integrations, such as AWS Lambda.ResolutionThe following example procedure shows how to troubleshoot the No ‘Access-Control-Allow-Origin’ header present CORS error. However, you can use a similar procedure to troubleshoot all CORS errors. For example: Method not supported under Access-Control-Allow-Methods header errors and No ‘Access-Control-Allow-Headers’ headers present errors.Note: The No 'Access-Control-Allow-Origin' header present error can occur for any of the following reasons:The API isn't configured with an OPTIONS method that returns the required CORS headers.Another method type (such as GET, PUT, or POST) isn't configured to return the required CORS headers.An API with proxy integration or non-proxy integration isn't configured to return the required CORS headers.(For private REST APIs only) The incorrect invoke URL is called, or the traffic isn't being routed to the interface virtual private cloud (VPC) endpoint.Confirm the cause of the errorThere are two ways to confirm the cause of a CORS error from API Gateway:Create an HTTP Archive (HAR) file when you invoke your API. Then, confirm the cause of the error in the file by checking the headers in the parameters returned in the API response.-or-Use the developer tools in your browser to check the request and response parameters from the failed API request.Configure CORS on your API resource that’s experiencing the errorFor REST APIsFollow the instructions to Turn on CORS on a resource using the API Gateway console.For HTTP APIsFollow the instructions in Configuring CORS for an HTTP API.Important: If you configure CORS for an HTTP API, then API Gateway automatically sends a response to preflight OPTIONS requests. This response is sent even if there isn't an OPTIONS route configured for your API. For a CORS request, API Gateway adds the configured CORS headers to the response from an integration.While configuring CORS on your API resource, make sure that you do the following:For Gateway Responses for <api-name> API, choose the DEFAULT 4XX and DEFAULT 5XX check boxes.Note: When you select these default options, API Gateway responds with the required CORS headers, even when a request doesn't reach the endpoint. For example, if a request includes an incorrect resource path, API Gateway still responds with a 403 "Missing Authentication Token" error.For Methods, choose the check box for the OPTIONS method, if it isn't already selected. 
Also, choose the check boxes for all of the other methods that are available to CORS requests. For example: GET, PUT, and POST.Note: Configuring CORS in the API Gateway console adds an OPTIONS method to the resource if one doesn’t already exist. It also configures the OPTIONS method's 200 response with the required Access-Control-Allow-* headers. If you already configured CORS using the console, configuring it again overwrites any existing values.Configure your REST API integrations to return the required CORS headersConfigure your backend AWS Lambda function or HTTP server to send the required CORS headers in its response. Keep in mind the following:Allowed domains must be included in the Access-Control-Allow-Origin header value as a list.For proxy integrations, you can't set up an integration response in API Gateway to modify the response parameters returned by your API's backend. In a proxy integration, API Gateway forwards the backend response directly to the client.If you use a non-proxy integration, you must manually set up an integration response in API Gateway to return the required CORS headers.Note: For APIs with a non-proxy integration, configuring CORS on a resource using the API Gateway console automatically adds the required CORS headers to the resource.(For private REST APIs only) Check the private DNS setting of your interface endpointFor private REST APIs, determine if private DNS is activated on the associated interface VPC endpoint.If private DNS is activatedMake sure that you're invoking your private API from within your Amazon Virtual Private Cloud (Amazon VPC) using the private DNS name.If private DNS isn't activatedYou must manually route traffic from the invoke URL to the IP addresses of the VPC endpoint.Note: You must use the following invoke URL, whether private DNS is activated or not:https://api-id.execute-api.region.amazonaws.com/stage-nameMake sure that you replace the values for api-id, region, and stage-name with the required values for your API. For more information, see How to invoke a private API.Important: If CORS is configured when private DNS isn't activated, keep in mind the following limitations:You can't use endpoint-specific public DNS names to access your private API from within your Amazon VPC.You can't use the Host header option, because requests from a browser don't allow Host header manipulation.You can’t use the x-apigw-api-id custom header, because it initiates a preflight OPTIONS request that doesn't include the header. API calls that use the x-apigw-api-id header won’t reach your API.Related informationTesting CORSTurn on CORS on a resource using the API Gateway import APIFollow" | https://repost.aws/knowledge-center/api-gateway-cors-errors
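Once CORS is configured as described above, the preflight behavior can be tested outside a browser with curl. A sketch with a placeholder invoke URL and origin:
$ curl -s -i -X OPTIONS "https://api-id.execute-api.us-east-1.amazonaws.com/stage-name/resource" \
    -H "Origin: https://www.example.com" \
    -H "Access-Control-Request-Method: GET" \
    -H "Access-Control-Request-Headers: Content-Type"
# A correctly configured API returns Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers in the response.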
How do I create a snapshot of an Amazon EBS RAID array? | I want to create snapshots of my Amazon Elastic Block Store (Amazon EBS) volumes that are configured in a RAID array. How can I take crash consistent snapshots across multiple Amazon EBS volumes on an Amazon Elastic Compute Cloud (Amazon EC2) instance? | "I want to create snapshots of my Amazon Elastic Block Store (Amazon EBS) volumes that are configured in a RAID array. How can I take crash consistent snapshots across multiple Amazon EBS volumes on an Amazon Elastic Compute Cloud (Amazon EC2) instance?Short descriptionTo create snapshots for Amazon EBS volumes that are configured in a RAID array, use the multi-volume snapshot feature of your instance. The multi-volume snapshot feature is a one-click solution that takes individual, point-in-time backups of all Amazon EBS volumes attached to your instance. This process makes sure that the backup of your Amazon EBS volumes attached to the instance are in sync relative to each other, resulting in an accurate restore of the Amazon EBS volumes in RAID.Note: You can automate snapshot creation using the Amazon Data Lifecycle Manager.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Use the AWS Management Console or AWS CLI to create snapshots.Note: After the snapshots are created, each snapshot is treated like an individual snapshot. It's a best practice that you tag your multiple volume snapshots to manage them together during operations such as restore, copy, or retention.AWS Management Console1. Open the Amazon EC2 console.2. Under Elastic Block Store, select Snapshots.3. Select Create Snapshot.4. Choose Instance as the resource type.5. Select the instance ID from the drop-down menu.6. (Optional) Enter a Description of the snapshot.7. (Optional) Select Copy Tags to automatically copy tags from the source volume to the corresponding snapshots.8. Click Create Snapshot.AWS Command Line Interface (AWS CLI)Use the CreateSnapshots API.create-snapshots[--description <value>]--instance-specification <value>[--tag-specifications <value>][--dry-run | --no-dry-run][--copy-tags-from-source <value>][--cli-input-json <value>][--generate-cli-skeleton <value>]For parameter descriptions and example calls, refer to AWS CLI Command Reference - create-snapshots.Related informationTaking crash-consistent snapshots across multiple Amazon EBS volumes on an Amazon EC2 instanceCreating an Amazon EBS snapshot - Multi-volume snapshotsRAID configuration on WindowsRAID configuration on LinuxBenchmark EBS volumesFollow" | https://repost.aws/knowledge-center/snapshot-ebs-raid-array |
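A concrete form of the create-snapshots call referenced above, with a placeholder instance ID and example tags:
$ aws ec2 create-snapshots --instance-specification InstanceId=i-0123456789abcdef0 \
    --description "Crash-consistent backup of RAID array" --copy-tags-from-source volume \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=backup-set,Value=raid-array}]'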
How do I troubleshoot issues related to a signed URL or signed cookies in CloudFront? | "I'm using Amazon CloudFront and a signed URL or signed cookies to secure private content. However, I'm receiving a 403 Access Denied error." | "I'm using Amazon CloudFront and a signed URL or signed cookies to secure private content. However, I'm receiving a 403 Access Denied error.ResolutionIf Amazon CloudFront encounters an issue with a signed URL or signed cookies, then you might receive a 403 Access Denied error. A signed URL or signed cookie doesn't include the correct signerWhen you turn on Restrict Viewer Access in a behavior, you must determine a signer. A signer is either a trusted key group that you create in CloudFront, or an AWS account that contains a CloudFront key pair. The following 403 error messages indicate that the signer information is missing or incorrect:The error includes the message "Missing Key-Pair-Id query parameter or cookie value."This message indicates a missing or empty Key-Pair-Id query parameter when using a signed URL. Or, it indicates a missing CloudFront-Key-Pair-ID query string parameter in a signed cookie.The error includes the message "Unknown Key."This message indicates that CloudFront can't verify signer information through Key-Pair-ID (for signed URLs) or CloudFront-Key-Pair-ID (for signed cookies). To resolve this issue, confirm that the correct Key-Pair-ID value is used for a signed URL or CloudFront-Key-Pair-ID for signed cookies:If you use a signed URL, then find and note the value of Key-Pair-ID.-or-If you use signed cookies, then find and note the value of CloudFront-Key-Pair-ID.Then, find the Key ID and confirm that it matches the Key-Pair-ID or CloudFront-Key-Pair-ID:1. Open the CloudFront console. In the navigation menu, choose Distributions.2. Select your distribution. Then, choose the Behaviors tab.3. Select the behavior name, and then choose Edit.4. Find the Restrict viewer access setting. Note: If it's set to Yes, then requests for files that match the path pattern of this cache behavior must use the signed URL or signed cookie.5. After you confirm that the Restrict view access field is set to Yes, check the Trusted authorization type field.6. If the Trusted authorization type value setting is Trusted key groups, then note the name of the trust key group.Find the public key IDs for a trust key group:Open the CloudFront console. Choose Key Groups.In the list of key groups, choose the name of the trusted key group that you noted.Confirm that the Key-Pair-Id or CloudFront-Key-Pair-Id value that you noted in step 1 matches one of the public key IDs in the trusted key group.7. If the value of Trusted authorization type is Trusted Signer, then CloudFront uses the credentials that AWS generates. In this case, the value of Key-Pair-Id or CloudFront-Key-Pair-Id that you noted in step 1 must match the Access Key ID of the CloudFront credentials. To find the Access Key ID of CloudFront credentials, see Creating key pairs for your signers.A signed URL or signed cookie isn't sent at a valid timeWhen you create a signed URL or signed cookie, a policy statement in JSON format specifies the restrictions on the signed URL. This statement determines how long the URL is valid. 
CloudFront returns a 403 Access Denied error in any of the following scenarios:A signed URL is sent at time that's greater than the Expires value in a signed URL that uses a canned policy.A signed cookie is sent at a time that's greater than the CloudFront-Expires value in a signed cookie that uses a canned policy.A signed URL or a signed cookie is sent at time that's greater than the DateLessThan value in a custom policy. Or, it's sent at a time that's less than the DateGreaterThan value.Note: The Expires, CloudFront-Expires, DateLessThan, and DateGreaterThan values are in Unix time format (in seconds) and Coordinated Universal Time (UTC). For example, January 1, 2013 10:00 AM UTC converts to 1357034400 in Unix time format. If you use epoch time, then use a 32-bit integer for a date that's no later than 2147483647 (January 19, 2038 at 03:14:07 UTC).The Policy parameter in a signed URL or the CloudFront-Policy attribute in a signed cookie indicates that you use a custom policy. The policy statement is in JSON format and is base64 encoded. To find out the DateLessThan or DateGreaterThan value, use a 64base decode command.Example of a base64 encoded custom policy:eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cDovL2QxMTExMTFhYmNkZWY4LmNsb3VkZnJvbnQubmV0L2dhbWVfZG93bmxvYWQuemlwIiwiQ29uZGl0aW9uIjp7IklwQWRkcmVzcyI6eyJBV1M6U291cmNlSXAiOiIxOTIuMC4yLjAvMjQifSwiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE0MjY1MDAwMDB9fX1dfQ__To decode a custom policy that's in base64 encoded format into JSON format, run the following Linux command. This example uses the value from the previous example. Replace the policy value with your own custom policy value.$ echo -n eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cDovL2QxMTExMTFhYmNkZWY4LmNsb3VkZnJvbnQubmV0L2dhbWVfZG93bmxvYWQuemlwIiwiQ29uZGl0aW9uIjp7IklwQWRkcmVzcyI6eyJBV1M6U291cmNlSXAiOiIxOTIuMC4yLjAvMjQifSwiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE0MjY1MDAwMDB9fX1dfQ__ | base64 -diYou receive an output that looks similar to the following message:{ "Statement": [{ "Resource": "http://d111111abcdef8.cloudfront.net/game_download.zip", "Condition": { "IpAddress": { "AWS:SourceIp": "192.0.2.0/24" }, "DateLessThan": { "AWS:EpochTime": 1426500000 } } }] }A signed URL or signed cookie has more than one statement in the policyIf more than one statement is included in a canned policy or custom policy, then CloudFront returns a 403 Access Denied error.To troubleshoot this issue, run the Linux command in the previous section to check the custom policy statement. Verify your code details, and then confirm that only one statement is included in the canned policy or custom policy.A signed URL or signed cookie has an incorrect base URL in the policyCloudFront returns a 403 Access Denied error in any of the following scenarios:The base URL is abbreviated (www.example.com) in the Resource key in the policy statement. 
Use a full URL (http://www.example.com).The base URL doesn't have UTF-8 character encoding.The base URL doesn't include all punctuation and parameter names.The HTTP or HTTPS protocol in the base URL doesn't match the protocol that's used in a request that's sending a signed URL or signed cookies.The base URL's domain name doesn't match the host header that the user agent uses to send a signed URL or signed cookies.The base URL query string includes characters that aren't valid.A signed URL or signed cookie has an incorrect signature in the policyCloudFront returns a 403 Access Denied error in any of the following scenarios:The policy statement includes white spaces (including tabs and newline characters).The canned policy or custom policy isn't formatted as a string before it's hashed. This happens if you create a signed URL or signed cookie without using an AWS SDK.The policy isn't hashed before the signature is generated. This happens if you create the signed URL or signed cookie without using an AWS SDK.For signature best practices when using a signed URL or signed cookies, see Code examples for creating a signature for a signed URL.A signed URL or signed cookie was sent from an IP address or IP range that's not permitted in the custom policyCloudFront returns a 403 Access Denied error in any of the following scenarios:A signed URL or signed cookie is sent from an IPv6 IP address.A signed URL or signed cookie isn't sent from an IPv4 IP address or an IPv4 IP range that's specified in the custom policy.The IpAddress key is available only in the custom policy that's in a signed URL or a signed cookie. IP addresses in IPv6 format aren't supported. If you use a custom policy that includes IpAddress, then don't turn on IPv6 for the distribution.A signed cookie doesn't include the Domain and Path attributes in Set-cookie response headersCloudFront returns a 403 Access Denied error if cookies are returned from CloudFront but aren't included in later requests to the same domain. In this case, check the Domain and Path cookie attributes in the Set-Cookie response header.The Domain value is the domain name for the requested file. If you don't specify a Domain attribute, then the default value is the domain name in the URL. This applies only to the specified domain name, not to subdomains. If you specify a Domain attribute, then it also applies to subdomains.If you specify a Domain attribute, then the domain name in the URL and the value of the Domain attribute must match. You can specify the domain name that CloudFront assigns to your distribution (for example, d111111abcdef8.cloudfront.net), but you can't specify *.cloudfront.net for the domain name. To use an alternate domain name, such as example.com, in URLs, add an alternate domain name to your distribution.The Path attribute is the path for the requested file. If you don't specify a Path attribute, then the default value is the path in the URL.Follow" | https://repost.aws/knowledge-center/cloudfront-troubleshoot-signed-url-cookies
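For the signature issues above, it can help to generate a known-good signed URL with the AWS CLI and compare it with the one your application produces. This is a sketch; the distribution domain, key pair ID, and private key file are placeholders, and it assumes the aws cloudfront sign command is available in your AWS CLI version.
$ aws cloudfront sign --url "https://d111111abcdef8.cloudfront.net/game_download.zip" \
    --key-pair-id K2JCJMDEHXQW5F --private-key file://private_key.pem \
    --date-less-than 2025-01-01T00:00:00Z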
How do I allow Amazon ECS tasks to pull images from an Amazon ECR image repository? | How do I allow Amazon Elastic Container Service (Amazon ECS) tasks to pull images from an Amazon Elastic Container Registry (Amazon ECR) image repository? | "How do I allow Amazon Elastic Container Service (Amazon ECS) tasks to pull images from an Amazon Elastic Container Registry (Amazon ECR) image repository?Short descriptionTo access the Amazon ECR image repository with your launch type, choose one of these options:For Amazon Elastic Compute Cloud (Amazon EC2) launch types, you must provide permissions to ecsTaskExecutionRole or the instance profile associated with the container instance. However, it's always a best practice to provide Amazon ECR permissions to ecsTaskExecutionRole. If permissions are provided to both the instance and the role, then ecsTaskExecutionRole takes precedence.For AWS Fargate launch types, you must grant your Amazon ECS task execution role permission to access the Amazon ECR image repository.ResolutionFor EC2 launch typesOpen the AWS Identity and Access Management (IAM) console.In the navigation pane, choose Roles, and then choose Create role.Choose the AWS service role type.In the Use Case section, select EC2. Then, select Next.Choose the default AmazonEC2ContainerServiceforEC2Role managed policy, and then choose Next.Note: The AmazonEC2ContainerServiceforEC2Role policy also allows you to register container instances to your ECS cluster and enable log streams in Amazon CloudWatch.Add tags to your policy, if desired, and then choose Next.For Role name, enter a unique name (such as ECSRoleforEC2), and then choose Create role.Launch a new container instance using the latest Amazon ECS-optimized Amazon Linux AMI.Attach the role that you created to the new container instance.Create a task definition.Important: In the containerDefinitions section of your task definition, specify the ECR image aws_account_id.dkr.ecr.region.amazonaws.com/repository:tag as the image property.Run a task or a service using the task definition that you created in step 10.(Optional) If you don't want to provide permissions to an instance profile, give permissions to the ECS Task Execution Role. Then, run a task or a service using the task definition that you created in step 10.For Fargate launch typesAn Amazon ECS task execution role is automatically created in the Amazon ECS console first-run experience. If you can't find the role or the role is deleted, complete these steps:Open the IAM console.In the navigation pane, choose Roles, and then choose Create role.In the Select type of trusted entity section, choose Elastic Container Service.For Select your use case, choose Elastic Container Service Task, and then choose Next.In the Attach permissions policy section, search for AmazonECSTaskExecutionRolePolicy, select the policy, and then choose Next.Note: This policy also provides permissions to use the awslogs log driver.For the Role Name, enter ecsTaskExecutionRole, and then choose Create role.Create a task definition.Important: In the containerDefinitions section of your task definition, specify the ECR image aws_account_id.dkr.ecr.region.amazonaws.com/repository:tag as the image property. 
Specify the IAM role created in step 6.Run a task or a service using the task definition that you created in step 7.Your task or service can now pull images from the Amazon ECR image repository.Related informationUsing Amazon ECR Images with Amazon ECSFollow" | https://repost.aws/knowledge-center/ecs-tasks-pull-images-ecr-repository |
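The execution role permissions described above can also be attached with the AWS CLI. A minimal sketch that assumes a role named ecsTaskExecutionRole already exists with the Amazon ECS tasks trust policy:
$ aws iam attach-role-policy --role-name ecsTaskExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Then reference the role in the task definition, for example: "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole"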
Why did I receive the AWS account ID status "Verification failed" with GuardDuty? | "To manage multiple accounts in Amazon GuardDuty, I invited an AWS account to associate with my AWS account using AWS Organizations. The status of the member account is "Verification failed."" | "To manage multiple accounts in Amazon GuardDuty, I invited an AWS account to associate with my AWS account using AWS Organizations. The status of the member account is "Verification failed."Short descriptionTo manage multiple accounts in GuardDuty, you must choose a single AWS account to be the administrator account for GuardDuty. You can then associate other AWS accounts with the administrator account as member accounts.You can associate accounts with a GuardDuty administrator account with either of the following:An AWS Organizations organization that both accounts are members of.An invitation that's sent through GuardDuty.To send an invitation from the GuardDuty administrator account, you must specify the member account's account ID and email address. The "Verification failed" status indicates that the root email address or the account ID that you added as a GuardDuty member account are incorrect.For more information, see Managing multiple accounts in Amazon GuardDuty.ResolutionFollow these steps to designate a GuardDuty delegated administrator, and add member accounts using the GuardDuty console.Important:Be sure to use the root email address and account ID associated with the account.GuardDuty must be turned on in the member account before sending an invitation.You can bulk add accounts by uploading a .csv file. Be sure to specify the account ID and primary email address separated by a comma on separate lines. The first line of the .csv file must contain the account ID and email header in the following format:Account ID,Email111111111111,primary1@example.com222222222222,primary2@example.comYou can also use Python scripts to turn on GuardDuty in multiple accounts simultaneously. For this method, make sure that the accounts in the input .csv file are listed one per line. Use the account ID and email address without headers in the following format:111111111111,primary1@example.com222222222222,primary2@example.comAfter the GuardDuty member account accepts the invitation, the Status column for your member account changes to Enabled in the administrator account.Related informationHow do I set up a trusted IP address list for GuardDuty?Follow" | https://repost.aws/knowledge-center/guardduty-verification-failed |
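Member accounts can also be added from the administrator account with the AWS CLI, which makes it easier to spot a mistyped account ID or root email address. A sketch with placeholder values:
$ aws guardduty create-members --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
    --account-details AccountId=111111111111,Email=primary1@example.com
$ aws guardduty get-members --detector-id 12abc34d567e8fa901bc2d34e56789f0 --account-ids 111111111111 \
    --query 'Members[].{Account:AccountId,Status:RelationshipStatus}'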
How do I analyze my audit logs using Redshift Spectrum? | I want to use Amazon Redshift Spectrum to analyze my audit logs. | "I want to use Amazon Redshift Spectrum to analyze my audit logs.Short descriptionBefore you use Redshift Spectrum, complete the following tasks:1. Turn on your audit logs.Note: It might take some time for your audit logs to appear in your Amazon Simple Storage Service (Amazon S3) bucket.2. Create an AWS Identity and Access Management (IAM) role.3. Associate the IAM role with your Amazon Redshift cluster.To query your audit logs in Redshift Spectrum, create external tables, and then configure them to point to a common folder (used by your files). Then, use the hidden $path column and regex function to create views that generate the rows for your analysis.ResolutionTo query your audit logs in Redshift Spectrum, follow these steps:1. Create an external schema:create external schema s_audit_logs from data catalog database 'audit_logs' iam_role 'arn:aws:iam::your_account_number:role/role_name' create external database if not exists;Replace your_account_number to match your real account number. For role_name, specify the IAM role attached to your Amazon Redshift cluster.2. Create the external tables.Note: In the following examples, replace bucket_name, your_account_id, and region with your bucket name, account ID, and AWS Region.Create a user activity logs table:create external table s_audit_logs.user_activity_log(logrecord varchar(max))ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 's3://bucket_name/logs/AWSLogs/your_account_id/redshift/region'Create a connection log table:CREATE EXTERNAL TABLE s_audit_logs.connections_log(event varchar(60), recordtime varchar(60),remotehost varchar(60), remoteport varchar(60),pid int, dbname varchar(60),username varchar(60), authmethod varchar(60),duration int, sslversion varchar(60),sslcipher varchar(150), mtu int,sslcompression varchar(70), sslexpansion varchar(70),iamauthguid varchar(50), application_name varchar(300))ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 's3://bucket_name/logs/AWSLogs/your_account_id/redshift/region';Create a user log table:create external table s_audit_logs.user_log(userid varchar(255),username varchar(500),oldusername varchar(500),usecreatedb varchar(50),usesuper varchar(50),usecatupd varchar(50),valuntil varchar(50),pid varchar(50),xid varchar(50),recordtime varchar(50))ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' LOCATION 's3://bucket_name/logs/AWSLogs/your_account_id/redshift/region';3. Create a local schema to view the audit logs:create schema audit_logs_views;4. 
To access the external tables, create views in a database with the WITH NO SCHEMA BINDING option:CREATE VIEW audit_logs_views.v_connections_log AS select * FROM s_audit_logs.connections_log WHERE "$path" like '%connectionlog%' with no schema binding;The returned files are restricted by the hidden $path column to match the connectionlog entries.In the following example, the hidden $path column and regex function restrict the files that are returned for v_useractivitylog:CREATE or REPLACE VIEW audit_logs_views.v_useractivitylog AS SELECT logrecord, substring(logrecord,2,24) as recordtime, replace(regexp_substr(logrecord,'db=[^" "]*'),'db=','') as db, replace(regexp_substr(logrecord,'user=[^" "]*'),'user=','') AS user, replace(regexp_substr(logrecord, 'pid=[^" "]*'),'pid=','') AS pid, replace(regexp_substr(logrecord, 'userid=[^" "]*'),'userid=','') AS userid, replace(regexp_substr(logrecord, 'xid=[^" "]*'),'xid=','') AS xid, replace(regexp_substr(logrecord, '][^*]*'),']','') AS query FROM s_audit_logs.user_activity_log WHERE "$path" like '%useractivitylog%' with no schema binding;The returned files match the useractivitylog entries.Note: There's a limitation that's related to the multi-row queries in user activity logs. It's a best practice to query the column log records directly.Related informationAnalyze database audit logs for security and compliance using Amazon Redshift SpectrumSTL_CONNECTION_LOGFollow" | https://repost.aws/knowledge-center/redshift-spectrum-audit-logs
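Before creating the external schema and views above, it is worth confirming that audit logging is delivering files to the expected Amazon S3 prefix. A sketch with placeholder cluster, bucket, and account values:
$ aws redshift describe-logging-status --cluster-identifier my-cluster
$ aws s3 ls s3://bucket_name/logs/AWSLogs/your_account_id/redshift/ --recursive | head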
How can I map Availability Zones across my accounts? | "An Availability Zone (AZ) in my account doesn't map to the same location as an AZ with the same name in a different account. I want to coordinate the distribution of my resources to the AZs across my accounts to confirm fault tolerance, improve performance, or optimize costs. How can I map AZs across accounts?" | "An Availability Zone (AZ) in my account doesn't map to the same location as an AZ with the same name in a different account. I want to coordinate the distribution of my resources to the AZs across my accounts to confirm fault tolerance, improve performance, or optimize costs. How can I map AZs across accounts?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.AZ names don't map to the same location across accounts. However, you can use Availability Zone IDs (AZ IDs) to map AZ across accounts:Open the AWS Resource Access Manager console.In the navigation bar, select your Region from the Region selector.In the Your AZ ID pane on the right, review the list of AZ names and their corresponding AZ IDs.Note: You can also use the aws ec2 describe-availability-zones --region "region-name" command in the AWS CLI to generate AZ ID information. Be sure to replace "region-name" with the name of your AWS region.Identify which AZ have the same AZ IDs across your accounts. AZs that have the same AZ IDs map to the same physical location.You can gain better insight into your resources by using the information from performing the previous steps. Then, you can make informed decisions on things such as:Launching new instances in a subnet that was created in your desired AZ.Moving an EC2 instance to another subnet, Availability Zone, or VPC.Related informationUse consistent Availability Zones in VPCs across different AWS accountsFollow" | https://repost.aws/knowledge-center/vpc-map-cross-account-availability-zones |
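A minimal sketch of the AZ ID lookup described above; running it once per account (for example, with a named profile for the second account) shows which zone names map to the same physical location:
$ aws ec2 describe-availability-zones --region us-east-1 \
    --query 'AvailabilityZones[].{Name:ZoneName,Id:ZoneId}' --output table
$ aws ec2 describe-availability-zones --region us-east-1 --profile second-account \
    --query 'AvailabilityZones[].{Name:ZoneName,Id:ZoneId}' --output table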
How can I invoke a Lambda function asynchronously from my Amazon API Gateway API? | I want to invoke an AWS Lambda function asynchronously instead of synchronously for my Amazon API Gateway API. | "I want to invoke an AWS Lambda function asynchronously instead of synchronously for my Amazon API Gateway API.ResolutionREST APIsIn Lambda non-proxy integration, the backend Lambda function is invoked synchronously by default. You can configure the Lambda function for a Lambda non-proxy integration to be invoked asynchronously by specifying 'Event' as the Lambda invocation type.1. Open the API Gateway console, choose APIs, and then choose your REST API.2. In Resources, choose GET, and then choose Integration Request.3. In Integration type, choose Lambda Function.4. Expand HTTP Headers, and then choose Add header.5. For Name, enter X-Amz-Invocation-Type.6. For Mapped from, enter 'Event'.7. Redeploy the REST API.To invoke the Lambda function with the option for either asynchronous or synchronous, add an InvocationType header.1. Open the API Gateway console, choose APIs, and then choose your REST API.2. In Resources, choose GET, and then choose Method Request.3. In Request Validator, choose the edit icon, choose the dropdown list, and then choose Validate query string parameters and headers.4. Choose the update icon to save changes.5. Expand HTTP Headers, and then choose Add header.6. For Name, enter InvocationType, and then choose Required.7. In Integration Request, Expand HTTP Headers, and then choose Add header.8. For Name, enter X-Amz-Invocation-Type.9. For Mapped from, enter method.request.header.InvocationType.10. Redeploy the REST API.Clients can include the InvocationType: Event header in API requests for asynchronous invocations or InvocationType: RequestResponse for synchronous invocations.For more information, see Set up asynchronous invocation of the backend Lambda function.HTTP APIsHTTP APIs only support proxy integrations for Lambda. You can't set the X-Amz-Invocation-Type header in the API Gateway integration for HTTP APIs. You can use two Lambda functions with one acting as proxy.Example configuration:HTTP API --> Invoke Lambda1 synchronously --> Invoke Lambda2 asynchronouslyFollow" | https://repost.aws/knowledge-center/api-gateway-invoke-lambda |
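After redeploying the REST API as described above, the header mapping can be verified with curl. A sketch with a placeholder invoke URL; the asynchronous call typically returns quickly with an empty payload because Lambda accepts the event before the function finishes.
$ curl -s -i -H "InvocationType: Event" "https://api-id.execute-api.us-east-1.amazonaws.com/stage-name/resource"
$ curl -s -i -H "InvocationType: RequestResponse" "https://api-id.execute-api.us-east-1.amazonaws.com/stage-name/resource"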
How can I decode an authorization failure message after receiving an "UnauthorizedOperation" error during an EC2 instance launch? | "I'm trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance, but I get the error "An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message encoded-message". How do I resolve this?" | "I'm trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance, but I get the error "An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message encoded-message". How do I resolve this?Short descriptionThe "UnauthorizedOperation" error indicates that the AWS Identity and Access Management (IAM) role or user trying to perform the operation doesn't have the required permissions to launch EC2 instances. Because the error involves an encoded message, use the AWS Command Line Interface (AWS CLI) to decode the message. This decoding provides more details regarding the authorization failure.PrerequisiteThe IAM user or role attempting to decode the encoded message must have permission to use the DecodeAuthorizationMessage API action with an IAM policy. If the user or role doesn't have this permission, the decode action fails and the following error message appears:"Error: A client error (AccessDenied) occurred when calling the DecodeAuthorizationMessage operation: User: xxx is not authorized to perform: (sts:DecodeAuthorizationMessage) action".Resolution1. Verify that the AWS CLI is installed and configured on your machine with the following command:$ aws --versionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.2. Run the decode-authorization-message command. Replace encoded-message with the exact encoded message contained in the error message.$ aws sts decode-authorization-message --encoded-message encoded-message3. The decoded message lists the required permissions that are missing from the IAM role or user policy.Example encoded message:Launch Failed - You are not authorized to perform this operation. 
Encoded authorization failure message: 4GIOHlTkIaWHQD0Q0m6XSnuUMCm-abcdefghijklmn-abcdefghijklmn-abcdefghijklmnExample decoded message:$ aws sts decode-authorization-message --encoded-message 4GIOHlTkIaWHQD0Q0m6XSnuUMCm-abcdefghijklmn-abcdefghijklmn-abcdefghijklmn{ "DecodedMessage": "{\"allowed\":false,\"explicitDeny\":false,\"matchedStatements\":{\"items\":[]},\"failures\":{\"items\":[]},\"context\":{\"principal\":{\"id\":\"ABCDEFGHIJKLMNO\",\"name\":\"AWS-User\",\"arn\":\"arn:aws:iam::accountID:user/test-user\"},\"action\":\"iam:PassRole\",\"resource\":\"arn:aws:iam::accountID:role/EC2_instance_Profile_role\",\"conditions\":{\"items\":[{\"key\":\"aws:Region\",\"values\":{\"items\":[{\"value\":\"us-east-2\"}]}},{\"key\":\"aws:Service\",\"values\":{\"items\":[{\"value\":\"ec2\"}]}},{\"key\":\"aws:Resource\",\"values\":{\"items\":[{\"value\":\"role/EC2_instance_Profile_role\"}]}},{\"key\":\"iam:RoleName\",\"values\":{\"items\":[{\"value\":\"EC2_instance_Profile_role\"}]}},{\"key\":\"aws:Account\",\"values\":{\"items\":[{\"value\":\"accountID\"}]}},{\"key\":\"aws:Type\",\"values\":{\"items\":[{\"value\":\"role\"}]}},{\"key\":\"aws:ARN\",\"values\":{\"items\":[{\"value\":\"arn:aws:iam::accountID:role/EC2_instance_Profile_role\"}]}}]}}}"}The preceding error message indicates that the request failed to call RunInstances because AWS-User didn't have permission to perform the iam:PassRole action on the arn:aws:iam::accountID:role/EC2_instance_Profile_role.4. Edit the IAM policy associated with the IAM role or user to add the missing required permissions listed in the preceding step.Related informationWhy can't I run AWS CLI commands on my EC2 instance?Why am I unable to start or launch my EC2 instance?Follow" | https://repost.aws/knowledge-center/ec2-not-auth-launch |
How can I change the encryption key used by my Amazon RDS DB instances and DB snapshots? | I want to update the encryption key used by my Amazon Relational Database Service (Amazon RDS) DB instances and DB snapshots so that they use a new encryption key. | "I want to update the encryption key used by my Amazon Relational Database Service (Amazon RDS) DB instances and DB snapshots so that they use a new encryption key.ResolutionYou can't change the encryption key used by an Amazon RDS DB instance. However, you can create a copy of the RDS DB instance, and then choose a new encryption key for that copy.Note: Data in unlogged tables might not be restored using snapshots. For more information, review Best practices for working with PostgreSQL.To create a copy of an RDS DB instance with a new encryption key, do the following:Open the Amazon RDS console.In the navigation pane, choose Databases.Choose the DB instance for which you want to create a manual snapshot.Create a manual snapshot for your DB instance.In the navigation pane, choose Snapshots.Select the manual snapshot that you created.Choose Actions, and then choose Copy Snapshot.Under Encryption, select Enable Encryption.For AWS KMS Key, choose the new encryption key that you want to use.Choose Copy snapshot.Restore the copied snapshot.The new RDS DB instance uses your new encryption key.Confirm that your new database has all necessary data and your application is using the new database. When you no longer need the old RDS DB instance, you can delete the instance.Related informationEncrypting Amazon RDS resourcesBacking up and restoring an Amazon RDS DB instanceFollow" | https://repost.aws/knowledge-center/update-encryption-key-rds |
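The copy-and-restore flow described above can also be scripted with the AWS CLI. A minimal sketch, where the snapshot names, instance identifiers, and KMS key ARN are hypothetical placeholders:

```bash
# 1. Create a manual snapshot of the source DB instance
aws rds create-db-snapshot \
  --db-instance-identifier my-source-db \
  --db-snapshot-identifier my-source-db-snapshot

# 2. Copy the snapshot, encrypting the copy with the new KMS key
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier my-source-db-snapshot \
  --target-db-snapshot-identifier my-source-db-snapshot-newkey \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

# 3. Restore a new DB instance from the re-encrypted copy
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-new-db \
  --db-snapshot-identifier my-source-db-snapshot-newkey
```

Wait for each step to reach the available status before starting the next one.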
How do I troubleshoot high latency in my API Gateway requests that are integrated with Lambda? | Response times are slow when I make requests to an Amazon API Gateway API that’s integrated with an AWS Lambda function. How do I determine the cause of high latency? | "Response times are slow when I make requests to an Amazon API Gateway API that’s integrated with an AWS Lambda function. How do I determine the cause of high latency?ResolutionHigh latency occurs when an API endpoint that’s integrated with a Lambda function takes too long to send responses to a client. Review the metrics for API Gateway to identify the section of the request/response flow that’s causing high latency. After you determine the cause of high latency, you can work to reduce the delays.Filter CloudWatch metrics to review latency metrics on the APITo identify the section of the request/response flow that’s causing high latency, first perform the following steps:Observe the latency of the client after sending a request to the API.After you note the overall latency, open the Amazon CloudWatch console. In the left navigation pane, choose Metrics, All metrics. In the metrics search box, enter APIGateway. From the search results, choose API Gateway, ApiId.In the list of APIs, filter for the specified API by using the API ID or the API name. After filtering, check the IntegrationLatency and Latency check boxes.Note: The API ID and API name are available from the API Gateway console.Open the Graphed metrics tab. For Statistic, choose Maximum. For Period, choose 1 minute. Above the graph, select the Custom time period. Choose the time frame during which the client experienced high latency.Review both the IntegrationLatency and Latency metrics. Note the values and timestamps when these metrics have high values. The values can explain the cause of high latency.Compare metrics to identify the cause of high latencyContinue to review the metrics related to the request/response flow to find the cause of high latency:Compare the API Gateway Latency metric to the overall latency value observed at the client.For example, an API has a Latency metric with a Maximum value that’s approximately equal to the Max Latency value at the client. These values suggest that the maximum delay in the request/response flow is the time taken by API Gateway to process requests. API Gateway processing time includes the time taken to send requests to Lambda, wait for responses from Lambda, and send responses to a client.Compare the IntegrationLatency metric with the Latency metric for the API.For example, the IntegrationLatency metric is approximately equal to the Latency metric. These values indicate that latency at the API is primarily caused by backend requests sent to Lambda that are taking longer to respond. The IntegrationLatency metric includes the time between API Gateway sending a request and API Gateway receiving a response from the backend.When the IntegrationLatency metric is low compared to the Latency metric for the API, the backend response times are low. In this scenario, it takes longer to process the API requests or responses.For example, mapping templates configured in the API or an API Gateway Lambda authorizer both might create delays.When the Latency metric for the API is much lower than the latency observed at the client, the route might be causing delays. 
Review the route between the client and API Gateway to confirm whether there are any intermediate endpoints that are adding delays.For example, private VPN connections or proxies might create delays.View Lambda metrics to identify the cause of high IntegrationLatencyFinally, focus on the Lambda metrics related to the request/response flow to find the cause of high IntegrationLatency:Check the Lambda function Duration metric to confirm whether the Lambda function’s execution time is longer. If the Lambda function’s execution time has increased, review the CloudWatch log to find the section of code that’s causing high latency. By default, Lambda functions log the START, END, and REPORT statements in CloudWatch logs. Add custom log statements at each logical section of the Lambda function code to get verbose CloudWatch logs.If the Duration metric didn't change during the period of high latency at the client, determine if the initialization time increased. Initialization time in a Lambda function is the time taken to set up the execution environment to process a request. Requests that come from API Gateway might require a new environment for processing. This is set up through Lambda. Typically, the code that’s present outside the Lambda function handler runs during the initialization time. Code that takes longer to complete can cause delays for the overall response times to the client.Note: Initialization time is known as INIT or cold start.Confirm whether there’s any increase in the initialization time’s Duration by verifying the report statements in the Lambda function logs. Initialization time that's high for some requests can cause an increase in the IntegrationLatency metric for API Gateway.Related informationWorking with metrics for HTTP APIsAmazon API Gateway dimensions and metricsMonitoring WebSocket API execution with CloudWatch metricsViewing metrics on the CloudWatch consoleLambda runtime environment lifecycleFollow" | https://repost.aws/knowledge-center/api-gateway-high-latency-with-lambda |
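The metric comparison described above can also be pulled with the AWS CLI instead of the console. A minimal sketch, assuming a REST API named my-api and a placeholder time window; HTTP APIs use the ApiId dimension instead of ApiName:

```bash
# Maximum Latency and IntegrationLatency per minute for the affected window
for METRIC in Latency IntegrationLatency; do
  aws cloudwatch get-metric-statistics \
    --namespace AWS/ApiGateway \
    --metric-name "$METRIC" \
    --dimensions Name=ApiName,Value=my-api \
    --statistics Maximum \
    --period 60 \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T01:00:00Z
done
```

If the two maximums track each other closely, the Lambda integration is the main contributor; if Latency is much higher than IntegrationLatency, look at API Gateway-side processing such as mapping templates or authorizers.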
How do I remove non-valid characters from my Amazon Redshift data? | There are non-valid characters in my Amazon Redshift data. How do I remove them? | "There are non-valid characters in my Amazon Redshift data. How do I remove them?Short descriptionIf your data contains non-printable ASCII characters, such as null, bell, or escape characters, you might have trouble retrieving the data or unloading the data to Amazon Simple Storage Service (Amazon S3). For example, a string that contains a null terminator, such as "abc\0def," is truncated at the null terminator, resulting in incomplete data.ResolutionUse the TRANSLATE function to remove the non-valid character. In the following example, the data contains "abc\u0000def". The TRANSLATE function removes the null character "\u0000" and replaces it with an empty value, which removes it from the string:admin@testrs=# select a,translate(a,chr(0),'') from invalidstring; a | translate --------+----------- abc | abcdef abcdef | abcdef(2 rows)To remove specified non-valid characters from all rows in a table, run the UPDATE command with the TRANSLATE function, as shown in this example:admin@testrs=# select * from invalidstring; a -------- abc abcdef(2 rows)admin@testrs=# update invalidstring set a=translate(a,chr(0),'') where a ilike '%'||chr(0)||'%';UPDATE 1 admin@testrs=# select * from invalidstring; a -------- abcdef abcdef(2 rows)Related informationCHR FunctionFollow" | https://repost.aws/knowledge-center/remove-invalid-characters-redshift-data |
Why can't I delete my Amazon S3 bucket? | I can't delete my Amazon Simple Storage Service (Amazon S3) bucket and I'm not sure why. | "I can't delete my Amazon Simple Storage Service (Amazon S3) bucket and I'm not sure why.ResolutionPrerequisitesBefore you delete an Amazon S3 bucket, confirm the following points:All AWS accounts share the Amazon S3 namespace. If you delete a bucket name, then the name becomes available for all users. If another AWS account claims the bucket name, then you can't reuse the bucket name. It's a best practice to empty the bucket instead of deleting it entirely.For buckets that are hosted as a static website, review and update Amazon Route 53 hosted zone settings that relate to the bucket.If the bucket receives log data, then stop the delivery of logs to the bucket before you delete it.Amazon S3 bucket isn't emptyTo delete an Amazon S3 bucket, the bucket must be empty. Use the AWS Management Console, AWS Command Line Interface (AWS CLI), or SDK to delete a bucket manually. If the bucket is large and has versioning configured, then it takes a long time to delete the objects manually. In these cases, use Amazon S3 Lifecycle configuration to empty the buckets.Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent AWS CLI version.For buckets that have versioning configured or are in a suspended status, include the following rules in your lifecycle configuration:Rule 1: Expire all current versions of objects after X days of creation. Permanently delete all noncurrent versions of objects after Y days of becoming noncurrent.Rule 2: Expire all lone delete markers and incomplete multipart uploads after Z days.For buckets with versioning not configured, include the following rules in your lifecycle configuration:Rule 1: Expire all current versions of the objects after X days of object creation.Rule 2: Expire all incomplete multipart uploads after Z days.Note: To delete the bucket quickly in this example, set X, Y, and Z to 1 day.Amazon S3 runs the lifecycle rules daily at 12:00AM UTC. After the lifecycle rules run, all objects that are eligible for expiration are marked for deletion. Because the lifecycle policy actions are asynchronous, it takes several days for the objects to be physically deleted from the bucket. After an object is marked for deletion, you're no longer charged for the storage that's associated with the object.Using the AWS CLIRun the following command to permanently delete objects from an Amazon S3 bucket with versioning not configured:aws s3 rm s3://bucket-name --recursiveRun the following command to permanently delete all objects in an Amazon S3 bucket with versioning configured or suspended:aws s3api delete-objects --bucket BUCKET_NAME --delete "$(aws s3api list-object-versions --bucket BUCKET_NAME --output=json --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"Using the Amazon S3 consoleOpen the Amazon S3 console.In the navigation pane, choose Buckets.Under Buckets, select the bucket that you want to empty. Then, choose Empty.On the Empty bucket page, type permanently delete in the text field to delete all objects in the bucket. Then, choose Empty.(Optional) Review the Empty bucket: Status page to see the emptying progress.Note: If object versions in an Amazon S3 bucket are locked in governance mode, then the AWS Identity and Access Management (IAM) identity requires "s3:BypassGovernanceRetention" permissions. 
To bypass governance mode, you must include the "x-amz-bypass-governance-retention:true" header in your request. For requests made in the AWS Management Console, the Console applies the header automatically to requests with the required permissions to bypass governance mode.During the retention period, an IAM identity can't delete object versions protected in compliance mode. This includes the root user of the account. After the retention period, delete the objects that are protected in compliance. After the bucket is empty, delete the Amazon S3 bucket.Amazon S3 bucket has access points associated with itBefore you delete the Amazon S3 bucket, delete all access points attached to the bucket. For more information, see Deleting an access point.IAM identity making the DeleteBucket request doesn't have sufficient permissionsGrant the IAM identity that is deleting the Amazon S3 bucket DeleteBucket permissions on the IAM policy. Or, grant the Amazon S3 bucket policy permissions to perform the DeleteBucket action.Explicit DENY statement preventing the deletionAn explicit DENY statement takes precedence over an explicit ALLOW statement. Confirm that the following policies don't contain any explicit DENY statements:IAM policy of the IAM identityAmazon S3 bucket policyVirtual Private Cloud (VPC) endpoint policyWhen AWS Elastic Beanstalk creates a bucket, the policies contain explicit DENY statements by default. Before you delete the Amazon S3 bucket, delete the explicit DENY statement or the bucket policy.Related informationEmptying a bucketHow S3 Object Lock worksDeleting a bucketFollow" | https://repost.aws/knowledge-center/s3-delete-bucket |
How can I increase the binlog retention in my Aurora MySQL-Compatible DB cluster? | I have an Amazon Aurora MySQL-Compatible Edition DB cluster. I want to increase the binlog retention to improve the performance of binlog extraction. How can I do this? | "I have an Amazon Aurora MySQL-Compatible Edition DB cluster. I want to increase the binlog retention to improve the performance of binlog extraction. How can I do this?Short descriptionYou can increase the availability of your Aurora MySQL-Compatible DB cluster's binlogs by increasing the binlog retention period of the DB cluster.Note: Turning on binlog on your Aurora MySQL-Compatible DB cluster has the following performance effects:Causes additional write overhead (turn it on only when necessary)Increases engine start-up time at reboot because of the binlog recovery processAs a best practice, turn on binary logging in your Aurora MySQL-Compatible DB cluster in the following circumstances:For Aurora cross-Region read replicaFor Aurora manual replication to an external MySQL-compatible databaseNote: Aurora MySQL-Compatible doesn't use binlogs for intra-cluster replication. Aurora MySQL-Compatible global databases don't use binlogs.ResolutionTurn on binary logging in Aurora MySQL-Compatible DB cluster1. Open the Amazon Relational Database Service (Amazon RDS) console.2. In the navigation pane, choose Parameter groups.Note: If you're using the default Aurora DB cluster parameter group, then create a new DB cluster parameter group. For Type, choose DB Cluster Parameter Group.3. Choose the custom DB cluster parameter group. Then, choose Parameter group actions.4. Choose Edit.5. Change the value for the binlog_format parameter. For example: ROW, STATEMENT, or MIXED.6. Choose Save changes.For more information, see How do I turn on binary logging for my Aurora MySQL-Compatible cluster?Increase the binlog retention in Aurora MySQL-Compatible DB clusterMake sure that the binlog files on your replication source are retained until the changes are applied to the replica.Note: Make sure that you choose a time frame to retain the binlog files before they are deleted. The retention time frame must be long enough to make sure that changes are applied to your replica before they are deleted.To increase the binlog retention of the DB cluster, use the mysql.rds_set_configuration procedure. You can run the following command and sample parameters on the writer instance to retain the binlog files for 7 days:CALL mysql.rds_set_configuration('binlog retention hours', 168);Note: For Aurora MySQL 5.7-compatible, the maximum value for binlog retention hours is 168 (7 days). If you enter a higher value, then 168 is used by default.For other Aurora MySQL compatible versions, the maximum value for binlog retention hours is 2160 (90 days). If you enter a higher value, then 2160 is used by default.Related informationRetain binary logs on the replication source until they are no longer neededFollow" | https://repost.aws/knowledge-center/aurora-mysql-increase-binlog-retention
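After setting the retention, you can confirm the value from the writer instance. A minimal sketch that calls the configuration procedure through the mysql client from a shell; the cluster endpoint and user name are hypothetical placeholders:

```bash
# Show the current 'binlog retention hours' setting (prompts for the password)
mysql -h my-aurora-cluster.cluster-abcdefgh1234.us-east-1.rds.amazonaws.com \
      -u admin -p \
      -e "CALL mysql.rds_show_configuration;"
```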
How do I troubleshoot SSH or RDP connectivity to my EC2 instances launched in a Wavelength Zone? | I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance launched in a Wavelength Zone using SSH or Windows remote desktop protocol (RDP). I can still ping my instance. | "I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance launched in a Wavelength Zone using SSH or Windows remote desktop protocol (RDP). I can still ping my instance.Short descriptionThere are restrictions on connecting to an EC2 instance launched in a Wavelength Zone using the public IP address provided by the 5G service provider.In a Wavelength Zone, the carrier gateway turns on the following controls for internet flows by default. You can't remove these controls.TCP is allowed for outbound and response. This means that TCP traffic is allowed only in one direction, from the EC2 instance to the internet.UDP is denied. This includes both inbound and outbound UDP traffic.ICMP is allowed. This means that the carrier gateway allows ICMP inbound and outbound traffic.Pinging an EC2 instance in a Wavelength Zone works with these controls. However, connecting to the instance using SSH or RDP from the internet fails.ResolutionUnlike public IPs, private IP connectivity works exactly the same way as it does for any other EC2 instance in the Region. To connect to your EC2 instance in a Wavelength Zone, do the following:Launch the bastion host in the same VPC as the Wavelength Zone in the Region.Connect through a bastion host using your instance's private IP address.Note: Public IP connectivity restrictions apply only when connecting to your instances in a Wavelength Zone from internet. Connectivity to instances in a Wavelength Zone works as expected if the following conditions are true:Your security group and Network ACL are set up correctly.The SSH or RDP client is located in the carrier network.UDP traffic also works from within the carrier network.Related informationQuotas and considerations for Wavelength ZonesHow AWS Wavelength worksFollow" | https://repost.aws/knowledge-center/ec2-wavelength-zone-connection-errors |
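For the bastion pattern described above, OpenSSH's ProxyJump option can chain the connection in one command. A minimal sketch for a Linux instance, where the key file, bastion public IP, and instance private IP are hypothetical placeholders:

```bash
# Add the key to the local agent so that both hops can authenticate
eval "$(ssh-agent -s)"
ssh-add my-key.pem

# Jump through the bastion host's public IP to the Wavelength instance's private IP
ssh -J ec2-user@203.0.113.10 ec2-user@10.0.1.25
```

Both security groups still need to allow the SSH traffic: the bastion from your client, and the Wavelength instance from the bastion's security group or subnet.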
How do I provision an Amazon SageMaker Project? | "I want to provision an Amazon SageMaker Project, but I don't know how." | "I want to provision an Amazon SageMaker Project, but I don't know how.ResolutionSageMaker Projects use templates that AWS Service Catalog imports to your AWS account. Creating a domain activates the import when one of the following actions happens:You turn on Amazon SageMaker Project templates and Amazon SageMaker JumpStart for the account.The EnableSagemakerServicecatalogPortfolio API is called for an existing SageMaker Studio domain.You can provision a SageMaker Project directly from a SageMaker Studio domain. You can also create a SageMaker Project through an API or AWS Command Line Interface (AWS CLI). However, you must provide the ProductId and ProvisioningArtifactId.Before you begin, turn on project templates for your SageMaker Studio domain. Also, provide AWS Identity and Access Management (IAM) access to your SageMaker Studio users.Check the Service Catalog status from the Domain Settings tab in the Amazon SageMaker console. You can also invoke the GetSagemakerServicecatalogPortfolioStatus API, or run the following AWS CLI command:$aws sagemaker get-sagemaker-servicecatalog-portfolio-statusNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.If you get Access Denied errors or the projects aren't active, then turn on the project templates and grant the required permissions for users. For more information, see SageMaker Studio permissions required to use projects.After you turn on the projects templates, you must get the ProductID and ProvisioningArtifactID values for the templates to create projects.To get the the product and provisioning artifact IDs, complete the following steps:Open the Service Catalog console.In the navigation pane, under Administration, choose Portfolios.Choose the Imported tab.In the search bar, enter Amazon SageMaker Solutions and MLOps products.Choose Amazon SageMaker Solutions and MLOps products.Review the Products page, and get the Product ID.Select the product or project template that you want to use (example: MLOps template for model building and training).Review the Product list page, and get the Provisioning Artifact ID.Use the project and provisioning artifact IDs in your CreateProject API or the following AWS CLI command:$aws sagemaker create-project --project-name myproject--service-catalog-provisioning-details ProductId="prod-xxxxxx",ProvisioningArtifactId="pa-xxxxxx"Follow" | https://repost.aws/knowledge-center/sagemaker-project-provision |
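If you prefer to stay in the CLI rather than the Service Catalog console, the product and provisioning artifact IDs can also be looked up before calling create-project. A minimal sketch, hedged: the exact portfolios and products returned depend on what's imported into your account, and the IDs below are placeholders.

```bash
# Find the imported SageMaker portfolio and list its products
aws servicecatalog list-accepted-portfolio-shares
aws servicecatalog search-products-as-admin --portfolio-id port-examplenotreal

# List provisioning artifacts (template versions) for a chosen product
aws servicecatalog list-provisioning-artifacts --product-id prod-examplenotreal
```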
How do I troubleshoot connectivity issues with my Amazon EMR notebook? | I want to troubleshoot connectivity issues with my Amazon EMR notebook. | "I want to troubleshoot connectivity issues with my Amazon EMR notebook.Short descriptionWhen connecting to your Amazon EMR notebook, you might receive errors similar to the following:Workspace (notebook) is stopped. Internal Error.Workspace (notebook) can now be used in local mode.Notebook is stopped. Service Role does not have the required permissions.Notebook is stopped. Notebook security group sg-xxxxxxxx does not have an egress rule to connect with the master security group sg-yyyyyy. Please fix the security group or use the default option.Notebook is stopped. Notebook security group sg-xxxxxxx should not have any ingress rules. Please fix the security group or use the default option.ResolutionCheck the service role in the EMR notebooks1. Validate that the notebook's AWS Identity and Access Management (IAM) role has the minimum required permissions. For more information, see Service role for EMR notebooks.2. Verify that the notebook has all of the permissions contained in AmazonElasticMapReduceEditorsRole. Use the AWS managed policy AmazonS3FullAccess for full access to Amazon Simple Storage Service (Amazon S3). For more information, see AWS managed policy: AmazonS3FullAccess.3. Remove any permissions restrictions from the bucket policy attached to the S3 bucket where the notebook is located.Check the security groups in the EMR notebooks1. Validate that the security group used for your notebook has at least the minimal rules required. For more information, see Specifying EC2 security groups for EMR notebooks.2. It's a best practice to use different security groups for the EMR cluster and the EMR notebook. The security groups for the notebook and the cluster have different inbound and outbound rule requirements.The notebook security group ElasticMapReduceEditors-Editor has an egress rule that allows connection to the master security group ElasticMapReduceEditors-Livy. This connection uses tcp/18888. Remove any outbound rules added in the notebook security group ElasticMapReduceEditors-Editor routing to 0.0.0.0/0.The master security group ElasticMapReduceEditors-Livy has an ingress rule that allows connections from the notebook security group ElasticMapReduceEditors-Editor. This connection uses tcp/18888. Remove any ingress rules added in the master security group ElasticMapReduceEditors-Livy that allow 0.0.0.0/0.EMR cluster requirements1. Verify that the attached cluster is compatible and meets all cluster requirements.2. When Livy impersonation is turned on, verify that hadoop-httpfs is running on the EMR cluster master node.Use the following command to check the status of hadoop-httpfs:$ sudo systemctl status hadoop-httpfsUse the following command to turn on hadoop-httpfs:$ sudo systemctl start hadoop-httpfsFollow" | https://repost.aws/knowledge-center/emr-troubleshoot-notebook-connection
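To review the notebook and master security group rules described above without the console, a minimal AWS CLI sketch; the group IDs are placeholders for your own ElasticMapReduceEditors-Editor and ElasticMapReduceEditors-Livy groups:

```bash
# Egress rules on the notebook security group (ElasticMapReduceEditors-Editor)
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query "SecurityGroups[].IpPermissionsEgress"

# Ingress rules on the master security group (ElasticMapReduceEditors-Livy)
aws ec2 describe-security-groups \
  --group-ids sg-0fedcba9876543210 \
  --query "SecurityGroups[].IpPermissions"
```

Look for the TCP 18888 rule that references the other group, and for any unexpected 0.0.0.0/0 entries.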
Why can't I find my DataSync task logs in the CloudWatch log group? | "I ran an AWS DataSync task, but I can't find the logs in the relevant Amazon CloudWatch log group. How can I troubleshoot this?" | "I ran an AWS DataSync task, but I can't find the logs in the relevant Amazon CloudWatch log group. How can I troubleshoot this?ResolutionConfirm that CloudWatch Logs has a resource policy that allows DataSync to upload logs. Follow these steps to review the CloudWatch Logs resource policies in the AWS Region of your DataSync agent:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1. Run the describe-resource-policies command using the AWS Command Line Interface (AWS CLI):aws logs describe-resource-policies --region us-east-1Note: Each AWS account is allowed up to 10 resource policies per Region for CloudWatch Logs. If you exceed this limit, then you receive an error message when creating your resource policy.2. Review the output of the command. If a resource policy isn't set up, then the output is similar to the following:{ "resourcePolicies": []}Important: Confirm that the resource policy for DataSync is enabled in the correct AWS Region. The policy must be in the same Region as the DataSync agent that you're using.Follow these steps to create a resource policy that grants DataSync permissions for uploading logs:1. Create a JSON file that grants DataSync the minimum permissions for uploading logs:{ "Statement": [ { "Sid": "DataSyncLogsToCloudWatchLogs", "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogStream" ], "Principal": { "Service": "datasync.amazonaws.com" }, "Resource": "*" } ], "Version": "2012-10-17"}You can name the file policy.json.2. Run the put-resource-policy command using the AWS CLI to create a resource policy using the JSON file:aws logs put-resource-policy --policy-name trustDataSync --policy-document file://policy.json --region <Region>Important: Set the AWS Region of your DataSync agent as the value for --region.3. Run the describe-resource-policies command to confirm that the resource policy was created:aws logs describe-resource-policies --region <Region>Note: Each AWS account is allowed up to 10 resource policies per Region for CloudWatch Logs. If you exceed this limit, you receive an error message when creating your resource policy. Use the put-resource-policy command to verify if you've reached the limit.4. After you create the resource policy, the command output is similar to the following:{ "resourcePolicies": [ { "policyName": "trustDataSync", "policyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"DataSyncLogsToCloudWatchLogs\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"datasync.amazonaws.com\"},\"Action\":[\"logs:PutLogEvents\",\"logs:CreateLogStream\"],\"Resource\":\"*\"}]}", "lastUpdatedTime": 1577448776606 } ]}5. To test the resource policy, run a DataSync task. A few minutes after the task runs, confirm that you're seeing the log stream from the relevant CloudWatch log group.Related informationOverview of managing access permissions to your CloudWatch Logs resourcesFollow" | https://repost.aws/knowledge-center/datasync-missing-cloudwatch-logs |
How do I resolve "403 Error - The request could not be satisfied. Request Blocked" in CloudFront? | Amazon CloudFront is returning the error message "403 Error - The request could not be satisfied. Request Blocked." | "Amazon CloudFront is returning the error message "403 Error - The request could not be satisfied. Request Blocked."Short descriptionThe error message "403 Error - The request could not be satisfied. Request Blocked." is an error from the client. This error can occur due to the default actions of AWS WAF rules associated with the CloudFront distribution. The following settings might cause a Request Blocked error:When the default action is set to Allow, the request matches a rule that has Action set to Block.When the default action is set to Block, the request matches the conditions of a rule that has Action set to Block.-or-When the default action is set to Block, the request doesn't match the conditions of any rule that has Action set to Allow.For information on troubleshooting other types of 403 errors, see How do I troubleshoot 403 errors from CloudFront?ResolutionTo resolve the Request Blocked error:Open the CloudFront console.Choose the ID for the distribution that you want to update.Choose the General tab.Under Settings, in the AWS WAF web ACL list, choose the web access control list (web ACL) associated with your distribution.In the AWS WAF console, choose Web ACLs.On the Web ACLs page, for AWS Region, choose Global (CloudFront).Choose the web ACLs that require review. Check that the AWS WAF default action is set on the web ACL.To resolve the Request Blocked error when the default action is Allow, review the requests. Be sure that they don't match the conditions for any AWS WAF rules with Action set to Block.If valid requests match the conditions for a rule that blocks requests, then update the rule to allow the requests.To resolve the Request Blocked error when the default action is Block, review the requests. Be sure that they match the conditions for any AWS WAF rules with Action set to Allow.If valid requests don't match any existing rules that allow requests, then create a rule that allows the requests.Note: For more troubleshooting, use theAWS WAF console to review a sample of requests that match the rule that might cause the Request Blocked error. For more information, seeTesting and tuning your AWS WAF protections.Related informationHow do I resolve "403 ERROR - The request could not be satisfied. Bad Request" in Amazon CloudFront?How AWS WAF worksUsing AWS WAF to control access to your contentFollow" | https://repost.aws/knowledge-center/cloudfront-error-request-blocked |
How do I resolve issues with deleting my Amazon EBS snapshot? | "I'm trying to delete my Amazon Elastic Block Store (Amazon EBS) snapshot, but I can't. How do I resolve this issue?" | "I'm trying to delete my Amazon Elastic Block Store (Amazon EBS) snapshot, but I can't. How do I resolve this issue?Short descriptionThe following are common reasons why Amazon EBS snapshot deletion fails:The AWS Identity and Access Management (IAM) user or role doesn't have permission to run the DeleteSnapshot API action.Another account owns the snapshot and shares it with your AWS account.The snapshot of the EBS volume root device is used by a registered Amazon Machine Image (AMI).The snapshot is in the Recycle Bin.The snapshot is created in AWS Backup, or the snapshot that's created in AWS Backup is restored from the Recycle Bin.The snapshot is created using Amazon Data Lifecycle Manager and is in or restored from the Recycle Bin.DeleteSnapshot API results aren't immediately visible to subsequent commands.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.The IAM user or role doesn't have permission to run the DeleteSnapshot API actionIn AWS CloudTrail, you receive the error message: "You are not authorized to perform this operation. Encoded authorization failure message: Bght_tAZ......"To decode the authorization failure message, run the following command:$ aws sts decode-authorization-message --encoded-message encoded_messageNote: Replace encoded_message with the encoded authorization failure message that you received.You can also use the IAM policy simulator to troubleshoot. Check the policy that's related to the IAM user or role to see if it has a rule that denies the ec2:DeleteSnapshot action.Example JSON policy that denies the ec2:DeleteSnapshot action:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Deny", "Action": "ec2:DeleteSnapshot", "Resource": "*" } ]}Also, check for rules that deny any conditions that must be satisfied for the operation to complete, such as ec2:SnapshotID. Update the IAM user or role policy to allow snapshot deletion.For a list of related condition keys, see the DeleteSnapshot section of Actions, resources, and condition keys for Amazon EC2.The snapshot is owned and shared by a different AWS accountYou receive the error message: "The snapshot 'snap-abcdef1234567890' does not exist."You can't delete a snapshot from your account that another account owns and shares with your account. If you have access to the account that owns the snapshot, then you can delete it. If not, then you must contact the owner of that account.To check the owner of the snapshot, run the following describe-snapshots AWS CLI command:$ aws ec2 describe-snapshots --snapshot-id snap-abcdef1234567890Note: Replace snap-abcdef1234567890 with your snapshot's ID.You can also find information about the snapshot in the Amazon Elastic Compute Cloud (Amazon EC2) console. For more information see, View Amazon EBS snapshot information.If you own the snapshot and want to revoke sharing the snapshot with other accounts, then follow these steps:Open the Amazon EC2 console.In the navigation pane, choose Snapshots.Select the snapshot that you shared, and then choose Actions, Modify permissions.Under Shared accounts, select the account ID of the account that you want to revoke snapshot sharing from. 
Then, choose Remove selected.Choose, Save changes.The snapshot of the Amazon EBS volume root device is used by a registered AMIYou receive the error message: "The snapshot 'snap-abcdef1234567890' is currently in use by ami-abcdef1234567890."Use the AWS Management Console or AWS CLI to deregister your AMI. Then, delete the snapshot.You can find the AMI ID in the error message. Or, you can run the following describe-snapshots AWS CLI command:$ aws ec2 describe-snapshots --snapshot-ids snap-abcdef1234567890You can find the AMI ID in the Description section:{ "Snapshots": [ { "Description": "Created by CreateImage(i-abcdef1234567890) for ami-abcdef1234567890", "Encrypted": false, "OwnerId": "111122223333", "Progress": "100%", "SnapshotId": "snap-abcdef1234567890", "StartTime": "2022-11-12T03:15:16.272000+00:00", "State": "completed", "VolumeId": "vol-abcdef1234567890", "VolumeSize": 8, "StorageTier": "standard" } ]}The snapshot is in the Recycle BinYou receive the error message: "An error occurred (InvalidSnapshot.NotFound) when calling the DeleteSnapshot operation. The snapshot 'snap-abcdef1234567890' does not exist."If you delete a snapshot using the AWS CLI and receive the preceding error message, then the snapshot might be in the Recycle Bin. You can't delete a snapshot that's in the Recycle Bin. The snapshot is deleted only when the retention period expires.To check if the snapshot is in the Recycle Bin, run the list-snapshots-in recycle-bin AWS CLI command:aws ec2 list-snapshots-in-recycle-bin --snapshot-id snap-abcdef1234567890 --region regionNote: Replace region with your AWS Region.Example output:{ "Snapshots": [ { "SnapshotId": "snap-0460a240fc523552e", "RecycleBinEnterTime": "2022-11-13T16:33:54.707000+00:00", "RecycleBinExitTime": "2022-11-14T16:33:54.707000+00:00", "Description": "", "VolumeId": "vol-08d1428974b817a18" } ]}If you need to delete the snapshot before the retention period expires, then you can restore the snapshot from the Recycle Bin. Make sure that your IAM user or role has the correct permissions to view and recover snapshots that are in the Recycle Bin.Then, check your AWS Region's retention rules. For a tag-level retention rule, modify the snapshot tags so that they don't match the retention rule. Then, delete the snapshot. For a Region-level rule, delete the retention rule, and then delete the snapshot. Deleting the retention rule doesn't affect the other snapshots in the Recycle Bin.The snapshot is created in AWS Backup, or the snapshot that's created in AWS Backup was restored from the Recycle BinYou receive the error message: "snap-abcdef1234567890 This snapshot is managed by AWS Backup service and cannot be deleted via EC2 APIs. If you wish to delete this snapshot, please do so via the Backup console."You can't use the Amazon EC2 console or AWS CLI to delete a snapshot that's created and managed in AWS Backup. You must delete the snapshot from the AWS Backup console. Note the snapshot ID, and then follow the steps for Deleting backups.However, you can't use the AWS Backup console to delete a snapshot that's created in AWS Backup, sent to the Recycle Bin, and then restored. You must delete the snapshot using the Amazon EC2 console or AWS CLI.The snapshot is created using Amazon Data Lifecycle Manager and is stored in the Recycle BinAmazon Data Lifecycle Manager doesn't manage snapshots in the Recycle Bin that are created using Amazon Data Lifecycle Manager or snapshot policies. 
You must use the Amazon EC2 console or AWS CLI to delete the snapshot.DeleteSnapshot API results aren't immediately visible to subsequent commandsAll Amazon EC2 APIs follow an eventual consistency model. This means that when you use the DeleteSnapshot API, the results might not be immediately visible to subsequent commands that you run.To check the status of a recently deleted snapshot, run the following describe-snapshots AWS CLI command:$ aws ec2 describe-snapshots --region region --snapshot-ids snap-abcdef1234567890If you receive the following error message, then the snapshot is successfully deleted: "An error occurred (InvalidSnapshot.NotFound) when calling the DescribeSnapshots operation: The snapshot 'snap-abcdef1234567890' does not exist."Follow" | https://repost.aws/knowledge-center/ebs-resolve-delete-snapshot-issues
How do I resolve the AWS CloudFormation error "Unable to assume role and validate the listeners configured on your load balancer" when I launch an Amazon ECS resource? | I get an error message when I use AWS CloudFormation to launch an Amazon Elastic Container Service (Amazon ECS) resource (AWS::ECS::Service). | "I get an error message when I use AWS CloudFormation to launch an Amazon Elastic Container Service (Amazon ECS) resource (AWS::ECS::Service).Short descriptionIf I'm using a Classic Load Balancer, I get an error message similar to this:"12:21:48 UTC+0100 CREATE_FAILED AWS::ECS::Service ECSService Unable to assume role and validate the listeners configured on your load balancer. Please verify the role being passed has the proper permissions."If I'm using an Application Load Balancer, I get an error message similar to this:"12:21:48 UTC+0100 CREATE_FAILED AWS::ECS::Service ECSService Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions."If you create an Amazon ECS service with an independent AWS Identity and Access Management (IAM) policy resource that specifies an instance profile, the Amazon ECS service can fail and return an error message.ResolutionTo resolve the error for both Classic Load Balancers and Application Load Balancers, try one or more of the following solutions:Confirm that the IAM role for the Amazon ECS service has the right permissions to register and deregister container instances with your load balancers.Tip: You can use this AWS CloudFormation template as a reference to build out your Amazon ECS architecture components with the right dependencies. The architecture components include an Amazon ECS cluster, service, load balancers, container instances, and IAM resources.Confirm that your AWS Auto Scaling group or Amazon ECS container instance has an instance profile associated as an attribute.Use the DependsOn attribute to specify the dependency of the AWS::ECS::Service resource on AWS::IAM::Policy. Or, use a custom resource to delay the stack creation process and give service role permissions time to propagate.Follow" | https://repost.aws/knowledge-center/assume-role-validate-listeners |
How do I resolve problems when accessing a service in the AWS Management Console? | I'm unable to access a service from the AWS Management Console. | "I'm unable to access a service from the AWS Management Console.Short descriptionYou might receive one of the following error messages:You are not subscribed to this serviceThe AWS Access Key Id needs a subscription for the serviceYour service sign-up is almost complete!ResolutionResolve "You are not subscribed to this service" or "The AWS Access Key Id needs a subscription for the service" errorsWhen you created your AWS account, all available services at that time were activated. However, as new services are released, they aren't automatically put into an active state without your permission. You must subscribe to each service individually as they are released.To activate all existing services and new services as they are released, you must sign in as the AWS account root user. Then, in your account settings page, update your account subscription.Resolve "Your service sign-up is almost complete!" errorsFinish the account sign-up process.You might need to provide a valid payment method, verify your account, or choose a support plan.Related informationCreating your first IAM admin user and groupManaging your AWS payment methodsFollow" | https://repost.aws/knowledge-center/error-access-service |
How do I avoid throttling when I call PutMetricData in the CloudWatch API? | I receive the error "400 ThrottlingException" for PutMetricData API calls in Amazon CloudWatch. | "I receive the error "400 ThrottlingException" for PutMetricData API calls in Amazon CloudWatch.Short descriptionWhen you receive the error "400 ThrottlingException" for PutMetricData API calls in CloudWatch, you also receive the following message:<ErrorResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/"> <Error> <Type>Sender</Type> <Code>Throttling</Code> <Message>Rate exceeded</Message> </Error> <RequestId>2f85f68d-980b-11e7-a296-21716fd2d2e3</RequestId></ErrorResponse>To help service performance, CloudWatch throttles requests for each AWS account based on the AWS Region. For current PutMetricData API request quotas, see CloudWatch service quotas.Note: All calls to the PutMetricData API in a Region count towards the maximum allowed request rate. This number includes calls from any custom or third-party application. Examples of such applications include the CloudWatch Agent, the AWS Command Line Interface (AWS CLI), and the AWS Management Console.ResolutionIt's a best practice to use the following methods to reduce your call rate and avoid API throttling:Distribute your API calls evenly over time rather than making several API calls in a short time period. If you require data to be available with a 1-minute resolution, then you have an entire minute to emit that metric. Use jitter (randomized delay) to send data points at various times.Combine as many metrics as possible into a single API call. For example, a single PutMetricData call can include 1,000 metrics and 150 data points. You can also use pre-aggregated data sets, such as StatisticSet, to publish aggregated data points. This reduces the number of PutMetricData calls per second.Retry your call with exponential backoff and jitter.Related informationCloudWatch API referenceFollow" | https://repost.aws/knowledge-center/cloudwatch-400-error-throttling |
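The batching recommendation above maps directly onto the put-metric-data call. A minimal sketch that publishes several custom metrics in a single request; the namespace and metric names are placeholders:

```bash
# One API call carrying multiple data points counts once against the request quota
aws cloudwatch put-metric-data \
  --namespace "MyApplication" \
  --metric-data '[
    {"MetricName": "PageLoadTime", "Value": 183, "Unit": "Milliseconds"},
    {"MetricName": "CartCheckouts", "Value": 42, "Unit": "Count"},
    {"MetricName": "QueueDepth", "Value": 7, "Unit": "Count"}
  ]'
```

The AWS CLI and SDKs already retry throttled calls with exponential backoff; adding jitter in your own scheduling spreads the remaining load over time.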
Why are the items with expired Time to Live not deleted from my Amazon DynamoDB table? | Some of the items with expired Time to Live (TTL) aren't yet deleted from my Amazon DynamoDB table. | "Some of the items with expired Time to Live (TTL) aren't yet deleted from my Amazon DynamoDB table.ResolutionThe DynamoDB TTL feature allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput. TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs. When you activate TTL on a DynamoDB table, you must identify a specific attribute name that the service will look for when determining if an item is eligible for expiration. After you activate TTL on a table, a per-partition scanner background process automatically and continuously evaluates the expiry status of items in the table.Some of the common causes for deletion of expired items taking longer are the following:The time that it takes to delete an expired item might vary depending on the size and activity level of your table. Because TTL is a background process, the nature of the capacity used to expire and delete items using TTL is variable.The TTL process typically deletes expired items within 48 hours of expiration. However, this timeline isn't an SLA. TTL deletes are done on a best-effort basis and deletion might take longer in some cases. During the deletion of objects using TTL, DynamoDB uses the backend capacity of the table to delete the objects instead of using the provisioned capacity. The deletion process might take longer if there are a large number of delete requests and not enough backend capacity to continuously delete those items.To check if TTL is working properly, do the following:Be sure that you activated TTL on the table and the related settings are correct.The item must contain the attribute that you specified when you activated TTL on the table.The TTL attribute’s value must have the datatype Number.The TTL attribute’s value must be a timestamp in Unix epoch time format in seconds.The TTL attribute’s value must be a timestamp with an expiration of no more than five years in the past.The TTL processes run on the table only when there is enough spare capacity so that these processes don't interfere with table operations. If the table or table partitions are using most of the allocated capacity, TTL processes might not run due to insufficient spare capacity.Items that are expired, but aren't yet deleted by TTL, still appear in reads, queries, and scans. If you don't want expired items in the result set, you must filter them out. To do so, use a filter expression that returns only items where the Time to Live expiration value is greater than the current time in epoch format. For more information, see Filter expressions for scan.Related informationExpiring items by using DynamoDB Time to Live (TTL)Follow" | https://repost.aws/knowledge-center/dynamodb-expired-ttl-not-deleted
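The filter-expression advice above, shown as a minimal AWS CLI sketch; the table name (MyTable) and TTL attribute name (ExpiresAt) are placeholders for your own schema:

```bash
# Return only items whose TTL timestamp is still in the future
NOW=$(date +%s)
aws dynamodb scan \
  --table-name MyTable \
  --filter-expression "ExpiresAt > :now" \
  --expression-attribute-values "{\":now\": {\"N\": \"$NOW\"}}"
```

The filter runs after items are read, so the scan still consumes read capacity for expired items; it only removes them from the returned result set.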
Why did I receive the GuardDuty finding type alert Recon:EC2/PortProbeUnprotectedPort for my Amazon EC2 instance? | Amazon GuardDuty detected alerts for the Recon:EC2/PortProbeUnprotectedPort finding type for my Amazon Elastic Compute Cloud (Amazon EC2) instance. | "Amazon GuardDuty detected alerts for the Recon:EC2/PortProbeUnprotectedPort finding type for my Amazon Elastic Compute Cloud (Amazon EC2) instance.Short descriptionThe GuardDuty finding type Recon:EC2/PortProbeUnprotectedPort means that an Amazon EC2 instance has an unprotected port that is being probed by a known malicious host.ResolutionUse the following best practices to protect the unprotected port or remove inbound rules:Follow the instructions to view and analyze your GuardDuty findings.In the findings detail pane, note the port number.If the unprotected port is 22 for Linux, you can restrict access by following the instructions for authorizing inbound traffic for your Linux instances.If the unprotected port is 3389 for Windows, you can restrict access by following the instructions for authorizing inbound traffic for your Windows instances.If the unprotected port is 80 or 443 and you must keep these ports open, you can put the EC2 instance behind a load balancer.If the port doesn't have any application running on it and doesn't need to be open, you can remove the inbound rule for the EC2 instance security group and iptables rules.If you don't need to protect the unprotected port, you can ignore the Recon:EC2/PortProbeUnprotectedPort finding type.Related informationMonitoring GuardDuty findings with Amazon CloudWatch EventsFinding typesFollow" | https://repost.aws/knowledge-center/resolve-guardduty-unprotectedport-alerts |
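To restrict an exposed port from the CLI instead of the console, a minimal sketch for the SSH case; the security group ID and trusted CIDR range are placeholders:

```bash
# Remove the open-to-the-world SSH rule flagged by the finding
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# Re-allow SSH only from a trusted address range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
```

Repeat the same pattern for port 3389 (RDP) on Windows instances.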
How can I find which Availability Zones I can use in my account? | "I want to use an Availability Zone (AZ), but it's unavailable for my account. How can I find which AZs I can use in my account?" | "I want to use an Availability Zone (AZ), but it's unavailable for my account. How can I find which AZs I can use in my account?Short descriptionAs AZs grow, the ability for AWS to expand them can become constrained. As a result, you might notice the following:Your account has fewer available AZs than what's publicly listed.One of your accounts has a different number of available AZs in a Region than another account.ResolutionTo find which AZs you can use to launch resources in any of your accounts, do the following:Open the Amazon Elastic Cloud Compute (Amazon EC2) console.From the navigation bar, view the options in the Region selector.On the navigation pane, choose EC2 Dashboard.In the Service Health section, view the list of AZs under Availability Zone Status.Also, you can run the following command in the AWS Command Line Interface (AWS CLI) to generate a list of AZs in your account.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure you confirm that you’re running a recent version of the AWS CLI.Note: Be sure to replace region-name with your Region's name. For example, ap-northeast-1.aws ec2 describe-availability-zones --region region-nameRelated informationAvailability ZonesFollow" | https://repost.aws/knowledge-center/vpc-find-availability-zone-options |
How do I submit a Digital Millennium Copyright Act (DMCA) notice to AWS? | Material that I own the copyright for is being distributed using AWS resources without my permission. How do I submit a Digital Millennium Copyright Act (DMCA) notice to AWS? | "Material that I own the copyright for is being distributed using AWS resources without my permission. How do I submit a Digital Millennium Copyright Act (DMCA) notice to AWS?ResolutionThe Digital Millennium Copyright Act (DMCA) is the United States law governing claims that copyright-protected material is being distributed online.If you believe that AWS resources are being used to host or distribute copyrighted material that you own the copyright for without your permission, first consider whether there is an exception to copyright that applies in this case (such as fair use). If after considering this you still want to make a copyright complaint, you can reach out directly to the site operator using the contact details provided on the site. Or, to submit a DMCA notice to AWS, follow the steps outlined below. Remember that only the copyright owner or their authorized representative can file a report of copyright infringement.Steps to submit a DMCA notice to AWSGather the documentation described at Notice and procedure for making claims of copyright infringement.Send the information to ec2-abuse@amazon.com or abuse@amazonaws.com.Note: The AWS Abuse team won't open attachments under any circumstance. You must provide any necessary information in plaintext.If you provide your notice in the body of an email and send it to ec2-abuse@amazon.com or abuse@amazonaws.com, you don't need to provide a physical signature.What happens next?After we receive your DMCA notice, you'll receive an automated email acknowledging receipt of your notice. If we have questions about your notice, we'll email you asking for more information. We request that you respond to our email so that we can continue looking into your report.If your DMCA notice is complete and valid and the reported content is on AWS, we'll take action expeditiously including contacting our customer to remove or disable access to the infringing content. Whenever we remove content in response to a DMCA notice, we provide a copy of the original complaint as well as your contact information to our customer. If our customer doesn't believe that the content is infringing or that the content shouldn't be removed, they might reach out to you directly to resolve the issue or they might submit a counter-notice to us under the DMCA.Repeat infringer policyAWS terminates the accounts of repeat infringers in appropriate circumstances.The information above is provided for general informational purposes only and is not legal advice. If you have any questions about this information or your specific situation or rights, seek legal advice from a professional.Related informationHow do I provide a counter-notice to a DMCA notice?Follow" | https://repost.aws/knowledge-center/submit-dmca-notice |
Why did I get an AWS Config error after turning on AWS Security Hub? | How to troubleshoot AWS Config errors after turning on AWS Security Hub. | "How to troubleshoot AWS Config errors after turning on AWS Security Hub.Short descriptionWhen setting up AWS Security Hub, you might receive one of the following errors:"AWS Config is not enabled on some accounts.""AWS Config is not enabled in all regions.""An error has occurred with AWS Config. Contact AWS Support."ResolutionUse the following best practices for configuring and troubleshooting AWS Config with Security Hub:Note: AWS Config rules created by Security Hub do not incur any additional costs.Verify that AWS Config is turned on in the same AWS Region as Security HubManually turn on AWS Config in the same Region as Security Hub as follows:1. Open the AWS Config console in the same Region that you have Security Hub turned on.2. If AWS Config is not turned on, follow the instructions for setting up AWS Config with the Console.Note: If you have Security Hub configured in multiple Regions, repeat these steps for each Region.Verify that AWS Config is recording all resources, including global resources, in your RegionModify the type of resources that AWS Config records as follows:1. Open the AWS Config console, and choose Settings.2. In Settings, confirm Recording is on.3. In Resource types to record, select Record all resources supported in this region.4. In Resource types to record, select Include global resources (e.g., AWS IAM resources).5. Choose Save.Note:These settings apply to all of your AWS accounts that are configured with Security Hub, including AWS Organizations member accounts.You do not have to record all resource types in AWS Config. However, be sure that the required resource types for CIS, PCI DSS, and AWS foundational security best practices controls are recording.You do not need to turn on global resources in all Regions. To avoid duplicate configuration settings, you can turn on global settings in only the same AWS Region as Security Hub per AWS account.It can take up to 24 hours for the recorder settings to complete.Use Amazon CloudWatch log filter patterns to search AWS CloudTrail log dataSearch for and troubleshoot AWS Config error messages as follows:1. Follow steps 1-4 in Search log entries using the console.2. In Filter, paste the following example syntax, and then press Enter:EventSource: config.amazonaws.com 3. Note the error. Then, follow the instructions for How can I troubleshoot AWS Config console error messages?Verify the permissions on the Security Hub service-linked roleAWS Security Hub uses service-linked roles to provide permissions to AWS services. The following AWS Identity and Access Management (IAM) permission allows access to AWS Config with Security Hub:{ "Effect": "Allow", "Action": [ "config:PutConfigRule", "config:DeleteConfigRule", "config:GetComplianceDetailsByConfigRule", "config:DescribeConfigRuleEvaluationStatus" ], "Resource": "arn:aws:config:*:*:config-rule/aws-service-rule/*securityhub*" } For more information, see Using service-linked roles for AWS Security Hub.Related informationAWS Security Hub now generally availableFollow" | https://repost.aws/knowledge-center/config-error-security-hub |
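To verify the recorder settings without opening the console, a boto3 sketch along these lines can be used; it assumes default credentials, and the Region is an example:

```python
import boto3

# Check whether AWS Config is recording in this Region, and whether the
# recorder covers all resource types plus global resources (such as IAM).
config = boto3.client("config", region_name="us-east-1")

recorders = config.describe_configuration_recorders()["ConfigurationRecorders"]
statuses = config.describe_configuration_recorder_status()["ConfigurationRecordersStatus"]

if not recorders:
    print("AWS Config is not set up in this Region.")

for recorder in recorders:
    group = recorder.get("recordingGroup", {})
    print(recorder["name"],
          "allSupported:", group.get("allSupported"),
          "includeGlobalResourceTypes:", group.get("includeGlobalResourceTypes"))

for status in statuses:
    print(status["name"], "recording:", status["recording"])
```

Run this in each Region where Security Hub is turned on to confirm the recorder settings match the steps above.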
I unintentionally incurred charges while using the AWS Free Tier. How do I make sure that I'm not billed again? | I accidentally provisioned resources that are not covered under the AWS Free Tier and I was billed. How do I make sure I'm not billed again? | "I accidentally provisioned resources that are not covered under the AWS Free Tier and I was billed. How do I make sure I'm not billed again?Short descriptionWith the AWS Free Tier, you can explore and try out some AWS services free of charge within certain usage limits. To learn more about AWS Free Tier, see What is the AWS Free Tier, and how do I use it?When using AWS Free Tier, you might incur charges due to the following reasons:You exceeded the monthly free tier usage limits of one or more services.You're using an AWS service, such as Amazon Aurora, that doesn't offer free tier benefits.Your free tier period expired.To be sure that you aren't billed again:Verify that your account is covered by the free tier.Identify the resources that are generating charges.Delete, stop, or terminate any resources that are generating charges.Proactively monitor your usage to be sure that you don't exceed the free tier offering again.ResolutionVerify that your account is covered by the free tierOpen the Billing and Cost Management console, and then choose Free Tier from the navigation pane.If the table doesn't appear, then your account is no longer covered under the AWS Free Tier. You're billed at standard rates for any resources provisioned on your account. Delete, shut down, or terminate any resources that you don't want to keep or be billed for.If the table appears, then you see additional information about any services that you're using. This table also provides a Month-to-Date (MTD) actual usage percentage.Identify the resources that are generating chargesTo identify active resources in your account, do the following:Open the Billing and Cost Management console.In the navigation pane, on the left side of the screen, choose Bills.The Details section shows all the charges that are being incurred by various AWS services on your account. Make note of which services have active resources. If you use the consolidated billing feature in AWS Organizations, the Bills page lists totals for all accounts on the Consolidated Bill Details tab. Choose the Bill Details by Account tab to see the activity for each account in the organization.Note: It's a best practice to check the Details section for the previous month to identify all services that are generating charges.Under Details, expand each service to identify the Regions where the services have incurred charges. The Bill details by account section shows all the charges incurred in different AWS Regions. 
Note the resources in those Regions.Note: The Billing and Cost Management console takes about 24 hours to update usage and charge information for active resources.Charges listed under a service are for usage not covered under the AWS Free Tier.For more information on finding your active Amazon Elastic Compute Cloud (Amazon EC2) resources, see Why can't I find an Amazon EC2 instance that I launched on my account?Important: AWS Free Tier benefits cover the aggregated billable total of your usage for all AWS Regions together, but not each Region individually.Delete, stop, or terminate any resources that you don't want to be billed forAfter identifying the AWS resources that are incurring charges, you can stop the billing by deleting, stopping, or terminating the resources.For information on how to delete, stop, or terminate resources that aren't covered by the free tier, see How do I terminate active resources that I no longer need on my AWS account?Proactively monitor your usage of AWS Free Tier resourcesYou can track your free tier usage with the AWS Free Tier usage alerts. AWS provides free tier usage alerts using AWS Budgets. These alerts notify you when your free tier usage exceeds 85 percent of your monthly limit.View the Top AWS Free Tier services table for tracking the free tier usage limit of your services along with your current usage amount.Related informationHow do I make sure I don't incur charges when I'm using the AWS Free Tier?Using the AWS Free TierAWS Free Tier FAQsAvoiding unexpected chargesHow do I close my AWS account?Follow" | https://repost.aws/knowledge-center/stop-future-free-tier-charges |
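To see which services are generating month-to-date charges programmatically, a Cost Explorer sketch like the following can help; it assumes Cost Explorer is enabled on the account, and grouping by service mirrors the Bills page breakdown:

```python
import boto3
from datetime import date

# Month-to-date cost per service. The Cost Explorer API is served from the
# us-east-1 endpoint. End is exclusive, so run this on any day after the 1st.
ce = boto3.client("ce", region_name="us-east-1")

start = date.today().replace(day=1).isoformat()
end = date.today().isoformat()

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f'{group["Keys"][0]}: ${amount:.2f}')
```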
How can I troubleshoot issues with Accelerated VPN? | How can I troubleshoot issues with AWS Accelerated VPN? | "How can I troubleshoot issues with AWS Accelerated VPN?ResolutionConfirm that your firewall configuration meets all requirementsFor more information, see Configuring a firewall between the internet and your customer gateway device.Confirm that NAT-traversal is activated on the customer gateway deviceNAT-traversal (NAT-T) is required for an Accelerated VPN connection. NAT-T is activated by default. If you downloaded a configuration file from the Amazon Virtual Private Cloud (Amazon VPC) console, check the NAT-T setting and start it if necessary.Note: If NAT-T is deactivated on the customer gateway device the tunnel will still come up. However, in this scenario the issue could still persist for the data traffic.For more information, see Your customer gateway device.Confirm that lifetime parameters matchThe IKE tunnel lifetime parameter must match what's set on your AWS Virtual Private Network (AWS VPN). By default, these settings are:28,800 seconds (8 hours) for phase 13,600 seconds (1 hour) for phase 2If necessary, change the AWS VPN parameters to match your IKE tunnel parameters.Confirm the connection's compatibility with Global Accelerator (if applicable)If your Site-to-Site VPN connection uses certificate-based authentication, it might not be compatible with AWS Global Accelerator. There's limited support for packet fragmentation in Global Accelerator. If you require an Accelerated VPN connection that uses certificate-based authentication, your customer gateway device must support IKE fragmentation. Otherwise, don't activate your VPN for acceleration. For more information, see How AWS Global Accelerator works.Confirm that acceleration was configured in the proper sequenceAcceleration can't be activated or deactivated for an existing Site-to-Site VPN connection. Instead, create a new Site-to-Site VPN connection with acceleration activated or deactivated as needed. Then, configure your customer gateway device to use the new Site-to-Site VPN connection. Finally, delete the previous Site-to-Site VPN connection. For more information on Accelerated VPN restrictions, see Rules and restrictions.Follow" | https://repost.aws/knowledge-center/vpn-troubleshoot-accelerated-vpn |
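To confirm whether an existing Site-to-Site VPN connection was created with acceleration, a boto3 sketch along these lines can be used; the VPN connection ID is a placeholder:

```python
import boto3

# Check whether a Site-to-Site VPN connection has acceleration enabled.
# Acceleration can't be toggled in place; an accelerated connection must be
# created as a replacement.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_vpn_connections(
    VpnConnectionIds=["vpn-1234567890abcdef0"]
)

for vpn in response["VpnConnections"]:
    options = vpn.get("Options", {})
    print(vpn["VpnConnectionId"],
          "state:", vpn["State"],
          "accelerated:", options.get("EnableAcceleration"))
```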
How do I resolve the "Server.InternalError: Internal error on launch" error for a failed stack in AWS CloudFormation? | "I tried to create an Amazon Elastic Compute Cloud (Amazon EC2) instance with an AWS CloudFormation stack, but my stack creation failed. Then, I received the "Server.InternalError: Internal error on launch" error message. How can I resolve this error?" | "I tried to create an Amazon Elastic Compute Cloud (Amazon EC2) instance with an AWS CloudFormation stack, but my stack creation failed. Then, I received the "Server.InternalError: Internal error on launch" error message. How can I resolve this error?Short descriptionYou receive this error if duplicate or invalid device mappings are specified in your AWS CloudFormation template. You can't have two block devices map to the same location (for example, /dev/sdb).Note: If you're using a Nitro-based instance type (for example, c5, m5, or t3), then you won't receive this error, because /dev/sdb and /dev/xvdb are mapped to two different NVMe devices in the operating system.ResolutionIn the BlockDeviceMappings property of your AWS CloudFormation template, confirm that your block devices aren't mapping to the same location by checking the value of DeviceName for each block device.In the following JSON and YAML example templates, the block devices specified are /dev/xvdb and /dev/xvdc. The root volume is automatically provisioned for the instance, and the block devices are associated as secondary volumes.JSON: "Ec2Instance" : { "Type" : "AWS::EC2::Instance", "Properties" : { "...OtherProperties..." "BlockDeviceMappings" : [ { "DeviceName" : "/dev/xvdb", "Ebs" : { "VolumeSize" : "100" } },{ "DeviceName" : "/dev/xvdc", "Ebs" : { "VolumeSize" : "100" } } ] } }YAML:EC2Instance: Type: AWS::EC2::Instance Properties: ...OtherProperties... BlockDeviceMappings: - DeviceName: /dev/xvdb Ebs: VolumeSize: 100 - DeviceName: /dev/xvdc Ebs: VolumeSize: 100Related informationBlock device mappingEC2 block device mapping examplesDevice names on Linux instancesFollow" | https://repost.aws/knowledge-center/cloudformation-device-mapping-error |
How do I troubleshoot distribution errors for encrypted AMIs in Image Builder? | I receive errors when I try to distribute encrypted AMIs to another account in EC2 Image Builder. How do I troubleshoot this? | "I receive errors when I try to distribute encrypted AMIs to another account in EC2 Image Builder. How do I troubleshoot this?Short descriptionThe following scenarios can cause distribution errors in EC2 Image Builder when you distribute an encrypted Amazon Machine Image (AMI) to another account:The AMI that's distributed is encrypted using the default AWS managed key for Amazon Elastic Block Store (Amazon EBS).The AWS Key Management Service (AWS KMS) or AWS Identity and Access Management (IAM) entity doesn't have the required permissions.ResolutionThe AMI that's distributed is encrypted using the default AWS managed key for Amazon EBSYou receive the following error:Distribution failed with JobId 'XXXXXXXXXXXXXX', status = 'Failed' for ARN 'arn:aws:imagebuilder:us-east-1:xxxxxxxxxxxx:image/test-recipe/0.0.1/1'. 'Not all distribution jobs are completed. 1) EC2 Client Error: 'Snapshots encrypted with the AWS Managed CMK can’t be shared. Specify another snapshot.' when distributing the image from the source account (ID: xxxxxxxxxxxx) to the destination account (ID: xxxxxxxxxxxx) in Region us-east-1.'You can't share AMIs that are encrypted with the default AWS KMS key. For more information see, Share an AMI with specific AWS accounts.Scenarios to check include:The AWS managed KMS key is specified in the recipe's storage configuration.The AWS managed KMS key is specified in the distribution configuration along with one or more target accounts.The parent AMI is encrypted using the AWS managed KMS key.The parent AMI has multiple snapshots, and at least one is encrypted with the AWS managed KMS key.Encryption by default is activated in your AWS Region, and it's using the AWS managed KMS key.To resolve this issue, create a new version of the image recipe, and specify a customer managed KMS key for encryption in the recipe's storage configuration. For KMS keys in the distribution configuration, specify a customer managed KMS key for encryption when you distribute AMIs to other accounts.The AWS KMS or IAM entity doesn't have the required permissionsYou can distribute AMIs in Image Builder using either the launchPermissions or targetAccountIds configurations.launchPermissionsWhen you distribute an AMI using launchPermissions, Image Builder uses the IAM role AWSServiceRoleForImageBuilder in the source account. By default, AWSServiceRoleForImageBuilder has the required AWS KMS permission for the resources in the source account.The KMS key policy has a statement that allows the "kms:*" action for the root user. If this statement isn't in the key policy, then the service-linked role can't access the key in the source account. 
If the "kms:*" action isn't allowed for the root user, then modify the policy to allow the service-linked role to use the key.For example:{ "Sid": "Enable IAM User Permissions", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::source_account_id:role/aws-service-role/imagebuilder.amazonaws.com/AWSServiceRoleForImageBuilder" }, "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKeyWithoutPlaintext", "kms:DescribeKey", "kms:CreateGrant", "kms:ReEncryptFrom", "kms:ReEncryptTo" ], "Resource": "*"}Note: Replace source_account_id with your source account's ID.targetAccountIdsIf the destination account doesn't have the IAM role EC2ImageBuilderDistributionCrossAccountRole, or the source account isn't listed in the trust policy, then you receive the following error:Distribution failed with JobId 'xxxxxxxxxxxxxx', status = 'Failed' for ARN 'arn:aws:imagebuilder:us-east-1:XXXXXXXXXX:image/testdistribution/2.0.0/3'. 'Not all distribution jobs are completed. 1) STS Client Error: 'User: arn:aws:sts::XXXXXXXXXXXX:assumed-role/AWSServiceRoleForImageBuilder/Ec2ImageBuilderIntegrationService is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxxx:role/EC2ImageBuilderDistributionCrossAccountRole'. Please make sure your 'EC2ImageBuilderDistributionCrossAccountRole' is setup with correct permission policies. If you are copying AMI to opt-in regions, please make sure the region is enabled in the account when distributing the image from the source account (ID: XXXXXXXXXXXX) to the destination account (ID: XXXXXXXXXXXX) in Region us-east-1.'STS Client Error User is not authorized to perform: sts:AssumeRole on resource.To resolve this issue, create the role EC2ImageBuilderDistributionCrossAccountRole. Then, attach the Ec2ImageBuilderCrossAccountDistributionAccess policy to allow cross-account distribution. Then, list AWSServiceRoleForImageBuilder in the EC2ImageBuilderDistributionCrossAccountRole trust policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com", "AWS": "arn:aws:iam::XXXXXXXXXX:root" }, "Action": "sts:AssumeRole" } ]}You might receive the following error due to issues with cross-account AWS KMS permissions:Distribution failed with JobId 'xxxxxxxxxxxxxx', status = 'Failed' for ARN 'arn:aws:imagebuilder:ap-southeast-2:11111111111:image/test/1.0.0/1'. 'Not all distribution jobs are completed. 1) AMI Copy Reported Failure For 'ami-0047623fbcxxxxx' when distributing the image from the source account (ID: 11111111111) to the destination account (ID: 222222222222) in Region ap-southeast-2.'When you distribute an AMI using targetAccountIds, Image Builder uses the role AWSServiceRoleForImageBuilder in the source account. In the destination account, it uses the role EC2ImageBuilderDistributionCrossAccountRole. Make sure that you give permission to EC2ImageBuilderDistributionCrossAccountRole to use the AWS KMS keys in the distribution configuration and recipe's storage configuration.For example:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKeyWithoutPlaintext", "kms:DescribeKey", "kms:CreateGrant", "kms:ReEncryptFrom", "kms:ReEncryptTo" ], "Resource": "*" } ]}Note: You can also restrict the Resource section by specifying the ARN of the AWS KMS key.If the customer managed KMS keys belong to the destination account, then you must share the AWS KMS key with the source account. 
If the customer managed KMS keys belong to the source account, then you must share the AWS KMS keys with the destination account.Complete the following steps to share the AWS KMS keys:1. Sign in to the account that owns the KMS key.2. Open the AWS KMS console in the same AWS Region.3. In the left navigation pane, choose Customer managed keys.4. Select the KMS key ID.5. Choose the Key Policy tab.6. In the Other AWS accounts section, choose Add other AWS accounts.7. Specify the ID of the account that you want to share the KMS key with.8. Choose Save Changes.Related informationSet up cross-account AMI distribution with Image BuilderHow do I share my AWS KMS keys across multiple AWS accounts?Follow" | https://repost.aws/knowledge-center/image-builder-distr-error-encrypted-ami |
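The destination-account role described above can also be created with the SDK. A hedged boto3 sketch: the source account ID is a placeholder, the trust policy mirrors the example in the article, and the managed policy ARN is assumed to follow the standard arn:aws:iam::aws:policy/ prefix for the policy named above.

```python
import json
import boto3

# Create EC2ImageBuilderDistributionCrossAccountRole in the destination
# account and trust the Image Builder service role in the source account.
# 111122223333 is a placeholder for the source account ID.
iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com",
                "AWS": "arn:aws:iam::111122223333:root",
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="EC2ImageBuilderDistributionCrossAccountRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.attach_role_policy(
    RoleName="EC2ImageBuilderDistributionCrossAccountRole",
    PolicyArn="arn:aws:iam::aws:policy/Ec2ImageBuilderCrossAccountDistributionAccess",
)
```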
How do I prepare for end of support for Kubernetes 1.15 on Amazon EKS? | I want to prepare for the end of support for Kubernetes 1.15 in Amazon Elastic Kubernetes Service (Amazon EKS). | "I want to prepare for the end of support for Kubernetes 1.15 in Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionKubernetes 1.15 on Amazon EKS reaches end of support on May 3, 2021.Note: After Open Source community support for Kubernetes 1.15 ended on May 6, 2020, AWS provided an additional one year of support for Kubernetes 1.15 on Amazon EKS. This means that no bug fixes or security vulnerability patches will be back-ported to Kubernetes 1.15 on Amazon EKS. For more information, see Kubernetes Patch Releases on the Kubernetes GitHub site.ResolutionTo prepare for end of support, complete the following steps:Understand the impact of end of support.Complete the Kubernetes 1.16 update prerequisites.Update your Amazon EKS clusters by May 3, 2021.For more information on preparing for deprecation, see Preparing for Kubernetes API deprecations when going from 1.15 to 1.16.Note: If you're having issues with your Amazon EKS pods or apps, see Kubernetes 1.16 update prerequisites. The most common issue is kube-proxy failing to start because the --resource-container flag is still in the kube-proxy DaemonSet. To resolve this issue, remove the --resource-container flag.Follow" | https://repost.aws/knowledge-center/eks-end-of-support-kubernetes-115 |
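To audit which clusters in a Region are still on 1.15, a short boto3 sketch can be used; it assumes default credentials, and pagination is omitted for brevity:

```python
import boto3

# Flag EKS clusters in a Region that are still running Kubernetes 1.15.
eks = boto3.client("eks", region_name="us-east-1")

for name in eks.list_clusters()["clusters"]:
    version = eks.describe_cluster(name=name)["cluster"]["version"]
    print(name, version)
    if version == "1.15":
        print(f"  -> {name} should be updated before end of support")
```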
How do I create a subdomain for my domain that's hosted in Route 53? | "I want to create a subdomain for my domain that's hosted in Amazon Route 53, but I don't know how." | "I want to create a subdomain for my domain that's hosted in Amazon Route 53, but I don't know how.Short descriptionUsing a separate hosted zone to route internet traffic for a subdomain is known as delegating responsibility for a subdomain to a hosted zone. It might also be referred to as subdomain delegation through name servers.PrerequisitesBefore you begin, be sure to implement the following requirements:A valid registered domain (regardless of the registrar)An authoritative hosted zone for the registered domain in Route 53ResolutionCreate a hosted zone for the subdomain in Route 53Create a hosted zone with the same name as the subdomain that you want to route traffic for, such as acme.example.com. To do this, complete the following steps:Open the Route 53 console.In the navigation pane, choose Hosted zones.Choose Create hosted zone.For the domain name, enter the name of the subdomain (such as acme.example.com).Note: For more information, see DNS domain name format.For Type, accept the default value of Public hosted zone.Choose Create hosted zone.Find the name servers that Route 53 assigned to the new hosted zoneWhen you create a hosted zone, Route 53 automatically assigns four name servers to the zone. To start using the hosted zone for the subdomain, create a new name server (NS) record in the hosted zone for the domain (example.com). The name of the NS record must be the same as the name of the subdomain (acme.example.com).After creating the hosted zone for the subdomain, expand the Hosted zone details dropdown list for the subdomain in the hosted zone (acme.example.com). In the right pane, copy the names of the four servers listed as the Name servers under Hosted zone details.Add NS records to route traffic to your subdomainComplete the following steps to route traffic to your subdomain. These steps also apply to cross-account scenarios where the hosted zone for the domain and the subdomain are in different accounts.Select the hosted zone for the domain (example.com). Be sure not to select the name of the subdomain (acme.example.com).In the hosted zone for the domain, choose Create record.For Name, enter the name of the subdomain.For Value, enter the names of the name servers.For Record type, choose NS - Name servers for a hosted zone.For TTL (Seconds), enter a common value for an NS record, such as 172,800 seconds.For Route Policy, choose Simple routing.Choose Create Records.Note: To remove the subdomain delegation (acme.example.com), first delete the NS record in the parent hosted zone (example.com). Then, delete the subdomain hosted zone. These steps protect your subdomain from an unauthorized takeover.Create records in the subdomain hosted zoneCreate your records in the newly created subdomain hosted zone. Test the record resolution using the dig/nslookup command.Example dig/nslookup command:dig acme.example.comFollow" | https://repost.aws/knowledge-center/create-subdomain-route-53 |
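The same delegation can be done with the SDK. A boto3 sketch, where PARENT_ZONE_ID is a placeholder for the example.com hosted zone ID:

```python
import time
import boto3

# Delegate acme.example.com: create the subdomain zone, then add an NS record
# for it in the parent zone that points at the new zone's name servers.
route53 = boto3.client("route53")
PARENT_ZONE_ID = "Z0123456789EXAMPLE"  # placeholder

subzone = route53.create_hosted_zone(
    Name="acme.example.com",
    CallerReference=str(time.time()),
)
name_servers = subzone["DelegationSet"]["NameServers"]

route53.change_resource_record_sets(
    HostedZoneId=PARENT_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "acme.example.com",
                    "Type": "NS",
                    "TTL": 172800,
                    "ResourceRecords": [{"Value": ns} for ns in name_servers],
                },
            }
        ]
    },
)
```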
How do I set up UDP load balancing with my Network Load Balancer? | I want to register a User Datagram Protocol (UDP) service to my Network Load Balancer. How can I do this? | "I want to register a User Datagram Protocol (UDP) service to my Network Load Balancer. How can I do this?ResolutionCreate a new target group using UDPOpen the Amazon Elastic Compute Cloud (Amazon EC2) console.From the navigation pane, choose Target Groups.Choose Create target group.For Target type, choose Instances or IP addresses.For Target group name, enter a name for the target group.For Protocol, choose UDP or TCP_UDP.For Port, choose the port that the service is listening on.For VPC, choose your VPC.For Health check protocol, choose TCP, HTTP, or HTTPS.Expand Advanced health check settings, and then choose Override.For Healthy threshold, enter the number of health checks.Choose Next, and then choose Create target group.For more information, see Create a target group for your Network Load Balancer.Register targets to the target groupOpen the Amazon EC2 console.From the navigation pane, in Load Balancing, choose Target Groups.In Target groups, choose your target group, choose Actions, and then choose Register targets.For Available instances, choose the instance ID or IP address that you created previously, and then choose Include as pending below.Choose Register pending targets.For more information, see Register targets with your target group.Modify the health check for the target groupOpen the Amazon EC2 console.From the navigation pane, in Load Balancing, choose Target Groups.In Target groups, choose your target group.Choose the Health checks tab, and then choose Edit.In Health check protocol, choose your protocol (TCP, HTTP, or HTTPS).Expand Advanced health check settings.In Port, choose either Traffic port or Override, and then choose Save changes.For more information, see Modify the health check settings of a target group.Add a listener to your Network Load BalancerOpen the Amazon EC2 console.From the navigation pane, in Load Balancing, choose Load Balancers.Choose your Network Load Balancer, choose the Listeners tab, and then choose Add listener.Choose the Protocol dropdown list, and then choose your protocol (UDP or TCP_UDP).Choose the Select a target group dropdown list, choose your target group, and then choose Add.For more information, see Create a listener for your Network Load Balancer.Related informationUDP load balancing for Network Load BalancerFollow" | https://repost.aws/knowledge-center/elb-nlb-udp-target-ec2 |
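The console steps above map to three elbv2 API calls. A boto3 sketch; the VPC ID, instance ID, port, and load balancer ARN are placeholders:

```python
import boto3

# Create a UDP target group, register an instance, and add a UDP listener.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

target_group = elbv2.create_target_group(
    Name="udp-service-tg",
    Protocol="UDP",
    Port=514,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckProtocol="TCP",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=target_group["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 514}],
)

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/0123456789abcdef",
    Protocol="UDP",
    Port=514,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroupArn"]}],
)
```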
Why am I receiving an error when I try to create an Amazon EC2 Auto Scaling lifecycle hook? | I'm receiving a validation error when I try to create an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling lifecycle hook. The error reads "Unable to publish test message to notification target" or "Please check your target and role configuration and try to put lifecycle hook again." How do I troubleshoot these errors? | "I'm receiving a validation error when I try to create an Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling lifecycle hook. The error reads "Unable to publish test message to notification target" or "Please check your target and role configuration and try to put lifecycle hook again." How do I troubleshoot these errors?Short descriptionTo publish a message to the Amazon Simple Queue Service (Amazon SQS), the lifecycle hook's AWS Identity and Access Management (IAM) role must:Be different from the IAM role assigned to the instance.Be listed as a key user on the AWS Key Management Service (AWS KMS) key policy.Have a trust policy attached for the Auto Scaling service.Include specific managed policy actions.Be associated with the Amazon EC2 Auto Scaling group.Have access to the encryption key used by Amazon SQS.Resolution1. Confirm that you're using an IAM role for the lifecycle hook that's different from the IAM role you've assigned to the instance.Note: You can create an IAM role, or use the following AWS managed role that has all of the necessary permissions:arn:aws:iam::aws:policy/service-role/AutoScalingNotificationAccessRole2. Verify that the role is included as a key user on the KMS key policy. To do this:Open the AWS KMS console.Select the KMS key.Verify that the role is listed under Key users on the Key policy tab. If the role isn't listed, search for it, and then select Add.3. Be sure that the IAM role for the lifecycle hook has a trust policy attached for the Amazon EC2 Auto Scaling service.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "autoscaling.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}4. Verify that the managed policy for the lifecycle hook's IAM role includes the following actions:For SQS messagessqs:SendMessagesqs:GetQueueUrlFor SNS notificationssns:Publish5. In the AWS Command Line Interface (AWS CLI), run the aws autoscaling put-lifecycle-hook command.6. Run the command below to confirm that the lifecycle hook is associated with the Auto Scaling group.aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name "ExampleSQSQueueName"Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationAmazon EC2 Auto Scaling lifecycle hooksFollow" | https://repost.aws/knowledge-center/ec2-auto-scaling-lifecycle-hook-error |
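A boto3 sketch of the put-lifecycle-hook call with an SQS notification target; the Auto Scaling group name, queue ARN, and role ARN are placeholders, and the role must be separate from the instance profile and trusted by autoscaling.amazonaws.com as described above:

```python
import boto3

# Create a termination lifecycle hook that publishes to an SQS queue, then
# confirm the hook is attached to the Auto Scaling group.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="terminate-hook",
    AutoScalingGroupName="my-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    NotificationTargetARN="arn:aws:sqs:us-east-1:111122223333:my-queue",
    RoleARN="arn:aws:iam::111122223333:role/my-lifecycle-hook-role",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

print(autoscaling.describe_lifecycle_hooks(AutoScalingGroupName="my-asg"))
```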
Why did the AWS Config auto remediation action for the SSM document AWS-ConfigureS3BucketLogging fail with the error "(MalformedXML)" when calling the PutBucketLogging API? | I want to troubleshoot the errors that I get when I set up auto remediation for non-compliant Amazon Simple Storage Service (Amazon S3) resources. | "I want to troubleshoot the errors that I get when I set up auto remediation for non-compliant Amazon Simple Storage Service (Amazon S3) resources.Short descriptionUsing auto remediation to address non-compliant Amazon S3 buckets can generate errors. The following types of errors may occur when you use the AWS SSM Automation document AWS-ConfigureS3BucketLogging with the AWS Config managed rule s3-bucket-logging-enabled:AWS Config console error “Action execution failed (details)."AWS Systems Manager console error "Step fails when it is Execute/Cancelling action. An error occurred (MalformedXML) when calling the PutBucketLogging operation: The XML you provided was not well-formed or did not validate against our published schema. Please refer to the Automation Service Troubleshooting Guide for more diagnosis details."AWS CloudTrail event PutBucketLogging error "The XML you provided was not well-formed or did not validate against our published schema."Remediation fails when an Amazon S3 bucket, configured as a target bucket to receive server access logging, does not allow the Log Delivery group write access.ResolutionGrant the Amazon S3 Log Delivery group write access in the target bucket's access control list (ACL). See How do I set ACL bucket permissions? for more information.Related informationHow do I enable server access logging for an S3 bucket?How can I be notified when an AWS resource is non-compliant using AWS Config?How can I troubleshoot AWS Config console error messages?Follow" | https://repost.aws/knowledge-center/config-malformedxml-error |
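One way to add the Log Delivery group grants without overwriting existing grants is to read the bucket ACL, append the grants, and write it back. A boto3 sketch; the bucket name is a placeholder, and this only applies when the target bucket still uses ACLs:

```python
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-logging-bucket"  # placeholder target bucket

# Read the current ACL, append Log Delivery group grants, and write it back.
acl = s3.get_bucket_acl(Bucket=bucket)
log_delivery = {"Type": "Group", "URI": "http://acs.amazonaws.com/groups/s3/LogDelivery"}

grants = acl["Grants"] + [
    {"Grantee": log_delivery, "Permission": "WRITE"},
    {"Grantee": log_delivery, "Permission": "READ_ACP"},
]

s3.put_bucket_acl(
    Bucket=bucket,
    AccessControlPolicy={"Grants": grants, "Owner": acl["Owner"]},
)
```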
How can I create queues on my Amazon EMR YARN CapacityScheduler? | How do I create queues on my Amazon EMR Hadoop YARN CapacityScheduler? | "How do I create queues on my Amazon EMR Hadoop YARN CapacityScheduler?Short descriptionEMR clusters have a single queue by default. You can add additional queues to your cluster and allocate available cluster resource capacity to your new queues.ResolutionCreate a reconfiguration commandThe following example reconfiguration does the following:Creates two additional queues, alpha and beta.Allocates 30% of the total resource capacity of your cluster to each of the new queues. When adding queues and allocating cluster capacity, the sum of capacities for all queues must be equal to 100. So, in the following example reconfiguration, the capacity of the default queue decreases to 40%.Provides full access (designated by the "*" label) to both queues. This allows both queues to access labeled core nodes.To submit to particular queue specify the queue in the yarn.scheduler.capacity.queue-mappings parameter. This parameter maps users to a queue with the same name as the user. The parent queue name must be the same as the primary group of the user, such as u:user:primary_group.user. In the following example, the parameter is set to u:hadoop:alpha. This maps to the newly created queue alpha.Note: The capacity for each queue’s access to the core label matches the capacity of the queue itself. So, the core partition splits between queues at the same ratio as the rest of the cluster.- Classification: capacity-scheduler Properties: yarn.scheduler.capacity.root.queues: 'default,alpha,beta' yarn.scheduler.capacity.root.default.capacity: '40' yarn.scheduler.capacity.root.default.accessible-node-labels.CORE.capacity: '40' yarn.scheduler.capacity.root.alpha.capacity: '30' yarn.scheduler.capacity.root.alpha.accessible-node-labels: '*' yarn.scheduler.capacity.root.alpha.accessible-node-labels.CORE.capacity: '30' yarn.scheduler.capacity.root.beta.capacity: '30' yarn.scheduler.capacity.root.beta.accessible-node-labels: '*' yarn.scheduler.capacity.root.beta.accessible-node-labels.CORE.capacity: '30'- classification: yarn-site properties: yarn.scheduler.capacity.queue-mappings: 'u:hadoop:alpha' configurations: []Note: If you want to override the default queue mapping settings, set parameter yarn.scheduler.capacity.queue-mappings-override.enable to true. By default, this parameter is set to false. When set to true, users can submit jobs to queues other than the designated queue. 
For more information, see Enable override of default queue mappings on the Hortonworks Docs website.Verify your modificationsAccess the YARN ResourceManager Web UI to verify that your modifications have taken place.The following is an example of a Spark job submitted on Amazon EMR 6.4.0 that has the preceding example reconfiguration:spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --conf spark.driver.memoryOverhead=512 --conf spark.executor.memoryOverhead=512 /usr/lib/spark/examples/jars/spark-examples.jar 100SLF4J: Class path contains multiple SLF4J bindings.SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class].........22/11/29 07:58:07 INFO Client: Application report for application_1669707794547_0001 (state: ACCEPTED)22/11/29 07:58:08 INFO Client: Application report for application_1669707794547_0001 (state: RUNNING)This application application_1669707794547_0001 is submitted to queue "alpha"Related informationHadoop: Capacity Scheduler on the Apache Hadoop websiteConfigure Hadoop YARN CapacityScheduler on Amazon EMR on Amazon EC2 for multi-tenant heterogeneous workloadsFollow" | https://repost.aws/knowledge-center/emr-create-queue-capacityscheduler |
How can I configure an AWS Glue ETL job to output larger files? | I want to configure an AWS Glue ETL job to output a small number of large files instead of a large number of small files. | "I want to configure an AWS Glue ETL job to output a small number of large files instead of a large number of small files.ResolutionUse any of the following methods to reduce the number of output files for an AWS Glue ETL job.Increase the value of the groupSize parameterGrouping is automatically enabled when you use dynamic frames and when the Amazon Simple Storage Service (Amazon S3) dataset has more than 50,000 files. Increase this value to create fewer, larger output files. For more information, see Reading input files in larger groups.In the following example, groupSize is set to 10485760 bytes (10 MB):dyf = glueContext.create_dynamic_frame_from_options("s3", {'paths': ["s3://awsexamplebucket/"], 'groupFiles': 'inPartition', 'groupSize': '10485760'}, format="json")Note: The groupSize and groupFiles parameters are supported only in the following data formats: csv, ion, grokLog, json, and xml. This option is not supported for avro, parquet, and orc.Use coalesce(N) or repartition(N)1. (Optional) Calculate your target number of partitions (N) based on the input data set size. Use the following formula:targetNumPartitions = 1 Gb * 1000 Mb/10 Mb = 100Note: In this example, the input size is 1 GB, and the target output is 10 MB. This calculation allows you to control the size of your output file .2. Check the current number of partitions using the following code:currentNumPartitions = dynamic_frame.getNumPartitions()Note: When you repartition, targetNumPartitions should be smaller than currentNumPartitions. Use an Apache Spark coalesce() operation to reduce the number of Spark output partitions before writing to Amazon S3. This reduces the number of output files. For example:dynamic_frame_with_less_partitions=dynamic_frame.coalesce(targetNumPartitions)Keep in mind:coalesce() performs Spark data shuffles, which can significantly increase the job run time.If you specify a small number of partitions, then the job might fail. For example, if you run coalesce(1), Spark tries to put all data into a single partition. This can lead to disk space issues.You can also use repartition() to decrease the number of partitions. However, repartition() reshuffles all data. The coalesce() operation uses existing partitions to minimize the number of data shuffles. For more information on using repartition(), see Spark Repartition on the eduCBA website.Use maxRecordsPerFileUse the Spark write() method to control the maximum record count per file. The following example sets the maximum record count to 20:df.write.option("compression", "gzip").option("maxRecordsPerFile",20).json(s3_path)Note: The maxRecordsPerFile option acts only as an upper limit for the record count per file. The record count of each file will be less than or equal to the number specified. If the value is zero or negative, then there is no limit.Related informationFix the processing of multiple files using groupingFollow" | https://repost.aws/knowledge-center/glue-job-output-large-files |
My Amazon S3 bucket has data files created using the UNLOAD command from an Amazon Redshift cluster in another account. Why can't I access those files? | "My Amazon Simple Storage Service (Amazon S3) bucket has data files created using the UNLOAD command from an Amazon Redshift cluster in another AWS account. However, I'm getting 403 Access Denied errors when I try to access those files from my own account. How can I fix this?" | "My Amazon Simple Storage Service (Amazon S3) bucket has data files created using the UNLOAD command from an Amazon Redshift cluster in another AWS account. However, I'm getting 403 Access Denied errors when I try to access those files from my own account. How can I fix this?Short descriptionBy default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. Therefore, when Amazon Redshift data files are put into your bucket by another account, you don't have default permission for those files.To get access to the data files, an AWS Identity and Access Management (IAM) role with cross-account permissions must run the UNLOAD command again. Follow these steps to set up the Amazon Redshift cluster with cross-account permissions to the bucket:1. From the account of the S3 bucket, create an IAM role with permissions to the bucket. This is the bucket role.2. From the account of the Amazon Redshift cluster, create another IAM role with permissions to assume the bucket role. This is the cluster role.3. Update the bucket role to grant bucket access, and then create a trust relationship with the cluster role.4. From the Amazon Redshift cluster, run the UNLOAD command using the cluster role and bucket role.Important: This resolution doesn't apply to Amazon Redshift clusters or S3 buckets that use server-side encryption with AWS Key Management Service (AWS KMS).ResolutionCreate a bucket roleFrom the account of the S3 bucket, create an IAM role with permissions to the bucket:1. From the account of the S3 bucket, open the IAM console.2. Create an IAM role. As you create the role, select the following:For Select type of trusted entity, choose AWS service.For Choose the service that will use this role, choose Redshift.For Select your use case, choose Redshift - Customizable.3. After you create the IAM role, attach a policy that grants permission to the bucket. You can use a policy that's similar to the following:{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1234537676482", "Action": [ "s3:ListBucket", "s3:PutObject" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::awsexamplebucket/*", "arn:aws:s3:::awsexamplebucket" ] } ]}4. Get the bucket role's Amazon Resource Name (ARN). You need the role's ARN for a later step.Create a cluster roleFrom the account of the Amazon Redshift cluster, create another IAM role with permissions to assume the bucket role:1. From the account of the Amazon Redshift cluster, open the IAM console.2. Create an IAM role. As you create the role, select the following:For Select type of trusted entity, choose AWS service.For Choose the service that will use this role, choose Redshift.For Select your use case, choose Redshift - Customizable.3. 
After you create the IAM role, attach the following policy to the role:Important: Replace arn:aws:iam::123456789012:role/Bucket_Role with the ARN of the Bucket Role that you created.{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1234537501110", "Action": [ "sts:AssumeRole" ], "Effect": "Allow", "Resource": "arn:aws:iam::123456789012:role/Bucket_Role" } ]}4. Get the cluster role's ARN. You need the role's ARN for a later step.Update the bucket role to create a trust relationship with the cluster role1. From the account of the S3 bucket, open the IAM console.2. In the navigation pane, choose Roles.3. From the list of roles, open the Bucket Role that you created.4. Choose the Trust relationships tab.5. Choose Edit trust relationship.6. For the Policy Document, replace the existing policy with the following:Important: Replace arn:aws:iam::012345678901:role/Cluster_Role with the ARN of the Cluster Role that you created.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::012345678901:role/Cluster_Role" }, "Action": "sts:AssumeRole" } ]}7. Choose Update Trust Policy.From the Amazon Redshift cluster, run the unload operation using the Cluster Role and Bucket Role1. Connect to the Amazon Redshift cluster.2. Run the UNLOAD command with both the IAM roles that you created, similar to the following:Important: Replace arn:aws:iam::012345678901:role/Cluster_Role with the ARN of your Cluster Role. Then, replace arn:aws:iam::123456789012:role/Bucket_Role with the ARN of your Bucket Role.unload ('select * from TABLE_NAME')to 's3://awsexamplebucket' iam_role 'arn:aws:iam::012345678901:role/Cluster_Role,arn:aws:iam::123456789012:role/Bucket_Role';After you run the UNLOAD command, the data files are owned by the same account as the bucket that they're stored in.Related informationUNLOAD ExamplesFollow" | https://repost.aws/knowledge-center/s3-access-denied-redshift-unload |
How do I fix a compute environment that's not valid in AWS Batch? | My compute environment in AWS Batch is in the INVALID state. How do I troubleshoot the error? | "My compute environment in AWS Batch is in the INVALID state. How do I troubleshoot the error?Short descriptionYou receive the error: "CLIENT_ERROR - Your compute environment has been INVALIDATED and scaled down because none of the instances joined the underlying ECS Cluster. Common issues preventing instances joining are the following: VPC/Subnet configuration preventing communication to ECS, incorrect Instance Profile policy preventing authorization to ECS, or customized AMI or LaunchTemplate configurations affecting ECS agent."Issues preventing your instances from joining an Amazon Elastic Container Service (Amazon ECS) cluster include:Amazon Virtual Private Cloud (Amazon VPC) subnet configuration settings that prevent successful communication to Amazon ECS.An incorrect setting within the instance profile policy that prevents authorization to Amazon ECS.Customized Amazon Machine Images (AMIs) or launch template configurations that affect the ECS agent.The CLIENT_ERROR message indicates that the Amazon Elastic Compute Cloud (Amazon EC2) instances created by the AWS Batch compute environment have failed to join the ECS cluster. When the CLIENT_ERROR message occurs, AWS Batch automatically terminates the EC2 instance and then moves the compute environment into an INVALID state.If your compute environment is in the INVALID state, choose one of the following resolutions based on the error message that you receive:CLIENT_ERROR - Not authorized to perform sts:AssumeRoleComplete the steps in the Fix a service role that's not valid section.CLIENT_ERROR - Parameter: SpotFleetRequestConfig.IamFleetRole is invalidComplete the steps in the Fix a Spot Fleet role that's not valid section.CLIENT_ERROR - The specified launch template, with template ID [xxx], does not existComplete the steps in the Deactivate and delete your compute environment section.CLIENT_ERROR - Access deniedCreate a service role with the correct permissions or choose an existing service role with the correct permissions.Internal ErrorComplete the steps in the Deactivate and then activate your compute environment section.INVALID CLIENT_ERROR - nullComplete the steps in the Deactivate and then activate your compute environment section.CLIENT_ERROR - The request uses the same client token as previous, but non-identical requestComplete the steps in the Deactivate and then activate your compute environment section.CLIENT_ERROR - You are not authorized to use launch templateCheck the following:Review your Service Role to see if permissions related to Amazon Elastic Compute Cloud and Auto Scaling groups are granted. Then, complete the steps in the Fix a service role that's not valid section.Review if your account is part of AWS Organizations and if any service control policies are blocking access to your Amazon EC2 permissions. Then, update any service control policies, if needed.ResolutionFix a service role that's not valid1. Open the AWS Batch console.2. In the navigation pane, choose Compute environments.3. Choose the compute environment that's in the INVALID state.Note: If your compute environment is in the DISABLED state, choose Enable to activate your compute environment.4. Choose Edit.5. 
For Service role, choose a service role with the permissions needed for AWS Batch to make calls to other AWS services.Important: Your service role manages the resources that you use with the service. Before you can use the service, you must have an AWS Identity and Access Management (IAM) policy and role that provides the necessary permissions to AWS Batch. You must create a service role with permissions if you don't have one.6. Choose Save.Fix a Spot Fleet role that's not validFor managed compute environments that use Amazon EC2 Spot Fleet Instances, you must create a role that grants the Spot Fleet the following permissions:Bidding on instancesLaunching instancesTagging instancesTerminating instancesIf you don't have a Spot Fleet role, complete the following steps to create one for your compute environment:1. Open the IAM console.2. In the navigation pane, choose Roles.3. Choose Create role.4. Choose AWS service. Then, choose EC2 as the service that will use the role that you're creating.5. In the Select your use case section, choose EC2 Spot Fleet Role.Important: Don't choose the similarly named EC2 - Spot Fleet.6. Choose Next: Permissions.7. Choose Next: Tags. Then, choose Next: Review.8. For Role name, enter AmazonEC2SpotFleetRole.9. Choose Create role.Note: Use your new Spot Fleet role to create new compute environments. Existing compute environments can't change Spot Fleet roles. To get rid of the obsolete environment, deactivate and then delete that environment.10. Open the AWS Batch console.11. In the navigation pane, choose Compute environments.12. Choose the compute environment that's in the INVALID state. Then, choose Disable.13. Choose Delete.Deactivate and delete your compute environmentYou must deactivate and delete your compute environment because the launch template associated with your compute environment doesn't exist. This means that you can't use the compute environment associated with your launch template. You must delete that compute environment, and then create a new compute environment.1. Open the AWS Batch console.2. In the navigation pane, choose Compute environments.3. Select the compute environment that's in the INVALID state. Then, choose Disable.4. Choose Delete.5. Create a new compute environment.Deactivate and then activate your compute environment1. Open the AWS Batch console.2. In the navigation pane, choose Compute environments.3. Choose the compute environment that's in the INVALID state. Then, choose Disable.4. Choose the same compute environment from step 3. Then, choose Enable.Related informationTroubleshooting AWS BatchWhy is my Amazon ECS or Amazon EC2 instance unable to join the cluster?Why is my AWS Batch job stuck in RUNNABLE status?Follow" | https://repost.aws/knowledge-center/batch-invalid-compute-environment |
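To script the inspect/disable/enable cycle, a boto3 sketch like the following can be used; the compute environment name is a placeholder, and in practice you would wait for the state change to finish before re-enabling or deleting:

```python
import boto3

# Inspect an INVALID compute environment, then toggle it DISABLED/ENABLED.
batch = boto3.client("batch", region_name="us-east-1")

env = batch.describe_compute_environments(
    computeEnvironments=["my-compute-env"]
)["computeEnvironments"][0]
print(env["status"], "-", env.get("statusReason"))

if env["status"] == "INVALID":
    batch.update_compute_environment(computeEnvironment="my-compute-env", state="DISABLED")
    # ...wait for the environment to report DISABLED before the next call...
    batch.update_compute_environment(computeEnvironment="my-compute-env", state="ENABLED")
    # For the launch-template cases above, delete instead of re-enabling:
    # batch.delete_compute_environment(computeEnvironment="my-compute-env")
```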
Why can't I import a third-party public SSL/TLS certificate into AWS Certificate Manager (ACM)? | I received an error message when I tried to import a third-party SSL/TLS certificate into AWS Certificate Manager (ACM). Why can't I import my certificate into ACM? | "I received an error message when I tried to import a third-party SSL/TLS certificate into AWS Certificate Manager (ACM). Why can't I import my certificate into ACM?Short descriptionI tried to import a third-party SSL/TLS certificate into ACM and I received an error message similar to one of the following:You have reached the maximum number of certificates. Delete certificates that aren't in use, or contact AWS Support to request an increase.The certificate field contains more than one certificate. You can specify only one certificate in this field.Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries, in that order. The index within the chain of the invalid certificate is 0.Can't validate the certificate with the certificate chain.The private key length isn't supported for key algorithm.The certificate body/chain provided isn't in a valid PEM format, InternalFailure, or Unable to parse certificate. Be sure that the certificate is in PEM format.The private key isn't supported.The certificate that isn't a valid self-signed certificate.ResolutionFollow the instructions that match the error message.Note:If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent version of the AWS CLI.You can import third-party SSL/TLS certificates and services integrated with ACM. Be sure that your certificate meets the Prerequisites for importing certificates."You have reached the maximum number of certificates. Delete certificates that are not in use, or contact AWS Support to request an increase."By default, you can import up to 1000 certificates into ACM, but new AWS accounts might start with a lower limit. If you exceed this limit, request an ACM quota increase.If you receive this error message and you haven't exceeded 1000 certificates for your account, then you might have exceeded the limit for certificates that you can import in a year. By default, you can import two times the value of your account limit per year. For example, if your limit is 100 certificates, then you can import up to 200 certificates per year. This includes certificates that you imported and deleted within the last 365 days. If you reach your limit, contact AWS Support to request a limit increase. For more information, see Quotas in the ACM User Guide."The certificate field contains more than one certificate. You can specify only one certificate in this field."If you are importing a certificate, don't upload the complete certificate chain for the Certificate body field. If you receive a certificate bundle, that bundle might contain the server certificate and the certificate chain from the certificate authority (CA). Separate each file (the certificate, the certificate chain with the intermediate and root certificates, and the private key) that is created at the time of the certificate signing request (CSR) generation from the bundle. Then, change the file to a PEM format, and upload them individually to ACM. To convert a certificate bundle to a PEM format, see Troubleshooting."Unable to validate certificate chain. 
The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: 0"When importing a certificate into ACM, don't include the certificate in the certificate chain. The certificate chain must contain only the intermediate and root certificates. The certificate chain must be in order, starting with the intermediate certificates, and then ending with the root certificate."Could not validate the certificate with the certificate chain."If ACM can't match the certificate to the certificate chain provided, then verify that the certificate chain is associated with your certificate. You might need to contact your certificate provider for further assistance."The private key length <key_length> is not supported for key algorithm."When you create an X.509 certificate or certificate request, you specify the algorithm and the key bit size that must be used to create the private-public key pair. Be sure that your certificate key meets the Prerequisites for importing certificates. If your key doesn't meet the requirements for the key size or algorithm, then ask your certificate provider to re-issue the certificate with a supported key size and algorithm."The certificate body/chain provided is not in a valid PEM format," "InternalFailure," or "Unable to parse certificate. Please ensure the certificate is in PEM format."If the certificate body, private key, or certificate chain isn't in the PEM format, then you must convert the file. If the certificate file doesn't contain the appropriate certificate body, then you must convert the file. To convert a certificate or certificate chain from DER to a PEM format, see Troubleshooting."The private key is not supported."If you import a certificate into ACM using the AWS CLI, then you pass the contents of your certificate files (certificate body, private key, and certificate chain) as a string. You must specify the certificate, the certificate chain, and the private key by their file names preceded by file://. For more information, see import-certificate.Note: Be sure to use the file path file://key.pem for your key and file://certificate.pem for your certificate. If you don't include the file path, then you might receive the following error messages: "The private key is not supported" or "The certificate is not valid.""Provided certificate is not a valid self-signed. Please provide either a valid self-signed certificate or certificate chain."The certificate that you tried to import isn't a self-signed certificate. For self-signed certificates, you must provide both the certificate and its private key. If the certificate is signed by a CA, you must provide the certificate chain, private key, and certificate.Related informationImporting certificates into AWS Certificate ManagerCertificate and key format for importingImport a certificateTroubleshoot certificate import problemsFollow" | https://repost.aws/knowledge-center/acm-import-troubleshooting |
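A boto3 sketch of the import, passing each PEM file separately as described above; the file names are placeholders:

```python
import boto3

# Import a certificate, its private key, and the chain (intermediates first,
# root last) from local PEM files.
acm = boto3.client("acm", region_name="us-east-1")

with open("certificate.pem", "rb") as cert, \
     open("key.pem", "rb") as key, \
     open("certificate_chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

print(response["CertificateArn"])
```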
How do I troubleshoot row-level security issues in QuickSight? | "I applied RLS to my dataset in Amazon QuickSight, but I'm experiencing issues with data access." | "I applied RLS to my dataset in Amazon QuickSight, but I'm experiencing issues with data access.Short descriptionThe following are common issues that you can experience when you use row-level security (RLS) on your Amazon QuickSight dataset:You can't see any data in the QuickSight embedded dashboard for anonymous QuickSight users.Restricted users can still see all the data.Unrestricted users can't see any data.You receive the error code DatasetRulesInvalidColType when you apply RLS.You receive the error Code DatasetRulesUserDenied when you create an analysis.Note: When you use RLS, consider the following:RLS is available only for the Enterprise edition of QuickSight.RLS supports only textual data, such as string, char, and varchar for fields in the dataset rule. Currently, RLS doesn't work for dates or numeric fields.The full set of rule records that are applied per user must not exceed 999. Datasets with more than 999 rules might fail to apply RLS rules to the dataset.You can't apply RLS to empty rows with the default null value because QuickSight treats null as an empty field value. However, spaces in a field are treated as a literal value, so the dataset rule applies to these rows.Only users that are added to the dataset rule can see the data based on the rule that's defined. Other users can't see the data.When using multiple fields in the dataset rules, the rules work as an AND operator. The OR operator is currently not supported.RLS tag-based rules are supported only for embedded dashboards for anonymous users with the GenerateEmbedUrlForAnonymousUser API. If you embedded dashboards for registered users with the GenerateEmbedUrlForRegisteredUser API, then consider using user-level rules.ResolutionI can't see any data in the QuickSight embedded dashboard for anonymous usersIf you use tag-based rules for your anonymous embedded dashboard, then you can't see or modify the data. To see the data, you must add user-based RLS rules to the dataset.In the following example dataset rule, John Stiles can see data from only the Logistics department, and Martha Rivera can see all the data from the dataset.UserName,Department JohnStiles,Logistics MarthaRivera,Note: You can apply both tag-based rules and user-based RLS rules on your dataset.Restricted users can still see all dataIf a dataset contains too many rules, then even if you successfully applied RLS, restricted users can still see all the data. To resolve this issue, make sure that your dataset contains only 999 or fewer rules. If you restrict users by UserName and have more than 999 users in your dataset rule, then create QuickSight groups. Add the users to the groups, and use GroupName instead of UserName in the dataset rule.Unrestricted users can't see any dataThe following are possible reasons why unrestricted users can't see data:The user doesn't exist in the dataset rule. Check the dataset rule to verify that all the intended users are there.The UserName or GroupName doesn't match the users or groups in QuickSight. 
Check the UserName or GroupName from the dataset rule to verify that they match the users or groups in QuickSight.You receive the error code DatasetRulesInvalidColType when you apply RLSThe DatasetRulesInvalidColType error occurs when you use RLS for dates or numeric fields.Check the field that's used to evaluate RLS in the dataset rule to verify that the data type is String. You can also convert numeric fields to String in QuickSight by editing the dataset.You receive the error code DatasetRulesUserDenied when you create an analysisThis DatasetRulesUserDenied error occurs when the user isn't in the dataset rule. To resolve this error, add the user to the dataset rule, and then refresh the dataset.Related informationUsing row-level security (RLS) with user-based rules to restrict access to a datasetUsing row-level security (RLS) with tag-based rules to restrict access to a dataset when embedding dashboards for anonymous usersAdding filter conditions (group filters) with AND and OR operatorsFollow" | https://repost.aws/knowledge-center/quicksight-fix-row-level-security-issues |
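To compare the UserName or GroupName values in your RLS dataset rules against the users and groups that actually exist in QuickSight, you can list them with the AWS CLI, as sketched below. This is not part of the original article; the account ID 111122223333, the default namespace, and us-east-1 are placeholders.

```bash
# List QuickSight users so you can compare their UserName values
# against the UserName column in the RLS dataset rules.
aws quicksight list-users \
    --aws-account-id 111122223333 \
    --namespace default \
    --region us-east-1

# List groups if the dataset rules use the GroupName column instead.
aws quicksight list-groups \
    --aws-account-id 111122223333 \
    --namespace default \
    --region us-east-1
```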
What's the impact of modifying my Single-AZ Amazon RDS instance to a Multi-AZ instance and vice versa? | I want to know the impact of changing my Single-AZ Amazon Relational Database Service (Amazon RDS) DB instance to a Multi-AZ instance.-or-I want to know the impact of changing my Multi-AZ Amazon RDS DB instance to a Single-AZ instance. | "I want to know the impact of changing my Single-AZ Amazon Relational Database Service (Amazon RDS) DB instance to a Multi-AZ instance.-or-I want to know the impact of changing my Multi-AZ Amazon RDS DB instance to a Single-AZ instance.Short descriptionIn a Single-AZ setup, one Amazon RDS DB instance and one or more Amazon Elastic Block Store (Amazon EBS) storage volumes are deployed in one Availability Zones. In a Multi-AZ configuration, the DB instances and EBS storage volumes are deployed across two Availability Zones.When you enable Multi-AZ on your instance, Amazon RDS maintains a redundant and consistent standby copy of your data using synchronous storage replication. Amazon RDS detects and then automatically recovers from the most common infrastructure failure scenarios for Multi-AZ deployments. This detection and recovery occurs so that you can resume database operations as quickly as possible. For more information, see High availability (Multi-AZ) for Amazon RDS.To change a DB instance from Single-AZ deployment to a Multi-AZ deployment and vice versa, see Modifying an Amazon RDS instance.ResolutionImpact of changing a Single-AZ instance to a Multi-AZ instanceWhen you change your Single-AZ instance to Multi-AZ, you don't experience any downtime on the instance. During the modification, Amazon RDS creates a snapshot of the instance's volumes. Then, this snapshot is used to create new volumes in another Availability Zone. Although these new volumes are immediately available for use, you might experience a performance impact. This impact occurs because the new volume's data is still loading from Amazon Simple Storage Service (Amazon S3). Meanwhile, the DB instance continues to load data in the background. This process, called lazy loading, might lead to elevated write latency and a performance impact during and after the modification process.The amount of performance impact is a function of your volume type, workload, instance, and volume size. The impact might be significant for large write-intensive DB instances during the peak hours of operations. As a result, it's a best practice to test the impact on a test instance before running this modification in production. It's also a best practice to complete this modification in a maintenance or low-throughput window.Reduce the duration of loadingTo proactively reduce the duration and impact of the loading, do the following:Change the DB instance’s storage type to Provisioned IOPS. Be sure to provision an amount of IOPS that is substantially higher than your workload requires.Note: This step can cause a brief period of downtime if the instance uses a custom parameter group.Change the instance to Multi-AZ.Initiate a failover on your instance to be sure that the new AZ is the primary AZ.Run a full dump of the data on your instance. 
Or, run full table scan queries on the most active tables to expedite loading the data into the volumes.Confirm that the write latency has returned to normal levels by reviewing the WriteLatency metric in Amazon CloudWatch.Change the instance's storage type or IOPS back to your previous configuration.Note: This step doesn't require downtime.Reduce latency if your instance is already Multi-AZTo reduce the latency if you already modified the instance to Multi-AZ, do the following:Initiate a failover on your instance to be sure that the new AZ is the primary AZ.Change the DB instance’s storage type to Provisioned IOPS. Be sure to provision an amount of IOPS that is substantially higher than your workload requires.Note: This step doesn't require downtime.Run a full dump of the data on your instance. Or, run full table scan queries on the most active tables to expedite loading the data into the volumes.Confirm that the write latency has returned to normal levels by reviewing the WriteLatency metric in Amazon CloudWatch.Change the instance's storage type or IOPS back to your previous configuration.Note: This step doesn't require downtime.If you change a DB instance from Single-AZ to Multi-AZ, then a standby instance is created with the same configuration in another Availability Zone. This leads to additional costs. Also, because Multi-AZ uses synchronous replication, writes are slightly slower than those in Single-AZ.Impact of changing a Multi-AZ instance to a Single-AZ instanceWhen you change your instance from Multi-AZ to Single-AZ, you don't experience downtime on the instance. During the modification, Amazon RDS deletes only the secondary instance and volumes, and the primary instance isn't affected.Here are a few things to consider before changing your instance from Multi-AZ to Single-AZ deployment:With the Multi-AZ deployment, Amazon RDS automatically switches to the standby copy in another Availability Zone during a planned or unplanned outage of your DB instance. But, in a Single-AZ instance, you might have to initiate a point-in-time-restore operation. This operation might take several hours to complete. Any data updates that occurred after the latest restorable time aren't available. So, you might experience an additional downtime on a Single-AZ instance in case of a failure.In a Multi-AZ instance, automated backups are created from the secondary instance during the automatic backup window. For Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for PostgreSQL, I/O activity isn't suspended on your primary instance during backup for Multi-AZ deployments because the backup is taken from the secondary. For Amazon RDS for SQL Server, I/O activity is suspended briefly during backup for Multi-AZ deployments. The backup process on a Single-AZ DB instance results in a brief I/O suspension that can last from a few seconds to a few minutes. The amount of time depends on the size and class of your DB instance.In Multi-AZ deployments, operating system maintenance is applied to the secondary instance first. The secondary instance is promoted to primary, and then maintenance is performed on the old primary, which is the new standby. So, the downtime during certain OS patches in a Multi-AZ instance is minimal.If you're scaling your Multi-AZ instance, then the downtime is minimal. This is because the secondary instance is modified first. The secondary instance is promoted to primary. Then, the old primary, now secondary instance, is modified. 
A Single-AZ instance becomes unavailable during the scaling operation.Related informationAmazon RDS Multi-AZ deploymentsFollow" | https://repost.aws/knowledge-center/rds-convert-single-az-multi-az |
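The conversion and failover steps described in the entry above can also be run from the AWS CLI. The following is a rough sketch, not part of the original article; mydbinstance is a placeholder DB instance identifier, and the failover step assumes the instance is already Multi-AZ.

```bash
# Convert a Single-AZ DB instance to Multi-AZ.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --multi-az \
    --apply-immediately

# Initiate a failover so that the newly created standby becomes the primary,
# as suggested in the lazy-loading mitigation steps above.
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance \
    --force-failover

# Convert the instance back to Single-AZ.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --no-multi-az \
    --apply-immediately
```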
How do I resolve "Model Validation Failed" errors in CloudFormation? | "When I create a resource with AWS CloudFormation, I receive a "Model Validation Failed" error in my stack events." | "When I create a resource with AWS CloudFormation, I receive a "Model Validation Failed" error in my stack events.Short descriptionType, Allowed values, Minimum, Maximum, and Pattern values are the acceptance criteria for creating a resource property using a CloudFormation template. If one of these property values isn't correctly defined, then you receive one of the following "Model Validation Failed" errors:Model Validation Failed (#PropertyName: Failed validation constraint for keyword [type])Model Validation Failed (#PropertyName: Failed validation constraint for keyword [pattern])Model validation failed (#PropertyName: expected type: Number, found: String)Note: The preceding error messages are examples. In the error that you receive, PropertyName is specified.ResolutionIn the CloudFormation stack event, identify the property of the resource type that failed. For example, Namespace is a property of the resource AWS::CloudWatch::Alarm.Identify the resource type that's experiencing the error. For example, AWS::CloudWatch::Alarm.Look up the properties of the resource.Compare the property values that are defined in the template with the correct property values that you found in step 3.Note: Some properties don't include minimum or maximum character limit values.If the property values of the resource don't meet the acceptance criteria, then edit the template with the required values.Update the CloudFormation stack with the new template.The following is an example of acceptance criteria for the Namespace property for resource type AWS::CloudWatch:Alarm:Required: NoType: StringMinimum: 1Maximum: 255Pattern: [^:].*Update requires: No interruptionNote: For the Namespace criteria to be accepted, the Type must be String, the character limit must be between 1 and 255, and the Pattern must be [^:].*.Follow" | https://repost.aws/knowledge-center/cloudformation-resolve-model-val-failed |
How can I schedule my EC2 instances to start and stop using Systems Manager Maintenance Windows? | I want to use AWS Systems Manager Maintenance Windows to schedule my Amazon Elastic Compute Cloud (Amazon EC2) managed instances to start or stop. | "I want to use AWS Systems Manager Maintenance Windows to schedule my Amazon Elastic Compute Cloud (Amazon EC2) managed instances to start or stop.ResolutionRegister either the AWS-StartEC2Instance or AWS-StopEC2Instance Automation tasks to a maintenance window. The maintenance window targets the configured EC2 instances, and then uses the Automation document steps on the chosen schedule to stop or start the instances.Note: To restart your instance immediately after stopping it, set both stop and start tasks in the same maintenance window.To keep your instance stopped for a predetermined amount of time before it starts, set each task to a separate maintenance window. This keeps the instance from running when it's not needed and reduces costs.Create an IAM role and policyTo schedule maintenance window start or stop actions, use an AWS Identity and Access Management (IAM) role with ec2:StartInstances and ec2:StopInstances permissions.Note: The IAM role requires permissions only for the Automation task that you register to the maintenance window. For example, if you choose to register AWS-StartEC2Instance and not AWS-StopEC2Instance, then the IAM role requires only ec2:StartInstances permissions.1. Open the IAM console.2. In the navigation pane, choose Roles, and then choose Create role.On the Select trusted entity page, for Trusted entity type, choose AWS service.For Use case, choose Systems Manager from the Use cases for other AWS services dropdown. Then, choose Systems Manager.Choose Next.3. On the Add permissions page, choose Create policy. A new window opens to create an IAM policy.4. On the Specify permissions page, paste the following policy into the JSON Policy editor:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:StartAutomationExecution", "ec2:DescribeInstanceStatus" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:StartInstances", "ec2:StopInstances" ], "Resource": [ "Instance ARN 1", "Instance ARN 2" ] } ]}Note: For ec2:StartInstances and ec2:StopInstances, it's a best practice to add the resource ARNs of the EC2 instances you want to provide access to. For more information, see Policy structure.5. Choose Next.6. On the Review and Create page, under Policy details, enter a policy name. For example, SSM_StartStopEC2Role.7. Choose Create policy.8. Return to the Create role page. For Permissions policies, choose the IAM policy that you created. Then, choose Next.9. On the Name, review, and create page, under Role details, enter a role name. For example, SSM_StartStopEC2Role.10. (Optional) Add tags for the role.11. Choose Create role.For more information, see Creating a role for an AWS service (console).Create a maintenance windowIf you don’t have a maintenance window, then create a maintenance window. If you register targets with the Maintenance Window, then don't use the Specify instance tags as a target option. This option doesn't allow the instances to start. 
Choose the options Choose instances manually or Choose a resource group instead.Note: If you have an existing maintenance window, then continue to Register an Automation task.To run the maintenance window on managed instances that you haven't registered as targets, you must select Allow unregistered targets.Register the Automation taskOpen the Systems Manager console.In the navigation pane, choose Maintenance Windows.On the Maintenance windows page, choose the target maintenance window. Choose Actions, and then choose Register Automation task.(Optional) For Maintenance window task details, enter a name and description.For Automation document, search for and choose either of the following documents depending on your use case:AWS-StartEC2InstanceAWS-StopEC2InstanceNote: To register multiple Automation documents, repeat the process for each document.For Document version, choose Default version at runtime.The Task priority is set to 1 by default. If you have multiple tasks registered to the same maintenance window, then give them different priority levels. This establishes a run order.For Targets, if you registered target instances for the maintenance window, then choose Selecting registered target groups. If you haven't registered target instances for the maintenance window, then choose Selecting unregistered targets. Then, select instances manually or specify a resource group to identify the instances that you want to run the Automation task.Note: Tags for targets are supported only for instances managed under Systems Manager.For Rate control, specify a Concurrency and Error threshold.For IAM service role, select the service role for Systems Manager from the dropdown list. If you didn't create a Service Role for Systems Manager, then create one.Note: Don't use the value AWSServiceRoleForAmazonSSM because this role isn't available for new tasks.For Input parameters, specify the following parameters:InstanceId: Enter the pseudo parameter {{RESOURCE_ID}} to target more than one resource.AutomationAssumeRole: Enter the complete role ARN for the IAM role that has the required ec2:StartInstances or ec2:StopInstances permissions. For example, "arn:aws:iam::123456789101:role/SSM_StartStopEC2Role".Choose Register Automation task.(Optional) To register Automation tasks to schedule both stop and start actions, repeat the Register an Automation task steps for the second document.For more information, see Assign tasks to a maintenance window (console).Related informationAWS Systems Manager Maintenance WindowsActions, resources, and condition keys for Amazon EC2Why is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?Follow" | https://repost.aws/knowledge-center/ssm-ec2-stop-start-maintenance-window |
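The maintenance window itself can also be created and its instance targets registered from the AWS CLI instead of the console, as sketched below. This is not part of the original article; the window name, cron schedule, window ID, and instance ID are placeholders, and the Automation task is then registered with register-task-with-maintenance-window as described in the console steps above.

```bash
# Create a maintenance window that runs every day at 01:00 UTC.
aws ssm create-maintenance-window \
    --name "StopEC2Nightly" \
    --schedule "cron(0 1 * * ? *)" \
    --duration 2 \
    --cutoff 1 \
    --allow-unregistered-targets

# Register the EC2 instances that the window should target,
# using the window ID returned by the previous command.
aws ssm register-target-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --resource-type INSTANCE \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0"
```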
Why is my EC2 Linux instance not booting and going into emergency mode? | "When I boot my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance, the instance goes into emergency mode and the boot process fails. Then, the instance is inaccessible. How can I fix this?" | "When I boot my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance, the instance goes into emergency mode and the boot process fails. Then, the instance is inaccessible. How can I fix this?Short descriptionThe most common reasons an instance might boot in emergency mode are:A corrupted kernel.Auto-mount failures because of incorrect entries in the /etc/fstab.To verify what type of error is occurring, view the instance's console output. You might see a Kernel panic error message in the console output if the kernel is corrupted. Dependency failed messages appear in the console output if auto-mount failures occur.ResolutionKernel panic errorsKernel panic error messages occur when the grub configuration or initramfs file is corrupted. If a problem with the kernel exists, you might see the error "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)" in the console output.To resolve kernel panic errors:1. Revert the kernel to a previous, stable kernel. For instructions on how to revert to a previous kernel, see How do I revert to a known stable kernel after an update prevents my Amazon EC2 instance from rebooting successfully?2. After you revert to a previous kernel, reboot the instance. Then, correct the issues on the corrupted kernel.Dependency failed errorsAuto-mount failures caused by syntax errors in the /etc/fstab file can cause the instance to enter emergency mode. Also, if the Amazon Elastic Block Store (Amazon EBS) volume listed in the file is detached from the instance, then the instance boot process might enter emergency mode. If either of these problems occur, then the console output looks similar to the following:-------------------------------------------------------------------------------------------------------------------[[1;33mDEPEND[0m] Dependency failed for /mnt.[[1;33mDEPEND[0m] Dependency failed for Local File Systems.[[1;33mDEPEND[0m] Dependency failed for Migrate local... structure to the new structure.[[1;33mDEPEND[0m] Dependency failed for Relabel all filesystems, if necessary.[[1;33mDEPEND[0m] Dependency failed for Mark the need to relabel after reboot.[[1;33mDEPEND[0m] Dependency failed for File System Check on /dev/xvdf.-------------------------------------------------------------------------------------------------------------------The preceding example log messages show that the /mnt mount point failed to mount during the boot sequence.To prevent the boot sequence from entering emergency mode due to mount failures:Add a nofail option in the /etc/fstab file for the secondary partitions ( /mnt, in the preceding example). When the nofail option is present, the boot sequence isn't interrupted, even if mounting of any volume or partition fails.Add 0 as the last column of the /etc/fstab file for the respective mount point. Adding the 0 column disables the file system check, allowing the instance to successfully boot.There are three methods you can use to correct the /etc/fstab file.Important:Methods 2 and 3 require a stop and start of the instance. Be aware of the following:If your instance is instance store-backed or has instance store volumes containing data, then the data is lost when the instance is stopped. 
For more information, see Determine the root device type of your instance.If your instance is part of an Amazon EC2 Auto Scaling group, then stopping the instance might terminate it. Instances launched with Amazon EMR, AWS CloudFormation, AWS Elastic Beanstalk might be part of an AWS Auto Scaling group. Instance termination in this scenario depends on the instance scale-in protection settings for your Auto Scaling group. If your instance is part of an Auto Scaling group, temporarily remove it from the Auto Scaling group before starting the resolution steps.Stopping and starting the instance changes the public IP address of your instance. It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance.Method 1: Use the EC2 Serial ConsoleIf you’ve enabled EC2 Serial Console for Linux, you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to it at the account level. Then, create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, then follow the instructions in Method 2. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Method 2: Run the AWSSupport-ExecuteEC2Rescue automation documentIf your instance is configured for AWS Systems Manager, you can run the AWSSupport-ExecuteEC2Rescue automation document to correct boot issues. Manual intervention isn't needed when using this method. For information on using the automation document, see Walkthrough: Run the EC2Rescue tool on unreachable instances.Method 3: Manually edit the file using a rescue instance1. Open the Amazon EC2 console.2. Choose Instances from the navigation pane, and then select the instance that's in emergency mode.3. Stop the instance.4. Detach the Amazon EBS root volume ( /dev/xvda or /dev/sda1) from the stopped instance.5. Launch a new EC2 instance in same Availability Zone as the impaired instance. The new instance becomes your rescue instance.6. Attach the root volume you detached in step 4 to the rescue instance as a secondary device.Note: You can use different device names when attaching secondary volumes.7. Connect to your rescue instance using SSH.8. Create a mount point directory for the new volume attached to the rescue instance in step 6. In the following example, the mount point directory is /mnt/rescue.$ sudo mkdir /mnt/rescue9. Mount the volume at the directory you created in step 8.$ sudo mount /dev/xvdf /mnt/rescueNote: The device (/dev/xvdf, in the preceding example) might be attached to the rescue instance with a different device name. Use the lsblk command to view your available disk devices along with their mount points to determine the correct device names.10. After the volume is mounted, run the following command to open the /etc/fstab file.$ sudo vi /mnt/rescue/etc/fstab11. 
Edit the entries in /etc/fstab as needed. The following example output shows three EBS volumes defined with UUIDs, the nofail option added for both secondary volumes, and a 0 as the last column for each entry.------------------------------------------------------------------------------------------$ cat /etc/fstabUUID=e75a1891-3463-448b-8f59-5e3353af90ba / xfs defaults,noatime 1 0UUID=87b29e4c-a03c-49f3-9503-54f5d6364b58 /mnt/rescue ext4 defaults,noatime,nofail 1 0UUID=ce917c0c-9e37-4ae9-bb21-f6e5022d5381 /mnt ext4 defaults,noatime,nofail 1 0 ------------------------------------------------------------------------------------------12. Save the file, and then run the umount command to unmount the volume.$ sudo umount /mnt/rescue13. Detach the volume from the temporary instance.14. Attach the volume to original instance, and then start the instance to confirm that it boots successfully.Follow" | https://repost.aws/knowledge-center/ec2-linux-emergency-mode |
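After editing /etc/fstab on the rescue volume (step 11 above), it can help to sanity-check the entries before unmounting and reattaching the volume. The following sketch is not part of the original article; findmnt options can vary slightly by distribution, and /mnt/rescue matches the mount point used in the steps above.

```bash
# Get the UUIDs and file system types of the attached volumes so that the
# /etc/fstab entries reference the correct devices.
sudo lsblk -f

# Verify the edited fstab on the mounted rescue volume before unmounting.
# --tab-file points findmnt at that volume's fstab instead of the
# rescue instance's own /etc/fstab.
sudo findmnt --verify --tab-file /mnt/rescue/etc/fstab
```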
Why is my EC2 instance stuck in the stopping state? | I tried to stop my Amazon Elastic Compute Cloud (Amazon EC2) instance and now it is stuck in the stopping state. How do I fix this? | "I tried to stop my Amazon Elastic Compute Cloud (Amazon EC2) instance and now it is stuck in the stopping state. How do I fix this?Short descriptionInstances might appear "stuck" in the stopping state when there is a problem with the underlying hardware hosting the instance. This can also occur when hibernating a hibernation-enabled instance.Note: To check the most recent state of your instance, choose the refresh icon in the Amazon EC2 console. Or, run the describe-instances command in the AWS Command Line Interface (AWS CLI).If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.In the following example, replace i-0123ab456c789d01e with the ID of the instance you're trying to stop:aws ec2 describe-instances --instance-ids i-0123ab456c789d01e --output jsonCheck the State Code and Name in the JSON response:"State": { "Code": 64, "Name": "stopping" },ResolutionIf your instance becomes stuck in the stopping state, refer to Troubleshooting stopping your instance. Follow" | https://repost.aws/knowledge-center/ec2-instance-stuck-stopping-state |
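If the instance stays in the stopping state, the troubleshooting guide referenced above includes forcing a stop, which can be done from the AWS CLI as in this rough sketch (not part of the original article); the instance ID matches the placeholder used above.

```bash
# Check the current state of the instance.
aws ec2 describe-instances \
    --instance-ids i-0123ab456c789d01e \
    --query "Reservations[].Instances[].State.Name" \
    --output text

# If the instance remains in the stopping state, issue a force stop.
aws ec2 stop-instances \
    --instance-ids i-0123ab456c789d01e \
    --force
```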
How do I troubleshoot the error "Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4" when I try to access S3 objects that are encrypted with AWS KMS managed keys? | I'm trying to access Amazon Simple Storage Service (Amazon S3) objects that are encrypted with AWS Key Management Service (AWS KMS). I get the error "Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4". | "I'm trying to access Amazon Simple Storage Service (Amazon S3) objects that are encrypted with AWS Key Management Service (AWS KMS). I get the error "Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4".ResolutionYou get this error when you access an AWS KMS-encrypted object using a signature version that's not AWS Signature Version 4. When you access an S3 object that's encrypted with an AWS KMS key, be sure that all your requests are signed with AWS Signature Version 4.Be sure that you aren't making an anonymous requestYou might get this error when you make an anonymous request. An anonymous request is a request that's not signed with AWS credentials. An example of an anonymous request is downloading an S3 object using the object URL on your browser or a HTTP client. An S3 object URL looks like the following:https://bucketname.s3.region.amazonaws.com/folder/file.txtUsing an HTTP client such as curl, you can make an anonymous request with a command similar to the following:curl -vo ./local/path/file.txt https://bucketname.s3.region.amazonaws.com/folder/file.txtBe sure that you aren't using AWS Signature Version 2Some S3 REST API endpoints and Regions still support requests that are signed using Signature Version 2. However, it's a best practice to use Signature Version 4 for signing in. For more information, see AWS Signature Version 2 turned off (deprecated) for Amazon S3.Because some Regions still support Signature Version 2, you can make requests that are signed with Signature Version 2 to buckets in these Regions. However, AWS KMS requires that your requests are signed with Signature Version 4. If you use Signature Version 2 with an AWS KMS-encrypted object, this error appears:Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.To identity the signature version that you used when making requests to objects in your bucket, try the following:You can use an AWS CloudTrail event log to identify which API signature version was used to sign a request in Amazon S3.You can check if a particular request is using Signature Version 4 by examining the authorization header for the API. The header must contain AWS4-HMAC-SHA256. If you're generating a presigned URL, check if the query parameter contains ?X-Amz-Algorithm=AWS4-HMAC-SHA256. If you can't find this query parameter, modify the code to use Signature Version 4.Note: For requests that specify AWS KMS managed keys, you must use Secure Sockets Layer (SSL) or Transport Layer Security (TLS). 
If you make a request that specifies AWS KMS keys over an unsecure connection (without SSL/TLS), then you get the following error:An error occurred (InvalidArgument) when calling the <operation_performed> operation: Requests specifying Server Side Encryption with AWS KMS managed keys must be made over a secure connection.Related informationSpecifying server-side encryption with AWS KMS (SSE-KMS)Authenticating requests (AWS Signature Version 2)Follow" | https://repost.aws/knowledge-center/s3-kms-signature-version-error |
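If an older AWS CLI v1 configuration is still signing Amazon S3 requests with Signature Version 2, you can pin S3 commands to Signature Version 4 and then retrieve the object with a signed (non-anonymous) request. This sketch is not part of the original article; the bucket and key names are placeholders, and AWS CLI v2 already signs with Signature Version 4 by default.

```bash
# Force the AWS CLI (v1) to sign Amazon S3 requests with Signature Version 4
# for the default profile.
aws configure set default.s3.signature_version s3v4

# Download the AWS KMS-encrypted object with a signed request over TLS.
aws s3api get-object \
    --bucket bucketname \
    --key folder/file.txt ./file.txt
```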
How can I add bucket-owner-full-control ACL to my objects in Amazon S3? | I'm trying to add the bucket-owner-full-control access control list (ACL) to existing objects in Amazon Simple Storage Service (Amazon S3). How can I do this? | "I'm trying to add the bucket-owner-full-control access control list (ACL) to existing objects in Amazon Simple Storage Service (Amazon S3). How can I do this?Short descriptionBy default, in a cross-account scenario where other AWS accounts upload objects to your Amazon S3 bucket, the objects remain owned by the uploading account. When the bucket-owner-full-control ACL is added, the bucket owner has full control over any new objects that are written by other accounts.If the object writer doesn't specify permissions for the destination account at an object ACL level, then the destination account can only delete objects.When the bucket-owner-full-control ACL is added, the bucket owner has full control over any new objects that are written by other AWS accounts. This ACL is also required if the destination bucket has enabled S3 Object Ownership. When S3 Object Ownership is enabled, it updates the owner of new objects to the destination account.Important: Granting cross-account access through bucket and object ACLs doesn't work for buckets that have S3 Object Ownership set to Bucket Owner Enforced. In most cases, ACLs aren't required to grant permissions to objects and buckets. Instead, use AWS Identity Access and Management (IAM) policies and S3 bucket policies to grant permissions to objects and buckets.For existing objects, the object owner can grant the bucket owner full control of the object by updating the ACL of the object. When writing new objects, the bucket-owner-full-control ACL can be specified during a PUT or COPY operation.For a user in Account A to grant bucket-owner-full-control canned ACL to objects in Account B, the following permissions must be granted:Your IAM role or user in Account A must grant access to the bucket in Account BYour bucket policy in Account B must grant access to the IAM role or user in Account AYou can grant bucket-owner-full-control access to objects in the following ways:Canned ACLsS3 Batch Operations (for large-scale batch operations)Note: Make sure to review your VPC endpoint policy when you add the bucket-owner-full-control canned ACL to your S3 objects.ResolutionYour IAM role or user in Account A must grant access to the bucket in Account BNote: If the IAM user or role must update the object's ACL during the upload, then the user must have permissions for s3:PutObjectAcl in their IAM policy.Create an IAM role in Account A. Grant the role/user permissions to perform PutObjectAcl on objects in Account B.The following example policy grants the IAM role in Account A access to perform the GetObject, PutObject, and PutObjectAcl actions on objects in Account B:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::AccountB-Bucket/*" } ]}Your bucket policy in Account B must grant access to the IAM user or role in Account ABucket policies can vary based on the canned ACL requirement during object uploads. 
For example, these two bucket policies grant access to the IAM user or role in Account A in different ways:Policy 1: Allows access to the IAM user or role in Account A without requiring Amazon S3 PUT operations to include bucket-owner-full-control canned ACL.Policy 2: Enforces all Amazon S3 PUT operations to include the bucket-owner-full-control canned ACL.Policy 1: Allows access to the IAM user or role in Account A without requiring Amazon S3 PUT operations to include a bucket-owner-full-control canned ACLTo allow access to the IAM role in Account A without requiring an ACL, create a bucket policy in Account B (where objects are uploaded). This bucket policy must grant access to the IAM role or user in Account A. The following bucket policy allows the role in Account A to perform GetObject, PutObject, and PutObjectAcl actions on the objects in Account B:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountA:role/AccountARole" }, "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::AccountB-Bucket/*" ] } ]}Policy 2: Enforces all Amazon S3 PUT operations to include the bucket-owner-full-control canned ACLThe following bucket policy specifies that a user or role in Account A can upload objects to a bucket in Account B (where objects are to be uploaded). Uploads can be performed only when the object's ACL is set to "bucket-owner-full-control". For example:{ "Version": "2012-10-17", "Statement": [ { "Sid": "Only allow writes to my bucket with bucket owner full control", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::AccountA:role/AccountARole" ] }, "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::AccountB-Bucket/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ]}Note: When the preceding bucket policy is applied, the user must include the bucket-owner-full-control canned ACL during the PutObject operation. Otherwise, the operation fails, resulting in an Access Denied error. For information about how Amazon S3 enables object ownership of other AWS accounts, see Controlling ownership of uploaded objects using S3 Object Ownership.Providing bucket-owner-full-control accessCanned ACLsTo grant bucket-owner-full-control canned ACL during an object upload, run the put-object command from Account A (object owner's account):aws s3api put-object --bucket accountB-bucket --key example.txt --acl bucket-owner-full-controlTo grant bucket-owner-full-control canned ACL during a copy operation, run the copy-object command from Account A (object owner's account):aws s3api copy-object --copy-source accountA-bucket/example.txt --key example.txt --bucket accountB-bucket --acl bucket-owner-full-controlOr, you can also run the cp command from Account A to grant the bucket-owner-full-control canned ACL:aws s3 cp s3://accountA-bucket/test.txt s3://accountB-bucket/test2.txt --acl bucket-owner-full-controlFor a copy operation of multiple objects, the object owner (Account A) can run the following command:aws s3 cp s3://accountA-bucket/ s3://accountB-bucket/ --acl bucket-owner-full-control --recursiveIf the object exists in a bucket in another account (Account B), then the object owner can grant the bucket owner access with this command:aws s3api put-object-acl --bucket accountB-bucket --key example.txt --acl bucket-owner-full-controlS3 Batch OperationsTo add bucket-owner-full-control canned ACL on a large number of Amazon S3 objects, use S3 Batch Operations. 
S3 Batch Operations can perform a single operation on a list of objects that you specify. You can even use S3 Batch Operations to set ACLs on a large number of objects. S3 Batch Operations support custom and canned ACLs that Amazon S3 provides with a predefined set of access permissions.Note: The Replace access control list (ACL) operation replaces the Amazon S3 ACLs for every object that is listed in the manifest.Additional considerationAccess allowed by a VPC endpoint policyIf an IAM role uploads objects to S3 using an instance that's routed through a virtual private cloud (VPC) endpoint, then check the VPC endpoint policy. For example, if an object is uploaded to S3 using an Amazon Elastic Compute Cloud (Amazon EC2) instance in a VPC, then that VPC endpoint policy must be reviewed. Make sure that your endpoint policy grants access to the PutObjectAcl action, like this:{ "Statement": [ { "Sid": "Access-to-specific-bucket-only", "Principal": "*", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Effect": "Allow", "Resource": "arn:aws:s3:::AccountB-Bucket/*" } ]}Follow" | https://repost.aws/knowledge-center/s3-bucket-owner-full-control-acl |
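To confirm that the bucket owner actually received full control after an upload, you can inspect the object's ACL and the bucket's Object Ownership setting, as in the sketch below. This is not part of the original article; the bucket and key names reuse the placeholders from the examples above, and with Bucket owner enforced ACLs are disabled so these checks don't apply.

```bash
# View the ACL grants on an object that Account A uploaded to Account B's bucket.
aws s3api get-object-acl \
    --bucket accountB-bucket \
    --key example.txt

# Check the S3 Object Ownership setting on the destination bucket.
aws s3api get-bucket-ownership-controls \
    --bucket accountB-bucket
```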
Why do I get an authorization error when I try to subscribe my Lambda function to my Amazon SNS topic? | I receive an authorization error when I try to subscribe my AWS Lambda function to my Amazon Simple Notification Service (Amazon SNS) topic. How do I resolve the error? | "I receive an authorization error when I try to subscribe my AWS Lambda function to my Amazon Simple Notification Service (Amazon SNS) topic. How do I resolve the error?Short descriptionWhen you subscribe a Lambda function to an SNS topic, you can receive an authorization error for the following reasons:You tried to create the subscription from a different AWS account than the one that your Lambda function is in.-or-The AWS Identity and Access Management (IAM) identity that you used to create the subscription doesn't have permissions to run the following API operations:(Lambda) AddPermission(Amazon SNS) SubscribeTo resolve the issue, you must do one of the following, depending on what's causing the error:Make sure that you subscribe your Lambda function to the SNS topic from the AWS account where your function is located.-or-Make sure that the IAM identity that you're using has permissions to run both the Lambda AddPermission and SNS Subscribe API operations.ResolutionVerify what's causing the error based on the error message that Lambda returnsIf you create the subscription from a different AWS account than the one that your function is in, then Lambda returns one of the following errors:AWS CLI error example: You tried to create the subscription from a different account than the one that your Lambda function is inAn error occurred (AuthorizationError) when calling the Subscribe operation: The account YOUR_AWS_ACCOUNT_ID_1 is not the owner of the endpoint arn:aws:lambda:us-east-1:YOUR_AWS_ACCOUNT_ID_2:function: your_Lambda_function_ARNAWS Management Console error example: You tried to create the subscription from a different account than the one that your Lambda function is inError code: AccessDeniedException - Error message: User: arn:aws:sts::XXXXXXX:XXXXXXX/XXXXX/XXXXXX is not authorized to perform: lambda:AddPermission on resource: arn:aws:lambda:us-west-2:XXXXXXX:function:XXXXXXXIf you're using the correct account, but your IAM identity lacks the required permissions, then Lambda or SNS returns one of the following errors:AWS CLI error example: The IAM identity that you used to create the subscription doesn't have permissions to run the Lambda AddPermission actionAn error occurred (AccessDeniedException) when calling the AddPermission operation: User: arn:aws:iam::XXXXXXX:user/XXXXXXXX is not authorized to perform: lambda:AddPermission on resource: arn:aws:lambda:us-west-2:XXXXXX:function:XXXXXXX because no identity-based policy allows the lambda:AddPermission actionAWS Management Console error example: The IAM identity that you used to create the subscription doesn't have permissions to run the Lambda AddPermission actionError code: AccessDeniedException - Error message: User: arn:aws:sts:XXXXXXXX:assumed-role/XXXXXXXX/XXXXX-XXXXXX is not authorized to perform: lambda:AddPermission on resource: arn:aws:lambda:us-west-2:XXXXXXXXX:function:XXXXXXX because no identity-based policy allows the lambda:AddPermission actionAWS CLI error example for when you try to use an IAM identity that doesn't have permission to run the SNS Subscribe actionAn error occurred (AuthorizationError) when calling the Subscribe operation: User: arn:aws:iam::XXXXXXX:user/XXXXXXXX is not authorized to perform: SNS:Subscribe on resource: 
arn:aws:sns:us-west-2:XXXXXXXX:XXXXXXX because no resource-based policy allows the SNS:Subscribe actionMake sure that you subscribe your Lambda function to the SNS topic from the AWS account where your function is locatedYou can use the Lambda console or AWS CLI to subscribe your Lambda function to an SNS topic.To subscribe a function to an SNS topic using the Lambda consoleNote: When you add the SNS trigger using the Lambda console, the console automatically allows the lambda:InvokeFunction permission from the principal service:sns.amazonaws.com.1. On the Functions page of the Lambda console, choose your function.2. Under Overview, choose Add trigger.3. For Trigger configuration, choose Select a trigger, and then choose SNS.4. For SNS topic, paste the SNS topic Amazon Resource Name (ARN) from the other AWS account.5. Select the Enable trigger check box.6. Choose Add.For more information, see Configuring Lambda function options.Note: If you receive the following error, then you must grant Subscribe API action permissions to the IAM identity that you're using. For troubleshooting instructions, see the following article: How do I resolve authorization errors when trying to add subscribers to an Amazon SNS topic?An error occurred (AuthorizationError) when calling the Subscribe operation: User: your_IAM_user_or_role is not authorized to perform: SNS:Subscribe on resource: your_SNS_topic_ARNTo subscribe a function to an SNS topic using the AWS CLINote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.1. Configure your AWS CLI with an IAM user that belongs to the AWS account where your Lambda function is located:aws configure --profile your_profile_nameImportant: Make sure that you pass the AWS access key ID and secret access key of your IAM user.2. Allow Lambda invocations from the SNS topic by adding the lambda:InvokeFunction permission from the principal service:sns.amazonaws.com:aws lambda add-permission --function-name your_lambda_function_name --statement-id sns_invoke_permission --action lambda:InvokeFunction --principal sns.amazonaws.com --source-arn your_sns_topic_arn3. Subscribe your Lambda function to your SNS topic:aws sns subscribe --topic-arn your_sns_topic_ARN --protocol lambda --notification-endpoint your_lambda_function_arn --profile your_profile_name (the profile that you configured in step 1)Note: If you receive the following error, you must grant Subscribe API action permissions to the IAM identity that you're using. For troubleshooting instructions, see the following article: How do I resolve authorization errors when trying to add subscribers to an Amazon SNS topic?An error occurred (AuthorizationError) when calling the Subscribe operation: User: your_IAM_user_or_role is not authorized to perform: SNS:Subscribe on resource: your_SNS_topic_ARNMake sure that the IAM identity that you're using has permissions to run the Lambda AddPermission and SNS Subscribe API operationsReview your IAM identity's identity-based policy. Make sure that the policy explicitly allows the IAM identity to run both of the following actions:lambda:AddPermissionSNS:SubscribeIf the identity-based policy doesn't grant the required permissions, add the required permissions to the policy. Then, subscribe your Lambda function to the SNS topic from the AWS account that the function is in.Follow" | https://repost.aws/knowledge-center/sns-authorization-error-lambda-function |
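After creating the subscription, you can verify both sides of the setup: that the function appears as a subscriber on the topic, and that the function's resource-based policy contains the statement added by lambda add-permission. This sketch is not part of the original article; the topic ARN and function name are placeholders.

```bash
# Confirm that the Lambda function appears as a subscriber on the topic.
aws sns list-subscriptions-by-topic \
    --topic-arn arn:aws:sns:us-east-1:111122223333:your-topic

# Confirm that the function's resource-based policy allows sns.amazonaws.com
# to invoke it (the statement added by the add-permission step above).
aws lambda get-policy \
    --function-name your_lambda_function_name
```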
How do I use source filters in my AWS DMS tasks? | How can I use source filters in my AWS Database Migration Service (AWS DMS) tasks? | "How can I use source filters in my AWS Database Migration Service (AWS DMS) tasks?ResolutionOpen the AWS DMS console, and then choose Database migration tasks from the navigation pane.Choose Create task.Enter the details for your Task configuration and Task settings. By default, the Task settings are Drop tables on target and Limited LOB mode.Select Enable CloudWatch logs.In the Table mappings section, choose Guided UI.Choose Add new selection rule.Select a Schema, and enter a Table name.For Action, choose Include.In the Table mappings section, expand Selection rules.Next to Source filters, add a Schema and Table name, and then choose Add column filter.Enter a Column name.Choose a Condition, such as Less than or equal to, and then enter a value.Choose Create task.To modify an existing task, you must first stop the task. Then, you can select the task and choose Modify.Related informationUsing table mapping to specify task settingsCreating a taskFollow" | https://repost.aws/knowledge-center/source-filters-aws-dms |
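The same selection rule and column filter that the console steps above build can also be supplied as table-mapping JSON when creating or modifying the task from the AWS CLI. The following is a hedged sketch, not part of the original article: the schema, table, column name, filter value, and task ARN are placeholders, and the filter operator (eq shown here) should be checked against the AWS DMS table-mapping documentation for the condition you need (for example, less than or equal to).

```bash
# Write a table-mapping file with one selection rule and a source column filter.
cat > table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-filtered-rows",
      "object-locator": {
        "schema-name": "myschema",
        "table-name": "mytable"
      },
      "rule-action": "include",
      "filters": [
        {
          "filter-type": "source",
          "column-name": "region_code",
          "filter-conditions": [
            { "filter-operator": "eq", "value": "02" }
          ]
        }
      ]
    }
  ]
}
EOF

# Apply the mapping to an existing (stopped) replication task.
aws dms modify-replication-task \
    --replication-task-arn arn:aws:dms:us-east-1:111122223333:task:EXAMPLE \
    --table-mappings file://table-mappings.json
```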
Why can't I detach or delete an elastic network interface that Lambda created? | "When I try to detach or delete an elastic network interface that AWS Lambda created, I get the following error message: "You are not allowed to manage 'ela-attach' attachments." Why is this happening, and how do I delete a network interface Lambda created?" | "When I try to detach or delete an elastic network interface that AWS Lambda created, I get the following error message: "You are not allowed to manage 'ela-attach' attachments." Why is this happening, and how do I delete a network interface Lambda created?Short descriptionWhen you configure a Lambda function to access resources in an Amazon Virtual Private Cloud (Amazon VPC), Lambda assigns the function to a network interface. The network interfaces that Lambda creates can be deleted by the Lambda service only.If you delete the resources that the network interface represents, then Lambda detaches and deletes the network interface for you. To delete unused network interfaces, the Lambda service uses the execution role of the functions that created the network interfaces.Network interfaces aren't deleted if they're being used by functions or function versions with the same Amazon VPC configurations as the functions that created them.To identify which functions or function versions are currently using a network interface, use the Lambda ENI Finder bash script on GitHub.For more information, see Requester-managed network interfaces.Note: Lambda shares network interfaces across multiple functions that have the same Amazon VPC configuration. Sharing network interfaces helps reduce the amount of network interfaces used in your AWS account.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Identify any functions and function versions that are still using the network interface by running the Lambda ENI FinderNote: The commands in the following instructions are valid for Linux, Unix, and macOS operating systems only.1. If you haven't done so already, install the AWS CLI.2. Configure the AWS CLI with an AWS Identity and Access Management (IAM) role that has permissions to query Lambda and network interfaces. For more information, see Execution role and user permissions.3. Install the command-line JSON processor jq by running the following command:$ sudo yum install jq -yNote: For more information, see the jq website on GitHub.4. If you haven't done so already, install Git by running the following command:$ sudo yum install git -y5. Clone the aws-support-tools GitHub repository by running the following command:$ git clone https://github.com/awslabs/aws-support-tools.git6. Change the directory to the location of Lambda ENI Finder.Lambda ENI Finder location$ cd aws-support-tools$ cd Lambda$ cd FindEniMappings7. Run Lambda ENI Finder for the network interface that you want deleted by running the following command:./findEniAssociations --eni eni-0123456789abcef01 --region us-east-1Important: Replace eni-0123456789abcef01 with the network interface's ID. (You can find the ID on the Network Interfaces page of the Amazon Elastic Compute Cloud (Amazon EC2) console.) 
Also, replace us-east-1 with the AWS Region that the network interface is in.The output returns a list of the Lambda functions and function versions in your AWS account and specified Region that are using the network interface.Note: If you still need any of these functions or function versions, then you likely don't need the network interface to be deleted.To delete a network interface that Lambda created1. For each unpublished Lambda function version ($LATEST) the Lambda ENI Finder listed, do one of the following:Change the Amazon VPC configuration to use a different subnet and security group.-or-Disconnect the function from the Amazon VPC.2. For each published Lambda function version listed, delete the function version.Note: Published function versions can't be edited, so you can't change the VPC configuration.3. Verify that the network interface is no longer being used by running the Lambda ENI Finder again.If no other functions or function versions are listed in the output, Lambda deletes the network interface automatically within 24 hours.Related informationHow do I get more elastic network interfaces if I've reached the limit in an AWS Region?Elastic network interfaces per VPC (Service Quotas console)Follow" | https://repost.aws/knowledge-center/lambda-eni-find-delete |
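Before running the Lambda ENI Finder, it can be useful to see which Lambda-managed network interfaces exist in the Region at all. The following is a sketch, not part of the original article: the interface-type filter value "lambda" is an assumption based on how Lambda-managed (Hyperplane) network interfaces are commonly reported, so verify it in your account if the list comes back empty; the Region is a placeholder.

```bash
# List network interfaces that Lambda manages in the Region.
aws ec2 describe-network-interfaces \
    --region us-east-1 \
    --filters Name=interface-type,Values=lambda \
    --query "NetworkInterfaces[].{Id:NetworkInterfaceId,Status:Status,Description:Description}" \
    --output table
```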
How do I migrate my Amazon RDS for MySQL DB instance using a custom start time? | I want to migrate my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance to another MySQL DB instance using a custom start time. How can I do this? | "I want to migrate my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance to another MySQL DB instance using a custom start time. How can I do this?Short descriptionTo migrate data from Amazon RDS for MySQL to another MySQL DB instance, you can use one of the following methods:Binlog based replicationNote: If you're using version MySQL version 8.0.17, the engine might not print the last binlog file position and file name during a crash recovery. As a result, you won't be able to use the binlog replication approach to migrate your data. Check the MySQL website for this known issue.AWS DMSResolutionBinlog replicationPrerequisites:Binlog replication uses binlog files that are generated on the source to record ongoing changes. Set the binlog_format parameter to the binary logging format of your choice (ROW, STATEMENT, or MIXED).Increase the binlog retention hours parameter to a larger value than your current value. This way, binlogs that haven't yet been shipped remain on the source Amazon RDS for MySQL instance.Make sure that the Amazon RDS for MySQL instance is reachable from the target host.To migrate data from Amazon RDS for MySQL to another MySQL DB instance using binlog replication, perform the following:1. Perform point-in-time restore from the source Amazon RDS DB instance with a custom start time (or point-in-time value).2. Create a backup of the point-in-time restore RDS instance.For example, you can use mysqldump to perform this task:mysqldump -h rdsendpoint -u username -p dbname > backupfile.sql3. Check the error log file of the point-in-time restore RDS instance and record the message related to the last applied binlog file and position. Here's an example log file message:[Note] InnoDB: Last MySQL binlog file position 0 456, file name mysql-bin-changelog.152707Note: You'll need to access the record binlog file, and any subsequent binlog files, for replication. The replication from the source RDS instance can only be performed using these files. The maximum binlog retention period in RDS MySQL can be set to seven days only and the default value is "NULL". (For more information, see mysql.rds_set_configuration.) Therefore, retain these binlog files on the source instance to complete in later steps.4. Set up a replication user and grant the necessary privileges to the user on the source Amazon RDS for MySQL instance:mysql> create user repl_user@'%' identified by 'repl_user';mysql> grant replication slave, replication client on *.* to repl_user@'%';mysql> show grants for repl_user@'%';5. Transfer the backup file to the target on-premises server by logging in to MySQL-target. Create a new database, and restore the database using dumpfile to the new external DB instance:$ mysql -h hostname -u username -p dbname < backupfile.sql6. 
Stop the target MySQL engine:$ service mysqld stop7. Modify the my.cnf file parameters to point to your unique server ID and the database that you're trying to replicate.For example:server_id=2replicate-do-db=testdbIf you're replicating multiple databases, you can use the replicate-do-db option multiple times and specify those databases on separate lines like this:replicate-do-db=<db_name_1>replicate-do-db=<db_name_2>replicate-do-db=<db_name_N>For more information about creating a replication filter with the database name, see replicate-do-db on the MySQL website.8. Save the file and restart the MySQL DB engine on the target MySQL DB instance.For example, if you're on a Linux system, you can use the following syntax:service mysqld restart9. On the target MySQL DB instance, establish a connection to the source RDS for MySQL DB instance.For example:mysql> change master to master_host='rds-endpoint',master_user='repl_user', master_password='password', master_log_file='mysql-bin-changelog.152707', master_log_pos= 456;master_host: Endpoint of the source Amazon RDS for MySQL instance.master_user: Name of the replication user (created in Step 4).master_password: Password of the replication user.master_log_file: The binlog file name recorded in Step 3. (In Step 3, the example output indicated "mysql-bin-changelog.152707" as the binlog file name.)master_log_pos: The binlog position recorded in Step 3. (In Step 3, the example output indicated "456" as the binlog file position.)10. Log in to the target MySQL DB instance, and begin the replication with the following command:mysql> start slave;11. Confirm that the replication is synchronizing between the source RDS for MySQL DB instance and target MySQL DB instance:mysql> show slave status\GAWS DMSBefore you set up replication using AWS Database Migration Service (AWS DMS), check the following resources:To take a backup from the point-in-time restored instance, see Steps 1-5 in the Binlog replication section. It's a best practice to follow these steps as the custom start time can be at any (past) point in time within your backup retention period. After you restore the backup on the target DB instance, record the checkpoint log sequence number (LSN) that is generated during the DB recovery process. You'll need to reference the LSN to set a change data capture (CDC) start time. To obtain the checkpoint LSN, review the error log file of the restored RDS MySQL instance, immediately after the point-in-time restore operation completes. For example:[Note] InnoDB: Log scan progressed past the checkpoint lsn 44326835524To start the CDC from a custom start point, the user can run the show master status command to return a binlog file name, position, and several other values. For more information about starting the CDC from a custom start point, see Performing replication starting from a CDC start point.To check the prerequisites for setting up a MySQL database as a source for AWS DMS replication, see Using a MySQL-compatible database as a source for AWS DMS.To check the prerequisites for setting up a MySQL database as a target for AWS DMS replication, see Using a MySQL-compatible database as a target for AWS Database Migration Service.Follow" | https://repost.aws/knowledge-center/rds-mysql-custom-start-time |
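The prerequisites above mention increasing the binlog retention hours on the source so that the recorded binlog files stay available during the migration. As a rough sketch (not part of the original article), this can be done with the stored procedure the article references, wrapped in a mysql client call; the endpoint and user name are placeholders, and 168 hours (7 days) is the maximum that RDS for MySQL allows.

```bash
# Raise the binlog retention on the source RDS for MySQL instance to 7 days.
mysql -h source-rds-endpoint.rds.amazonaws.com -u admin -p \
      -e "call mysql.rds_set_configuration('binlog retention hours', 168);"

# Confirm the setting.
mysql -h source-rds-endpoint.rds.amazonaws.com -u admin -p \
      -e "call mysql.rds_show_configuration;"
```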
Why is my materialized view not refreshing for my Amazon Redshift cluster? | "My materialized view isn’t refreshing for my Amazon Redshift cluster. Why is this happening, and how do I get my materialized view to refresh?" | "My materialized view isn’t refreshing for my Amazon Redshift cluster. Why is this happening, and how do I get my materialized view to refresh?Short descriptionThe following scenarios can cause a materialized view in Amazon Redshift to not refresh or take a long time to complete:REFRESH MATERIALIZED VIEW is failing with permission errorYou see the error: Invalid operation: Materialized view mv_name could not be refreshed as a base table changed physically due to vacuum/truncate concurrently. Please try again;REFRESH MATERIALIZED VIEW is unrefreshableREFRESH MATERIALIZED VIEW was submitted and running for long timeRefresh activity isn't shown on an automated refresh due to an active workloadResolutionREFRESH MATERIALIZED VIEW is failing with permission errorYou must be the owner to perform a REFRESH MATERIALIZED VIEW operation on a materialized view. Also, you must have the following privileges:SELECT privilege on the underlying base tablesUSAGE privilege on the schemaIf the materialized view is a full recompute instead of an incremental refresh, you must also have the CREATE privilege on the schema. To define privileges, see GRANT. For more information, see Autorefreshing a materialized view.Invalid operation: Materialized view mv_name could not be refreshed as a base table changed physically due to vacuum/truncate concurrently. Please try again;The error occurs when REFRESH MATERIALIZED VIEW and VACUUM are submitted to run concurrently on the base table. After the operation completes, the REFRESH MATERIALIZED VIEW can be re-submitted.REFRESH MATERIALIZED VIEW is unrefreshableUnrefreshable materialized views can be caused by operations that:Rename or drop a column.Change the type of a column.Change the name of a base table or schemaNote: Materialized views in this condition can be queried but can't be refreshed. The preceding constraints apply even if the column isn't used in the materialized view.To find whether the data in materialized view is stale, and materialized view state information, use STV_MV_INFO. To view the refresh activity of materialized view, use SVL_MV_REFRESH_STATUS. In this unrefreshable state of materialized view, you must drop and recreate the materialized view to keep the materialized view up-to-date.The following are example error messages you might see:```Detail: Procedure <mv_sp_*****_2_1> does not exist``````column <column name> does not exist``````DETAIL: schema "<schema name>" does not exist ;``````ERROR: Materialized view <mv namme> is unrefreshable as a base table was renamed.```REFRESH MATERIALIZED VIEW was submitted and running for long timeREFRESH MATERIALIZED VIEW functions as a normal query that run on your cluster. 
To confirm that the query is running, do the following:To view the active queries running on the data, use STV_INFLIGHT.To record the current state of queries tracked by workload management (WLM), use STV_WLM_QUERY_STATE.To find information about queries and query steps that are actively running on compute nodes, use STV_EXEC_STATE.The REFRESH MATERIALIZED VIEW operation performance is subject to the following factors:Table locks: To view any current updates on tables in the database, see STV_LOCKS.Allocated resources: To view the service class configuration for WLM, see STV_WLM_SERVICE_CLASS_CONFIG.Type of Refresh: Incremental or full refresh. To view the type of refresh that the materialized view underwent, see SVL_MV_REFRESH_STATUS.If you experience slow REFRESH MATERIALIZED VIEW performance, see Improve query performance.Refresh activity isn't shown on an automated refresh due to an active workloadAmazon Redshift prioritizes your workloads over autorefresh. This prioritization might stop autorefresh to preserve the performance of your workload and might delay the refresh of some materialized views. In some cases, your materialized views might need more deterministic refresh behavior. To create more deterministic refresh behavior, use the following:Manual refresh as described in REFRESH MATERIALIZED VIEWScheduled refresh using the Amazon Redshift scheduler API operations or the consoleFor more information, see Autorefreshing a materialized view.Follow" | https://repost.aws/knowledge-center/redshift-materialized-view-not-refresh |
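As an illustration of the checks described above, the following sketch uses the Amazon Redshift Data API from the AWS CLI to query STV_MV_INFO for stale or unrefreshable materialized views. The cluster identifier, database, and database user are assumed placeholder values.

```
#!/bin/bash
# Placeholders -- replace with your cluster, database, and database user.
CLUSTER_ID="my-redshift-cluster"
DATABASE="dev"
DB_USER="awsuser"

# Submit the query and capture the statement ID.
STATEMENT_ID=$(aws redshift-data execute-statement \
  --cluster-identifier "$CLUSTER_ID" \
  --database "$DATABASE" \
  --db-user "$DB_USER" \
  --sql "SELECT db_name, name, state, is_stale FROM stv_mv_info;" \
  --query 'Id' --output text)

# Give the query a few seconds to finish, then print the result set.
sleep 5
aws redshift-data get-statement-result --id "$STATEMENT_ID"
```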
How can I tag the Amazon VPC subnets in my Amazon EKS cluster for automatic subnet discovery by load balancers or ingress controllers? | I want to deploy load balancers or ingress controllers in the public or private subnets of my Amazon Virtual Private Cloud (Amazon VPC). Why can't my subnets be discovered by Kubernetes in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster? | "I want to deploy load balancers or ingress controllers in the public or private subnets of my Amazon Virtual Private Cloud (Amazon VPC). Why can't my subnets be discovered by Kubernetes in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster?Short descriptionThe Kubernetes Cloud Controller Manager (cloud-controller-manager) and AWS Load Balancer Controller (aws-load-balancer-controller) query a cluster's subnets to identify them. This query uses the following tag as a filter:kubernetes.io/cluster/cluster-nameNote: Replace cluster-name with your Amazon EKS cluster's name.The Cloud Controller Manager and AWS Load Balancer Controller both require subnets to have either of the following tags:kubernetes.io/role/elb-or-kubernetes.io/role/internal-elbNote: If you don't use the preceding tags, then Cloud Controller Manager determines if a subnet is public or private by examining its associated route table. Unlike private subnets, public subnets use an internet gateway to get a direct route to the internet.If you don't associate your subnets with either tag and you're using AWS Load Balancer Controller, then you receive an error.For example, if you're troubleshooting the Kubernetes service and you run the kubectl describe service your-service-name command, then you receive the following error:Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 9s (x2 over 14s) service-controller Ensuring load balancer Warning CreatingLoadBalancerFailed 9s (x2 over 14s) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/guestbook: could not find any suitable subnets for creating the ELBIf you're troubleshooting the Application Load Balancer Ingress Controller and you run the kubectl logs your-aws-load-balancer-controller-pod-name command, then you receive the following error:E0121 22:44:02.864753 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due toretrieval of subnets failed to resolve 2 qualified subnets. Subnets must contain the kubernetes.io/cluster/\u003ccluster name\u003e tag with a value of shared or owned and the kubernetes.io/role/elb tag signifying it should be used for ALBs Additionally, there must be at least 2 subnets with unique availability zones as required by ALBs. Either tag subnets to meet this requirement or use the subnets annotation on the ingress resource to explicitly call out what subnets to use for ALB creation. 
The subnets that did resolve were []" "controller"="alb-ingress-controller" "request"={"Namespace":"default","Name":"2048-ingress"}Note: If you create the VPC using eksctl, then all the subnets in that VPC have the kubernetes.io/role/elb and kubernetes.io/role/internal-elb tags.ResolutionChoose the appropriate option for tagging your subnets:For public and private subnets used by load balancer resourcesTag all public and private subnets that your cluster uses for load balancer resources with the following key-value pair:Key: kubernetes.io/cluster/cluster-nameValue: sharedNote: Replace cluster-name with your Amazon EKS cluster's name. The shared value allows more than one cluster to use the subnet.For private subnets used by internal load balancersTo allow Kubernetes to use your private subnets for internal load balancers, tag all private subnets in your VPC with the following key-value pair:Key: kubernetes.io/role/internal-elbValue: 1For public subnets used by external load balancersTo allow Kubernetes to use only tagged subnets for external load balancers, tag all public subnets in your VPC with the following key-value pair:Key: kubernetes.io/role/elbValue: 1Note: Use the preceding tag instead of using a public subnet in each Availability Zone.Related informationAmazon EKS VPC and subnet requirements and considerationsSubnet Auto Discovery on the GitHub websiteApplication load balancing on Amazon EKSFollow" | https://repost.aws/knowledge-center/eks-vpc-subnet-discovery |
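The tagging steps above can also be applied from the AWS CLI. The following is a minimal sketch; the cluster name and subnet IDs are placeholders that you would replace with your own.

```
#!/bin/bash
# Placeholders -- replace with your EKS cluster name and subnet IDs.
CLUSTER_NAME="my-eks-cluster"
PUBLIC_SUBNETS="subnet-0a1b2c3d4e5f67890 subnet-0b2c3d4e5f6789012"
PRIVATE_SUBNETS="subnet-0c3d4e5f678901234 subnet-0d4e5f67890123456"

# Mark every load balancer subnet as shared with the cluster.
aws ec2 create-tags --resources $PUBLIC_SUBNETS $PRIVATE_SUBNETS \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"

# Public subnets: discoverable for internet-facing load balancers.
aws ec2 create-tags --resources $PUBLIC_SUBNETS \
  --tags "Key=kubernetes.io/role/elb,Value=1"

# Private subnets: discoverable for internal load balancers.
aws ec2 create-tags --resources $PRIVATE_SUBNETS \
  --tags "Key=kubernetes.io/role/internal-elb,Value=1"
```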
How does the AWS Glue crawler detect the schema? | I am running an AWS Glue crawler. The crawler is creating multiple tables even though the schemas look similar. I want to know how my crawler detects the schema. | "I am running an AWS Glue crawler. The crawler is creating multiple tables even though the schemas look similar. I want to know how my crawler detects the schema.ResolutionWhen you run your AWS Glue crawler, the crawler does the following:Classifies the dataGroups the data into tables or partitionsWrites metadata to the AWS Glue Data CatalogReview the following to learn what happens when you run the crawler and how the crawler detects the schema.Defining a crawlerWhen you define an AWS Glue crawler, you can choose one or more custom classifiers that evaluate the format of your data to infer a schema. When the crawler runs, the first classifier in your list to successfully recognize your data store is used to create a schema for your table. You define the custom classifiers before you define the crawler. When the crawler runs, the crawler uses the custom classifier that you defined to find a match in the data store. The match with each classifier generates a certainty. If the classifier returns certainty=1.0 during processing, then the crawler is 100 percent certain that the classifier can create the correct schema. In this case, the crawler stops invoking other classifiers, and then creates a table with the classifier that matches the custom classifier.If AWS Glue doesn’t find a custom classifier that fits the input data format with 100 percent certainty, then AWS Glue invokes the built-in classifiers. The built-in classifier returns either certainty=1.0 if the format matches, or certainty=0.0 if the format doesn't match. If no classifier returns certainty=1.0, then AWS Glue uses the output of the classifier that has the highest certainty. If no classifier returns a certainty of higher than 0.0, then AWS Glue returns the default classification string of UNKNOWN. For the current list of built-in classifiers in AWS Glue and the order that they are invoked in, see Built-in classifiers in AWS Glue.Schema detection in crawlerDuring the first crawler run, the crawler reads either the first 1,000 records or the first megabyte of each file to infer the schema. The amount of data read depends on the file format and availability of a valid record. For example, if the input file is a JSON file, then the crawler reads the first 1 MB of the file to infer the schema. If a valid record is read within the first 1 MB of the file, then the crawler infers the schema. If the crawler can't infer the schema after reading the first 1 MB, then the crawler reads up to a maximum of 10 MB of the file, incrementing 1 MB at a time. For CSV files, the crawler reads either the first 1000 records or the first 1 MB of data, whatever comes first. For Parquet files, the crawler infers the schema directly from the file. The crawler compares the schemas inferred from all the subfolders and files, and then creates one or more tables. 
When a crawler creates a table, it considers the following factors:Data compatibility to check if the data is of the same format, compression type, and include pathSchema similarity to check how closely similar the schemas are in terms partition threshold and the number of different schemasFor schemas to be considered similar, the following conditions must be true:The partition threshold is higher than 0.7 (70%).The maximum number of different schemas (also referred to as "clusters" in this context) doesn't exceed 5.The crawler infers the schema at folder level and compares the schemas across all folders. If the schemas that are compared match, that is, if the partition threshold is higher than 70%, then the schemas are denoted as partitions of a table. If they don’t match, then the crawler creates a table for each folder, resulting in a higher number of tables.Example scenariosExample 1: Suppose that the folder DOC-EXAMPLE-FOLDER1 has 10 files, 8 files with schema SCH_A and 2 files with SCH_B.Suppose that the files with the schema SHC_A are similar to the following:{ "id": 1, "first_name": "John", "last_name": "Doe"}{ "id": 2, "first_name": "Li", "last_name": "Juan"}Suppose that the files with the schema SCH_B are similar to the following:{"city":"Dublin","country":"Ireland"}{"city":"Paris","country":"France"}When the crawler crawls the Amazon Simple Storage Service (Amazon S3) path s3://DOC-EXAMPLE-FOLDER1, the crawler creates one table. The table comprises columns of both schema SCH_A and SCH_B. This is because 80% of the files in the path belong to the SCH_A schema, and 20% of the files belong to the SCH_B schema. Therefore, the partition threshold value is met. Also, the number of different schemas hasn't exceeded the number of clusters, and the cluster size limit isn't exceeded.Example 2: Suppose that the folder DOC-EXAMPLE-FOLDER2 has 10 files, 7 files with the schema SCH_A and 3 files with the schema SCH_B.When the crawler crawls the Amazon S3 path s3://DOC-EXAMPLE-FOLDER2, the crawler creates one table for each file. This is because 70% of the files belong to the schema SCH_A and 30% of the files belong to the schema SCH_B. This means that the partition threshold isn't met. You can check the crawler logs in Amazon CloudWatch to get information on the created tables.Crawler optionsCreate a single schema: To configure the crawler to ignore the schema similarity and create only one schema, use the option Create a single schema for each S3 path. For more information, see How to create a single schema for each Amazon S3 include path. However, if the crawler detects data incompatibility, then the crawler still creates multiple tables.Specify table location: The table level crawler option lets you tell the crawler where the tables are located and how the partitions are to be created. When you specify a Table level value, the table is created at that absolute level from the Amazon S3 bucket. When configuring the crawler on the console, you can specify a value for the Table level crawler option. The value must be a positive integer that indicates the table location (the absolute level in the dataset). The level for the top-level folder is 1. For example, for the path mydataset/a/b, if the level is set to 3, then the table is created at the location mydataset/a/b. For more information, see How to specify the table location.Related informationHow crawlers workSetting crawler configuration optionsFollow" | https://repost.aws/knowledge-center/glue-crawler-detect-schema |
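The "Create a single schema" and "Specify table location" options described above can also be set through the AWS CLI. This is a sketch under the assumption of a crawler named my-crawler and a desired table level of 3; adjust both for your environment.

```
#!/bin/bash
# Placeholder crawler name -- replace with your own.
CRAWLER_NAME="my-crawler"

# Combine compatible schemas into a single table and create tables
# at folder depth 3 (counting the top-level folder as level 1).
aws glue update-crawler \
  --name "$CRAWLER_NAME" \
  --configuration '{"Version":1.0,"Grouping":{"TableGroupingPolicy":"CombineCompatibleSchemas","TableLevelConfiguration":3}}'

# Re-run the crawler so the new configuration is applied.
aws glue start-crawler --name "$CRAWLER_NAME"
```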
How can I use AWS DataSync to transfer the data to or from a cross-account Amazon S3 location? | I want to use AWS DataSync to transfer data to or from a cross-account Amazon Simple Storage (Amazon S3) bucket. | "I want to use AWS DataSync to transfer data to or from a cross-account Amazon Simple Storage (Amazon S3) bucket.Short descriptionTo use DataSync for cross-account data transfer, do the following:Use AWS Command Line Interface (AWS CLI) or AWS SDK to create a cross-account Amazon S3 location in DataSync.Create a DataSync task that transfers data from the source bucket to the destination bucket.Keep in mind the following limitations when using DataSync to transfer data between buckets owned by different S3 accounts:DataSync doesn't apply the bucket-owner-full-control access control list (ACL) when transferring data to a cross-account destination bucket. This leads to object ownership issues in the destination bucket.For a cross-account S3 location, only a cross-account bucket in the same Region is supported. If you attempt a cross-account and a cross-Region S3 location, then you receive the GetBucketLocation or Unable to connect to S3 endpoint errors. So, if a task is created in source account, the task must be created in the same Region as the destination bucket. If a task is created in destination account, then the task must be created in same Region as the source bucket.You can't use the cross-account pass role to access the cross-account S3 location.You can configure the DataSync task in the destination account to pull data from the source by working around the preceding limitations.ResolutionPerform the required checksSuppose that the source account has the cross-account source S3 bucket and the destination account has the destination S3 bucket and the DataSync task. Perform the following checks:AWS Identity and Management (IAM) user/role: Check if the following IAM users or roles have the required permissions:The user or role that you're using to create the cross-account S3 locationThe role that you assigned to the S3 locationSource bucket policy: Be sure that the source bucket policy allows both IAM users/roles in the destination account to access the bucket. The following example policy grants the access to source bucket to both IAM users/roles:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::1111222233334444:role/datasync-config-role", "arn:aws:iam::1111222233334444:role/datasync-transfer-role" ] }, "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::example-source-bucket" ] }, { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::1111222233334444:role/datasync-config-role", "arn:aws:iam::1111222233334444:role/datasync-transfer-role" ] }, "Action": [ "s3:AbortMultipartUpload", "s3:DeleteObject", "s3:GetObject", "s3:ListMultipartUploadParts", "s3:PutObjectTagging", "s3:GetObjectTagging", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::example-source-bucket/*" ] } ]}Be sure to replace the following values in the preceding policy:example-source-bucket with the name of the source bucket1111222233334444 with the account ID of the destination accountdatasync-config-role with the IAM role that's used for DataSync configuration (example: create a source S3 location and the task in DataSync)Note: You might also use an IAM user. 
This article considers the use of the IAM role.dataysnc-transfer-role with the IAM role that's assigned when creating the source S3 locationNote: DataSync uses this role to access the cross-account data.Destination S3 location:Be sure that the destination S3 location is created according to the instructions in Creating an Amazon S3 location for AWS DataSync.When you use DataSync with S3 buckets that use server-side encryption, follow the instructions in Accessing S3 buckets using server-side encryption.Use AWS CLI or SDK to create a cross-account source S3 location in DataSyncNote: Creating a cross-account S3 location is not supported in the AWS Management Console.You can create the cross-account S3 location using either of the following methods:Use a configuration JSON file.Use the options in the AWS CLI command.Use a configuration JSON file1. Create a configuration JSON file input.template for the cross-account S3 location with the following parameters:{ "Subdirectory": "", "S3BucketArn": "arn:aws:s3:::[Source bucket]", "S3StorageClass": "STANDARD", "S3Config": { "BucketAccessRoleArn": "arn:aws:iam::1111222233334444:role/datasync-transfer-role" }}2. Create an S3 location by running the following AWS CLI command:aws datasync create-location-s3 --cli-input-json file://input.template --region example-DataSync-RegionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.For more information, see create-location-s3.When the S3 location is created, you see the following output:{"LocationArn": "arn:aws:datasync:example-Region:123456789012:location/loc-0f8xxxxxxxxe4821"}Note that 123456789012 is the account ID of the source account.Use the options in the AWS CLI commandRun the following AWS CLI command with appropriate options:aws datasync create-location-s3 --s3-bucket-arn arn:aws:s3:::example-source-bucket --s3-storage-class STANDARD --s3-config BucketAccessRoleArn="arn:aws:iam::1111222233334444:role/datasync-transfer-role" --region example-DataSync-RegionBe sure to replace the following values in the command:example-source-bucket with the name of the source bucketexample-DataSync-Region with the Region where you'll be creating the DataSync task.Create a DataSync taskConfigure the DataSync task, and start the task from the DataSync console. For more information, see Starting your AWS DataSync task.Known errors and resolutionsError: error creating DataSync Location S3: InvalidRequestException: Please provide a bucket in the xxx region where DataSync is currently usedIf you receive this error, then confirm that the bucket and IAM policies include the following required permissions:"Action": ["s3:GetBucketLocation","s3:ListBucket","s3:ListBucketMultipartUploads"]If you get this error when using a cross-account bucket, then be sure that the buckets are in the same Region as your DataSync taskS3 object ownership issuesDataSync doesn't support using a cross-account bucket as the destination location. Therefore, you can't use the ACL bucket-owner-full-control. If the DataSync task runs from the source bucket account, the objects uploaded to the destination bucket account might have the object ownership issue. To resolve this issue, if the destination bucket has no objects that are using ACLs, consider disabling the ACLs on the destination bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. 
Otherwise, it's a best practice to configure the DataSync task in the destination account to pull data from the source.Related informationHow to use AWS DataSync to migrate data between Amazon S3 bucketsFollow" | https://repost.aws/knowledge-center/datasync-transfer-cross-account-s3 |
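To round out the entry above, the following sketch creates and starts the DataSync task from the CLI once both locations exist. The ARNs shown are placeholders; use the location ARNs returned by your own create-location-s3 calls.

```
#!/bin/bash
# Placeholders -- replace with the ARNs returned when you created the locations.
SOURCE_LOCATION_ARN="arn:aws:datasync:us-east-1:111122223333:location/loc-0aaaaaaaaaaaaaaaa"
DEST_LOCATION_ARN="arn:aws:datasync:us-east-1:111122223333:location/loc-0bbbbbbbbbbbbbbbb"

# Create the task in the same Region as the locations.
TASK_ARN=$(aws datasync create-task \
  --source-location-arn "$SOURCE_LOCATION_ARN" \
  --destination-location-arn "$DEST_LOCATION_ARN" \
  --name "cross-account-s3-copy" \
  --query 'TaskArn' --output text)

# Start a task execution to begin the transfer.
aws datasync start-task-execution --task-arn "$TASK_ARN"
```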
How can I speed up the creation of a global secondary index for an Amazon DynamoDB table? | "I want to create a global secondary index (GSI) for an Amazon DynamoDB table, but it's taking a long time." | "I want to create a global secondary index (GSI) for an Amazon DynamoDB table, but it's taking a long time.Short descriptionWhen you add a new global secondary index to an existing table, then the IndexStatus is set to CREATING and Backfilling is true. Backfilling reads items from the table and determines whether they can be added to the index. When you backfill an index, DynamoDB uses the internal system capacity to read items from the table. This minimizes the effect of index creation and makes sure that the table doesn't run out of read capacity.The time required for building a global secondary index depends on multiple factors:The size of the base tableThe number of items in the table that qualify for inclusion in the indexThe number of attributes projected into the indexThe provisioned write capacity of the indexWrite activity on the base table during index creationData distribution across index partitionsTo speed up the creation process, increase the number of write capacity units (WCUs) on the index.Global secondary indexes inherit the read or write capacity mode from the base table. If your table is in on-demand mode, then DynamoDB also creates the index in on-demand mode. In this case, you can't increase the capacity on the index, because an on-demand DynamoDB table scales itself based on incoming traffic.ResolutionUse the OnlineIndexPercentageProgress Amazon CloudWatch metric to monitor the index creation progress:1. Open the DynamoDB console.2. In the navigation pane, choose Tables, and then select your table from the list.3. Choose the Metrics tab.4. Choose View all CloudWatch metrics.5. In the search box, enter OnlineIndexPercentageProgress.Note: If the search returns no results, wait a minute or so for metrics to populate. Then, try again.6. Choose the name of the index to see the progress.Determine the number of additional WCUs that you need. To do this, divide the table size in kilobytes by your desired backfill time. See the following examples of this calculation.Example 1Suppose that you have a 1 GiB (1,074,000 KB) table. You want the backfilling process to complete in 10 minutes (600 seconds). Therefore, calculate the number of WCUs as follows:1,074,000 / 600 = 1,790 WCUsExample 2Suppose that you want the index to be 2 GB in size, and you want the index creation to be completed in one hour. Therefore, calculate the number of WCUs as follows:(2GB * 1024 * 1024) KB / 60 minutes / 60 second = ~583 WCUThe required number of WCUs depends on the index size and the time that you estimate.Note: This is only an estimate. Creation time depends on multiple factors, such as your key distribution, the items' size, and the number of attributes that are projected into the index.To provision additional write capacity, do the following:1. Open the DynamoDB console.2. In the navigation pane, choose Tables, and then select your table from the list.3. Choose the Capacity tab.4. Increase the write capacity of the index, and then choose Save.5. 
After about a minute, check the OnlineIndexPercentageProgress metric to see if the creation speed is improved.Note: You don't need to provision additional read capacity.Related informationImproving data access with secondary indexesAdding a global secondary index to an existing tableManaging global secondary indexesFollow" | https://repost.aws/knowledge-center/create-gsi-dynamodb |
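For readers who prefer the CLI over the console steps above, this is a minimal sketch of raising the index's write capacity and watching the backfill status. The table name, index name, and the 1,790 WCU figure (taken from Example 1) are illustrative assumptions, and this applies only to tables in provisioned capacity mode.

```
#!/bin/bash
# Placeholder table name -- the index name and WCU value below are also
# assumptions for illustration (1,790 WCUs from Example 1).
TABLE_NAME="MyTable"

aws dynamodb update-table \
  --table-name "$TABLE_NAME" \
  --global-secondary-index-updates '[{"Update":{"IndexName":"my-new-index","ProvisionedThroughput":{"ReadCapacityUnits":5,"WriteCapacityUnits":1790}}}]'

# Check the index status and backfilling flag until creation completes.
aws dynamodb describe-table --table-name "$TABLE_NAME" \
  --query 'Table.GlobalSecondaryIndexes[].[IndexName,IndexStatus,Backfilling]'
```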
How do I troubleshoot IKEv2 tunnel stability issues during a rekey? | "I created an AWS Virtual Private Network (AWS VPN) connection using IKEv2. The VPN tunnels were up and working, but they went down during a rekey and aren't coming back up. How do I troubleshoot this?" | "I created an AWS Virtual Private Network (AWS VPN) connection using IKEv2. The VPN tunnels were up and working, but they went down during a rekey and aren't coming back up. How do I troubleshoot this?ResolutionTo troubleshoot IKEv2 tunnel stability issues during a rekey:Confirm that "Perfect Forward Secrecy (PFS)" is activated on the customer gateway for the Phase 2 configuration.If your customer gateway is configured as a policy-based VPN, then determine if you must reconfigure your VPN connection to use specific traffic selectors. By default, AWS VPN endpoints are configured as route-based VPNs. AWS initiates a child security association (SA) rekey using 0.0.0.0/0, 0.0.0.0/0 for the traffic selectors. Some customer gateway devices don't accept the Phase 2 rekey initiated by AWS. This is because the traffic selectors on AWS VPN endpoints don't match the traffic selectors that are configured on the customer gateway device. In this case, you can configure your AWS VPN connection to use specific traffic selectors that match with customer gateway.To configure a new VPN connection to use specific traffic selectors:1. For Local IPv4 Network CIDR, specify the on-premises (customer side) CIDR range.2. For Remote IPv4 Network CIDR, specify the AWS side CIDR range.To configure an existing VPN connection to use specific traffic selectors:1. Select the AWS VPN connection where you must modify the traffic selectors on the AWS side. 2. Choose Actions, then choose Modify VPN Connection Options from the dropdown list.3. For Local IPv4 Network CIDR, specify the on-premises (customer side) CIDR range.4. For Remote IPv4 Network CIDR, specify the AWS side CIDR range.5. Choose Save.Note: The VPN connection is temporarily unavailable for a brief period while the VPN connection is updated.Important: When you modify the VPN connection options, neither of the following change:VPN endpoint IP addresses on the AWS sideTunnel optionsFollow" | https://repost.aws/knowledge-center/vpn-fix-ikev2-tunnel-instability-rekey |
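The console steps above for modifying the traffic selectors have a CLI equivalent, sketched below. The VPN connection ID and CIDR ranges are placeholders for your own values.

```
#!/bin/bash
# Placeholders -- replace with your VPN connection ID and network ranges.
VPN_CONNECTION_ID="vpn-0123456789abcdef0"
ONPREM_CIDR="192.168.0.0/16"   # Local IPv4 Network CIDR (customer side)
AWS_CIDR="10.0.0.0/16"         # Remote IPv4 Network CIDR (AWS side)

# Narrow the traffic selectors so they match the policy on the customer gateway device.
aws ec2 modify-vpn-connection-options \
  --vpn-connection-id "$VPN_CONNECTION_ID" \
  --local-ipv4-network-cidr "$ONPREM_CIDR" \
  --remote-ipv4-network-cidr "$AWS_CIDR"
```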
Why is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager? | My Amazon Elastic Compute Cloud (Amazon EC2) instance either lost its connection or isn't displaying under Fleet Manager in the AWS Systems Manager console. | "My Amazon Elastic Compute Cloud (Amazon EC2) instance either lost its connection or isn't displaying under Fleet Manager in the AWS Systems Manager console.ResolutionA managed instance is an EC2 instance that's used with Systems Manager as a managed node.To confirm that your EC2 instance meets the prerequisites to be a managed instance, run the AWSSupport-TroubleshootManagedInstance Systems Manager Automation document. Then, verify that your EC2 instance meets the following requirements.Important: Throughout the troubleshooting steps, select the AWS Region that includes your EC2 instance.Verify that SSM Agent is installed and running on the instanceAfter you confirm that your operating system supports Systems Manager, verify that AWS Systems Manager Agent (SSM Agent) is installed and running on your instance.SSM Agent is preinstalled on some Linux, macOS, and Windows Amazon Machine Images (AMIs). To manually install SSM Agent when the agent isn't preinstalled, see the following documentation:Linux: Manually installing SSM Agent on EC2 instances for LinuxmacOS: Manually installing SSM Agent on EC2 instances for macOSWindows: Manually installing SSM Agent on EC2 instances for Windows ServerTo verify that SSM Agent is running, use operating system-specific commands to check the agent status.After you finish verifying SSM Agent, run ssm-cli to troubleshoot managed instance availability.Verify connectivity to Systems Manager endpoints on port 443To verify connectivity to Systems Manager endpoints on port 443, you must consider your operating system and subnet settings. For a list of Systems Manager endpoints by Region, see AWS Systems Manager endpoints and quotas.Note: In the examples, the ssmmessages endpoint is required for AWS Systems Manager Session Manager.EC2 Linux instancesUse either Telnet or Netcat commands to verify connectivity to endpoints on port 443 for EC2 Linux instances.Note: Replace RegionID with your instance's Region when running commands.Telnet commands:telnet ssm.RegionID.amazonaws.com 443telnet ec2messages.RegionID.amazonaws.com 443telnet ssmmessages.RegionID.amazonaws.com 443Telnet connection example:root@111800186:~# telnet ssm.us-east-1.amazonaws.com 443Trying 52.46.141.158...Connected to ssm.us-east-1.amazonaws.com.Escape character is '^]'.To exit from telnet, hold down the Ctrl key and press the ] key. Enter quit, and then press Enter.Netcat commands:nc -vz ssm.RegionID.amazonaws.com 443nc -vz ec2messages.RegionID.amazonaws.com 443nc -vz ssmmessages.RegionID.amazonaws.com 443Netcat isn't preinstalled on EC2 instances. To manually install Netcat, see Ncat on the Nmap website.EC2 Windows instancesUse the following Windows PowerShell commands to verify connectivity to endpoints on port 443 for EC2 Windows instances.Test-NetConnection ssm.RegionID.amazonaws.com -port 443Test-NetConnection ec2messages.RegionID.amazonaws.com -port 443Test-NetConnection ssmmessages.RegionID.amazonaws.com -port 443Public subnetsSystems Manager endpoints are public endpoints. 
To resolve issues when connecting to an endpoint from an instance in a public subnet, confirm the following points:Your instance's route table routes internet traffic to an internet gateway.Your Amazon Virtual Private Cloud (Amazon VPC) security groups and network access control lists (network ACLs) allow outbound connections on port 443.Private subnetsUse private IP addresses to privately access Amazon EC2 and Systems Manager APIs. To resolve issues when connecting to an endpoint from an instance in a private subnet, confirm one of the following points:Your instance's route table routes internet traffic to a NAT gateway.Your VPC endpoint is configured to reach Systems Manager endpoints.For more information, see How do I create VPC endpoints so that I can use Systems Manager to manage private EC2 instances without internet access?Note: Each interface endpoint creates an elastic network interface in the provided subnet.As a security best practice for private subnets, verify the following settings:The security group attached to your VPC endpoint's network interface allows TCP port 443 inbound traffic from the security group that's attached to your instance.The security group attached to your instance allows TCP port 443 outbound traffic to the private IP address for your VPC endpoint's network interface.Verify the setup for Default Host Management ConfigurationNote: If you aren't using Default Host Management Configuration, then skip to the Verify that the correct IAM role is attached to the instance section.Systems Manager automatically manages EC2 instances without an AWS Identity and Access Management (IAM) instance profile when you configure Default Host Management Configuration. When you configure this feature, Systems Manager has permissions to manage all instances in your Region and account. If the permissions aren't sufficient for your use case, add policies to the default IAM role created by Default Host Management Configuration.All the associated instances must use Instance Metadata Service Version 2 (IMDSv2). To check your IMDSv2 configuration, see When there is zero IMDSv1 usage and Check if your instances are transitioned to IMDSv2.Default Host Management Configuration is available in SSM Agent version 3.2.582.0 or later. 
To verify your SSM Agent version, see Checking the SSM Agent version number.To verify the setup for Default Host Management Configuration, complete the following steps:Open the Systems Manager console.In the navigation pane, choose Fleet Manager.From the Account management dropdown list, choose Default Host Management Configuration.Verify that the Enable Default Host Management Configuration setting is turned on.You might also use the following AWS Command Line Interface (AWS CLI) command to verify the setup for Default Host Management Configuration:Note: Replace AccountID with your AWS account ID when running commands.aws ssm get-service-setting \--setting-id arn:aws:ssm:RegionID:AccountID:servicesetting/ssm/managed-instance/default-ec2-instance-management-roleWhen Default Host Management Configuration is set up, you'll receive a response similar to the following:{ "ServiceSetting": { "SettingId": "/ssm/managed-instance/default-ec2-instance-management-role", "SettingValue": "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole", "LastModifiedDate": 1679492424.738, "LastModifiedUser": "arn:aws:sts::012345678910:assumed-role/role/role-name", "ARN": "arn:aws:ssm:ap-southeast-1:012345678910:servicesetting/ssm/managed-instance/default-ec2-instance-management-role", "Status": "Customized" }}Note: If the value for SettingValue is $None, then Default Host Management Configuration isn't configured.Verify that Default Host Management Configuration is using an appropriate IAM roleThe AWSSystemsManagerDefaultEC2InstanceManagementRole role is the recommended IAM role when you set up Default Host Management Configuration. To use a different role, make sure that the role has the AmazonSSMManagedEC2InstanceDefaultPolicy IAM policy attached to it.If you have instance profiles attached to your EC2 instances, then remove any permissions that allow the ssm:UpdateInstanceInformation operation. SSM Agent tries to use instance profile permissions before using the Default Host Management Configuration permissions. When you allow the ssm:UpdateInstanceInformation operation in your instance profiles, your instance doesn't use the Default Host Management Configuration permissions.Verify that the correct IAM role is attached to the instanceNote: If you're using Default Host Management Configuration, then skip to the Verify connectivity to IMDS section.To make API calls to a Systems Manager endpoint, you must attach the AmazonSSMManagedInstanceCore policy to the IAM role that's attached to your instance. If you're using a custom IAM policy, then confirm that your custom policy uses the permissions found in AmazonSSMManagedInstanceCore. Also, make sure that the trust policy for your IAM role allows ec2.amazonaws.com to assume this role.For more information, see Add permissions to a Systems Manager instance profile (console).Verify connectivity to IMDSSSM Agent must communicate with Instance Metadata Service (IMDS) to obtain necessary information about your instance. To test the connection, run the following Netcat command:nc -vz 169.254.169.254 80To verify that IMDS is set up for your existing instance, do one of the following steps:Open the Amazon EC2 console. In the navigation pane, choose Instances, select your instance, and then choose Actions, Instance settings, Modify instance metadata options. 
In the dialog box, Instance metadata service must be Enabled.In the AWS CLI, run the describe-instances CLI command.aws ec2 describe-instances --query "Reservations[*].Instances[*].MetadataOptions" --instance-ids i-012345678910Output example:[ [ { "State": "applied", "HttpTokens": "optional", "HttpPutResponseHopLimit": 1, "HttpEndpoint": "enabled", "HttpProtocolIpv6": "disabled", "InstanceMetadataTags": "disabled" } ]]Note: "HttpTokens": "optional" means both IMDSv1 and IMDSv2 are supported. "HttpTokens": "required" means IMDSv2 is supported. "HttpEndpoint": "enabled" means that IMDS is turned on.If you're using a proxy on your instance, then the proxy might block connectivity to the metadata URL. To avoid this, make sure that you configure your SSM Agent to work with a proxy and set no_proxy for the metadata URL. To configure SSM Agent to use a proxy, see the following documentation:Linux: Configure SSM Agent to use a proxy (Linux)macOS: Configure SSM Agent to use a proxy (macOS)Windows: Configure SSM Agent to use a proxy for Windows Server instancesAdditional troubleshootingIf your instance still doesn't appear as a managed node or shows a lost connection in Systems Manager, then continue troubleshooting in the SSM Agent logs:Linux and macOS: The SSM Agent logs are in /var/log/amazon/ssm.Windows: The SSM Agent logs are in %PROGRAMDATA%\Amazon\SSM\Logs.When your instance isn't reporting to SSM Agent, try signing in using RDP (Windows) or SSH (Linux) to collect the logs. If you can't collect the logs, then you must stop your instance and detach the root volume. Then, attach the root volume to another instance in the same Availability Zone as a secondary volume to obtain the logs.Related informationAttach an Amazon Elastic Block Store (Amazon EBS) volume to an instanceDetach an Amazon EBS volume from a Linux instanceMake an Amazon EBS volume available for use on LinuxMake an Amazon EBS volume available for use on WindowsFollow" | https://repost.aws/knowledge-center/systems-manager-ec2-instance-not-appear |
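A compact sketch of the connectivity and agent checks described above, assuming an Amazon Linux 2 instance in us-east-1; the Region, instance ID, and service name are assumptions to adjust for your environment.

```
#!/bin/bash
# Placeholders -- replace with your Region and instance ID.
REGION="us-east-1"
INSTANCE_ID="i-0123456789abcdef0"

# 1. Run on the instance: confirm the Systems Manager endpoints are reachable on port 443.
for endpoint in ssm ec2messages ssmmessages; do
  nc -vz "${endpoint}.${REGION}.amazonaws.com" 443
done

# 2. Run on the instance: confirm SSM Agent is active (Amazon Linux 2 service name).
sudo systemctl status amazon-ssm-agent --no-pager

# 3. Run from a workstation with AWS credentials: check whether the instance reports to Systems Manager.
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=${INSTANCE_ID}" \
  --query 'InstanceInformationList[].[InstanceId,PingStatus,LastPingDateTime,AgentVersion]' \
  --region "$REGION"
```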
Why can't I access a VPC to launch my Amazon Redshift cluster? | "I want to launch an Amazon Redshift cluster in a specific Amazon Virtual Private Cloud (Amazon VPC). However, I can't access my VPC from the dropdown list." | "I want to launch an Amazon Redshift cluster in a specific Amazon Virtual Private Cloud (Amazon VPC). However, I can't access my VPC from the dropdown list.Short descriptionYou might not be able to select and access your VPC for the following reasons:The VPC doesn't exist in the same AWS Region where you're trying to create your Amazon Redshift cluster.The VPC isn't associated with a cluster subnet group.ResolutionTo access your VPC in Amazon Redshift, perform the following steps:1. Create a VPC in the same Region where you want to launch an Amazon Redshift cluster.2. Create a cluster subnet group.Note: To improve fault tolerance, it's a best practice to create a cluster subnet group with two or more subnets from different Availability Zones. However, all the nodes in a cluster must be in the same Availability Zone.3. Launch an Amazon Redshift cluster into the VPC. In the Additional configurations section, switch off Use defaults. Then, choose the VPC that you want from the dropdown list.Related informationAmazon Redshift cluster subnet groupsFollow" | https://repost.aws/knowledge-center/vpc-redshift-associate |
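The same steps can be scripted. The sketch below creates a cluster subnet group and then references it when creating the cluster; every identifier, the node type, and the password are placeholder assumptions.

```
#!/bin/bash
# Placeholders -- replace with subnets from the VPC you want to use.
SUBNET_GROUP_NAME="my-redshift-subnet-group"
SUBNET_IDS="subnet-0a1b2c3d4e5f67890 subnet-0f9e8d7c6b5a43210"

aws redshift create-cluster-subnet-group \
  --cluster-subnet-group-name "$SUBNET_GROUP_NAME" \
  --description "Subnet group for my Redshift cluster" \
  --subnet-ids $SUBNET_IDS

# Launch the cluster into the VPC by referencing the subnet group.
aws redshift create-cluster \
  --cluster-identifier my-redshift-cluster \
  --node-type ra3.xlplus \
  --number-of-nodes 2 \
  --master-username awsuser \
  --master-user-password 'ExamplePassw0rd1' \
  --cluster-subnet-group-name "$SUBNET_GROUP_NAME"
```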
How can I use a static or Elastic IP address for an Amazon ECS task on Fargate? | I want to use a static or Elastic IP address for an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate. | "I want to use a static or Elastic IP address for an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate.Short descriptionYou can't add a static IP address or Elastic IP address directly to a Fargate task. To use a static IP or Elastic IP with Fargate tasks, first create a Fargate Service with a Network Load Balancer. Then, attach the Elastic IP address of the task to the Load Balancer.Choose one of the following options:To create a static IP address for a Fargate task for inbound traffic, complete the following steps in the Resolution section.To create a static IP address for a Fargate task for outbound traffic, create a NAT gateway. In this scenario, a static IP address is required by the downstream consumer. You must place your Fargate task on a private subnet. You can use the NAT gateway IP address for an IP allow list.ResolutionCreate a network load balancer, and then configure routing for your target groupOpen the Amazon EC2 console.In the navigation pane, under Load Balancing, choose Load Balancers.Choose Create Load Balancer.On the Select load balancer type page, choose Create for Network Load Balancer.On the Create Network Load Balancer page, for Load balancer name, enter a name for your load balancer.For Scheme, select either Internet-facing or internal.For IP address type, select IPv4.In the Network mapping section, for VPC, select the Amazon Virtual Private Cloud (Amazon VPC) for your Fargate task.For Mappings, select at least one Availability Zone and one subnet for each Availability Zone.Note: Turning on multiple Availability Zones increases the fault tolerance of your applications. For internet-facing load balancers, select an Elastic IP address for each Availability Zone. This provides your load balancer with static IP addresses. Or, for an internal load balancer, assign a private IP address from the IPv4 range of each subnet instead of letting AWS assign one for you.In the Listeners and routing section, keep the default listener or add another listener.Note: The default listener accepts TCP traffic on port 80. You can keep the default listener settings, modify the protocol or port of the listener, or choose Add listener to add another listener.For Protocol, select your protocol.For Port, select your port.Under Default action, choose Create target group.Note: The target group is used by the Network Load Balancer listener rule that forwards the request to the target group.On the Specify group details page, for Choose a target type, select IP addresses.Note: The target type Instances isn't supported on Fargate.For Target group name, enter a name for your target group.In the Health checks section, keep the default settings.Choose Next.Note: Load balancers distribute traffic between targets within the target group. When a target group is associated with an Amazon ECS service, Amazon ECS automatically registers and deregisters containers with the target group. 
Because Amazon ECS handles target registration, you don't need to add targets to your target group.On the Register targets page, choose Create target group.Navigate to the Create Network Load Balancer page.In the Listeners and routing section, for Forward to, select the target group that you created.Note: You must select the reload button to see the new target group after it has been created.Choose Create load balancer.Create an Amazon ECS serviceCreate an Amazon ECS service. Be sure to specify the target group in the service definition when you create your service.When each task for your service is started, the container and port combination specified in the service definition is registered with your target group. Then, traffic is routed from the load balancer to that container.Related informationService load balancingFollow" | https://repost.aws/knowledge-center/ecs-fargate-static-elastic-ip-address |
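A condensed CLI sketch of the flow above: create a Network Load Balancer with an Elastic IP per subnet mapping, an IP-type target group, a TCP listener, and then the Fargate service that registers its task IPs with the target group. Every ID and name shown is a placeholder assumption.

```
#!/bin/bash
# Placeholders throughout -- replace subnet, EIP allocation, VPC, security group,
# cluster, task definition, and container details with your own.
LB_ARN=$(aws elbv2 create-load-balancer \
  --name my-nlb --type network --scheme internet-facing \
  --subnet-mappings SubnetId=subnet-0a1b2c3d4e5f67890,AllocationId=eipalloc-0abc1234def567890 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

TG_ARN=$(aws elbv2 create-target-group \
  --name my-fargate-targets --protocol TCP --port 80 \
  --target-type ip --vpc-id vpc-0123456789abcdef0 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

aws elbv2 create-listener \
  --load-balancer-arn "$LB_ARN" --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"

# Amazon ECS registers and deregisters task IPs with the target group automatically.
aws ecs create-service \
  --cluster my-cluster --service-name my-service \
  --task-definition my-task:1 --desired-count 2 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0a1b2c3d4e5f67890],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=${TG_ARN},containerName=web,containerPort=80"
```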
How do I troubleshoot InsufficientInstanceCapacity errors when starting or launching an EC2 instance? | "I'm unable to start or launch my Amazon Elastic Compute Cloud (Amazon EC2) instance, and I'm receiving an insufficient capacity error. How can I troubleshoot this issue and be sure that I have enough capacity for my critical machines?" | "I'm unable to start or launch my Amazon Elastic Compute Cloud (Amazon EC2) instance, and I'm receiving an insufficient capacity error. How can I troubleshoot this issue and be sure that I have enough capacity for my critical machines?Short descriptionIf AWS doesn't currently have enough available On-Demand capacity to complete your request, you'll receive the following InsufficientInstanceCapacity error:"An error occurred (InsufficientInstanceCapacity) when calling the RunInstances operation (reached max retries: 4). We currently do not have sufficient capacity in the Availability Zone you requested."ResolutionIf you receive this error, do the following:For troubleshooting steps, see Insufficient instance capacity.If the preceding troubleshooting steps don't resolve the problem, then you can move the instance to another VPC or to another subnet and Availability Zone.To avoid insufficient capacity errors on critical machines, consider using On-Demand Capacity Reservations. To use an On-Demand Capacity Reservation, do the following:Create the Capacity Reservation in an Availability Zone.Launch critical instances into your Capacity Reservation. You can view real-time Capacity Reservation usage, and launch instances into it as needed.Related informationWhy am I unable to start or launch my EC2 instance?Follow" | https://repost.aws/knowledge-center/ec2-insufficient-capacity-errors |
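The Capacity Reservation workflow mentioned above can be done entirely from the CLI; this sketch assumes an m5.large reservation in us-east-1a and a placeholder AMI ID.

```
#!/bin/bash
# Placeholders -- adjust instance type, platform, Availability Zone, count, and AMI.
CR_ID=$(aws ec2 create-capacity-reservation \
  --instance-type m5.large \
  --instance-platform Linux/UNIX \
  --availability-zone us-east-1a \
  --instance-count 2 \
  --query 'CapacityReservation.CapacityReservationId' --output text)

# Launch a critical instance directly into the reservation.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --placement AvailabilityZone=us-east-1a \
  --capacity-reservation-specification "CapacityReservationTarget={CapacityReservationId=${CR_ID}}" \
  --count 1
```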
Clients are receiving certificate error messages when trying to access my website using HTTPS connections. How do I resolve this? | I'm using a certificate from AWS Certificate Manager (ACM). My clients are receiving warning messages that say the connection is not secure or private. What can I do to resolve these certificate error messages? | "I'm using a certificate from AWS Certificate Manager (ACM). My clients are receiving warning messages that say the connection is not secure or private. What can I do to resolve these certificate error messages?Short descriptionIf you are using HTTPS connections, then a server certificate is required. A server certificate is an x.509 v3 data structure signed by a certificate authority (CA). A server certificate contains the name of the server, the validity period, the public key, and other data. When your browser accesses the web server, all the data fields must be valid. Your browser considers invalid data fields an insecure connection.You can receive a certificate error message if:The certificate isn't valid for the name of the server.The certificate is expired.The SSL/TLS certificate for the website isn't trusted.Your connection is not fully secured.ResolutionThe certificate is not valid for the name of the serverCheck the domain that you're accessing, and then check the domain names included in your certificate. You can view the domain name using your browser and by checking the certificate details. The domain in the URL must match at least one of the domain names included in the certificate.If you use a wildcard name (*), then the wildcard matches only one subdomain level. For example, *.example.com can protect login.example.com and test.example.com, but the wildcard can't protect test.login.example.com or example.com. If your website can be accessed by example.com and www.example.com, then you can add multiple domain names to your certificate to cover other possible domain and subdomain names of your website.The certificate is expiredIf you use an ACM-issued certificate, then ACM tries to renew the certificate automatically. If the certificate is expired, then you must issue or import a new certificate. After a new certificate is issued, confirm that your DNS records are pointing to the AWS resource, such as a load balancer, where the ACM certificate is used. For more information, see Troubleshoot managed certificate renewal problems.The SSL/TLS certificate for the website is not trustedACM-issued certificates are trusted by most modern browsers, operating systems, and mobile devices. Update your browser to the latest version, or try to access the domain from a different computer and browser. If you imported a self-signed certificate using AWS Certificate Manager (ACM), then some browsers can't trust the certificate. To resolve this error, request a public certificate using ACM or contact your CA.Your connection is not fully securedMixed content can occur if an initial request and parts of the webpage are established over HTTPS, and other parts are established over HTTP. Webpage visitors see the error “Your connection is not fully secured” with mixed content. This is because webpage elements in your source code use HTTP instead of HTTPS. 
To resolve this error, update your source code to load all the resources on your page over HTTPS.Related informationHow do I upload SSL certificates for my Classic Load Balancer to prevent clients from receiving “untrusted certificate” errors?Listeners for your Classic Load BalancerImporting certificates into AWS Certificate ManagerFollow" | https://repost.aws/knowledge-center/acm-certificate-error-https |
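To help pin down which of the causes above applies, the following sketch inspects the certificate that a server actually presents (subject, validity dates, and Subject Alternative Names) and lists the ACM certificates in the account. The domain is a placeholder.

```
#!/bin/bash
# Placeholder domain -- replace with the site showing the warning.
DOMAIN="www.example.com"

# Subject and validity dates of the certificate served on port 443.
echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null \
  | openssl x509 -noout -subject -dates

# Subject Alternative Names -- the requested domain must match one of these entries.
echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

# Cross-check which certificates exist in ACM for the account and Region.
aws acm list-certificates \
  --query 'CertificateSummaryList[].[DomainName,CertificateArn]' --output table
```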
I want to troubleshoot performance bottlenecks within my EC2 Linux instances. What advanced tools can I use with EC2Rescue for Linux to do that? | "I want to troubleshoot performance bottlenecks within my Amazon Elastic Compute Cloud (Amazon EC2) instances running Amazon Linux 1 or 2, RHEL or CentOS. What tools can I use with EC2Rescue for Linux to do that?" | "I want to troubleshoot performance bottlenecks within my Amazon Elastic Compute Cloud (Amazon EC2) instances running Amazon Linux 1 or 2, RHEL or CentOS. What tools can I use with EC2Rescue for Linux to do that?Short descriptionPerformance bottlenecks on Linux instances running Amazon Linux 1 or 2, RHEL, or CentOS can occur in CPU performance, block I/O performance, or network performance. To determine where performance bottlenecks are happening, you can leverage the 33 tools available in EC2Rescue for Linux using the bcc framework in eBPF (extended Berkeley Packet Filter). eBPF efficiently and securely runs monitoring tools in production environments without significant performance overhead.Note: This resolution doesn't apply to instances running Debian or Ubuntu.Resolution(For experienced Linux system administrators)Install the bcc package for your operating system.1. Connect to your instance using SSH.2. Install the bcc package. For download and installation instructions for distributions other than Amazon Linux, refer to the documentation specific to your distribution. For Amazon Linux instances, use the following command:$ sudo yum install bcc3. The bcc tools must be in the PATH variable on your operating system for EC2 Rescue for Linux to run them. Use the following command to put the tools in the PATH variable:$ sudo -s# export PATH=$PATH:/usr/share/bcc/tools/4. It's a best practice to permanently add the PATH setting to your Linux system. The steps to make this setting permanent vary depending on your specific Linux distribution. For Amazon Linux, use the following commands:Open ~/.bash_profile using the vi editor:# vi ~/.bash_profileAppend /usr/share/bcc/tools to the PATH variable:PATH=$PATH:$HOME/bin:/usr/share/bcc/toolsSave the file and exit the vi editor.Source the updated profile:#source ~/.bash_profile6. Download and install the EC2 Rescue for Linux tool, and then navigate to the installation directory on your instance.The following are commonly used bcc-based modules used with EC2Rescue for Linux.CPU performance toolsbccsoftirqs.yaml - This module runs the softirqs tool that traces soft interrupts (IRQs), and then stores timing statistics in-kernel for efficiency. An interval can be provided using --period, and a count using --times argument. The tool automatically prints the timestamps for each time it runs. For more information, see EC2Rescue for Linux - bccsoftirqs.yaml on the GitHub website.bccrunqlat.yaml - This program shows how long tasks have spent waiting their turn to run on-CPU. Results are shown as a histogram. For more information, see EC2Rescue for Linux - bccrunqlat.yaml on the GitHub website.# ./ec2rl run --only-modules=bccsoftirqs,bccrunqlat --period=5 --times=5Block I/O performance toolsbccbiolatency.yaml - Traces block device I/O, and records the distribution of I/O latency (time) per disk device, such as an instance store and Amazon Elastic Block Store (Amazon EBS), attached to your EC2 instance. Results are printed as a histogram. The module runs for the specified period and collects output a specified number of times. In the example below, the period and times variables are set to 5. 
For more information, see EC2Rescue for Linux - bccbiolatency.yaml on the GitHub website.bccext4slower.yaml - Collects output using the ext4slower tool. ext4slower traces any ext4 reads, writes, opens, and fsyncs that are slower than a threshold of 10 ms by default. The module runs for the specified period and collects output a specified number of times. In the example below, the period and times variables are set to 5. For more information, see EC2Rescue for Linux - bccext4slower.yaml on the GitHub website.You can use the bccxfsslower module similarly to bccext4slower.yaml for XFS file systems. For more information, see EC2Rescue for Linux - bccxfsslower.yaml on the GitHub website.bccfileslower.yaml - Collects output using fileslower that traces file-based synchronous reads and writes slower than a default threshold of 10 ms. The module runs for the specified period and then collects output a specified number of times. In the example below, the period and times variables are set to 5. For more information, see EC2Rescue for Linux - bccfileslower.yaml on the GitHub website.# ./ec2rl run --only-modules=bccbiolatency,bccext4slower,bccfileslower --period=5 --times=5Network performance toolsbcctcpconnlat.yaml - Traces the kernel function performing active TCP connections (for example, through a connect() syscall). The results display the latency (time) for the connection. Latency is measured locally, meaning the time from SYN sent to the response packet for a specified period. TCP connection latency indicates the time taken to establish a connection. For more information, see EC2Rescue for Linux - bcctcpconnlat.yaml on the GitHub website.bcctcptop.yaml - Displays TCP connection throughput per host and port for the specified period and times without clearing the screen. For more information, see EC2Rescue for Linux - bcctcptop.yaml on the GitHub website.bcctcplife.yaml - Summarizes TCP sessions that open and close while tracing. For more information, see EC2Rescue for Linux - bcctcplife.yaml on the GitHub website.# ./ec2rl run --only-modules=bcctcpconnlat,bcctcptop,bcctcplife --period=5 --times=5Output exampleThe results of running these modules are located under the /var/tmp/ec2rl directory after each single run of one or more modules on your instance.The following example is the output from the bcctcptop module with the period parameter set to 5 and the times parameter set to 2:# ./ec2rl run --only-modules=bcctcptop --period=5 --times=2 # cat /var/tmp/ec2rl/2020-04-20T21_50_01.177374/mod_out/run/bcctcptop.log I will collect tcptop output from this alami box 2 times.Tracing... Output every 5 secs. Hit Ctrl-C to end21:50:17 loadavg: 0.74 0.33 0.17 5/244 4285PID COMM LADDR RADDR RX_KB TX_KB3989 sshd 172.31.22.238:22 72.21.196.67:26601 0 921:50:22 loadavg: 0.84 0.36 0.18 4/244 4285PID COMM LADDR RADDR RX_KB TX_KB3989 sshd 172.31.22.238:22 72.21.196.67:26601 0 112731 amazon-ssm-a 172.31.22.238:54348 52.94.225.236:443 5 42938 amazon-ssm-a 172.31.22.238:58878 52.119.197.249:443 0 0You can upload results to AWS Support using the following command:# ./ec2rl upload --upload-directory=/var/tmp/ec2rl/2020-04-20T21_50_01.177374 --support-url="URLProvidedByAWSSupport"Note: The quotation marks in the preceding command are required. If you run the tool with sudo, upload the results using sudo. 
Run the command help upload for details on using an Amazon Simple Storage Service (Amazon S3) presigned URL to upload the output.Related informationHow do I diagnose high CPU utilization on an EC2 Windows instance?Follow" | https://repost.aws/knowledge-center/ec2-linux-tools-performance-bottlenecks |
How can I troubleshoot slow performance when I copy local files to Storage Gateway? | "I want to copy local files to my Network File System (NFS) or Server Message Block (SMB) file share on AWS Storage Gateway, but the transfer is slow. How can I improve the upload performance?" | "I want to copy local files to my Network File System (NFS) or Server Message Block (SMB) file share on AWS Storage Gateway, but the transfer is slow. How can I improve the upload performance?ResolutionConsider the following ways to improve the performance when you copy local files to a file share on Storage Gateway:Note: A file gateway is an object-store cache, not a file server. This means that a file gateway's performance characteristics differ from those of file servers.Scale your workloadFor the best performance, scale your workload by adding threads or clients. When you transfer a directory of files, a file gateway scales best when the workload is multi-threaded or involves multiple clients. Review your file-management tool and confirm whether the tool runs single-threaded uploads by default.It's a best practice to use multiple threads or clients when you transfer small or large files. You get the highest MiB per second throughput when you transfer large files (tens or hundreds of MiB each) using multiple threads. Because of the overhead of creating new files, transferring many small files results in a lower MiB per second throughput when compared to the same workload with large files.To perform a multi-threaded copy in Windows, use robocopy, a file copy tool by Microsoft.Note: For transfers of smaller files, measure the transfer rate in files per second instead of MiB per second. The rate of file creation can take up workload space associated with transferring smaller files.Tune your cache storageTune your gateway's total cache storage size to the size of the active working set. A cache that uses multiple local disks can parallelize access to data and lead to higher I/O operations per second (IOPS). For more information, see Performance guidance for Amazon Simple Storage Service (Amazon S3) File Gateway.Also, monitor the CachePercentDirty metric for your gateway. This metric returns the percentage of Cache storage that's occupied by data that isn't persisted to an S3 bucket. A high value of CachePercentDirty can cause the gateway's cache storage to throttle writes to the gateway.Use higher-performance disksIt's a best practice to use solid state drive (SSD) backed disks for your gateway's cache storage with dedicated tenancy. Ideally, the underlying physical disks shouldn't be shared with other virtual machines in order to prevent IOPS exhaustion.To measure disk IOPS, use the ReadBytes and WriteBytes metric with the Samples statistic in CloudWatch. As a general rule, when you review these metrics for the gateway, look for low throughput and low IOPS trends to indicate any disk-related bottlenecks.Monitor the IOWaitPercent metric in CloudWatch, which reports the percentage of time that the CPU is waiting for a response from the local disk.. A value higher than 10% typically indicates a bottleneck in the underlying disks and can be a result of slower disks. In this case, add additional disks to provide more available IOPS to the gateway.Note: For Amazon Elastic Compute Cloud (Amazon EC2) based gateways, the Amazon Elastic Block Store (Amazon EBS) throughput of the instance can also be a limiting factor. 
Confirm that the CPU and RAM of your gateway's host virtual machine or Amazon EC2 instance support your gateway's throughput to AWS. For example, every EC2 instance type has a different baseline throughput. If burst throughput is exhausted, then the instance uses its baseline throughput, which can limit the upload throughput to AWS. If your gateway is hosted on an Amazon EC2 instance, then monitor the NetworkOut metric for the instance. If the NetworkOut metric stays at the baseline throughput during your testing, then consider changing the instance to a larger instance type.Follow" | https://repost.aws/knowledge-center/storage-gateway-slow-copy-local |
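For example, a multi-threaded copy from a local folder to an SMB file share with robocopy might look like the following sketch; the source path, gateway share path, thread count, and retry settings are hypothetical values to tune for your environment:
robocopy C:\data \\sgw-example\share /E /MT:32 /R:2 /W:5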
How can I use Sysprep to create and install custom reusable Windows AMIs? | I want to use Sysprep to capture and install a custom reusable Windows Amazon Machine Image (AMI). | "I want to use Sysprep to capture and install a custom reusable Windows Amazon Machine Image (AMI).Short descriptionYou can use Sysprep, a Microsoft tool, to capture custom Windows images. Sysprep removes unique information from an Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. This information includes the instance security identifiers (SID), computer name, and drivers.ResolutionBefore you run Sysprep, consider the following points:Don't use Sysprep to create a backup for your instance.Don't run Sysprep on a production system.Sysprep isn't supported on Windows Server 2016 Nano Server.For Windows Server 2008 through Windows Server 2012 R2, run Sysprep with EC2Config.For Windows Server 2016 or later, run Sysprep with EC2Launch.Run Sysprep with EC2Config or EC2Launch1. Open the Amazon EC2 console, and then connect to your Windows EC2 instance using Remote Desktop Protocol (RDP).Note: To create a standard custom image without Sysprep, see Create a Windows AMI from a running instance. Be sure to note the AMI ID.2. From the Windows Start menu, complete the following steps:For Windows Server 2008 through Windows Server 2012 R2, open EC2ConfigService Settings, and then choose the Image tab.For Windows Server 2016 or later, open EC2LaunchSettings.3. For Administrator Password, choose Random.4. Choose Shutdown with Sysprep.5. Choose Yes.Note: You must retrieve the new password from the EC2 console on the next boot.6. Open the Amazon EC2 console, and then choose Instances from the navigation pane.7. After the instance state changes to stopped, select your instance.8. For Actions, choose Image, Create image.For Image name, enter a name.(Optional) For Image description, enter a description.9. Choose Create image.For more information and customization options, see Create a standardized Amazon Machine Image (AMI) using Sysprep.If you receive error messages or experience issues when using Sysprep, then see Troubleshoot Sysprep.You can also use EC2Rescue for Windows Server to collect log files and troubleshoot issues.Related informationWhat is Sysprep? (Microsoft)How do I create an AMI that is based on my EBS-backed EC2 instance?How do I launch an EC2 instance from a custom AMI?Why can't I launch EC2 instances from my copied AMI?Follow" | https://repost.aws/knowledge-center/sysprep-create-install-ec2-windows-amis |
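If you prefer the AWS CLI to the console for steps 6-9, you can create the AMI from the stopped, Sysprep-generalized instance with a single call; the instance ID, image name, and description below are placeholders:
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "MyCustomWindowsAMI" --description "Sysprep-generalized Windows image"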
How do I troubleshoot issues with historical reports in Amazon Connect? | "In Amazon Connect, I want to troubleshoot common issues with historical metric reports." | "In Amazon Connect, I want to troubleshoot common issues with historical metric reports.ResolutionBefore troubleshooting issues with historical reports, be sure you have the permissions required to view historical reports.After a report is published, any user with the Saved reports, Create permission can create, edit, or delete the schedule of your report.Note: Only the user who created the report can delete the report. Admin or emergency access accounts can't delete a saved report published by another user. However, you can use either an Admin account or an emergency access account to access the reports of your instance and unpublish the report. If a report is unpublished, then it's removed from the list of Saved reports, but isn't deleted.Troubleshooting historical report scheduling errorsMissing schedule optionTo schedule reports, you must turn on and configure the Amazon Simple Storage Service (Amazon S3) bucket where the reports will be saved.To configure an Amazon S3 bucket for reports, do the following:Open the Amazon Connect console.On the instances page, choose the instance alias.In the navigation pane, choose Data storage.Choose Edit, and then specify the bucket and KMS key for exported reports.Choose Save.Error message: Schedule report creation unsuccessfulIf you use blank spaces in the prefix to specify the location for the report file in Amazon S3, then the preceding error might occur. When setting up the scheduled report, you can add or update a prefix for the report files. To resolve this error, remove the blank spaces in the prefix, and then schedule the report to run again.Note: Scheduled reports are saved as CSV files in the Amazon S3 bucket specified for reports for your contact center.If correcting the prefix doesn't resolve the error message, then do the following:Create a HAR file to capture browser related issues.Create a Support case.Attach the HAR file and a screenshot of the error message to the support case.Reports aren't scheduled for my time zoneReports can be scheduled only in the UTC time zone. The historical metrics available in the console are viewable in different time zones. For more information, see Schedule a historical metrics report.Troubleshooting historical report creation errorsWhen generating a historical metrics report with filtering you receive the following error: "Failed to generate report. Please try again in a few minutes. Report this error with exceptionId:aabxxxxx-a93c-xxxx-9042-xxxxdf12xxxx"The preceding error might occur when running a historical report with an incorrect configuration. An incorrect configuration includes running a report with metrics that can't be grouped or filtered by queue, phone number, or channel.To resolve this error, remove the following metrics that can't be grouped or filtered:Non-Productive TimeOnline timeError status timeIf changing the report groupings or filters doesn't resolve the error message, then do the following:Create a HAR file to capture browser related issues.Create a Support case. Be sure to indicate the type of report (Queues, Agents, or Phone numbers) and the report custom settings (selected metrics and filters).Attach the HAR file and a screenshot of the error message to the Support case.When generating a historical metrics report, you receive the following error**: "The report includes more records than the maximum allowed. 
To create a report with fewer records, please select a shorter time range, apply different filters, or select fewer metrics to include in the report. You can also use Amazon Kinesis to stream your contact data for advanced monitoring and analytics."**The preceding error occurs when your historical report exceeds the 80k cell limit for a report. The 80k cell limit applies to the total number of cells (columns * rows), accounting for grouping and filtering. To reduce the report records to under 80,000 cells, consider the following:Create multiple historical reports by customizing the report by Time range.Reduce the selected report criteria, such as, the time interval or the number of historical metrics in the report.Use filters to control the data included in the report.Historical metric report limitationsConsider the following when scheduling historical reports:Using 15 minute intervals creates reports for only three days at a time. Also, the report can't list data more than 35 days old. This means that the report contains data within the past 35 days and only three days of data can be viewed at a time.Using 30 minute intervals creates reports for only three days at a time. Also, the available data is based on a two-year retention period. This means that the report contains data within the past two years and only three days of data can be viewed at a time.Using a Daily interval or Total interval creates reports for 31 days. Also, the available data is based on a two-year retention period. This means that the report contains data within the past two years and only 31 days of data can be viewed at a time.For more information, see Historical report limits.To export records without these limitations, you can turn on data streaming for your Amazon Connect instance. Then, use a consumer, such as an AWS Lambda function, to poll the streams and perform monitoring and analytics. For instructions on streaming and storing contact trace records, see Data streaming on AWS.Follow" | https://repost.aws/knowledge-center/connect-historical-report-issues |
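As a rough, hypothetical illustration of the 80,000-cell limit: a report with 10 metric columns, grouped by 100 queues at 15-minute intervals over three days (288 intervals), produces 10 * 100 * 288 = 288,000 cells and exceeds the limit, while the same report at a Daily interval produces 10 * 100 * 3 = 3,000 cells and stays well under it.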
"How do I troubleshoot packet loss, latency, or intermittent connectivity issues with my Client VPN connection?" | "How do I troubleshoot packet loss, latency, or intermittent connectivity issues with my AWS Client VPN connection?" | "How do I troubleshoot packet loss, latency, or intermittent connectivity issues with my AWS Client VPN connection?Short descriptionTo diagnose network issues such as packet loss, latency, or intermittent connectivity in your Client VPN connection, first test the network to isolate the source of the issue. To isolate the source of the issue, consider:Is this issue affecting all users, or only users on a specific internet service provider (ISP) or at a specific remote location?What connectivity medium are the affected users using to connect to Client VPN? For example, possibly a fixed internet connection, a local WiFi hotspot, or a mobile network.From what device are the affected users connecting? For example, possibly a Windows machine or a mobile device.Where are the affected users located in relation to the Client VPN endpoint?What are users accessing when they experience packet loss, latency, or intermittent connectivity issues?Does the client still experience packet loss, latency, or intermittent connectivity issues to external resources when not connected to Client VPN?Is split-tunnel enabled on the endpoint?ResolutionReview how users connect to the Client VPN endpointThere are several factors involved in troubleshooting performance and connectivity issues with a Client VPN connection. Before progressing to more complicated troubleshooting methods used by network administrators to test connectivity, review the following considerations.For users on a mobile network or WiFi hotspotUsers might have a poor connection with low signal or connectivity issues. Specifically with hotspots accessed in a shared location, there might be bandwidth restrictions. For these types of connections:Test connection speeds using a performance testing tool, such as the Speedtest website. It's a best practice to test from the same Region as the Client VPN endpoint.-or-On Windows, macOS, or Linux-based systems, use ICMP to test connectivity to the default gateway. To check the stability of a WiFi hotspot connection:Ping <Default Gateway IP>Note: Be sure to replace "Default Gateway IP" with the IP address of the default gateway.If there's a poor connection or bandwidth constraints, it's a best practice to connect using a faster or more stable connection and then note any performance improvements.For users in different geographic locationsReview where users are located in relation to the Client VPN endpoint.For example, consider a scenario where split-tunnel isn't enabled and all user traffic is forced over the Client VPN tunnel. A user that's geographically separated from the endpoint might experience elevated latency, packet loss, or intermittent connectivity to resources in the VPC or over the internet. In this case, you can resolve the issues by configuring the VPC to allow this traffic if the intermediate ISPs have issues.If it's not a requirement that traffic from internet resources is forwarded over the VPC, it's a best practice to enable split-tunnel. When you enable split-tunnel on the Client VPN endpoint, the routes in the endpoint route table are pushed to the device that's connected to the Client VPN. 
Then, only traffic with a destination to the network matching a route from the endpoint route table is routed through the Client VPN tunnel.If necessary, use advanced troubleshooting techniquesIf the previously described methods don't resolve your issues, use the following advanced techniques. These methods can help remote users troubleshoot network connectivity issues between their local device and the Client VPN endpoints.For Windows usersFind the Client VPN endpoint node IP addresses:1. Open Command Prompt (cmd).2. Perform nslookup on your endpoint DNS URL:nslookup cvpn-endpoint-0102bc4c2eEXAMPLE.clientvpn.us-west-2.amazonaws.comIf you have trouble resolving the previous command, append a subdomain:nslookup test.cvpn-endpoint-0102bc4c2eEXAMPLE.clientvpn.us-west-2.amazonaws.comUse the MTR method:1. Download and install WinMTR from the SourceForge website.2. For Host, enter the destination IP address, and then choose Start.3. Run the test for approximately one minute, and then choose Stop.4. Choose Copy text to clipboard, and then paste the output in a text file.5. Search output in the text file for any losses in the % column that are propagated to the destination.6. Review hops on the MTR reports using a bottom-up approach. For example, check for loss on the last hop or destination, and then review the preceding hops.Notes:Client VPN doesn't respond to ICMP. However, MTR is still a viable test to confirm that there's no packet loss on the intermediate ISP links.Ignore any hops with the "No response from host" message. This message indicates that those particular hops aren't responding to the ICMP probes.Use the tracert method:If you don't want to install MTR, or need to perform further testing, you can use the tracert command utility tool. Perform a tracert to the destination URL or IP address. Then, look for any hop that shows an abrupt spike in round-trip time (RTT). An abrupt spike in RTT might indicate that there's a node under high load. A node under high load induces latency or packet drops in your traffic.For macOS and Amazon Linux usersFind your Client VPN endpoint node IP addresses:1. Open Terminal.2. Perform dig on your endpoint DNS URL:dig cvpn-endpoint-0102bc4c2eEXAMPLE.clientvpn.us-west-2.amazonaws.comIf you have trouble resolving the previous command, append a subdomain:dig test.cvpn-endpoint-0102bc4c2eEXAMPLE.clientvpn.us-west-2.amazonaws.comUse the MTR method:1. Install MTR. On macOS, use macOS with Homebrew.On Amazon Linux, use "sudo yum install mtr".On Ubuntu Linux, use "sudo apt-get mtr".2. Run a TCP-based MTR:mtr -n -T -P 443 -c 200 <Client VPN endpoint IP> --reportmtr -n -T -P 1194 -c 200 <Client VPN endpoint IP> --report-or-Run a UDP-based MTR:mtr -n -u -P 443 -c 200 <Client VPN endpoint IP> --reportmtr -n -u -P 1194 -c 200 <Client VPN endpoint IP> --reportNote: Be sure to test based on the port configured on your Client VPN endpoint. If you find packet loss in your network, refer to your vendor documentation for instructions on how to check network devices for analysis and troubleshooting. Or, reach out to your internet service provider.Follow" | https://repost.aws/knowledge-center/client-vpn-fix-packet-loss-latency |
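As a sketch of the MTR install-and-run step on a Debian-based client (the package name assumes the standard repositories, where the command-line build is commonly packaged as mtr-tiny, and the endpoint IP is a placeholder):
sudo apt-get install -y mtr-tiny
mtr -n -T -P 443 -c 200 198.51.100.10 --report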
How can I access an API Gateway private REST API in another AWS account using an interface VPC endpoint? | I want to use an interface virtual private cloud (VPC) endpoint to access an Amazon API Gateway private REST API that's in another AWS account. | "I want to use an interface virtual private cloud (VPC) endpoint to access an Amazon API Gateway private REST API that's in another AWS account.Short descriptionTo use an interface VPC endpoint to access an API Gateway private REST API that's in another AWS account, complete the following steps:Create an interface endpoint in an Amazon Virtual Private Cloud (Amazon VPC) in one account (account A).Create an API Gateway private REST API in a second account (account B).Configure a resource policy for the private REST API that allows the interface endpoint to invoke the API.Set up a method for the private REST API.Deploy the private REST API.Call the private REST API from account A to test the setup.Note: The Amazon API Gateway private REST API and the VPC endpoint must be in the same AWS Region.ResolutionCreate an interface endpoint in an Amazon VPC in one account (account A)Create a new interface VPC endpointFrom account A, follow the instructions in Create an interface VPC endpoint for API Gateway execute-api.Important: For Policy, choose Full access. It's a best practice to use a VPC endpoint policy to restrict endpoint access by API ID. It's also a best practice to use the API Gateway resource policy to restrict endpoint access by principal. For more information on the security advice of granting least privilege, see Apply least-privilege permissions in the IAM User Guide.As you create the interface endpoint, consider take the following actions:It's a best practice to select multiple subnets in different Availability Zones. Configuring subnets across multiple Availability Zones makes your interface endpoint resilient to possible AZ failures.With private DNS activated, you can use public or private DNS to connect to your private REST API.Note: When you activate private DNS for an interface VPC endpoint, you can no longer access API Gateway public APIs from your Amazon VPC. For more information, see Why do I get an HTTP 403 Forbidden error when connecting to my API Gateway APIs from a VPC?The security groups that you choose must have a rule that allows TCP Port 443 inbound HTTPS traffic from one of the following places:An IP address range in your Amazon VPC-or-Another security group in your Amazon VPCNote: If you don't have a security group that meets either of these requirements, then choose Create a new security group. Create a new security group that meets one of the requirements. If you don't specify a security group, then a default security group is associated with the endpoint network interfaces.Retrieve the interface endpoint's VPC Endpoint IDAfter you choose Create endpoint and create the interface endpoint, the VPC Endpoint ID appears. Copy the VPC Endpoint ID of your new interface endpoint (for example: vpce-1a2b3c456d7e89012). Then, choose Close.Note: Use this ID when creating and configuring your private REST API.Retrieve the interface endpoint's public DNS nameAfter you choose Close, the Endpoints page opens in the Amazon VPC console. On the Details tab of the Endpoints page, in the DNS names column, find and copy the public DNS name for your interface endpoint. For example: vpce-1a2b3c456d7e89012-f3ghijkl.execute-api.region.vpce.amazonaws.com.Create an API Gateway private REST API in a second account (account B)1. 
In account B, open the API Gateway console.2. Choose Create API.3. For Choose an API type, Under REST API Private, choose Build.4. On the Create page, keep Choose the protocol set to REST.5. For Create new API, choose New API.6. Under Settings, enter the following information:For API name, enter a name for your API.(Optional) For Description, enter a description for your API.Keep Endpoint Type set to Private.For VPC Endpoint IDs, paste your interface endpoint ID. Then, choose Add.Note: When you associate your interface endpoint with your private REST API, API Gateway generates a new Amazon Route 53 alias record. You can use the Route53 alias to access your private API.7. Choose Create API.For more information, see Creating a private API in Amazon API Gateway.Configure a resource policy for the private REST API that allows the interface endpoint to invoke the API1. In the navigation pane of the API Gateway console, under your API, choose Resource Policy.2. On the Resource Policy page, paste the following example resource policy into the text box:Note: Replace vpce-1a2b3c456d7e89012 with the interface endpoint ID that you copied.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Principal": "*", "Action": "execute-api:Invoke", "Resource": "execute-api:/*/*/*", "Condition": { "StringNotEquals": { "aws:sourceVpce": "vpce-1a2b3c456d7e89012" } } }, { "Effect": "Allow", "Principal": "*", "Action": "execute-api:Invoke", "Resource": "execute-api:/*/*/*" } ]}For more information, see Set up a resource policy for a private API.Set up a method for the private REST API1. In the navigation pane of the API Gateway console, under your API, choose Resources.2. On the Resources pane, choose Actions, and then choose Create Method.3. In the dropdown list under the / resource node, choose ANY, and then choose the check mark icon.4. On the / - ANY - Setup pane, for Integration type, choose Mock.Note: A mock integration responds to any request that reaches it. This response helps when testing.5. Choose Save.For more information, see Set up REST API methods in API Gateway.Deploy the private REST API1. On the Resources pane of the API Gateway console, choose Actions, and then choose Deploy API.2. In the Deploy API dialog box, enter the following information:For Deployment stage, choose [New Stage].For Stage name, enter a name. For example, dev or test.Choose Deploy.3. On the Stage Editor pane, find the message ("If Private DNS is enabled, use this URL:"). This includes your private REST API's invoke URL. Copy the URL.Note: Use the private REST API's invoke URL for testing the setup.For more information, see Deploy a private API using the API Gateway console.Test the setup by calling the private REST API from account A1. In account A, launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in the same Amazon VPC as your interface endpoint.Important: During setup, choose the existing security group that you associated with your interface endpoint.2. Connect to the Amazon EC2 instance.Note: An amazon EC2 instance can incur charges on your AWS account. If you create an instance for testing this setup only, then terminate the instance when you're done to prevent recurring charges.3. From the command line of your Amazon EC2 instance, use any of the following curl commands to call the private REST API in account B.Note: For more information, see Invoking your private API using endpoint-specific public DNS hostnames. 
For more information about curl, see the curl project website.Call your API using a Private DNS namecurl -i https://a1bc234d5e.execute-api.region.amazonaws.com/stage-nameNote: Replace https://a1bc234d5e.execute-api.region.amazonaws.com/stage-name with your private API's invoke URL that you copied from the API Gateway console. This command works only if you turned on private DNS for your interface endpoint. For more information, see Invoking your private API using private DNS names.Call your API using a Route 53 aliascurl -i https://a1bc234d5e-vpce-1a2b3c456d7e89012.execute-api.region.amazonaws.com/stage-nameNote: Replace a1bc234d5e with your API's ID.Replace vpce-1a2b3c456d7e89012 with the interface endpoint ID.Replace region with your API's Region. (For example, us-east-1.)Replace stage-name with the name of the stage where you deployed your private API. For more information, see Accessing your private API using a Route53 alias.Call your API using a public DNS name with a host headercurl -i https://vpce-1a2b3c456d7e89012-f3ghijkl.execute-api.region.vpce.amazonaws.com/stage-name -H "Host: a1bc234d5e.execute-api.region.amazonaws.com"Note: Replace vpce-1a2b3c456d7e89012-f3ghijkl.execute-api.region.vpce.amazonaws.com with the public DNS name that you noted in the Amazon VPC console.Replace stage-name with the name of the stage where you deployed your private API.Replace a1bc234d5e.execute-api.region.amazonaws.com with your private API's invoke URL from the API Gateway console.Call your API using a public DNS name with the x-apigw-api-id headercurl -i https://vpce-1a2b3c456d7e89012-f3ghijkl.execute-api.region.vpce.amazonaws.com/stage-name -H "x-apigw-api-id:a1bc234d5e"Note: Replace vpce-1a2b3c456d7e89012-f3ghijkl.execute-api.region.vpce.amazonaws.com with the public DNS name that you noted in the Amazon VPC console.Replace stage-name with the name of the stage where you deployed your private API.Replace a1bc234d5e with your API's ID.4. Review the command output. API Gateway returns a 200 OK response if the connection is successful.Related informationUse VPC endpoint policies for private APIs in API GatewayHow do I troubleshoot issues when connecting to an API Gateway private API endpoint?Access an AWS service using an interface VPC endpointFollow" | https://repost.aws/knowledge-center/api-gateway-private-cross-account-vpce |
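If you want to script the interface endpoint creation in account A rather than use the console steps above, the equivalent AWS CLI call looks roughly like the following; the VPC, subnet, and security group IDs are placeholders, and the service name assumes the us-east-1 Region:
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc12345def67890 --vpc-endpoint-type Interface --service-name com.amazonaws.us-east-1.execute-api --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --private-dns-enabled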
How can I use messaging construct features to customize my interactions in Amazon Lex? | "I want to customize my interaction with the bot in Amazon Lex. How can I use messaging constructs like response cards, message groups, and intent contexts to customize my interactions with the bot?" | "I want to customize my interaction with the bot in Amazon Lex. How can I use messaging constructs like response cards, message groups, and intent contexts to customize my interactions with the bot?Short descriptionAmazon Lex V2 provides you with a number of messaging construct features that allow you to customize user interactions with bots. Follow the steps in this article to customize messaging constructs like response cards, message groups, and intent contexts using the Amazon Lex V2 console.ResolutionResponse CardsResponse cards consist of a set of responses to a prompt. You can use response cards when you want Amazon Lex to provide a predefined set of applications to a client application. For example, in a taxi booking application, you can configure the types of vehicles available for a user, like compact, van, or SUV. The vehicle types are displayed as buttons in the response cards, and your application users chooses one of the available options. This option is then sent as input to Amazon Lex.To create a response card using the Amazon Lex V2 console for slot prompt, follow these steps:1. Open the Amazon Lex V2 console, and then choose the intent where the slot is configured.2. From the slots section, choose the slot, and then choose Advanced options.3. From the to slot prompts section, choose More prompt options.4. Choose the Add dropdown, and then choose Add card group. You can now create cards and card groups, as needed.Note: You can define up to three cards per group. A user selects one card during a conversation.Message groupsA message group is a set of suitable responses to a specific prompt. You can use message groups when you want your bot to dynamically build the responses in a conversation. When Amazon Lex returns a response to a client application, it randomly chooses one message from each group.For example, in a TaxiBooking bot, your first message group might contain different ways that the bot will greet the user. It might use “Hello”, “Hi” , "Hey", or “Greetings”. The second message group might contain different forms of introduction like “I am the TaxiBooking chatbot” or “This is the TaxiBooking chatbot.” A third message group might communicate capabilities like “I can help with the Taxi booking” or "I am here to assist you with taxi booking". Amazon Lex randomly selects one message from each group, and then uses them to give a response to the user.Follow these steps to create multiple message groups for success fulfillment messages using the Amazon Lex V2 console.1. Open the Amazon Lex V2 console, and then choose the intent that you want to customize.2. From the fulfillment section, choose Advanced options.3. From the Success response section, choose More response options.4. Choose the Add dropdown, and then choose Add text message group. You can now create messages and message groups, as needed.ContextsA context is a state variable that can be associated with an intent when you define a bot. You can configure the contexts for an intent when you create the intent using the console or using the CreateIntent operation.There are two types of relationships for contexts, output contexts and input contexts. An output context becomes active when an associated intent is fulfilled. 
After a context is activated, it stays active for the number of turns or for a time limit that you configure when you define the context.An input context specifies the conditions under which an intent is recognized. An intent is only recognized during a conversation when all of its input contexts are active. An intent with no input contexts is always eligible for recognition.Create an output contextAmazon Lex makes an intent's output contexts active when the intent is fulfilled. You can use the output context to control the intent's eligibility to follow up the current intent. You can configure an intent with more than one output context. When the intent is fulfilled, all of the output contexts are activated and returned in the RecognizeText or RecognizeUtterance response.When you define an output context, you also define its time to live. This indicates the length of time or number of turns that the context is included in responses from Amazon Lex. A turn is one request from your application to Amazon Lex. Once the number of turns or the time has expired, the context is no longer active.Your application can use the output context, as needed. For example, your application can use the output context to:Change the behavior of the application based on the context. For example, a travel booking application might have one action for the context book_car_fulfilled and a different action for rental_hotel_fulfilled.Return the output context to Amazon Lex as the input context for the next utterance. If Amazon Lex recognizes the utterance as an attempt to use an intent, it uses the context to limit the intents that can be returned to those with the specified context.Follow these steps to create or specify output contexts:1. Open the Amazon Lex V2 console, and choose the intent that you want to customize.2. In the Context section, enter the output contexts that you want to create and assign to the intent.Create an input contextYou can set an input context to limit the points in the conversation where the intent is recognized. Intents without an input context are always eligible to be recognized.Follow these steps to create or specify input contexts:1. Open the Amazon Lex V2 console, and choose the intent that you want to customize.2. From the Context section, enter the input contexts that you want to create and assign to the intent.For an intent with more than one input context, all contexts must be active to trigger the intent. You can set an input context when you call the RecognizeText, RecognizeUtterance, or PutSession operations.Related informationImageResponseCardMessageGroupOutputContextInputContextFollow" | https://repost.aws/knowledge-center/lex-messaging-constructs |
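For example, a client application can return an active context to the bot on the next turn with the Lex V2 runtime CLI; the bot ID, alias ID, session ID, and context name below are placeholders, and the session-state JSON is a minimal sketch:
aws lexv2-runtime recognize-text --bot-id ABCDEFGHIJ --bot-alias-id TSTALIASID --locale-id en_US --session-id user-123 --text "book a hotel" --session-state '{"activeContexts":[{"name":"book_car_fulfilled","timeToLive":{"timeToLiveInSeconds":600,"turnsToLive":5},"contextAttributes":{}}]}'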
How do I connect agents in my Amazon Connect contact center to incoming calls automatically? | "When agents in my Amazon Connect contact center are available, I want them to connect to incoming calls automatically. How do I set that up?" | "When agents in my Amazon Connect contact center are available, I want them to connect to incoming calls automatically. How do I set that up?ResolutionActivate Auto-Accept Call for your agents in Amazon Connect. When the Auto-Accept Call setting is turned on for an available agent, the agent connects to incoming contacts automatically.Auto-Accept Call doesn't work for callbacks.Note: You must edit existing users individually to activate the Auto-Accept Call setting in your Amazon Connect instance. However, you can configure the setting for multiple new users when you bulk upload users with the CSV template.Follow" | https://repost.aws/knowledge-center/connect-auto-accept-call |
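If you manage users programmatically, you can toggle the same setting with the Amazon Connect API; the instance ID and user ID below are placeholders, and the example assumes a softphone configuration:
aws connect update-user-phone-config --instance-id 11111111-2222-3333-4444-555555555555 --user-id aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee --phone-config PhoneType=SOFT_PHONE,AutoAccept=true,AfterContactWorkTimeLimit=0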
How do I troubleshoot issues with including metadata on an EC2 instance in CloudFormation? | "I used AWS::CloudFormation::Init to include metadata on an Amazon Elastic Cloud Compute (Amazon EC2) instance, but I don't see the changes on the instance." | "I used AWS::CloudFormation::Init to include metadata on an Amazon Elastic Cloud Compute (Amazon EC2) instance, but I don't see the changes on the instance.Short descriptionIssues with EC2 instance metadata in an AWS CloudFormation stack can occur due to the following reasons:The cfn-init helper script isn't installed on one or more instances of the CloudFormation stack. To resolve this issue, complete the steps in the Verify that the cfn-init helper script is installed section.The instance isn't connected to the internet. To resolve this issue, complete the steps in the Verify that the instance is connected to the internet section.The CloudFormation template contains syntax errors or incorrect values. To resolve this issue, complete the steps in the Search for errors in the cloud-init or cfn-init logs section.Important: Before you complete the following resolutions, set the Rollback on failure option for your CloudFormation stack to No.Note: The following resolutions are specific to CloudFormation stacks that are created with Linux instances.ResolutionVerify that the cfn-init helper script is installedTo confirm that cfn-init is installed on the instance that's configured to send signals to CloudFormation resources:1. Connect to the instance using SSH.2. Run one of the following commands to verify that cfn-init or the aws-cfn-bootstrap package is installed in your directory.cfn-init:$ sudo find / -name cfn-init/opt/aws/bin/cfn-init/opt/aws/apitools/cfn-init/opt/aws/apitools/cfn-init-1.4-34.24.amzn1/bin/cfn-init/var/lib/cfn-init-or-aws-cfn-bootstrap package:$ sudo rpm -q aws-cfn-bootstrapaws-cfn-bootstrap-1.4-34.24.amzn1.noarchImportant: The preceding command works only on distributions that use the RPM Package Manager.Note: By default, CloudFormation helper scripts are installed on the Amazon Linux Amazon Machine Image (AMI). If CloudFormation helper scripts aren't installed, see CloudFormation helper scripts reference for instructions on how to install them.Verify that the instance is connected to the internetIf the instance is in an Amazon Virtual Private Cloud (Amazon VPC), then it can connect to the internet through:A NAT device in a private subnetAn internet gateway in a public subnetTo test the instance's internet connection, access a public webpage, such as AWS, and run a curl command on the instance. For example:curl -I https://aws.amazon.comNote: If the instance is connected to the internet, then the command returns an HTTP 200 status code.If you're using an interface VPC endpoint, then the endpoint must be located in the same AWS Region as the instance. Also, the security group that's attached to the interface endpoint must allow incoming connections on port 443 from the private subnet of the Amazon VPC.Search for errors in the cloud-init or cfn-init logsTo search for errors in the cloud-init logs or cfn-init logs:1. Connect to your instance using SSH.2. 
Look for detailed error or failure messages by searching for the keywords "error" or "failure" in the following logs:/var/log/cloud-init-output.log/var/log/cloud-init.log/var/log/cfn-init.log/var/log/cfn-init-cmd.logTo parse all instances of the words "error" or "failure" in /var/log/cfn or /var/log/cloud-init files, run the following command:grep -ni 'error\|failure' $(sudo find /var/log -name cfn-init\* -or -name cloud-init\*)Note: The preceding command returns the file name, line number, and error message.Look for cfn-init.log. If you can't find it, then cfn-init wasn't run. You must also check cloud-init-output.log and cloud-init.log to see if there was a failure when running user data. After you identified the error, fix it based on the error message, and then recreate the stack.If cfn-init.log exists, then cfn-init was run, but a failure occurred. Check cfn-init.log to see what went wrong, and fix it based on the error message.To confirm that the UserData property is configured to run cfn-init, complete the following steps:In a code editor, open the AWS CloudFormation template for your stack, and then find the UserData property section.Check for errors, including syntax errors, missing spaces, misspellings, and other typos.Confirm that the values for the stack, resource, and Region properties are correct.For the Fn::Join intrinsic function of the UserData property, use the -v option to run cfn-init in verbose mode. See JSON and YAML example templates.Related informationSetting up VPC endpoints for AWS CloudFormationAWS::CloudFormation::InitHow do I resolve the error "Failed to receive X resource signal(s) within the specified duration" in AWS CloudFormation?Follow" | https://repost.aws/knowledge-center/cloudformation-metadata-instance-issues |
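For reference, the instance user data typically invokes cfn-init along the following lines; the stack name, logical resource ID (MyInstance), and Region are placeholders that must match your template:
#!/bin/bash -xe
/opt/aws/bin/cfn-init -v --stack my-stack-name --resource MyInstance --region us-east-1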
"How do I convert a .pem file to .ppk, or from .ppk to .pem, on Windows and Linux?" | "I want to convert my Amazon Elastic Compute Cloud (Amazon EC2) Privacy Enhanced Mail (.pem) file to a PuTTY Private Key (.ppk) file. Or, I want to convert a .ppk file to a .pem file." | "I want to convert my Amazon Elastic Compute Cloud (Amazon EC2) Privacy Enhanced Mail (.pem) file to a PuTTY Private Key (.ppk) file. Or, I want to convert a .ppk file to a .pem file.Short descriptionPuTTY doesn't natively support the private key format (.pem) generated by Amazon EC2. You must convert your private key into a .ppk file before you can connect to your instance using PuTTY. Use the PuTTYgen tool for this conversion.ResolutionWindows - install PuTTYgenMost Windows operating systems have PuTTY installed. If your system doesn't, then download and install PuTTYgen from the SSH website.Windows - convert a .pem file to a .ppk fileStart PuTTYgen, and then convert the .pem file to a .ppk file. For detailed steps, see Convert your private key using PuTTYgen.Windows - convert a .ppk file to a .pem file1. Start PuTTYgen. For Actions, choose Load, and then navigate to your .ppk file.2. Choose the .ppk file, and then choose Open.3. (Optional) For Key passphrase, enter a passphrase. For Confirm passphrase, re-enter your passphrase.Note: Although a passphrase isn't required, it's a best practice to specify one. This is a security measure to protect the private key from unauthorized use. A passphrase makes automation difficult, because users must manually log in to an instance or copy files to an instance.4. From the menu at the top of the PuTTY Key Generator, choose Conversions, Export OpenSSH Key.Note: If you didn't enter a passphrase, then you receive a PuTTYgen warning. Choose Yes.5. Name the file and add the .pem extension.6. Choose Save.Unix or Linux - install PuTTYInstall PuTTY, if it's not already on your system.Important: The Extra Packages for Enterprise Linux (EPEL) repository contains the PuTTY package. You must activate the EPEL repository before you install PuTTY.To install PuTTY, run one of the following commands:RPM-based$ sudo yum install puttyDpkg-based$sudo apt-get install putty-toolsUnix or Linux - convert a .pem file to a .ppk fileOn the instance shell, run the puttygen command to convert your .pem file to a .ppk file:$ sudo puttygen pemKey.pem -o ppkKey.ppk -O privateUnix or Linux - convert a .ppk file to a .pem fileRun the puttygen command to convert a .ppk file into a .pem file:$ sudo puttygen ppkkey.ppk -O private-openssh -o pemkey.pemRelated informationAmazon EC2 key pairs and Linux instancesFollow" | https://repost.aws/knowledge-center/ec2-ppk-pem-conversion |
How do I troubleshoot SSML issues in Amazon Connect? | The Speech Synthesis Markup Language (SSML) syntax in my Play prompt contact block isn’t working. How do I troubleshoot issues with SSML tags in Amazon Connect? | "The Speech Synthesis Markup Language (SSML) syntax in my Play prompt contact block isn’t working. How do I troubleshoot issues with SSML tags in Amazon Connect?ResolutionTo resolve issues with SSML syntax in Amazon Connect, first review this article to identify the specific issue that you’re experiencing. Then, follow the troubleshooting steps listed for that issue.Note: This article covers the most common reasons why SSML issues can occur in Amazon Connect only. Additional troubleshooting steps might be needed for your specific use case.If your contact flow skips the Play prompt block after you’ve configured the block to interpret text to speech as SSMLReview your SSML syntax to identify any reserved characters. Then, replace each reserved character with its corresponding escape code.For a list of reserved characters and their corresponding escape codes, see Reserved Characters in SSML.SSML escape code examplePlain text: You’ve ordered bananas & apples.SSML syntax: <speak>You've ordered bananas &amp; apples.</speak>If the contact attributes in your SSML tags aren’t workingReview the contact attributes in your SSML tags to verify the following:That you’re using supported SSML tags onlyThat the tags include quotes around each contact attributeThat the contact attributes you're using in your tags exist, and that they don't include any typosSSML tag example that includes the "$.Attributes.time" contact attribute<speak>Your order for <break time="$.Attributes.time"/> $.Attributes.ordername is completed. No further action needed.</speak>If your prompt is still played in an English accent (voice) after you’ve added a <lang> tag for another languageThe default voice for the Amazon Connect text-to-speech (TTS) feature is configured to American English (en-US). This default voice isn't changed when you change the language of your message using SSML syntax.To change the default voice, you must use a Set voice contact block by doing the following:1. In your contact flow, add a Set voice block before the Play prompt block.2. Choose the block title (Set voice). The block's settings menu opens.3. For Language, select the language that you want customers to hear from the dropdown list.4. For Voice, select the voice that you want customers to hear from the dropdown list.Note: For a list of AWS Regions that support neural voices, see Feature and Region compatibility in the Amazon Polly developer guide.5. Choose Save.Follow" | https://repost.aws/knowledge-center/connect-ssml-issue-troubleshooting |
How do I configure security groups and network ACLs when creating a VPC interface endpoint for endpoint services? | I want to configure my security groups and network access control lists (ACLs) when I create an Amazon Virtual Private Cloud (Amazon VPC) interface endpoint to connect an endpoint service. | "I want to configure my security groups and network access control lists (ACLs) when I create an Amazon Virtual Private Cloud (Amazon VPC) interface endpoint to connect an endpoint service.Short descriptionWhen you create an Amazon VPC interface endpoint with an endpoint service, an elastic network interface is created inside of the subnet that you specify. This VPC interface endpoint inherits the network ACL of the associated subnet. You must also associate a security group with the interface endpoint to protect incoming traffic.When you associate a Network Load Balancer with an endpoint service, the Network Load Balancer forwards requests to the registered target. The requests are forwarded as if the target was registered by IP address. In this case, the source IP addresses are the private IP addresses of the load balancer nodes. If you have access to the Amazon VPC endpoint service, then verify that:The Inbound security group rules of the Network Load Balancer’s targets allow communication from the private IP address of the Network Load Balancer nodesThe rules within the network ACL associated with the Network Load Balancer’s targets allow communication from the private IP address of the Network Load Balancer nodesResolutionFind the network ACL associated with your interface endpointSign in to the Amazon VPC console.Choose Endpoints.Select your endpoint’s ID from the list of endpoints.Choose the Subnets view.Select the associated subnets, which redirects you to the Subnets section of the Amazon VPC console.Note the network ACL associated with the subnets.Find the security group associated with your interface endpointSign in to the Amazon VPC console.Choose Endpoints.Select your endpoint’s ID from the list of endpoints.Choose the Security Groups view.Note the IDs of the associated security groups.Configure the security group associated with the interface endpointA security group acts as a virtual firewall for your Elastic Network Interfaces to control inbound and outbound traffic.Note: Security groups are stateful. When you define a rule in one direction, return traffic is automatically allowed.Configure an inbound rule:For Port Range, enter the same port as your endpoint service.For Source, enter the IP address or network of the initiating client.Note: You don't need to create a rule in the outbound direction of the security group associated with the interface endpoint.Repeat these steps for each security group associated with your interface endpoint.Configure the network ACL associated with the interface endpointA network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in subnets.Note: Network ACLs are stateless. 
You must define rules for both outbound and inbound traffic.For the network ACL that you noted previously, edit the rules.Configure an inbound rule to allow traffic from the client:For Port Range, enter the same port as your endpoint service.For Source, enter the client’s IP address or network.Configure an outbound rule to allow return traffic from the interface endpoint.For Port Range, enter 1024-65535.For Destination, enter the client’s IP address or network.If you have separate network ACLs defined for each subnet, then repeat the steps for each network ACL associated with your interface endpoint.Note: When configuring the security group of the source client, verify that the outbound rules allow connectivity to the private IP addresses of the interface endpoint. The inbound direction of the client's security group is irrelevant. For the Network ACL of the source client, configure rules as follows:Inbound rule:For Port Range, enter the ephemeral port range 1024-65535For Source, enter the interface endpoint's private IP addressOutbound rule:For Port Range, enter the same port as your endpoint serviceFor Destination, enter the interface endpoint's private IP addressRelated informationWhy can't I connect to an endpoint service from my interface endpoint in an Amazon VPC?Why can't I connect to a service when the security group and network ACL allow inbound traffic?Follow" | https://repost.aws/knowledge-center/security-network-acl-vpc-endpoint |
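If you prefer the AWS CLI for the inbound security group rule on the interface endpoint, a typical rule looks like the following; the security group ID, port, and client CIDR are placeholders:
aws ec2 authorize-security-group-ingress --group-id sg-0abc12345def67890 --protocol tcp --port 443 --cidr 10.0.0.0/16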
Why did my ACM certificate request fail? | I requested a public certificate from AWS Certificate Manager (ACM), but the request failed. How can I troubleshoot this? | "I requested a public certificate from AWS Certificate Manager (ACM), but the request failed. How can I troubleshoot this?Short descriptionTo troubleshoot failed ACM certificate requests, check the following:Available contactsUnsafe domainsAdditional verification requiredPublic domains that aren't validAmazon-owned domainsResolutionAvailable contactsIf you used email validation to request the certificate, then make sure that:You have a working email address that is registered in WHOIS and that the address is visible with a WHOIS lookup.Your domain is configured to receive email. Your domain's name server must have a mail exchanger record (MX record) so ACM's email servers know where to send the domain validation email.For more information, see Error message: No Available Contacts.Unsafe domainsIf the requested certificate contains at least one domain in its domain scope that was reported as unsafe by VirusTotal, the certificate request fails. To correct the issue, do the following:Search for your domain name on the VirusTotal website to see if it's reported as suspicious.If you believe that the result is a false positive, then notify the organization that is reporting the domain. VirusTotal is an aggregate of several antivirus and URL scanners and can't remove your domain for you.After you correct the problem and the VirusTotal registry is updated, request a new public certificate.For more information, see Error message: Domain Not Allowed.Additional verification requiredThis occurs as a fraud-protection measure if your domain ranks within the Alexa top 1000 websites.Use the AWS Support Center to contact AWS Support. AWS Support will assist you with adding your domains to an allow list. For more information, see Error message: Additional Verification Required.Public domains that aren't validIf the requested certificate includes a public domain that isn't valid, then the certificate request fails with the following error:"One or more domain names is not a valid public domain."Request a new certificate, and then make sure that the top-level domains of all domains specified in the certificate’s domain scope are valid.Amazon-owned domainsIf the requested certificate includes an Amazon-owned domain, such as those ending in amazonaws.com, then the certificate request will fail with the following error:"Additional verification required to request certificates for one or more domain names in this request."Request a new certificate with a domain name that isn't owned by Amazon. For more information, see Error message: Additional Verification Required.Related informationCertificate request failsACM certificate characteristicsWhy am I not receiving validation emails when using ACM to issue or renew a certificate?Follow" | https://repost.aws/knowledge-center/acm-certificate-fail |
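To see which failure reason applies to a specific request, you can query the certificate directly; the certificate ARN below is a placeholder:
aws acm describe-certificate --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/12345678-1234-1234-1234-123456789012 --query 'Certificate.[Status,FailureReason]' --output text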
How can I resolve the "Cannot initialize SFTP Protocol" error when I connect to an AWS Transfer Family SFTP-enabled server? | "I created an AWS Transfer Family SFTP-enabled server. Then, I created a server user and I added a public key to the user. However, when the user connects to the server using WinSCP, they get the error message "Cannot initialize SFTP Protocol. Is the host running an SFTP server?" How can I fix this?Note: This error message varies across SFTP clients. For example, if you're using Cyberduck, the error is "EOF while reading packet. Please contact your web hosting service provider for assistance." If you're using OpenSSH, the error is "Exit status 1 (Connection closed)."" | "I created an AWS Transfer Family SFTP-enabled server. Then, I created a server user and I added a public key to the user. However, when the user connects to the server using WinSCP, they get the error message "Cannot initialize SFTP Protocol. Is the host running an SFTP server?" How can I fix this?Note: This error message varies across SFTP clients. For example, if you're using Cyberduck, the error is "EOF while reading packet. Please contact your web hosting service provider for assistance." If you're using OpenSSH, the error is "Exit status 1 (Connection closed)."ResolutionThis error typically occurs when the logging role of your AWS Transfer Family server is configured incorrectly. To resolve the error, confirm that the AWS Transfer Family service has permission to assume the logging role that's associated with your server. Verify that the logging role's trust policy allows "Action": "sts:AssumeRole" for the Principal "Service": "transfer.amazonaws.com", similar to the following example statement:{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "transfer.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}Follow" | https://repost.aws/knowledge-center/transfer-cannot-initialize-sftp-error |
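To check which logging role the server uses, and to update it if needed, you can use the Transfer Family CLI; the server ID and role ARN below are placeholders:
aws transfer describe-server --server-id s-0123456789abcdef0 --query 'Server.LoggingRole'
aws transfer update-server --server-id s-0123456789abcdef0 --logging-role arn:aws:iam::111122223333:role/TransferLoggingRole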