How do I migrate an EC2-Classic instance to a VPC in same Region of same account?
I have multiple Amazon Elastic Compute Cloud (Amazon EC2) instances in EC2-Classic. I want to migrate the EC2-Classic instances to a virtual private cloud (VPC) before EC2-Classic is no longer available.
"I have multiple Amazon Elastic Compute Cloud (Amazon EC2) instances in EC2-Classic. I want to migrate the EC2-Classic instances to a virtual private cloud (VPC) before EC2-Classic is no longer available.ResolutionYou can use the AWSSupport-MigrateEC2ClassicToVPC Automation document to migrate your EC2-Classic instances to a VPC in the same Region of the same AWS account. The runbook supports only EC2-Classic instances with a virtualization type of HVM and a root volume type of EBS.Before using this Automation document, verify the service quotas for the following resources in the VPC:Elastic IP addressesEC2 instancesSecurity groupsIf necessary, increase the quotas before running the Automation. For more information about how to request a quota increase, see AWS service quotas.Note: AWS is retiring EC2-Classic. For more information, see EC2-Classic Network is retiring - Here's how to prepare.Migrate an EC2-Classic instance to a VPCOpen the AWS Systems Manager console, and then choose Automation from the navigation pane.Choose Execute automation.On the Owned by Amazon tab, in the Automation document search box, enter MigrateEC2Classic.Select the radio button for the AWSSupport-MigrateEC2ClassicToVPC document, and then choose Next.Under Input parameters, for InstanceId, enter your source EC2-Classic instance ID.For AutomationAssumeRole and TargetInstanceType, choose your required parameters. By default, TargetInstanceType is set to t2.xlarge.Note:: For more information about the AWS Identity and Access Management (IAM) permissions that are required for AutomationAssumeRole to successfully run the Automation, see AWSSupport-MigrateEC2ClassicToVPC.(Optional) For DestinationSubnetId and DestinationSecurityGroupId, enter your subnet ID and VPC security group ID.Note:: If these fields are left blank, then the Automation selects a random subnet in the default VPC. Security groups that are attached to the source instance are copied to the VPC and used to launch the target instance.For MigrationType, select Test or CutOver.If you select CutOver for MigrationType, then set the following parameters:SNSNotificationARNForApproval: Enter the ARN of the SNS topic used to send Approval notifications to stop the source instance.ApproverIAM: Enter the ARN of the IAM users or roles that can approve or reject the action to stop the source instance.Choose Execute.Related informationMigrate from EC2-Classic to a VPCFollow"
https://repost.aws/knowledge-center/ssm-migrate-ec2classic-vpc
How can I set my Amazon Connect outbound caller ID dynamically based on country?
I want my Amazon Connect outbound caller ID to change dynamically based on a call recipient's country. How can I set that up?
"I want my Amazon Connect outbound caller ID to change dynamically based on a call recipient's country. How can I set that up?Short descriptionTo have your Amazon Connect outbound caller ID change dynamically based on a call recipient's country, do the following:Create a caller ID list in JSON format.Upload the caller ID list to an Amazon Simple Storage Service (Amazon S3) bucket.Create an AWS Lambda function that identifies the outbound contact's country code and selects a corresponding phone number from the caller ID list.Add the Lambda function to your Amazon Connect instance.Create an outbound whisper flow to invoke the Lambda function.Configure the default outbound queue in your agents' routing profile to use the outbound whisper flow.Note: You can customize this solution for your use case. For example, you can store your caller ID list in an Amazon DynamoDB table instead of an S3 bucket. Then, reconfigure the Lambda function and its execution role accordingly.ResolutionCreate a caller ID list in JSON formatCreate a JSON file that contains a country-based list of phone numbers that you want to use for your outbound caller ID.As you create your caller ID list, keep in mind the following:Your outbound caller ID phone number doesn't have to be from the country that you're calling from.For this example setup, the country codes on your list must follow the ISO 3166-1 alpha-2 standard, and the phone numbers must follow the E.164 standard. For more information, see ISO 3166 — COUNTRY CODES on the ISO website and E.164 : The international public telecommunication numbering plan on the ITU website.If the Lambda function fails to invoke during call routing, then Amazon Connect uses the queue's default outbound phone number for the caller ID instead. The default outbound phone number is the number configured in the queue settings in your Amazon Connect instance. 
For more information, see Set up outbound caller ID.Example JSON caller ID listsIn this first example caller ID list, Amazon Connect uses the following outbound caller IDs:+12345678901 when calling a customer with a United States ("US") phone number.+441234567890 when calling a customer with a United Kingdom ("GB") phone number.+19876543210 when calling a customer with a phone number in a country that's not listed in the caller ID list ("Default").{ "US": "+12345678901", "GB": "+441234567890", "Default": "+19876543210"}In this second example caller ID list, Amazon connect uses the following outbound caller IDs:+441234567890 when calling a customer with a phone number in the United Kingdom ("GB"), France ("FR"), Germany ("DE"), or Ireland ("IE").+19876543210 when calling a customer with a phone number in a country that's not listed in the caller ID list ("Default").{ "GB": "+441234567890", "FR": "+441234567890", "DE": "+441234567890", "IE": "+441234567890", "Default": "+19876543210"}Upload the caller ID list to an Amazon S3 bucketFollow the instructions in Uploading objects in the Amazon S3 User Guide.Create an AWS Lambda function that identifies the outbound contact's country code and selects a corresponding phone number from the caller ID listCreate a Lambda execution roleFollow the instructions in Creating an execution role in the IAM console.As you configure the execution role, keep in mind the following:Your Lambda function's execution role must have Amazon S3 read access to read the JSON object that you uploaded to an S3 bucket.You can create an execution role in the AWS Identity and Access Management (IAM) console, and then attach the AWS managed policy AmazonS3ReadOnlyAccess to the role.Note: You can limit the execution role's access to a particular S3 bucket by creating your own IAM policy. For an example policy, see Amazon S3: Allows read and write access to objects in an S3 bucket.Create a Lambda functionCreate a Lambda function using the execution role that you created in the previous step.In your function code, include logic that checks an incoming JSON request from Amazon Connect. For more information, see How to reference contact attributes. Also, the example JSON request to a Lambda function in the Amazon Connect Administrator Guide.Note: As an example, you can use the Python function from DynamicOutboundCallerID on the aws-support-tools GitHub repository. The function code works with the Python 3.6 (or later) runtime. If you use the example function code, then you must configure the following environment variables in your function:BUCKET_NAME: The name of the S3 bucket where the JSON object is stored.COUNTRY_ROUTING_LIST_KEY: The key from the JSON file stored in the S3 bucket.For example, if the JSON object is stored in s3://hello/world/list.json, then the environment variables would be the following:BUCKET_NAME: "hello"COUNTRY_ROUTING_LIST_KEY: "world/list.json"Create a Lambda deployment package for the Lambda runtime that you're usingFollow the instructions in Lambda deployment packages.Note: The example Python function from DynamicOutboundCallerID uses the phonenumbers Python library. For more information, see phonenumbers on the Python Package Index (PyPI) website.To include a third-party library in your function, you must create a deployment package. 
You can create the deployment package by running the following commands in the folder that contains lambda_function.py:$ pip install phonenumbers --target ./$ zip -r9 function.zip ./These commands are valid for Linux, Unix, and macOS operating systems only.For more information on deploying Python Lambda functions, see Deploy Python Lambda functions with .zip file archives.Add the Lambda function to your Amazon Connect instanceFollow the instructions in Add a Lambda function to your Amazon Connect instance.Create an outbound whisper flow to invoke the Lambda functionCreate an outbound whisper flowIf you haven't already, create an outbound whisper contact flow.Important: To create and edit a contact flow, you must log in to your Amazon Connect instance as a user that has sufficient permissions in their security profile.1.    Log in to your Amazon Connect instance at https://instance_name.my.connect.aws/.Note: Replace instance_name with your instance's alias.2.    In the left navigation pane, hover over Routing, and then choose Contact flows.3.    On the Contact flows page, choose the arrow icon next to Create contact flow, and then choose Create outbound whisper flow.4.    In the contact flow designer, for Enter a name, enter a name for the contact flow.5.    Choose Save.For more information, see Create a new contact flow.Add an Invoke AWS Lambda function block1.    In the contact flow designer, choose Integrate.2.    Drag and drop an Invoke AWS Lambda function block onto the canvas.3.    Choose the block title (Invoke AWS Lambda function). The block's settings menu opens.4.    Under Function ARN, choose Select a function, and then choose the Lambda function that you added to your instance.5.    (Optional) For Timeout (max 8 seconds), enter a number of seconds that Amazon Connect waits to get a response from Lambda before timing out.6.    Choose Save.Note: When Amazon Connect invokes your Lambda function, the function returns a JSON response similar to the following:{ "customer_number": "<Customer's phone number that you're calling>", "customer_country": "<Country of the customer's phone number>", "outbound_number": "<Outbound phone number that Lambda loads from Amazon S3 and sends to Amazon Connect>", "outbound_country": "<Country of the outbound phone number that Lambda sends to Amazon Connect>", "default_queue_outbound_number": "<Default outbound phone number set up for the queue>", "default_queue_outbound_country": "<Country of the default outbound phone number>"}For more information, see Invoke a Lambda function from a contact flow and Contact Block: Invoke AWS Lambda function.Add a Call phone number blockConfigure this block to use the outbound_number from Lambda as the caller ID phone number.1.    In the contact flow designer, choose Interact.2.    Drag and drop a Call phone number block onto the canvas.3.    Choose the block title (Call phone number). The block's settings menu opens.4.    Do the following:Select the Caller ID number to display (optional) check box.Choose Use attribute.For Type, choose External.For Attribute, enter outbound_number.5.    Choose Save.For more information, see Contact Block: Call phone number.Finish the contact flow1.    Add and connect more contact blocks as needed for your use case. For example use cases, see Sample contact flows.2.    Connect all the connectors in your contact flow to a block. Be sure to connect the Success node of the Invoke AWS Lambda function block to the Call phone number block. 
Be sure to also connect the Success node of the Call phone number block to an End flow / Resume block. You must use at least these blocks. For example: Entry point > Invoke AWS Lambda function > Call phone number > End flow / Resume.3.    To save a draft of the flow, choose Save.4.    To activate the flow, choose Publish.Configure the default outbound queue in your agents' routing profile to use the outbound whisper flowIn your agents' routing profile, identify the default outbound queue.Edit the queue by doing the following:1.    In your Amazon Connect instance, in the left navigation pane, hover over Routing, and then choose Queues.2.    On the Queues page, choose the name of the queue that you identified as the default outbound queue.3.    On the Edit queue page, for Outbound whisper flow (optional), search for and choose the name of the outbound whisper flow that you created.4.    Choose Save.For more information, see Create a routing profile and RoutingProfile object.Related informationBest practices for Amazon ConnectCreating and sharing Lambda layersHow routing worksHow do I troubleshoot Lambda function failures?Follow"
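For reference, a simplified sketch of the kind of Lambda function described above might look like the following. This is not the aws-support-tools example itself: the event path for the customer's number and the flat response shape are assumptions that you should verify against the example JSON request in the Amazon Connect Administrator Guide. The sketch relies on the BUCKET_NAME and COUNTRY_ROUTING_LIST_KEY environment variables and the packaged phonenumbers library described earlier.

import json
import os

import boto3
import phonenumbers  # packaged with the function in the deployment package

s3 = boto3.client("s3")

BUCKET_NAME = os.environ["BUCKET_NAME"]  # for example, "hello"
COUNTRY_ROUTING_LIST_KEY = os.environ["COUNTRY_ROUTING_LIST_KEY"]  # for example, "world/list.json"

def lambda_handler(event, context):
    # Assumed location of the customer's number in the Amazon Connect request payload.
    customer_number = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]

    # Load the caller ID list ({"US": "+1...", "Default": "+1..."}) from Amazon S3.
    obj = s3.get_object(Bucket=BUCKET_NAME, Key=COUNTRY_ROUTING_LIST_KEY)
    routing_list = json.loads(obj["Body"].read())

    # Determine the ISO 3166-1 alpha-2 country code of the customer's E.164 number.
    parsed = phonenumbers.parse(customer_number, None)
    country = phonenumbers.region_code_for_number(parsed)

    outbound_number = routing_list.get(country, routing_list["Default"])

    # Amazon Connect expects a flat map of string key-value pairs.
    return {
        "customer_number": customer_number,
        "customer_country": country,
        "outbound_number": outbound_number,
    }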
https://repost.aws/knowledge-center/connect-dynamic-outbound-caller-id
How do I resolve the "HIVE_PATH_ALREADY_EXISTS" exception when I run a CTAS query in Amazon Athena?
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I get the exception: "HIVE_PATH_ALREADY_EXISTS: Target directory for table"."
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I get the exception: "HIVE_PATH_ALREADY_EXISTS: Target directory for table".ResolutionIf you use the external_location parameter in the CTAS query, then be sure to specify an Amazon Simple Storage Service (Amazon S3) location that's empty. The Amazon S3 location that you use to store the CTAS query results must have no data. When you run your CTAS query, the query checks that the path location or prefix in the Amazon S3 bucket has no data. If the Amazon S3 location already has data, the query doesn't overwrite the data.To use the Amazon S3 location that has data in your CTAS query, delete the data in the key prefix location in the bucket. Otherwise, your CTAS query fails with the exception "HIVE_PATH_ALREADY_EXISTS".If an existing Athena table is pointing to the Amazon S3 location that you want to use in your CTAS query, then do the following:Drop the Athena table.Delete the data in the key prefix location of the S3 bucket.Related informationCTAS table propertiesConsiderations and limitations for CTAS queriesFollow"
https://repost.aws/knowledge-center/athena-hive-path-already-exists
How do I resolve the "RequestError: send request failed caused by: Post https://ssm.RegionID.amazonaws.com/: dial tcp IP:443: i/o timeout" SSM Agent log error?
"I'm trying to register my Amazon Elastic Compute Cloud (Amazon EC2) instance as a managed instance with AWS Systems Manager. However, the instance fails to register and I receive a TCP timeout error message."
"I'm trying to register my Amazon Elastic Compute Cloud (Amazon EC2) instance as a managed instance with AWS Systems Manager. However, the instance fails to register and I receive a TCP timeout error message.Short descriptionThe TCP timeout error indicates that one of the following issues is preventing the instance from registering:The instance is in a private subnet and uses the Systems Manager virtual private cloud (VPC) endpoint and a custom DNS server.The instance is in a private subnet and doesn't have access to the internet or to the Systems Manager endpoints.The instance is in a public subnet. The VPC security groups and network access control lists (network ACLs) aren't configured to allow outbound connections to the Systems Manager endpoints on port 443.The instance is behind a proxy, but SSM Agent isn't configured to communicate through an HTTP proxy and can't connect to the instance metadata server.You can view the TCP timeout error in the SSM Agent log on your instance located at the following paths:Linux and macOS/var/log/amazon/ssm/amazon-ssm-agent.log/var/log/amazon/ssm/errors.logWindows%PROGRAMDATA%\Amazon\SSM\Logs\amazon-ssm-agent.log%PROGRAMDATA%\Amazon\SSM\Logs\errors.logResolutionInstance in private subnet using Systems Manager endpoint and a custom DNSVPC endpoints only support Amazon-provided DNS through Amazon Route 53. To use your own DNS server, try one of the following:Use the conditional forwarder in your custom DNS server to forward any query for the domain amazonaws.com to the default VPC DNS resolver. For more information, see DHCP options sets.Set up the Route 53 resolver to resolve the DNS queries between the VPC and your network.Instance can't connect to the Systems Manager endpoints-or-VPC security groups and network ACL aren't configured to allow outbound connections on port 443-or-The instance is behind a proxy and can't connect to the instance metadata serviceFor troubleshooting steps, see Why is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?Related informationCreate VPC endpointsFollow"
https://repost.aws/knowledge-center/ssm-tcp-timeout-error
How do I resolve the "Resource timed out waiting for creation of physical resource" error when I create a resource using my resource provider type in CloudFormation?
"When I use my Resource Provider type to create a resource in AWS CloudFormation, I receive the following error:"Resource timed out waiting for creation of physical resource""
"When I use my Resource Provider type to create a resource in AWS CloudFormation, I receive the following error:"Resource timed out waiting for creation of physical resource"Short descriptionWhen a resource doesn't return its primaryIdentifier or Physical ID within 60 seconds, you receive the "Resource timed out waiting for creation of physical resource" error. This error occurs because the CreateHandler of your resource doesn't return the property that's specified as the primaryIdentifier in the organization-service-resource.json resource provider schema file.For other errors that are related to using a resource provider, see the following articles:How do I resolve the "Resource specification is invalid" error when I run the cfn generate command using the CloudFormation CLI in CloudFormation?How do I resolve the "Model validation failed (#: extraneous key [Key] is not permitted)" error in CloudFormation?How do I resolve the "Attribute 'Key' does not exist" error when I use the Fn::GetAtt function on my resource provider resource in CloudFormation?How do I resolve the "java.lang.ClassNotFoundException: com.example.package.resource.HandlerWrapper" error in CloudFormation?Resolution1.    In your organization-service-resource.json file, confirm that the primaryIdentifier definition uses the following format, where Id is a property that's defined in the properties section:"primaryIdentifier": [ "/properties/Id"]Note: The organization-service-resource.json format is located in the root directory of your project.2.    In your CreateHandler, set the primaryIdentifier property in the model object. For example:final ResourceModel model = request.getDesiredResourceState();model.setId("abcdxyz");return ProgressEvent.<ResourceModel, CallbackContext>builder() .resourceModel(model) .status(OperationStatus.SUCCESS) .build();Related informationAWS CloudFormation CLI (from the GitHub website)Follow"
https://repost.aws/knowledge-center/cloudformation-physical-resource-error
How can I access an internal load balancer using VPC peering?
I want to connect to a load balancer in VPC A from my instance in VPC B. How can I access an internal load balancer using VPC peering?
"I want to connect to a load balancer in VPC A from my instance in VPC B. How can I access an internal load balancer using VPC peering?Short descriptionTo access an internal load balancer in VPC A from VPC B:Establish connectivity between VPC A and VPC B using VPC peering.Establish the necessary routes, security group rules, and network access control list (ACL) rules to allow traffic between the VPCs.ResolutionUsing VPC peering, you can access internal load balancers (including Classic Load Balancers, Application Load Balancers, and Network Load Balancers) from another VPC.Establish connectivity between your VPCs using VPC peering.Note: VPC peering is available for intra-Region and inter-Region connectivity for local or cross-account VPCs.Verify that a route for the load balancer's subnets CIDR (or VPC CIDR) exists in the route table of the client subnet. The route must be directed towards the VPC peering ID of your VPCs. Similarly, verify that the route of the client subnet/VPC CIDR exists in the route table of the load balancer's subnets.Resolve the load balancer DNS name from your instance and use nslookup to verify it.If you're using a Classic Load Balancer or an Application Load Balancer: verify that the security group and network ACL allow traffic from either the complete subnet/VPC of the instance or the specific instance IP:In the security group of the load balancer, allow only inbound traffic on the load balancer's listener port.For the network ACL of the subnet, allow ingress traffic from the instance IP or subnet/VPC for the load balancer's listener port. In egress, be sure that the Ephemeral port range (1024 to 65535) allows return traffic from the load balancer nodes to the instance.-or-If you're using a Network Load Balancer, ensure that the traffic is allowed in the security group of the target instancesNote: Modify your security groups or network ACLs, as needed. If you haven't modified the network ACLs, there's a default rule to allow all (0.0.0.0/0) traffic. In this case, you don't need to modify the network ACLs. However, it's an AWS security best practice to allow traffic to and from specific CIDR ranges.Check that the security group of the instance permits outbound traffic to the load balancer associated with the subnets or default (0.0.0.0/0).For the network ACL of the subnet, verify that there’s a rule in Egress to allow traffic for the load balancer's subnets on the load balancer's listener port. In Ingress, verify that there’s a rule to allow traffic to the instance IP/subnet on Ephemeral ports for response traffic .Note: If you haven't modified these default settings, you don't need to make any changes to the default outbound rule (0.0.0.0/0) for the security group or the default ALLOW ALL rule for the network ACL of the subnet with the instance. However, it's an AWS security best practice to allow traffic to and from specific CIDR ranges.Follow"
https://repost.aws/knowledge-center/elb-access-load-balancer-vpc-peering
How can I set up Amazon SES bounce notifications using an Amazon SNS topic?
I want to get notifications whenever emails that I send using Amazon Simple Email Service (Amazon SES) result in a bounce. How can I set up these notifications using Amazon Simple Notification Service (Amazon SNS)?
"I want to get notifications whenever emails that I send using Amazon Simple Email Service (Amazon SES) result in a bounce. How can I set up these notifications using Amazon Simple Notification Service (Amazon SNS)?ResolutionBefore you begin, complete the Amazon SES verification process for the identity (domain or email address) that you want to get bounce notifications for.Create a topic and subscription in Amazon SNSOpen the Amazon SNS console.In the navigation pane, choose TopicsChoose Create topic.For Name, enter a name to create a unique identifier for your topic.For Display name, enter a display name for your topic.Choose Create topic.From the details page of the topic that you created, navigate to Subscriptions, and then choose Create subscription.For Protocol, select Email-JSON.For Endpoint, enter the email address where you want to receive notifications.Choose Create subscription.From the inbox of the email address that you specified in step 8, open the subscription confirmation email from Amazon SNS with the subject line AWS Notification - Subscription Confirmation.In the subscription confirmation email, open the URL specified as SubscribeURL to confirm your subscription.Configure Amazon SES to send bounce information to Amazon SNSOpen the Amazon SES console.In the navigation pane, choose Verified identities. Then, select the verified domain or email address that you want to get bounce notifications for.Select the Notifications tab, and then choose Edit in the Feedback notifications panel.Under Configure SNS Topics, for Bounce Feedback, select the SNS topic that you created.Note: You can choose to turn on notifications for Complaints and Deliveries. You can publish multiple event types to the same SNS topic or to different SNS topics.Select Include original headers if you want the Amazon SNS notifications to contain the original headers of the emails that you send using Amazon SES.Choose Save Config.Note: It might take a few minutes for your newly configured notification settings to take effect.Test the bounce notifications using the Amazon SES mailbox simulatorNote: The bounces from the mailbox simulator address aren't counted as part of your account's bounce metrics.Open the Amazon SES console.In the navigation pane, choose Verified identities. Then, select the verified domain or email address that you want to set up bounce notifications for.Choose Send a Test Email.From drop-down list under Scenario, select Bounce. Then, complete the rest of the form with the values that you want for the test email.Choose Send Test Email.Open the inbox for the email address that you set as the endpoint of the SNS topic. Confirm that you have an email with the subject line AWS Notification Message that contains the bounce notification.Note: This resolution sets up bounce notifications with Amazon SNS for each verified identity. To get bounce notifications across identities, you can use Amazon SES event publishing. With event publishing, you use a configuration set to specify which emails you receive notifications for. You can use the configuration set for emails sent by different verified identities.Follow"
https://repost.aws/knowledge-center/ses-bounce-notifications-sns
How do I allow my Lambda function access to my Amazon S3 bucket?
I want my AWS Lambda function to be able to access my Amazon Simple Storage Service (Amazon S3) bucket.
"I want my AWS Lambda function to be able to access my Amazon Simple Storage Service (Amazon S3) bucket.Short descriptionTo give your Lambda function access to an Amazon S3 bucket in the same AWS account, do the following:1.    Create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket.2.    Configure the IAM role as the Lambda functions execution role.3.    Verify that the S3 bucket policy doesn't explicitly deny access to your Lambda function or its execution role.Important: If your S3 bucket and the functions IAM role are in different accounts, then you must also grant the required permissions on the S3 bucket policy. For more information, see How can I provide cross-account access to objects that are in Amazon S3 buckets?ResolutionCreate an IAM role for the Lambda function that also grants access to the S3 bucket1.    Follow the steps in Creating an execution role in the IAM console.2.    From the list of IAM roles, choose the role that you just created.3.    The trust policy must allow Lambda to assume the execution role by adding lambda.amazonaws.com as a trusted service. Choose the Trust relationships tab, choose Edit trust policy, and replace the policy with the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}4. Choose Update policy.5.    In the Permissions tab, choose Add inline policy.6.    Choose the JSON tab.7.    Enter a resource-based IAM policy that grants access to your S3 bucket. For more information, see Using resource-based policies for AWS Lambda.The following example IAM policy grants access to a specific Amazon S3 bucket with Get permissions. To access the objects inside the Amazon S3 bucket, make sure to specify the correct path or use a wildcard character ("*"). For more information, see Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket for more information.Important: Replace "arn:aws:s3:::EXAMPLE-BUCKET" with your S3 buckets Amazon Resource Name (ARN).{ "Version": "2012-10-17", "Statement": [ { "Sid": "ExampleStmt", "Action": [ "s3:GetObject" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::EXAMPLE-BUCKET/*" ] } ]}6.    Choose Review policy.7.    For Name, enter a name for your policy.8.    Choose Create policy.Configure the IAM role as the Lambda functions execution role1.    Open the Lambda console.2.    Choose your Lambda function.3.    Under Execution role, for Existing role, select the IAM role that you created.4.    Choose Save.Verify that the S3 bucket policy doesn't explicitly deny access to your Lambda function or its execution roleTo review or edit your S3 bucket policy, follow the instructions in Adding a bucket policy by using the Amazon S3 console.Important: If your S3 bucket and the functions IAM role are in different accounts, you must also explicitly grant the required permissions on the S3 bucket policy. For more information, see How can I provide cross-account access to objects that are in Amazon S3-buckets?The following example IAM S3 bucket policy grants a Lambda execution role cross-account access to an S3 bucket.Important: Replace "arn:aws:s3:::EXAMPLE-BUCKET/*" with your S3 buckets ARN. 
Replace "arn:aws:iam::123456789012:role/ExampleLambdaRoleFor123456789012" with your Lambda execution roles ARN.{ "Id": "ExamplePolicy", "Version": "2012-10-17", "Statement": [ { "Sid": "ExampleStmt", "Action": [ "s3:GetObject" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::EXAMPLE-BUCKET/*" ], "Principal": { "AWS": [ "arn:aws:iam::123456789012:role/ExampleLambdaRoleFor123456789012" ] } } ]}Related informationAWS Policy GeneratorFollow"
https://repost.aws/knowledge-center/lambda-execution-role-s3-bucket
How do I optimize costs with Amazon DynamoDB?
I want to optimize the cost of my Amazon DynamoDB workloads.
"I want to optimize the cost of my Amazon DynamoDB workloads.Short descriptionUse these methods to optimize the cost of your DynamoDB workloads:Use the AWS Pricing Calculator to estimate DynamoDB costs, in advance.Optimize read/write costs by selecting the correct capacity mode.Optimize storage costs by selecting the correct table class.Use cost allocation tags.ResolutionUse the AWS Pricing Calculator to estimate DynamoDB costsUse the AWS Pricing Calculator for DynamoDB to estimate the cost of your DynamoDB workloads before you build them. This includes the cost of features such as on-demand capacity mode, backup and restore, Amazon DynamoDB Streams, and Amazon DynamoDB Accelerator (DAX).Optimize read/write cost by choosing the correct capacity mode for your DynamoDB tableOn-demand capacity modeOn-demand capacity mode is a good option if you have unpredictable application traffic. With on-demand mode, you pay only for what you use.If you configured a table as provisioned capacity mode, then you're charged for provisioned capacities even if you haven't consumed any I/O. So, if you have unused DynamoDB tables in your account, reduce the cost of your unused tables using on-demand mode.Provisioned capacity modeProvisioned capacity mode is a good option if you have predictable application traffic that is consistent or ramps gradually. Use this mode to forecast capacity requirements and control costs.Reserved capacityIf you can predict your need for DynamoDB read and write throughput in a given AWS Region, then use DynamoDB reserved capacity to reduce costs. DynamoDB reserved capacity allows you to make an upfront commitment on your base level of provisioned capacity. Reserved capacity isn't available for tables that use the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, or on-demand capacity.For more information on DynamoDB capacity modes, see Read/write capacity mode.Optimize storage cost by choosing the right table class for your DynamoDB tableUsing the DynamoDB Standard-IA table class can reduce costs for tables that store data that you don't access regularly. This is a good option if you require long-term storage of data that you don't use often, such as application logs, or old social media posts. But, be aware that DynamoDB reads and writes for this table class are priced higher than standard tables.For more information on DynamoDB table classes, see Table classes.Use cost allocation tags for DynamoDBTagging for DynamoDB provides fine-grained visibility into your DynamoDB bill. You can assign tags to your tables, and see cost breakouts per tab to help with cost optimization by usage. To learn more about cost allocation reports for DynamoDB, see Introducing cost allocation tags for Amazon DynamoDB .For information on Cost Allocation Tags, see Using cost allocation tags.For additional optimizing methods, see Optimizing costs on DynamoDB tables.Related informationAmazon DynamoDB pricingFollow"
https://repost.aws/knowledge-center/dynamodb-optimize-costs
How do I troubleshoot errors when I create a custom domain in Amazon Cognito?
I need to learn how to resolve common errors when configuring custom domains in Amazon Cognito.
"I need to learn how to resolve common errors when configuring custom domains in Amazon Cognito.Short descriptionWhen configuring custom domain names in Amazon Cognito, the following errors commonly occur:"Custom domain is not a valid subdomain: Was not able to resolve the root domain, please ensure an A record exists for the root domain.""Domain already associated with another user pool.""One or more of the CNAMEs you provided are already associated with a different resource.""The specified SSL certificate doesn't exist, isn't in us-east-1 region, isn't valid, or doesn't include a valid certificate chain.""The domain name contains an invalid character. Domain names can only contain lower-case letters, numbers, and hyphens. Please enter a different name that follows this format: ^a-z0-9?$"ResolutionCustom domain is not a valid subdomain errorTo prevent accidental changes to infrastructure, Amazon Cognito doesn't support top-level domains (TLDs) for custom domains. To create an Amazon Cognito custom domain, the parent domain must have a Domain Name System (DNS) A record.The parent might be either the root of the domain or a child domain that's one step up in the domain hierarchy. For example, if your custom domain is auth.xyz.yourdomain.com, then Amazon Cognito must resolve **xyz.**yourdomain.com to an IP address. Similarly, to configure xyz.yourdomain.com as a custom domain, configure an A record for yourdomain.com.You must create an A record for the parent domain in your DNS configuration. When the parent domain resolves to a valid A record, Amazon Cognito doesn't perform additional verifications. If the parent domain doesn't point to a real IP address, then consider putting a dummy IP address, such as "8.8.8.8", in your DNS configuration.Your DNS provider might take time to propagate the changes that you made to your DNS configuration. To make sure that your DNS provider propagated the change, run one of the following commands.Using auth.xyz.yourdomain.com as the custom domain:$ dig A xyz.yourdomain.com +short-or-Using xyz.yourdomain.com as the custom domain:$ dig A yourdomain.com +shortNote: The preceding example commands are for a Linux environment.If the DNS configuration change propagates, then the previous command returns the IP address that you configured. If the DNS lookup isn't returning the configured IP address, then wait until the change is propagated. Otherwise, you keep getting the "custom domain is not a valid subdomain" error.After the custom domain is created in Amazon Cognito, you can remove the parent domain A record mapping you configured earlier.Domain already associated with another user pool errorCustom domain names must be unique across all AWS accounts in all AWS Regions. When you use a custom domain name for a user pool, the same domain name can't be used for any other user pool. You must delete the custom domain associated with the first user pool if you want to use the domain name for another user pool.After deleting a custom domain, it takes time to fully dissociate the custom domain from the user pool. If you try to configure the domain name with another user pool immediately after deletion, then you might encounter the domain association error. 
If you receive the domain association error, then wait 15-20 minutes before setting up the domain name with the new user pool.One or more of the CNAMEs you provided are already associated with a different resource errorAfter creating a custom domain, Amazon Cognito creates an AWS managed Amazon CloudFront distribution using the same custom domain name. You can use a domain name with only one CloudFront distribution. If you're using a domain name as an alternate domain in CloudFront, then you can't use the existing domain name to create a custom domain. If you try to create a custom domain that's already associated with a CloudFront distribution, then the CNAME association error appears.You can resolve this error in one of the following two ways:Use a different domain name for the Amazon Cognito custom domain.When using the domain as an Amazon Cognito custom domain, stop using the domain name with another CloudFront distribution.The specified SSL certificate doesn't exist errorTo create an Amazon Cognito custom domain, you must have an AWS Certificate Manager (ACM) certificate in the us-east-1 AWS Region. When you create the custom domain, Amazon Cognito internally creates a CloudFront distribution. CloudFront supports ACM certificates only in the us-east-1 Region.When you configure the custom domain, make sure that the certificate you select isn't expired.If you import a certificate into ACM, then make sure that the certificate is issued by a public certificate authority (CA). The certificate must also have the correct certificate chain. For more information, see Importing certificates into AWS Certificate Manager.Your certificate must be 2,048 bits or smaller in size. The certificate can't be password protected.If an AWS Key Management Service (AWS KMS) policy evaluation results in an explicit deny statement, then you might receive an SSL certificate error. When certain AWS KMS actions are explicitly denied for the IAM user or role that creates the Amazon Cognito custom domain, this error occurs. The error most commonly occurs for the following explicitly denied AWS KMS actions: kms:DescribeKey, kms:CreateGrant, or kms:*.The domain name contains an invalid character errorIf a domain name contains anything other than lowercase letters, numbers, and hyphens, then the domain name isn't accepted. You can't use a hyphen for the first or last character. The maximum length of the whole domain name, including the dots, is 63 characters.Related informationUsing your own domain for the hosted UIFollow"
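For completeness, the custom domain itself can also be created with boto3. In this hedged sketch the domain, user pool ID, and certificate ARN are placeholders, and the ACM certificate must be in us-east-1 as noted above; the same errors described in this article surface as exceptions from this call.

import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")  # placeholder Region of the user pool

cognito.create_user_pool_domain(
    Domain="auth.xyz.yourdomain.com",  # placeholder custom domain with a resolvable parent A record
    UserPoolId="us-east-1_EXAMPLE",    # placeholder user pool ID
    CustomDomainConfig={"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"},
)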
https://repost.aws/knowledge-center/cognito-custom-domain-errors
How do I troubleshoot my virtual interface on Direct Connect when it's in the DOWN status in the AWS Management Console?
I want to troubleshoot my virtual interface on AWS Direct Connect when it's in the DOWN status in the AWS Management Console.
"I want to troubleshoot my virtual interface on AWS Direct Connect when it's in the DOWN status in the AWS Management Console.Short descriptionYour virtual interface on Direct Connect can go down for multiple reasons:Physical connection is down or flappingOSI layer 2 configuration issuesBorder Gateway Protocol (BGP) configuration issuesBidirectional Forwarding Detection (BFD) configuration issuesResolutionPhysical connection is down or flappingIf your physical connection isn't in the UP status and stable, then you must troubleshoot your layer 1 (physical) issues. For more information, see How do I troubleshoot when my Direct Connect connection is DOWN and the Tx/Rx optical signal receives no or low light readings?OSI layer 2 configuration issuesCheck your OSI layer 2 for the following configurations:Your VLAN ID with dot1Q encapsulation on your device is configured correctly, as shown in your Direct Connect console.The configuration of peer IP addresses is identical on your device in the Direct Connect console.All the intermediate devices along the path are configured correctly for dot1Q VLAN tagging with the correct VLAN ID. Also, make sure that VLAN-tagged traffic is preserved on the AWS end of the Direct Connect device.  Your device learns the MAC address of the Direct Connect device of the configured VLAN ID from the ARP table.Your device can ping the Amazon peer IP address sourcing from your peer IP address.Note: Some network providers use Q-in-Q tagging that alters your tagged VLAN. Direct Connect doesn't support Q-in-Q tagging.For more information, see Troubleshooting layer 2 (data link) issues.BGP configuration issuesIf your OSI layer 2 configuration is correct, then check your BGP for the following configurations:Your local and remote ASNs are correct, as provided in the downloaded configuration file.Your neighbor IP address and BGP MD5 password are correct, as provided in the downloaded configuration file.Your device isn't blocking inbound or outbound traffic on TCP port 179 and other ephemeral ports.Your device isn't advertising more than 100 prefixes to AWS by the BGP. By default, AWS accepts up to 100 prefixes using a BGP session on Direct Connect. For more information, see Direct Connect quotas.If the preceding configurations are correct, then your BGP status indicates UP.For more information, see How can I troubleshoot BGP connection issues over Direct Connect?BFD configuration issuesBFD is a detection protocol that provides fast forwarding path failure detection times. Fast failure detection times facilitate faster routing reconvergence times. AWS supports asynchronous BFD and is automatically turned on for Direct Connect virtual interfaces on AWS.If your OSI layer 2 and BGP configurations are correct, then check your BFD for the following configurations:BFD is turned on for your router. If it's turned on, then check that your BFD is configured correctly on your router.Your BFD session is in the UP status on your router.Your BFD events or logs on your router for any further issues.Note: The default AWS BFD liveness detection minimum interval is 300 ms. The default BFD liveness detection multiplier is three.Related informationTroubleshooting AWS Direct ConnectFollow"
https://repost.aws/knowledge-center/direct-connect-down-virtual-interface
How do I change my AWS Support plan?
I want to change my AWS Support plan.
"I want to change my AWS Support plan.ResolutionTo access the AWS Support Center, sign in with either of the following steps:Use your AWS account root user credentials-or-Use your AWS Identity and Access Management (IAM) user credentials with access permissions for AWS Support plans. For more information, see Manage access for AWS Support plans.To change your Support plan, do the following:1.    Open the AWS Support Center.You can view your current Support plan in the navigation pane.2.    Choose Change next to your current AWS Support plan.3.    (Optional) On the AWS Support plans page, compare the Support plans. For pricing information, see AWS Support plan pricing.Under AWS Support pricing example, choose See examples, and then choose one of the Support plan options to see the estimated cost.4.    Choose Review downgrade or Review upgrade for the plan that you want. Important: If you have an Enterprise On-Ramp or Enterprise Support plan, on the Change plan confirmation dialog box, choose Contact us. Fill in the form, and then choose Submit.5.    For Change plan confirmation, expand the support items to see the features included with the plan.Under Pricing, you can view the projected one-time charges for the new Support plan.6.    Choose Accept and agree.The changes take effect within 15 minutes. You see your new plan reflected in the Billing and Cost Management console within a few hours.Your previous paid AWS Support plan remains valid up to the point when you switch to a new one. The terms and conditions of the new plan start applying to the usage of your account immediately. For more information, see AWS Support plan pricing.For an explanation of how AWS Support plans are billed, see How am I billed for AWS Support?Related informationAWS SupportCompare AWS Support plansHow do I cancel my AWS Support plan?How do AWS Support plans work in an organization in AWS Organizations?Follow"
https://repost.aws/knowledge-center/change-support-plan
What's the most effective way as an AWS reseller to bill my end customers?
"As a reseller of AWS products, I want to use AWS tools to calculate bills for my end users."
"As a reseller of AWS products, I want to use AWS tools to calculate bills for my end users.ResolutionAs a reseller, it's a best practice to design your own specialized reporting and invoicing structures that match your business needs.You can use the AWS Cost and Usage report to help calculate bills for end users. It's a best practice to use cost allocation tags to identify the source of your costs.When you calculate bills for your end customers, make sure to use unblended rates. Blended rates are averages and aren't meant to reflect actual billed rates.Related informationAWS OrganizationsConsolidated billing for AWS OrganizationsFollow"
https://repost.aws/knowledge-center/end-customer-reseller-billing
How do I transfer an Elastic IP address between AWS accounts in the same Region?
"I use an Amazon Elastic IP address, and I want to transfer the address to another AWS account."
"I use an Amazon Elastic IP address, and I want to transfer the address to another AWS account.Short descriptionTo transfer Elastic IP addresses between accounts in the same AWS Region, use either of the following methods:The Amazon Elastic Compute Cloud (Amazon EC2) consoleAmazon EC2 APIsWhen you transfer an Elastic IP address, there's a two-step handshake between the source account and transfer account. The source account can be a standard AWS account or an AWS Organizations account. When the source account starts the transfer, the transfer account has 7 days to accept it. Otherwise, the Elastic IP address returns to its original owner.AWS doesn't inform the transfer account about pending Elastic IP address transfer requests. To facilitate the transfer within the time frame, the source account owner must communicate this request to the transfer account owner.ResolutionUse the Amazon EC2 consoleFor steps on how to send a transfer request through the Amazon EC2 console and prerequisites for it, see Activate Elastic IP address transfer.After you send the transfer request, the owner of the transfer account must accept it. For steps on how to complete the transfer in the Amazon EC2 console, see Accept a transferred Elastic IP address.Use the AWS CLI to transfer a single Elastic IP addressNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re running a recent version of the AWS CLI. To verify that you correctly configured the AWS CLI, see Configuring the AWS CLI.In the following example use case, source account A (111111111111) is transferring an Elastic IP address to transfer account B (222222222222).In the following commands, replace ELASTIC_IP with your Elastic IP address. Replace us-east-1 with your AWS Region.1.     In the source account, use either the Amazon EC2 console or AWS CLI to get the AllocationId of the Elastic IP address. For the AWS CLI, use the describe-addresses API call:aws ec2 describe-addresses --filters "Name=public-ip,Values=ELASTIC_IP" --region us-east-1 { "Addresses": [ { "PublicIp": "ELASTIC_IP", "AllocationId": "eipalloc-1111111111111111", "Domain": "vpc", "PublicIpv4Pool": "amazon", "NetworkBorderGroup": "us-east-1" } ] }2.     In the source account, check if there are any existing or pending address transfers for the Elastic IP address. To do this, check the AllocationId (in this case, eipalloc-1111111111111111):aws ec2 describe-address-transfers --query "AddressTransfers[?AllocationId=='eipalloc-11111111111111111']" --region us-east-1 [ ]In this example, there aren’t any existing or pending address transfers. This means that you can proceed with your new transfer.3.    Use the enable-address-transfer API call to initiate the address transfer:aws ec2 enable-address-transfer --allocation-id eipalloc-11111111111111111 --transfer-account-id 222222222222 --region us-east-1 "AddressTransfer": { "PublicIp": "3.", "AllocationId": "eipalloc-11111111111111111", "TransferAccountId": "222222222222", "TransferOfferExpirationTimestamp": "2022-10-28T08:44:41+00:00", "AddressTransferStatus": "pending" } }4.   Notify the transfer account owner that the Elastic IP address transfer is in the Pending state and that they must accept the transfer. To accept the transfer, the transfer account owner uses the accept-address-transfer API call.Note: The transfer account can't see the Elastic IP address that's in the Pending state. This is a security feature in case you accidentally send an IP address to the wrong account. 
In this case, you can cancel the transfer before the other account sees the IP address.aws ec2 accept-address-transfer --address ELASTIC_IP --region us-east-1 "AddressTransfer": { "PublicIp": "ELASTIC_IP", "AllocationId": "eipalloc-11111111111111111", "TransferAccountId": "222222222222", "TransferOfferExpirationTimestamp": "2022-10-28T08:44:41+00:00", "AddressTransferStatus": "accepted" } }If the acceptance fails, then you see one of the following errors:AddressLimitExceededInvalidTransfer.AddressCustomPtrSetInvalidTransfer.AddressAssociatedTo troubleshoot any of these errors, see Accept a transferred Elastic IP address.5.    After the Elastic IP address transfers successfully, the transfer account can use the describe-addresses API to confirm the transfer:Note: A successful transfer generates a new AllocationId for the Elastic IP address in the transfer owner's account.aws ec2 describe-addresses --filters "Name=public-ip,Values=ELASTIC_IP" --region us-east-1 { "Addresses": [ { "PublicIp": "ELASTIC_IP", "AllocationId": "eipalloc-22222222222222222", "Domain": "vpc", "PublicIpv4Pool": "amazon", "NetworkBorderGroup": "us-east-1" } ] }6.    The source account can use the describe-address-transfers API to confirm a successful transfer:aws ec2 describe-address-transfers --query "AddressTransfers[?AllocationId=='eipalloc-11111111111111111']" --region us-east-1 [ { "PublicIp": "ELASTIC_IP", "AllocationId": "eipalloc-11111111111111111", "TransferAccountId": "222222222222", "TransferOfferExpirationTimestamp": "2022-10-28T10:44:41+00:00", "AddressTransferStatus": "accepted" } ] Related informationAmazon Virtual Private Cloud (VPC) now supports the transfer of Elastic IP addresses between AWS accountsTransfer Elastic IP addressesFollow"
https://repost.aws/knowledge-center/vpc-transfer-elastic-ip-accounts
How do I troubleshoot eksctl issues with Amazon EKS clusters and node groups?
"When I use eksctl to create or update my Amazon Elastic Kubernetes Service (Amazon EKS), I encounter issues."
"When I use eksctl to create or update my Amazon Elastic Kubernetes Service (Amazon EKS), I encounter issues.Short descriptionThe following are common issues you might encounter when you use eksctl to create or manage an Amazon EKS cluster or node group:You don't know how to create a cluster with eksctl. See Getting started with Amazon EKS - eksctl and the eksctl section of Creating an Amazon EKS cluster.You don't know how to specify kubelet bootstrap options for managed node groups. Follow the steps in the Specify kubelet bootstrap options Resolution section.You don't how to change the instance type of an existing node group. You must create a new node group. See Migrating to a new node group and Nodegroup immutability (from the eskctl website).You reached the maximum number of AWS resources. Check your resources to see if you can delete ones that you're not using. If you still need more capacity, then see, Requesting a quota increase.You launch control plane instances in an Availability Zone with limited capacity. See How do I resolve cluster creation errors in Amazon EKS?Your nodes fail to move to Ready state. Follow the steps in the Resolve operation timeout issues Resolution section.Export values don't exist for the cluster. Follow the steps in the Create the node group in private subnets Resolution section.You used an unsupported instance type to create a cluster or node group. Follow the steps in the Check if your instance type is supported Resolution section.ResolutionSpecify kubelet bootstrap optionsBy default, eksctl creates a bootstrap script and adds it to the launch template that the worker nodes run during the bootstrap process. To specify your own kubelet bootstrap options, use the overrideBootstrapCommand specification to override the eksctl bootstrap script. Use the overrideBootstrapCommand for managed and self-managed node groups.Config file specification:managedNodeGroups: name: custom-ng ami: ami-0e124de4755b2734d securityGroups: attachIDs: ["sg-1234"] maxPodsPerNode: 80 ssh: allow: true volumeSize: 100 volumeName: /dev/xvda volumeEncrypted: true disableIMDSv1: true overrideBootstrapCommand: | #!/bin/bash /etc/eks/bootstrap.sh managed-cluster --kubelet-extra-args '--node-labels=eks.amazonaws.com/nodegroup=custom-ng,eks.amazonaws.com/nodegroup-image=ami-0e124de4755b2734d'Note: You can use overrideBootstrapCommand only when using a custom AMI. If you don't specify an AMI ID, then cluster creation fails.A custom AMI ID wasn't specifiedIf you don't specify a custom AMI ID when creating managed node groups, then by default Amazon EKS uses an Amazon EKS-optimized AMI and bootstrap script. To use an Amazon EKS-optimized AMI with custom user data to specify bootstrap parameters, specify the AMI ID in your managed node group configuration.To get the latest AMI ID for the latest Amazon EKS optimized AMI, run the following command:aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id --region Region --query "Parameter.Value" --output textNote: Replace Region with your AWS Region.Resolve operation timeout issuesIf you receive the following error when creating a node, then your node group may have timeout issues:waiting for at least 1 node(s) to become ready in "nodegroup"When you create an EKS node group with eksctl, the eksctl CLI connects to the API server to continuously check for the Kubernetes node status. 
The CLI waits for the nodes to move to Ready state and eventually times out if the nodes fail to move.The following are reasons why the nodes fail to move to Ready state:The kubelet can't communicate or authenticate with the EKS API server endpoint during the bootstrapping process.The aws-node and kube-proxy pods are not in Running state.The Amazon Elastic Compute Cloud (Amazon EC2) worker node user data wasn't successfully run.The kubelet can't communicate with the EKS API server endpointIf the kubelet can't communicate with the EKS API server endpoint during the bootstrapping process, then get the EKS API server endpoint.Run the following command on your worker node:curl -k https://123456DC0A12EC12DE0C12BC312FCC1A.yl4.us-east-1.eks.amazonaws.com{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403}The preceding command should return the HTTP 403 status code. If the command times out, you might have a network connectivity issue between the EKS API server and worker nodes.To resolve the connectivity issue, complete one of the following steps that relates to your use case:If the worker nodes are in a private subnet, then check that the EKS API server endpoint is in Private or Public and Private access mode.If the EKS API server endpoint is set to Private, then you must apply certain rules for the private hosted zone to route traffic to the API server. The Amazon Virtual Private Cloud (Amazon VPC) attributes enableDnsHostnames and enableDnsSupport must be set to True. Also, the DHCP options set for the Amazon VPC must include AmazonProvideDNS in its domain list.If you created the node group in public subnets, then make sure that the subnets' IPv4 public addressing attribute is set to True. If you don't set the attribute to True, then the worker nodes aren't assigned a public IP address and can't access the internet.Check if the Amazon EKS cluster security group allows ingress requests to port 443 from the worker node security group.The kubelet can't authenticate with the EKS API server endpointIf the kubelet can't authenticate with the EKS API server endpoint during the bootstrapping process, then complete the following steps.1.    Run the following command to verify that the worker node has access to the STS endpoint:telnet sts.region.amazonaws.com 443Note: Replace region with your AWS Region.2.    
Make sure that the worker node's AWS Identity and Access Management (IAM) role was added to the aws-auth ConfigMap.For example:apiVersion:v1 kind:ConfigMap metadata:name:aws-auth namespace:kube-system data:mapRoles:| - rolearn: ARN of instance role (not instance profile) username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodesNote: For Microsoft Windows node groups, you must add an additional eks:kube-proxy-windows RBAC group to the mapRoles section for the node group IAM role.The aws-node and kube-proxy pods aren't in Running stateTo check whether the aws-node and kube-proxy pods are in Running state, run the following command:kubectl get pods -n kube-systemIf the aws-node pod is in Failing state, then check the connection between the worker node and the Amazon EC2 endpoint:ec2.region.amazonaws.comNote: Replace region with your AWS Region.Check that the AWS managed policies AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly are attached to the node group's IAM role.If the nodes are in a private subnet, then configure Amazon ECR VPC endpoints to allow image pulls from Amazon Elastic Container Registry (Amazon ECR).If you use IRSA for your Amazon VPC CNI, then attach the AmazonEKS_CNI_Policy AWS managed policy to the IAM role that the aws-node pods use. If you don't use IRSA, then attach the policy to the node group's IAM role.The EC2 worker node user data wasn't successfully runTo check whether any errors occurred when the user data was run, review the cloud-init logs at /var/log/cloud-init.log and /var/log/cloud-init-output.log.For more information, run the EKS Logs Collector script on the worker nodes.Create the node group in private subnetsIf you receive the following error when creating a node group, create the node group in private subnets:No export named eksctl--cluster::SubnetsPublic found. Rollback requested by userIf you created the Amazon EKS cluster with PrivateOnly networking, then AWS CloudFormation can't create public subnets. This means that export values won't exist for public subnets. If export values don't exist for the cluster, then node group creation fails.To resolve this issue, you can include the --node-private-networking flag when using the eksctl inline command. You can also use the privateNetworking: true specification within the node group configuration to request node group creation in private subnets.Update your eksctl version or specify the correct AWS RegionIf you receive the following error, check your AWS Region:no eksctl-managed CloudFormation stacks found for "cluster-name"If you use an eksctl version that's earlier than 0.40.0, then you can only view or manage Amazon EKS resources that you created with eksctl. To manage resources that weren't created with eksctl, update eksctl to version 0.40.0 or later. To learn about the commands that you can run for clusters that weren't created with eksctl, see Non eksctl-created clusters (from the eksctl website).Also, eksctl-managed CloudFormation stacks aren't found if you specify an incorrect AWS Region. To resolve this issue, make sure that you specify the correct Region where your Amazon EKS resources are located.Check if your instance type is supportedIf you used an unsupported instance type to create a cluster or node, then you receive the following error:You must use a valid fully-formed launch template. The requested configuration is currently not supported. 
Please check the documentation for supported configurations'To check if your instance type or other configurations are supported in a specific AWS Region, run the following command:aws ec2 describe-instance-type-offerings --region Region --query 'InstanceTypeOfferings[*].{InstanceType:InstanceType}'Note: Replace Region with your AWS Region.Follow"
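If you prefer the AWS SDK to the AWS CLI command above, the following Python (boto3) sketch performs the same instance type offering check; the instance type and Region are example values, not taken from the article.

import boto3

def instance_type_offered(instance_type, region):
    # Returns True if the instance type is offered in the given Region.
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instance_type_offerings(
        Filters=[{"Name": "instance-type", "Values": [instance_type]}]
    )
    return len(response["InstanceTypeOfferings"]) > 0

print(instance_type_offered("m5.large", "us-east-1"))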
https://repost.aws/knowledge-center/eks-troubleshoot-eksctl-cluster-node
How do I troubleshoot an Amazon EC2 instance that stops or terminates when I try to start it?
"When I try to start my Amazon Elastic Compute Cloud (Amazon EC2) instance, it terminates or doesn't start."
"When I try to start my Amazon Elastic Compute Cloud (Amazon EC2) instance, it terminates or doesn't start.Short descriptionThe following reasons are the most common causes of an Amazon EC2 instance InternalError message:Your Amazon Elastic Block Store (Amazon EBS) volume isn't attached to the instance correctly.An EBS volume that's attached to the instance is in an error state.An encrypted EBS volume is attached to the instance.If your instance doesn't start and no error code appears, then run the describe-instances command in the AWS Command Line Interface (AWS CLI). Then, specify the instance ID. Check the StateReason message that the command returns in the JSON response.Note: Enter all commands in the AWS CLI. If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.ResolutionEBS volumes aren't attached to the instance correctlyYou must attach the EBS root volume to the instance as /dev/sda1 or /dev/xvda, depending on which one is defined in the API. You can't have a second EBS volume with a duplicate or conflicting device name. Otherwise, you can't stop or start the instance. Block device name conflicts affect only Xen-based instance types (c4, m4, t2, and so on). Block device name conflicts don't affect Nitro-based instances (c5, m5, t3, and so on).1.    Run the describe-instances API to verify the StateReason error message and error code:$ aws ec2 describe-instances --instance-id i-xxxxxxxxxxxxxxx --region us-east-1 --query "Reservations[].Instances[].{StateReason:StateReason}" --output jsonNote: Replace us-east-1 with your AWS Region. Replace i-xxxxxxxxxxxxxxx with your instance ID.If there's a device name conflict, then you see an output that's similar to the following message:[ [{ "StateReason": { "Code": "Server.InternalError", "Message": "Server.InternalError: Internal error on launch" } }]]2.    Open the Amazon EC2 console, and then select the instance that you can't start.3.    On the Description tab, verify the device name listed in Block devices. The Block devices field displays all device names of the attached volumes.4.    Verify that the root device is correctly attached and that there isn't a device listed with the same name or with a conflicting name.5.    If there's a device with a duplicate or conflicting device name, then detach the conflicting volume and rename it. Then, reattach the volume with the updated device name.An attached EBS volume is in an error state1.    Run the describe-instances API to verify the StateReason error message and error code:$ aws ec2 describe-instances --instance-id i-xxxxxxxxxxxxxxx --region us-east-1 --query "Reservations[].Instances[].{StateReason:StateReason}" --output jsonNote: Replace us-east-1 with your AWS Region. Replace i-xxxxxxxxxxxxxxx with your instance ID.If there's an attached EBS volume that's in an error state, then you see an output that's similar to the following message:[ [{ "StateReason": { "Code": "Server.InternalError", "Message": "Server.InternalError: Internal error on launch" } }]]2.    Open the Amazon EC2 console, choose Volumes, and then verify if the status of the volume is error. Your options vary depending on whether the volume in an error state is a root volume or a secondary volume.If the volume that's in an error state is a secondary volume, then detach the volume. 
You can now start the instance.If the volume that's in an error state is a root volume and you have a snapshot of the volume, then complete the following steps:Detach the volume.Create a new volume from the snapshot.Attach the new volume to the instance using the device name of the original volume. Start the instance.Note: If you don’t have an existing snapshot of the root volume that’s in an error state, then you can’t restart the instance. You must launch a new instance, install the relevant applications, and then configure it to replace the old instance.Attached volumes are encrypted and there are incorrect AWS Identity and Access Management (IAM) permissions or policies1.    Run the describe-instances API to verify the StateReason error message and error code:$ aws ec2 describe-instances --instance-id i-xxxxxxxxxxxxxxx --region us-east-1 --query "Reservations[].Instances[].{StateReason:StateReason}" --output jsonNote: Replace us-east-1 with your AWS Region. Replace i-xxxxxxxxxxxxxxx with your instance ID.If there's an encrypted volume that's attached to the instance and there are permissions or policy issues, then you receive a client error. You see an output that's similar to the following message:[ [{ "StateReason": { "Code": "Client.InternalError", "Message": "Client.InternalError: Client error on launch" } }]]2.    Verify that the user who's trying to start the instance has the correct IAM permissions. If you launched the instance indirectly through another service, like EC2 Auto Scaling, then also verify the following configurations:The AWS Key Management Service (AWS KMS) key that's used to encrypt the volume is activated.The key has the correct key policies.Note: To verify if a volume is encrypted, open the Amazon EC2 console, and then select Volumes. Encrypted volumes have Encrypted listed in the Encryption column.Related informationWhen I start my instance with encrypted volumes attached, the instance immediately stops with the error "client error on launch"Why can't I start or launch my EC2 instance?Key policies in AWS KMSTroubleshooting instance launch issues - Instance terminates immediatelyFollow"
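As a supplement to the describe-instances CLI commands above, this boto3 sketch retrieves the same StateReason field; the instance ID and Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example Region
response = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder instance ID
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # StateReason is present only when the instance is stopped or terminated.
        print(instance["InstanceId"], instance.get("StateReason"))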
https://repost.aws/knowledge-center/ec2-client-internal-error
How can I use JITP with AWS IoT Core?
I want to set up a just-in-time provisioning (JITP) environment that has a custom root certificate authority (CA) registered with AWS IoT Core. How do I set up JITP with AWS IoT Core?
"I want to set up a just-in-time provisioning (JITP) environment that has a custom root certificate authority (CA) registered with AWS IoT Core. How do I set up JITP with AWS IoT Core?Short descriptionTo set up a JITP environment with AWS IoT Core, first register your CA with AWS IoT Core. Then, attach a provisioning template to your CA.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Create a self-signed root CA and verification certificate1.    If you haven't already done so, install OpenSSL.2.    Create a device root CA private key by running the following OpenSSL command:$ openssl genrsa -out deviceRootCA.key 20483.    Using the VIM text editor, create a custom OpenSSL.conf file. To create and edit a custom OpenSSL.conf file, do the following:Create a custom OpenSSL.conf file by running the following VIM command:$ vi deviceRootCA_openssl.confPress i on the keyboard to edit the .conf file. Then, copy and paste the following into the file:[ req ]distinguished_name = req_distinguished_nameextensions = v3_careq_extensions = v3_ca[ v3_ca ]basicConstraints = CA:TRUE[ req_distinguished_name ]countryName = Country Name (2 letter code)countryName_default = INcountryName_min = 2countryName_max = 2organizationName = Organization Name (eg, company)organizationName_default = AMZPress esc on your keyboard, followed by :wq! to save the .conf file. Then, press Enter to exit the file.Note: To confirm that the OpenSSL.conf file was created, you can run the following Linux command:$ cat deviceRootCA_openssl.conf4.    Create a device root CA certificate signing request (CSR) by running the following OpenSSL command:$ openssl req -new -sha256 -key deviceRootCA.key -nodes -out deviceRootCA.csr -config deviceRootCA_openssl.conf5.    Create a device root CA certificate by running the following OpenSSL command:$ openssl x509 -req -days 3650 -extfile deviceRootCA_openssl.conf -extensions v3_ca -in deviceRootCA.csr -signkey deviceRootCA.key -out deviceRootCA.pem6.    Retrieve the registration code for the AWS Region that you want to use JITP in by running the following AWS CLI command:Important: Make sure that you replace us-east-2 with the Region that you want to use JITP in.$ aws iot get-registration-code --region us-east-27.    Create a verification key by running the following OpenSSL command:$ openssl genrsa -out verificationCert.key 20488.    Create a verification certificate CSR by running the following OpenSSL command:$ openssl req -new -key verificationCert.key -out verificationCert.csrThen, enter the Registration Code in the Common Name field. For example: Common Name (server FQDN or YOUR name) []: xxxxxxxx8a33da. Leave the other fields blank.9.    Create the verification certificate by running the following OpenSSL command:$ openssl x509 -req -in verificationCert.csr -CA deviceRootCA.pem -CAkey deviceRootCA.key -CAcreateserial -out verificationCert.crt -days 500 -sha256Important: The registration code of your root CA’s Region is required for the verification certificate to be certified by AWS IoT Core.For more information, see Just-in-time provisioning.Create a JITP template1.    Create an AWS Identity and Access Management (IAM) role for your AWS IoT Core service and name it JITPRole. For instructions, see Create a logging role (steps one and two).Important: You must include the IAM role’s Amazon Resource Name (ARN) in the following JITP template.2.    
Using the VIM text editor, create a JITP template JSON file by doing the following:Create a JITP template JSON file by running the following VIM command:$ vi jitp_template.jsonImportant: Make sure that you save the template with the file name jitp_template.json.Press i on the keyboard to edit the JITP template. Then, copy and paste the following JITP template into the file:{ "templateBody":"{ \"Parameters\" : { \"AWS::IoT::Certificate::CommonName\" : { \"Type\" : \"String\" },\"AWS::IoT::Certificate::Country\" : { \"Type\" : \"String\" }, \"AWS::IoT::Certificate::Id\" : { \"Type\" : \"String\" }}, \"Resources\" : { \"thing\" : { \"Type\" : \"AWS::IoT::Thing\", \"Properties\" : { \"ThingName\" : {\"Ref\" : \"AWS::IoT::Certificate::CommonName\"}, \"AttributePayload\" : { \"version\" : \"v1\", \"country\" : {\"Ref\" : \"AWS::IoT::Certificate::Country\"}} } }, \"certificate\" : { \"Type\" : \"AWS::IoT::Certificate\", \"Properties\" : { \"CertificateId\": {\"Ref\" : \"AWS::IoT::Certificate::Id\"}, \"Status\" : \"ACTIVE\" } }, \"policy\" : {\"Type\" : \"AWS::IoT::Policy\", \"Properties\" : { \"PolicyDocument\" : \"{ \\\"Version\\\": \\\"2012-10-17\\\", \\\"Statement\\\": [ { \\\"Effect\\\": \\\"Allow\\\", \\\"Action\\\": [ \\\"iot:Connect\\\" ], \\\"Resource\\\": [ \\\"arn:aws:iot:us-east-2:<ACCOUNT_ID>:client\\\/${iot:Connection.Thing.ThingName}\\\" ] }, { \\\"Effect\\\": \\\"Allow\\\", \\\"Action\\\": [ \\\"iot:Publish\\\", \\\"iot:Receive\\\" ], \\\"Resource\\\": [ \\\"arn:aws:iot:us-east-2:<ACCOUNT_ID>:topic\\\/${iot:Connection.Thing.ThingName}\\\/*\\\" ] }, { \\\"Effect\\\": \\\"Allow\\\", \\\"Action\\\": [ \\\"iot:Subscribe\\\" ], \\\"Resource\\\": [ \\\"arn:aws:iot:us-east-2:<ACCOUNT_ID>:topicfilter\\\/${iot:Connection.Thing.ThingName}\\\/*\\\" ] } ] }\" } } } }", "roleArn":"arn:aws:iam::<ACCOUNT_ID>:role/JITPRole"}Important: Replace the roleArn value with the IAM Role ARN for your AWS IoT Core service. Replace the <ACCOUNT_ID> value with your AWS account ID. Replace us-east-2 with the AWS Region that you're using. Press esc on your keyboard, followed by :wq! to save the JITP template file. Choose Enter to exit the file.Note: The following IAM policies are included in the example JITP template:AWSIoTLoggingAWSIoTRuleActionsAWSIoTThingsRegistrationYou must be signed in to your AWS account to view the policy links. For more information, see Provisioning templates.Register your self-signed root CA certificate with AWS IoT CoreRegister the device root CA as a CA certificate in AWS IoT Core by running the following register-ca-certificate AWS CLI command:Important: Make sure that you replace us-east-2 with the Region that you want to use JITP in.$ aws iot register-ca-certificate --ca-certificate file://deviceRootCA.pem --verification-cert file://verificationCert.crt --set-as-active --allow-auto-registration --registration-config file://jitp_template.json --region us-east-2Note: Adding the parameter --registration-config attaches the JITP template that you created to the CA certificate. The command response returns the CA certificate's ARN.For more information, see Register your CA certificate.Create device certificates and perform JITPImportant: Make sure that you use the same directory where you created the original device root CA files.1.    Download the RootCA1 and save it with the file name awsRootCA.pem.Note: The RootCA1 is used for server-side authentication of publish requests to AWS IoT Core. For more information, see CA certificates for server authentication.2.    
Create a device private key by running the following OpenSSL command:$ openssl genrsa -out deviceCert.key 20483.    Create a device CSR by running the following OpenSSL command:$ openssl req -new -key deviceCert.key -out deviceCert.csrImportant: The example JITP template requires the ThingName value to equal the certificate’s CommonName value. The template also requires the CountryName value to equal the Country value in the CA certificate. For example:Country Name (two-letter code) []:INCommon Name (eg. server FQDN or YOUR name) []: DemoThingThe JITP template provided in this article also uses the AWS::IoT::Certificate::Country certificate parameter, which requires you to add a value. Other potential certificate parameters include: AWS::IoT::Certificate::Country AWS::IoT::Certificate::Organization AWS::IoT::Certificate::OrganizationalUnit AWS::IoT::Certificate::DistinguishedNameQualifier AWS::IoT::Certificate::StateName AWS::IoT::Certificate::CommonName AWS::IoT::Certificate::SerialNumber AWS::IoT::Certificate::Id4.    Create a device certificate by running the following OpenSSL command:$ openssl x509 -req -in deviceCert.csr -CA deviceRootCA.pem -CAkey deviceRootCA.key -CAcreateserial -out deviceCert.crt -days 365 -sha2565.    Combine the root CA certificate and device certificate by running the following command:$ cat deviceCert.crt deviceRootCA.pem > deviceCertAndCACert.crt6.    Use Eclipse Mosquitto to make a test publish call to AWS IoT Core and initiate the JITP process.Note: You can also use the AWS Device SDK to make Publish calls to AWS IoT Core.Example Eclipse Mosquitto test publish call commandImportant: Replace a27icbrpsxxx-ats.iot.us-east-2.amazonaws.com with your own endpoint before running the command. To confirm your own endpoint, open the AWS IoT Core console. Then, choose Settings. Your endpoint is listed in the Custom endpoint pane.$ mosquitto_pub --cafile awsRootCA.pem --cert deviceCertAndCACert.crt --key deviceCert.key -h a27icbrpsxxx-ats.iot.us-east-2.amazonaws.com -p 8883 -q 1 -t foo/bar -i anyclientID --tls-version tlsv1.2 -m "Hello" -dExample response from the Eclipse Mosquitto test publish call commandClient anyclientID sending CONNECT Error: The connection was lost. // The error is expected for the first connect callNote: The test publish call fails the first time. When AWS IoT Core receives the test publish call, it creates a Certificate, Policy, and Thing. It also attaches the Policy to the Certificate, and then attaches the Certificate to the Thing. The next time you perform JITP, the IoT policy that was first created is the one that is used. A new IoT policy isn't created.7.    Confirm the required resources were created by doing the following: Open the AWS IoT Core console. Choose Manage. Choose Things. Choose DemoThing.Verify that the certificate was created and it’s in ACTIVE state.Then, choose Policies and verify that the IoT policy is attached.Use device certificates in general operationNote: The value of Client ID that is added in the publish command must be the same as the ThingName that was created during the JITP process. The Topic Name added to the publish command must also follow the format ThingName/*. In the next publish call, you can use the deviceCert.crt instead of deviceCertAndCACert.crt.1.    Open the AWS IoT Core console.2.    Choose Test.3.    For Subscription Topic, enter DemoThing/test.4.    
Run the following Eclipse Mosquitto publish call command to AWS IoT Core:Important: Replace a27icbrpsxxx-ats.iot.us-east-2.amazonaws.com with your own endpoint before running the command. To confirm your own endpoint, open the AWS IoT Core console. Then, choose Settings. Your endpoint appears in the Custom endpoint pane. Also, make sure that you use the custom device certificates that were generated by your custom root CA.$ mosquitto_pub --cafile awsRootCA.pem --cert deviceCert.crt --key deviceCert.key -h a27icbrpsxxx-ats.iot.us-east-2.amazonaws.com -p 8883 -q 1 -t DemoThing/test -i DemoThing --tls-version tlsv1.2 -m "Hello" -dAfter running the command, you will see that the message is received on the AWS IoT Core Test console.Create additional device certificatesTo create more device certificates and register them to AWS IoT Core, repeat the steps outlined in the Create device certificates and perform JITP section.Other JITP templatesTo fetch the ThingName value from the CommonName field of the certificate and to provide admin permissions in the policy, use the following JITP template:{ "templateBody":"{ \"Parameters\" : { \"AWS::IoT::Certificate::CommonName\" : { \"Type\" : \"String\" },\"AWS::IoT::Certificate::Country\" : { \"Type\" : \"String\" }, \"AWS::IoT::Certificate::Id\" : { \"Type\" : \"String\" }}, \"Resources\" : { \"thing\" : { \"Type\" : \"AWS::IoT::Thing\", \"Properties\" : { \"ThingName\" : {\"Ref\" : \"AWS::IoT::Certificate::CommonName\"}, \"AttributePayload\" : { \"version\" : \"v1\", \"country\" : {\"Ref\" : \"AWS::IoT::Certificate::Country\"}} } }, \"certificate\" : { \"Type\" : \"AWS::IoT::Certificate\", \"Properties\" : { \"CertificateId\": {\"Ref\" : \"AWS::IoT::Certificate::Id\"}, \"Status\" : \"ACTIVE\" } }, \"policy\" : {\"Type\" : \"AWS::IoT::Policy\", \"Properties\" : { \"PolicyDocument\" : \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"iot:*\\\",\\\"Resource\\\":\\\"*\\\"}]}\" } } } }", "roleArn":"arn:aws:iam::<ACCOUNT_ID>:role/JITPRole"}To fetch the ThingName value from the CommonName field of the certificate and provide a predefined policy name, use the following JITP template:{ "templateBody":"{ \"Parameters\" : { \"AWS::IoT::Certificate::CommonName\" : { \"Type\" : \"String\" },\"AWS::IoT::Certificate::Country\" : { \"Type\" : \"String\" }, \"AWS::IoT::Certificate::Id\" : { \"Type\" : \"String\" }}, \"Resources\" : { \"thing\" : { \"Type\" : \"AWS::IoT::Thing\", \"Properties\" : { \"ThingName\" : {\"Ref\" : \"AWS::IoT::Certificate::CommonName\"}, \"AttributePayload\" : { \"version\" : \"v1\", \"country\" : {\"Ref\" : \"AWS::IoT::Certificate::Country\"}} } }, \"certificate\" : { \"Type\" : \"AWS::IoT::Certificate\", \"Properties\" : { \"CertificateId\": {\"Ref\" : \"AWS::IoT::Certificate::Id\"}, \"Status\" : \"ACTIVE\" } }, \"policy\" : {\"Type\" : \"AWS::IoT::Policy\", \"Properties\" : { \"PolicyName\" : \"Policy_Name\"} } } }", "roleArn":"arn:aws:iam::<ACCOUNT_ID>:role/JITPRole"}Important: Replace Policy_Name with the policy name of your choice.Follow"
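The registration code retrieval and CA registration steps above can also be performed with the AWS SDK. The following boto3 sketch assumes that the deviceRootCA.pem, verificationCert.crt, and jitp_template.json files created earlier are in the current directory; the Region is the example Region used in the article.

import json
import boto3

iot = boto3.client("iot", region_name="us-east-2")

# Equivalent of "aws iot get-registration-code".
print(iot.get_registration_code()["registrationCode"])

# Equivalent of "aws iot register-ca-certificate" with the JITP template attached.
with open("deviceRootCA.pem") as ca, open("verificationCert.crt") as verification, open("jitp_template.json") as template:
    response = iot.register_ca_certificate(
        caCertificate=ca.read(),
        verificationCertificate=verification.read(),
        setAsActive=True,
        allowAutoRegistration=True,
        registrationConfig=json.loads(template.read()),  # contains templateBody and roleArn
    )
print(response["certificateArn"])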
https://repost.aws/knowledge-center/aws-iot-core-jitp-setup
How do I change the email address for my registered domain in Route 53 if I lost access to my previous email account?
I can't access my previous email account. I want to update the email address for my registered domain in Amazon Route 53.
"I can't access my previous email account. I want to update the email address for my registered domain in Amazon Route 53.ResolutionOpen the Route 53 console.Choose Registered Domains.Select the name of the domain that you want to update the email address for.Choose Edit Contacts.Change the email address for only the registrant contact. Don't change the values for any of the domain contacts. You can change other values later in the process.Choose Save.If it's required for the top-level domain (TLD), then Amazon Route 53 sends a verification email to the new address. You must choose the link in the email to verify that the new email address is valid.If you don't verify the new email address, then Route 53 suspends the domain as required by ICANN. The domain status changes from OK to clientHold. For more information, see EPP Status Codes - clientHold on the ICANN website.Note: The email comes from one of the following email addresses based off of the TLD:.fr: nic@nic.fr.com.au and .net.au: noreply@emailverification.infoAll others: noreply@registrar.amazon.com or noreply@domainnameverification.net(Optional) To change other values for the registrant, administrative contacts, or technical contacts for the domain, return to step 1 and repeat the procedure.Related informationUpdating contact information for a domainAmazon Route53 domain registration UpdateDomainContact (through API)Follow"
https://repost.aws/knowledge-center/route-53-change-email-for-domain
My Auto Scaling API calls are getting throttled. What can I do to avoid this?
"My application receives "Rate Exceeded" errors when calling to Amazon EC2 Auto Scaling, AWS Auto Scaling, or AWS Application Auto Scaling. What can I do to avoid this error?"
"My application receives "Rate Exceeded" errors when calling to Amazon EC2 Auto Scaling, AWS Auto Scaling, or AWS Application Auto Scaling. What can I do to avoid this error?Short descriptionAll API calls can't exceed the maximum allowed API request rate per account and per Region. This includes API calls from the AWS Command Line Interface (AWS CLI) and the AWS Management Console. If API requests exceed the maximum rate, then you receive a "Rate Exceeded" error, and further API calls are throttled.Amazon EC2 Auto Scaling, AWS Auto Scaling, and AWS Application Auto Scaling each have their own API throttle buckets. This means that all Amazon EC2 Auto Scaling API calls have a single, shared API limit. Amazon EC2 Auto Scaling API calls don't affect the limit for AWS Application Auto Scaling APIs.To avoid the "Rate Exceeded" error and throttling, verify that your application is making only necessary calls.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.ResolutionTo prevent or mitigate "Rate Exceeded" errors and throttling, try these solutions:Validate "describe" callsExcessive "describe" calls contribute to the total API requests measured against the allowed request rate. Verify your application to be sure that all "describe" calls are necessary, and consider solutions other than "describe" calls where possible. Consider using push notifications from Amazon EventBridge that are sent when instances change state, such as when they start a lifecycle hook.Check calls from third-party applicationsThird-party applications might make continuous calls to Auto Scaling in AWS. Verify your third-party applications to be sure that they're not making unnecessary calls.Implement error retries and exponential backoffsError retries and exponential backoffs can help limit the rate of API calls. Each AWS SDK implements automatic retry logic and exponential backoff algorithms. For more information, see Error Retries and exponential backoff in AWS.Request a service quota increase in the AWS Support CenterTo get a service quota increase, you must confirm that you validated your API call rate, as well as implemented error retries or exponential backoff methods. In your request, you must also provide the Region and timeframe related to the throttling issues.Avoid bursts of activityAvoid situations that cause bursts of API calls. For example, don't set all instances in an Amazon EC2 Auto Scaling group enable scale in protection when you launch them. Instead, enable this option by default on the group so that all instances have protection enabled by default.Related informationExponential backoff and jitterFollow"
https://repost.aws/knowledge-center/autoscaling-api-calls-throttled
How do I resolve "Execution failed due to configuration error: Illegal character in path" errors when creating an API Gateway API with a proxy resource?
"I'm using an AWS CloudFormation template (or OpenAPI API definition) to create an Amazon API Gateway API with a proxy resource. When I try to create the API, I get the following error: "Execution failed due to configuration error: Illegal character in path." How do I resolve the error?"
"I'm using an AWS CloudFormation template (or OpenAPI API definition) to create an Amazon API Gateway API with a proxy resource. When I try to create the API, I get the following error: "Execution failed due to configuration error: Illegal character in path." How do I resolve the error?Short descriptionIf a URL path parameter mapping for the proxy path parameter ({proxy+}) isn't defined, then API Gateway returns the following error:Execution failed due to configuration error: Illegal character in pathWithout a URL path parameter mapping defined for this parameter in the integration request, API Gateway evaluates the parameter as the literal string "{proxy+}". Because "{" isn't a valid character, API Gateway returns an error when this happens.To resolve the error, define the URL path parameter mapping for the proxy path parameter in the integration request by doing the following:Resolution1.    In the API Gateway console, choose the name of your API.2.    With the method selected in the Resources pane, choose Integration Request in the Method Execution pane.3.    In the Integration Request pane, verify that the Endpoint URL uses the correct proxy path parameter: {proxy}. (The greedy path variable without "+".) For example: http://example.com/{proxy}4.    Expand URL Path Parameters. Then, choose Add path and do the following:For Name, enter proxy. This corresponds to the parameter in the Endpoint URL.For Mapped from, enter method.request.path.proxy.Note: Here, proxy corresponds to the name of the request path as defined in the Method Request pane. This request path is added by creating a proxy resource named {proxy} or, for a greedy path variable, {proxy+}.5.    Choose the check mark icon (Create).Note: If you get an Invalid mapping expression specified error, update your AWS CloudFormation template or OpenAPI definition. Then, repeat the steps in this article.6.    Deploy your API.Related informationSet up a proxy integration with a proxy resourceSet up request and response data mappings using the API Gateway consoleSet up an API integration request using the API Gateway consoleFollow"
https://repost.aws/knowledge-center/api-gateway-proxy-path-character-error
How do I get OIDC or social identity provider–issued tokens after integrating the identity provider with Amazon Cognito?
I want to get the access and ID tokens issued by my identity provider (IdP) that's integrated with Amazon Cognito user pools.
"I want to get the access and ID tokens issued by my identity provider (IdP) that's integrated with Amazon Cognito user pools.Short descriptionIn the OpenID Connect (OIDC) IdP authentication flow, Amazon Cognito exchanges the IdP-issued authorization code with IdP tokens. Amazon Cognito then prepares its own set of tokens and returns them to the end user after successful federation. However, this process doesn't allow the user or application to see the actual IdP side tokens. Some use cases might require the actual IdP-issued tokens within the application for authorization or troubleshooting purposes. To capture and store IdP-issued access and ID tokens when you federate into Amazon Cognito user pools, follow the steps in the Resolution section.Important: The steps in this article assume that you already integrated OIDC IdP or social IdP with Amazon Cognito user pools. If you didn't integrate an IdP with your user pool, then follow the steps for adding a user pool sign-in through a third party.ResolutionCreate a custom attribute in a user poolFollow these steps to create a custom attribute in your user pool:1.    Open the new Amazon Cognito console, and then choose the Sign-up Experience tab in your user pool.2.    Under the Custom Attributes section, select the Add custom attributes button.3.    To create a custom attribute for an access token, enter the following values, and then save the changes.Name: access_tokenType: StringMax: 2,048Mutable: Select this check box4.    To create a custom attribute for an ID token, enter the following values, and then save the changes.Name: id_tokenType: StringMax: 2,048Mutable: Select this check boxConfigure attribute mapping between Amazon Cognito and your IdPFollow these steps to configure attribute mapping to IdP attributes:1.    Open the new Amazon Cognito console, and then choose the Sign-in Experience tab in your user pool.2.    Under the Federated Identity Provider sign-in section, select your IdP from the list.3.    Choose the Edit option near the Identity provider information section. Make sure that the following scopes are present in the Authorized scopes section:Facebook example scopes: public_profile, emailGoogle example scopes: profile email openidLogin with Amazon example scopes: profile postal_codeSign in with Apple example scopes: email nameAll other OIDC providers example scopes: profile email openid4.    Go one step back to the Identity provider page. Choose Edit near the Attribute mapping section.5.    From the User pool attribute column, select the custom attribute that you created in the beginning.6.    From the OpenID Connect attribute column, select access_token or id_token, depending on the type of token to be mapped. Then, save your changes.Example results of configuring attribute mapping:User pool attribute: custom:id_tokenOpenID Connect attribute: id_tokenUser pool attribute: custom:access_tokenOpenID Connect attribute: access_tokenTurn on attribute read and write permissions in your Amazon Cognito app clientWhen a user signs in to the application, Amazon Cognito updates the mapped attributes. For Amazon Cognito to update the mapped user pool attributes, the mapped attributes must be writable in your application's app client settings. For Amazon Cognito to update the user's ID token, the attributes must be readable in your application's app client settings.1.    Open the new Amazon Cognito console, and then choose the App integration tab in your user pool.2.    Select your app client from the list of app clients.3.    
In the Attribute read and write permissions section, choose Edit.4.    On the Edit attribute read and write permissions page, select the read and write check boxes for your custom attributes.5.    Save the changes.Repeat these steps for each app client that uses the custom attribute.For more information, see User pool attributes and go to the Attribute permissions and scopes tab.Sign in using the third-party OIDC provider or social IdPWhen you perform a new IdP authentication through the Amazon Cognito Hosted UI, you can see the IdP tokens in the custom attributes. Choose an appropriate user to see the IdP tokens in their attributes. Decoding the ID token also provides you with the custom attributes that contain IdP tokens.Sample payload section of the ID token issued to the end user:{    "custom:access_token": "ya29.a0AeTM1ic9iv_FqpDQeIN......w1OPKdFEbR_Tea",    "iss": "https://cognito-idp.example_region.amazonaws.com/example_user_pool_id",    "custom:id_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjhjMjdkYjRkMTN............saDMuZ29vZ2xldXNlcmNv"}When creating custom attributes, keep the following points in mind:The maximum length for any custom attribute is 2,048 characters. When an IdP token exceeds 2,048 characters, you receive the following error: "String attributes cannot have a length of more than 2048".You can't remove or modify a custom attribute after its creation.If the custom attribute isn't being updated in subsequent sign-ins, then check the mutability of the custom attribute. This issue is expected after you clear the Mutable check box when creating the attribute. To learn more, see User pool attributes and go to the Custom attributes tab.Note: If you still can't get an IdP token after following the preceding steps, then contact your IdP. Ask whether the IdP supports passing the tokens within attributes to Amazon Cognito. After you confirm, you can reach out to AWS Support for additional troubleshooting.Related informationHow do I set up Auth0 as an OIDC provider in an Amazon Cognito user pool?How do I set up LinkedIn as a social identity provider in an Amazon Cognito user pool?How do I set up Okta as an OpenID Connect identity provider in an Amazon Cognito user pool?How do I set up Google as a federated identity provider in an Amazon Cognito user pool?Follow"
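To inspect the mapped custom attributes programmatically, you can decode the payload of the ID token that Amazon Cognito issues, as in the following sketch. It doesn't verify the token signature, so use it for inspection only; id_token is a placeholder variable that holds the JWT string.

import base64
import json

def decode_jwt_payload(jwt):
    # Decode the payload (second segment) of a JWT without verifying the signature.
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

id_token = "<ID token returned by Amazon Cognito>"  # placeholder
claims = decode_jwt_payload(id_token)
print(claims.get("custom:access_token"), claims.get("custom:id_token"))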
https://repost.aws/knowledge-center/cognito-user-pool-tokens-issued-by-idp
How do I associate multiple ACM SSL or TLS certificates with Application Load Balancer using CloudFormation?
I want to associate multiple AWS Certificate Manager SSL and TLS certificates with Application Load Balancer using AWS CloudFormation.
"I want to associate multiple AWS Certificate Manager SSL and TLS certificates with Application Load Balancer using AWS CloudFormation.Short descriptionTo add a default SSL server for a secure listener, use the Certificates property for the resource AWS::ElasticLoadBalancingV2::Listener. This resource provides one certificate. To add more certificates, use AWS::ElasticLoadBalancingV2::ListenerCertificate. AWS::ElasticLoadBalancingV2::ListenerCertificate includes the Certificates parameter that accepts the list of certificates.ResolutionUse the following CloudFormation template to create an Application Load Balancer listener with one default certificate:HTTPlistener: Type: 'AWS::ElasticLoadBalancingV2::Listener' DependsOn: ApplicationLoadBalancer Properties: DefaultActions: - Type: fixed-response FixedResponseConfig: ContentType: text/plain MessageBody: Success StatusCode: '200' LoadBalancerArn: >- arn:aws:elasticloadbalancing:<Region>:<AccountID>:loadbalancer/app/TestACMELB/1032d48308c9b37f Port: '443' Protocol: HTTPS Certificates: - CertificateArn: >- arn:aws:acm:<Region>:<AccountID>:certificate/cffb8a69-0817-4e04-bfb1-dac7426d6b90Use the following CloudFormation template to add multiple certificates to the Application Load Balancer listener:AdditionalCertificates: Type: 'AWS::ElasticLoadBalancingV2::ListenerCertificate' DependsOn: HTTPlistener Properties: Certificates: - CertificateArn: >- arn:aws:acm:<Region>:<AccountID>:certificate/c71a3c29-e79d-40e6-8834-650fe0d54a3f - CertificateArn: >- arn:aws:acm:<Region>:<AccountID>:certificate/fff1c1ba-3d97-4735-b3d5-9c5269b75db3 ListenerArn: Ref: HTTPlistenerFollow"
https://repost.aws/knowledge-center/cloudformation-ssl-tls-certificates-alb
How do I resolve the 403 "index_create_block_exception" or "cluster_block_exception" error in OpenSearch Service?
"I tried to create an index or write data to my Amazon OpenSearch Service domain, but I received a "index_create_block_exception" or "cluster_block_exception" error."
"I tried to create an index or write data to my Amazon OpenSearch Service domain, but I received a "index_create_block_exception" or "cluster_block_exception" error.ResolutionFollow these troubleshooting steps for the type of ClusterBlockException error message that you received.index_create_block_exception{ "error": { "root_cause": [{ "type": "index_create_block_exception", "reason": "blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];" }], "type": "index_create_block_exception", "reason": "blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];" }, "status": 403}This error occurs because of a lack of storage space. To troubleshoot storage issues, see Lack of available storage space. This error also ocurrs because of a high JVM memory pressure. To troubleshoot high JVM memory pressure issues, see How do I troubleshoot high JVM memory pressure on my Amazon OpenSearch Service cluster?cluster_block_exception (cluster in read-only state){ "error" : { "root_cause" : [ { "type" : "cluster_block_exception", "reason" : "blocked by: [FORBIDDEN/6/cluster read-only (api)];", } ], "type" : "cluster_block_exception", "reason" : "blocked by: [FORBIDDEN/6/cluster read-only (api)];", }, "status" : 403}This error occurs when the read-only block is set to true. If quorum loss occurs and your cluster has more than one node, then OpenSearch restores quorum and places the cluster into a read-only state. You can use GET _cluster/settings to verify if the read-only state is set to true.For more information, see Cluster in read-only state.cluster_block_exception (warm indexes){ "error": { "root_cause": [{ "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" }], "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" }, "status": 403}This error occurs when you try to add, update, or delete individual documents in the warm indices. Warm indices are read-only unless you return them to hot storage. You can query the indices and delete them only if they're present in the UltraWarm storage. To update documents, you must migrate the index from UltraWarm storage to hot storage.For more information, see Returning warm indexes to hot storage.Related informationHow can I scale up or scale out an Amazon OpenSearch Service domain?Follow"
https://repost.aws/knowledge-center/opensearch-403-clusterblockexception
Why is it taking a long time to scale up my ElastiCache for Redis cluster?
Why is it taking a long time to scale up my Amazon ElastiCache for Redis (Cluster Mode Disabled) cluster?
"Why is it taking a long time to scale up my Amazon ElastiCache for Redis (Cluster Mode Disabled) cluster?ResolutionYou can scale your Redis cluster based on the current demand. When you scale up your cluster, a new node type is created. After the new node is created, the data is replicated from the existing node to the new node type. The time taken to complete the replication depends on the following factors:The amount of data in the cluster: If there's a lot of data in the cluster, then it takes more time to replicate the data to the new node.The node type of the cluster: If the existing node type doesn't have enough throughput, there might be an increase in the time it takes to scale the cluster.The ongoing traffic on the Redis cluster: If the existing node type is overwhelmed due to incoming traffic, then the time it takes to replicate the data increases. It's a best practice to scale when the Redis cluster isn't in heavy use so that the scaling process completes faster.Related informationAuto Scaling ElastiCache for Redis clustersFollow"
https://repost.aws/knowledge-center/elasticache-scale-up-cluster-delays
How do I set up extensions for agent-to-agent calling in my Amazon Connect contact center?
I want agents in my Amazon Connect contact center to be able to call one another directly. How can I set up extensions so they can do that?
"I want agents in my Amazon Connect contact center to be able to call one another directly. How can I set up extensions so they can do that?Short descriptionTo allow agents in your Amazon Connect contact center to call one another using direct extensions, do the following:Create an Amazon DynamoDB table that holds agent login names and their extensions.Create an AWS Identity and Access Management (IAM) role that a Lambda function can assume to look up an agent login name in DynamoDB.Create a Lambda function that queries the DynamoDB table with an extension input and returns the corresponding agent login name from the table.Add the Lambda function to your Amazon Connect instance.Create a customer queue flow that checks if the dialed agent is available to receive the call.Create an inbound contact flow that does the following:Invokes the Lambda function to get the agent login name from the DynamoDB table when an agent enters another agent's extension.Transfer the call to the correct agent.Create a quick connect that allows agents to call another agent in the contact center using the Contact Control Panel (CCP).Important: Make sure that you follow these steps in the same AWS Region that your Amazon Connect instance is in.ResolutionCreate a DynamoDB table that holds agent login names and their extensions1.    Open the DynamoDB console.2.    In the Create DynamoDB table screen, do the following:For Table name, enter AgenttoAgent.For Primary key, in the Partition key panel, enter Extension.For Data type, choose String.3.    Choose Create.4.    Assign a unique extension to each agent login name. Then, add the extensions and agent login names to the table. The extensions should have the Attribute name key as Extension. The agent login name should have the Attribute name key as AgentLoginName while creating an item in the DynamoDB table.Note: For more information on how to edit DynamoDB tables, see Write data to a table using the console or AWS CLI.Create an IAM role that a Lambda function can assume to look up an agent login name in DynamoDBUse the following JSON policy to create an IAM role:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "dynamodb:BatchGetItem", "dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan", "dynamodb:BatchWriteItem", "dynamodb:PutItem", "dynamodb:UpdateItem" ], "Resource": "Replace with ARN of DynamoDB table you created" }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" } ]}Create a Lambda function that queries the DynamoDB table with an extension input and returns the corresponding agent login name from the tableUse the following Python code to create a Lambda function and attach the role created previously:import jsonimport boto3from boto3.dynamodb.conditions import Keydef get_agent_id(Extension, dynamodb=None): if not dynamodb: dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('AgenttoAgent') response = table.query( KeyConditionExpression=Key('Extension').eq(str(Extension)) ) return response['Items']def lambda_handler(event, context): Extension = event['Details']['Parameters']['Extension'] AgentLoginName = get_agent_id(Extension) for agent in AgentLoginName: print(agent['Extension'], ":", agent['AgentLoginName']) print(AgentLoginName) return agentAdd the Lambda function to your Amazon Connect instance1.    Open the Amazon Connect console.2.    In the Instance Alias column, choose the name of your Amazon Connect instance.3.    
In the left navigation pane, choose Contact flows.4.    In the AWS Lambda section, choose the Function dropdown list. Then, choose the Lambda function that you created in the previous section.Note: The Function dropdown list names only the functions that are in the same Region as your Amazon Connect instance. If no functions are listed, choose Create a new Lambda function to create a new function in the correct Region.5.    Choose Add Lambda Function. Then, confirm that the ARN of the function is added under Lambda Functions.Now, you can refer to the Lambda function you created in your Amazon Connect contact flows. For more information about integrating Lambda with Amazon Connect, see Invoke AWS Lambda functions.Create a customer queue flow (AgentQueueFlow) that will check an agent's availability to receive a callCreate a new customer queue flow. After you choose Create customer queue flow, the contact flow designer opens.In the contact flow designer, do the following:Add a Check staffing blockTo check if an agent is available to receive a call, use a Check staffing block.1.    Choose Branch.2.    Drag and drop a Check staffing block onto the canvas to the right of the Entry point block.3.    For Status to check, choose Available.4.    Choose Save.Note: You will see three outputs from the Check staffing block: True, False, and Error.Add a Loop prompts block to the True output of the Check staffing blockTo play prompts while a caller is in the queue, use a Loop prompts block.1.    Choose Interact.2.    Drag and drop a Loop prompts block onto the canvas to the right of the Check staffing block. Then, connect the Loop prompts block to the True output of the Check staffing block.3.    Choose the block title (Loop prompts). The block's settings menu opens.4.    For Prompts, choose Text-to-speech. Then, enter the following prompt: "Now we will be transferring the call to $.External.AgentLoginName"Note: You can change the prompt to match your specific use case.5.    Choose Add another prompt to the loop.6.    Choose Audio recording. Then, choose the music you'd like callers to hear while they wait for the dialed agent to accept the call.7.    For Interrupt, choose Interrupt every 1 minutes, or whatever timeframe you'd like the call to timeout in.8.    Choose Save.Add a Loop prompts block to the False and Error outputs of the Check staffing block1.    Choose Interact.2.    Drag and drop a Loop prompts block onto the canvas to the right of the Check staffing block. Then, connect the Loop prompts block to both the False and Error outputs of the Check staffing block.3.    Choose the block title (Loop prompts). The block's settings menu opens.4.    For prompts, choose Text-to-speech. Then, enter the following prompt: "The agent you are trying to reach is either not available or is busy on another call. Please try again later."Note: You can change the prompt to match your specific use case.5.    For Interrupt, choose Interrupt every 4 seconds, or the time it takes to complete whatever prompt you entered in step 4. After this time period, a timeout branch starts.6.    Choose Save.Add a Disconnect block1.    Choose Terminate/Transfer.2.    Drag and drop the Disconnect block onto the canvas to the right of the Loop prompts blocks.3.    Connect all of the Timeout and Error branches to the Disconnect block.4.    Choose Save to save a draft of the flow. 
Then, choose Publish to activate the contact flow.Create an inbound contact flow (AgentToAgentCall) that triggers the Lambda function when an agent calls another agent's extension.Create a new inbound contact flow. After you choose Create contact flow, the contact flow designer opens.In the contact flow designer, do the following:Note: The following is an example of a basic inbound contact flow. You might need to add or edit blocks for your specific use case. For example, error branches can be connected to the Play prompt blocks to play a custom message.Add a Store customer input blockTo store extension inputs from agents making agent-to-agent calls, use a Store customer input block.1.    Choose Interact.2.    Drag and drop a Store customer input block onto the canvas to the right of the Entry point block.3.    Choose the block title (Store customer input). The block's settings menu opens.4.    Choose Text-to-speech. Then, enter a message that asks callers to enter an agent's extension. For example: "Please enter the agent's extension number to continue."5.    In the Customer input section, choose Custom. Then, enter the maximum number of digits that you'd like to use for each agent's extension.6.    Choose Save.Add an Invoke AWS Lambda function blockTo invoke the Lambda function and have it return the agent login name that corresponds to the extension input, use the Invoke AWS Lambda function block.1.    Choose Integrate.2.    Drag and drop an Invoke AWS Lambda function block onto the canvas to the right of the Store customer input block.3.    Choose the block title (Invoke AWS Lambda function). The block's settings menu opens.4.    Choose the Lambda function that you created earlier.5.    In the Function input parameters section, choose Add a parameter and choose Use attribute. Then, do the following:For Destination key, enter the attribute name Extension.For Type, choose System.For Attribute, choose Stored customer input.6.    Choose Save.Add a Set working queue blockTo set the working queue as an agent's queue, use a Set working queue block.1.    Choose Set.2.    Drag and drop a Set working queue block onto the canvas to the right of the Invoke AWS Lambda function block.3.    Choose the block title (Set working queue). The block's settings menu opens.4.    For Outputs, choose By agent. Then, choose Use attribute and do the following:For Type, choose External.For Attribute, enter AgentLoginName (the attribute returned by the Lambda function).5.    Choose Save.Add a Set customer queue flow blockTo specify the flow that you want to invoke when an agent enters another agent's extension (AgentToAgentCall), use a Set customer queue flow block.1.    Choose Set.2.    Drag and drop a Set customer queue flow block onto the canvas to the right of the Set working queue block.3.    Choose Select a flow.4.    Choose AgentQueueFlow.Note: You created the AgentQueueFlow customer queue flow in the previous section.5.    Choose Save.Add a Transfer to queue blockTo end the contact flow (AgentToAgentCall) and place callers in the customer queue flow (AgentQueueFlow), use a Transfer to queue block.1.    Choose Terminate/Transfer.2.    Drag and drop a Transfer to queue block onto the canvas to the right of the Set customer queue flow block.Note: No settings need to be configured for the Transfer to queue block for this use case.Add a Disconnect blockTo disconnect a caller from the contact flow (AgentToAgentCall) after they've been transferred to the customer queue flow (AgentQueueFlow), add a Disconnect block.1. 
   Choose Terminate/Transfer.2.    Drag and drop the Disconnect block onto the canvas to the right of the Transfer to queue block.3.    Choose Save to save a draft of the contact flow. Then, choose Publish to activate the contact flow.Important: Make sure that you assign a phone number for agents to use to make internal calls to the AgentToAgentCall contact flow.Create a quick connect that allows agents to use the internal calling feature1.    Create a quick connect using the following configurations:Name the quick connect InternalCalling.For Type, choose External.For Destination, enter the number you assigned to the AgentToAgentCall contact flow.2.    Add the InternalCalling quick connect to the queues assigned to the agents that you want to have access to the internal calling feature.Note: The cost of each call depends on the duration of the call. For more information on pricing, see Amazon Connect pricing.Follow"
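To check the Lambda function before wiring it into the flow, you can invoke it locally with an event shaped like the one the Invoke AWS Lambda function block sends (the Extension parameter arrives under Details.Parameters). This sketch assumes the article's function code is saved as lambda_function.py; the extension value is an example.

from lambda_function import lambda_handler  # hypothetical module name

test_event = {"Details": {"Parameters": {"Extension": "1001"}}}
print(lambda_handler(test_event, None))  # should print the matching AgentLoginName item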
https://repost.aws/knowledge-center/connect-agent-to-agent-extensions
How do I troubleshoot audio and microphone issues in Amazon WorkSpaces for Windows?
I'm unable to hear the audio or use a microphone in PCoIP-based Amazon WorkSpaces for Windows.
"I'm unable to hear the audio or use a microphone in PCoIP-based Amazon WorkSpaces for Windows.ResolutionAudio issuesTo troubleshoot audio issues in your WorkSpaces application, do the following:Perform initial checksBe sure that you're running the latest WorkSpaces client application.Reboot your Workspace to check if the issue is resolved.Be sure that the round-trip time for your WorkSpace client is less than 100 ms. If the RTT is between 100 ms and 250 ms, then the user can access the WorkSpace, but the performance is degraded. To check the RTT to the various AWS Regions from your location, use the Amazon WorkSpaces Connection Health Check.Install Teradici Virtual Audio Driver on your WorkSpaceIf you still have audio issues after the initial checks, check if the Teradici Virtual Audio Driver is installed on the WorkSpace. To verify, open Device Manager on the WorkSpace where you have issues in your Windows PC. Then, choose (double-click) Sound, video and game controllers. The driver must be listed in this section. If the driver isn't listed in this section, then use these steps to install the driver:Open Device Manager on the WorkSpace.Choose Action, and then choose Add legacy hardware.In the Welcome to Add Hardware Wizard window, choose Next.Select Install the hardware that I manually select from a list (Advanced), and then choose Next.Choose Show All Devices, and then choose Next.Choose Have Disk.Under Copy manufacturer's files from, choose Browse, and select the following file: C:\Program Files\Teradici\PCoIP Agent\drivers\TeraAudio.Choose OK.Choose Teradici Virtual Audio Driver, and then choose Next.Choose Finish.Reboot the WorkSpace. The driver must be listed under the Sound, video and game controllers section in Device Manager.Configure Teradici Virtual Audio Driver as the active playback deviceVerify that the Teradici Virtual Audio Driver is configured as the active Playback device by following these steps:Connect to the WorkSpace.In the system tray, right-click on the speaker icon, and then choose Sounds.Choose the Playback tab.Check if Teradici Virtual Audio Driver has a green checkmark.If the driver doesn't have a green checkmark, then follow these steps:Right-click anywhere on the Playback tab, and then select Show disabled drivers.Right-click on Teradici Virtual Audio Driver, and then select Enable.You can now listen to the audio inside the WorkSpace.Microphone issuesWindows Server 2019-based WorkSpaces include new privacy settings that disable remote access to microphones, by default. Your applications can't detect an audio device unless you provide the required access in Windows Settings.In your Windows Workspace, Choose the Start menu, and then choose Settings.Choose (double-click) Privacy, and then under App permissions, choose Microphone.Under Allow apps to access your microphone, select On.Your microphone is now detected inside the WorkSpace. No reboot of WorkSpace is required.Follow"
https://repost.aws/knowledge-center/workspaces-audio-microphone-issues
How can I restore an Amazon S3 object from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class using the AWS CLI?
I archived an Amazon Simple Storage Service (Amazon S3) object to the Amazon S3 Glacier Flexible Retrieval (formerly Glacier) or Amazon S3 Glacier Deep Archive storage class by using a lifecycle configuration. I want to restore the object using the AWS Command Line Interface (AWS CLI).
"I archived an Amazon Simple Storage Service (Amazon S3) object to the Amazon S3 Glacier Flexible Retrieval (formerly Glacier) or Amazon S3 Glacier Deep Archive storage class by using a lifecycle configuration. I want to restore the object using the AWS Command Line Interface (AWS CLI).ResolutionUse the following steps to restore an S3 object from the S3 Glacier Flexible Retrieval storage or S3 Glacier Deep Archive class using the AWS CLI.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Initiate a restore requestRun the following command to initiate a restore request. Be sure to replace all the values in the example command with the values for your bucket, object, and restore request.Note: Because data retrieval charges are based on the quantity of requests, be sure to confirm that the parameters of your restore request are correct.$ aws s3api restore-object --bucket awsexamplebucket --key dir1/example.obj --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'After you run this command, a temporary copy of the object is available for the duration specified in the restore request. In this example, the duration specified in the restore request is 25 days while the restore tier is set to S3 Standard.Note the following modifications that you can make to the command:To restore a specific object version in a versioned bucket, include the --version-id option, and then specify the corresponding version ID.For the S3 Glacier Flexible Retrieval storage class, you can use the Expedited, Standard, or Bulk retrieval options. However, you can use only the Standard or Bulk retrieval options for the S3 Glacier Deep Archive storage class.If the JSON syntax used in the example results in an error on a Windows client, then replace the restore request with the following syntax:--restore-request Days=25,GlacierJobParameters={"Tier"="Standard"}Note: If an object is stored in S3 Glacier Instant Retrieval, then data retrieval is instant and the restore operation isn't needed. For more information, see Amazon S3 storage classes.Monitor the status of your restore requestRun the following command to monitor the status of your restore request:aws s3api head-object --bucket awsexamplebucket --key dir1/example.objIf the restore is still in progress after you run the command, you receive a response similar to the following:{ "Restore": "ongoing-request=\"true\"", ... "StorageClass": "GLACIER | DEEP_ARCHIVE", "Metadata": {}}After the restore is complete, you receive a response similar to the following:{ "Restore": "ongoing-request=\"false\", expiry-date=\"Sun, 13 Aug 2017 00:00:00 GMT\"", ... "StorageClass": "GLACIER | DEEP_ARCHIVE", "Metadata": {}}Note the expiry-date in the response—you have until this time to access the temporary store object (stored in the Reduced Redundancy Storage class). The temporary object is available alongside the archived object that's in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class. After the expiry-date elapses, the temporary object is removed. You must change the object's storage class before the temporary object expires. To change the object's storage class after the expiry-date, you must initiate a new restore request.Change the object's storage class to Amazon S3 StandardTo change the object's storage class to Amazon S3 Standard, use copy. 
You can overwrite the existing object or copy the object into another location.Warning: If you're using version 1.x of the AWS CLI, make sure that the multipart threshold is set to 5 GB before copying an object. Otherwise, the object's user metadata is lost when the object size is larger than the multipart thresholds of the AWS CLI. For objects larger than 5 GB, use version 2.x of the AWS CLI to preserve user metadata.(Optional) To increase the multipart threshold of the AWS CLI, run the following command:aws configure set default.s3.multipart_threshold 5GBTo overwrite the existing object with the Amazon S3 Standard storage class, run the following command:aws s3 cp s3://awsexamplebucket/dir1/example.obj s3://awsexamplebucket/dir1/example.obj --storage-class STANDARDTo perform a recursive copy for an entire prefix and overwrite existing objects with the Amazon S3 Standard storage class, run the following command:aws s3 cp s3://awsexamplebucket/dir1/ s3://awsexamplebucket/dir1/ --storage-class STANDARD --recursive --force-glacier-transferNote: Objects that are archived to S3 Glacier Flexible Retrieval have a minimum storage duration of 90 days. Objects that are archived to S3 Glacier Deep Archive have a minimum storage duration of 180 days. If you have overwritten an object in S3 Glacier Flexible Retrieval before the 90-day minimum, you are charged for 90 days. Similarly, objects that are in S3 Glacier Deep Archive and are overwritten before the 180-day minimum, you are charged for 180 days.To copy the object into another location, run the following command:aws s3 cp s3://awsexamplebucket/dir1/example.obj s3://awsexamplebucket/dir2/example2.objNote: For suspended buckets or buckets with versioning enabled, this step creates additional copies of objects. These additional objects also incur storage costs. To avoid storage costs, remove the non-current versions that are still in the Amazon S3 Glacier storage class or create an S3 Lifecycle expiration rule.Related informationHow can I initiate restores for a large volume of Amazon S3 objects that are currently in the S3 Glacier or S3 Glacier Deep Archive storage class?How do I use the restore tiers in the Amazon S3 console to restore archived objects from Amazon S3 Glacier storage class?Restoring an archived objectManaging your storage lifecycleFollow"
https://repost.aws/knowledge-center/restore-s3-object-glacier-storage-class
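The restore workflow above can also be driven from an SDK instead of the AWS CLI. A minimal Python (boto3) sketch, assuming the same example bucket and key names used in the article:

# Sketch of the restore flow described above, using boto3. Bucket and key are placeholders.
import time
import boto3

s3 = boto3.client("s3")
bucket, key = "awsexamplebucket", "dir1/example.obj"

# 1. Initiate the restore request (25-day temporary copy, Standard retrieval tier).
s3.restore_object(
    Bucket=bucket,
    Key=key,
    RestoreRequest={"Days": 25, "GlacierJobParameters": {"Tier": "Standard"}},
)

# 2. Poll head_object until the temporary copy is available
#    (the "Restore" header switches from ongoing-request="true" to "false").
while True:
    restore_header = s3.head_object(Bucket=bucket, Key=key).get("Restore", "")
    if 'ongoing-request="false"' in restore_header:
        break
    time.sleep(300)  # check every 5 minutes; Standard retrievals can take hours

# 3. Copy the object over itself to change its storage class to S3 Standard.
#    Note: copy_object handles objects up to 5 GB; use multipart copy for larger objects.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="STANDARD",
    MetadataDirective="COPY",
)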
Why is my EFS file system performance slow?
My Amazon Elastic File System (Amazon EFS) performance is very slow. What are some common reasons for slow performance and how do I troubleshoot them?
"My Amazon Elastic File System (Amazon EFS) performance is very slow. What are some common reasons for slow performance and how do I troubleshoot them?Short descriptionThe distributed, multi-Availability Zone architecture of Amazon EFS results in a small latency overhead for each file operation. The overall throughput generally increases as the average I/O size increases because the overhead is amortized over a larger amount of data.Amazon EFS performance relies on multiple factors, including the following:Storage class of EFS.Performance and throughput modes.Type of operations performed on EFS (such as metadata intensive, and so on).Properties of data stored in EFS (such as size and number of files).Mount options.Client side limitations.ResolutionStorage class of EFSFor more information, see Performance summary.Performance and throughput modesPerformance modesAmazon EFS offers two performance modes, General Purpose and Max I/O. Applications can scale their IOPS elastically up to the limit associated with the performance mode. To determine what performance mode to use, see What are differences between General Purpose and Max I/O performance modes in Amazon EFS?Throughput modesFile-based workloads are typically spiky, driving high levels of throughput for short periods, but driving lower levels of throughput for longer periods. Amazon EFS is designed to burst to high throughput levels for periods of time.The configured throughput and IOPS affects the performance of Amazon EFS. It's a best practice to benchmark your workload requirements to help you select the appropriate throughput and performance modes. When you select provisioned throughput, select the values that accommodate your workload requirements properly. In the case of bursting throughput mode, you can increase the size of Amazon EFS using dummy files to increase the baseline throughput. To analyze the throughput and IOPS that's consumed by your file system, see Using metric math with Amazon EFS.Amazon EFS also scales up to petabytes of storage volume and has two modes of throughput: bursting and provisioned. In bursting mode, the greater the size of the EFS file system, the higher the throughput scaling. For provisioned mode, a throughput for your file system is set in MB/s, independent of the amount of data. For more information on throughput modes, see How do Amazon EFS burst credits work?Types of operations performed on the EC2 instanceMetadata I/O operationsEFS performance suffers in the following situations:When the file sizes are small because it's a distributed system. This distributed architecture results in a small latency overhead for each file operation. Due to this per-operation latency, overall throughput generally increases as the average I/O size increases because the overhead is amortized over a larger amount of data.Performance on shared file systems suffers if a workload or operation generates many small files serially. This causes the overhead of each operation to increase.Metadata I/O occurs if your application performs metadata-intensive, operations such as, "ls," "rm," "mkdir," "rmdir," "lookup," "getattr," or "setattr", and so on. Any operation that requires the system to fetch for the address of a specific block is considered to be a metadata-intensive workload. 
For more information, see the following:Metering: How Amazon EFS reports file system and object sizes.Optimizing small-file performance.Mount optionsIf you mount the file system using amazon-efs-utils, then the recommended mount options are applied by default.Using non-default mount options potentially degrades performance. For example, using lower rsize and wsize, lowering or turning off Attribute Caching. You can check the output of mount command to see the mount options currently in place:For more information, see Mount the file system on the EC2 instance and test.Example command>> mountExample outputfs-EXAMPLE3f75f.efs.us-east-1.amazonaws.com:/ on /home/ec2-user/efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=<EXAMPLEIP>,local_lock=none,addr=<EXAMPLEIP>)NFS client versionThe Network File System (NFS) version 4.1 (NFSv4) protocol provides better performance for parallel small-file read operations (greater than 10,000 files per second) compared to NFSv4.0 (less than 1,000 files per second).For more information, see NFS client mount settings.Client-side limitationsBottleneck at the EC2 instanceIf your application using the file system isn't driving the expected performance from EFS, optimize the application. Also, benchmark the host or service that your application is hosted on, such as Amazon EC2, AWS Lambda, and so on. A resource crunch on the EC2 instance might affect your application's ability to use EFS effectively.To check if EC2 is under-provisioned for your application requirements, monitor Amazon EC2 CloudWatch metrics, such as CPU, Amazon Elastic Block Store (Amazon EBS), and so on. Analyzing various metrics on your application architecture and resource requirements helps you determine whether you should reconfigure your application or instance according to your requirements.Using the 4.0+ Linux kernel versionFor optimal performance and to avoid a variety of known NFS client bugs, it's a best practice to use an AMI that has a Linux kernel version 4.0 or newer.An exception to this rule is RHEL and CentOS 7.3 and newer. The kernel for these operating systems received backported versions of the fixes and enhancements applied to NFS v4.1. For more information, see NFS support.Copying filesWhen copying files using cp command, you might experience slowness. This is because the copy command is a serial operation, meaning that it copies each file one at a time. If the file size for each file is small, the throughput to send that file is small.You might also notice latency when sending files. The distributed nature of EFS means that it must replicate to all mount points, so there is overhead per file operation. Therefore, latency in sending files is expected behavior.RecommendationsIt's a best practice to run parallel I/O operations, such as using rsync. If you are using rsync, be aware that cp and rsync work in serial (single-threaded) operations instead of parallel operations. This makes copying slower. Use tools such as fpart or NU Parallel. Fpart is a tool that helps you sort file trees and pack them into "partitions". Fpart comes with a shell script called fpsync that wraps fpart and rsync to launch several rsync in parallel. Fpsync provides its own embedded scheduler. By doing this you can complete these tasks faster than using the more common serial method.For more information, see Amazon EFS performance tips.Related informationQuotas for NFS clientsFollow"
https://repost.aws/knowledge-center/efs-troubleshoot-slow-performance
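As a rough illustration of the parallel-copy recommendation above (fpsync, GNU Parallel, and similar tools remain the more complete options), here is a Python sketch that copies a directory tree to an EFS mount with several threads. The source and destination paths are placeholders.

# Illustrative only: copy files to an EFS mount using parallel threads, so the
# per-file latency overlaps instead of adding up serially as it does with cp.
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

SRC = "/data/source"      # placeholder source directory
DST = "/mnt/efs/target"   # placeholder EFS mount point

def copy_one(src_path):
    rel = os.path.relpath(src_path, SRC)
    dst_path = os.path.join(DST, rel)
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
    shutil.copy2(src_path, dst_path)   # copies data and metadata

files = [
    os.path.join(root, name)
    for root, _, names in os.walk(SRC)
    for name in names
]

# 16 concurrent copies; tune the thread count to your instance and workload.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(copy_one, files))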
How can I confirm that my AWS infrastructure is GDPR-compliant?
I want to learn more about how the General Data Protection Regulation (GDPR) affects AWS customers.
"I want to learn more about how the General Data Protection Regulation (GDPR) affects AWS customers.ResolutionFor an overview of how the GDPR affects AWS customers, see All AWS services GDPR ready.For FAQs, whitepapers, and service-specific usage instructions that can help you comply with the GDPR, see General Data Protection Regulation (GDPR) center.To comply with GDPR contractual obligations, AWS offers a GDPR-compliant Data Processing Addendum (GDPR DPA). The AWS GDPR DPA is incorporated into the AWS Service Terms. The DPA applies automatically to all customers globally who require it to comply with the GDPR.If you have additional compliance questions about AWS or your AWS infrastructure, use the form at AWS Compliance contact us to request additional information.Related informationAWS and the General Data Protection Regulation (GDPR)AWS ComplianceFollow"
https://repost.aws/knowledge-center/gdpr-compliance
How does AWS DMS use memory for migration?
"I have an AWS Database Migration Service (AWS DMS) task that is using more or less memory than expected. How does AWS DMS use memory for migration, and how can I optimize the memory usage of my replication instance?"
"I have an AWS Database Migration Service (AWS DMS) task that is using more or less memory than expected. How does AWS DMS use memory for migration, and how can I optimize the memory usage of my replication instance?Short descriptionAn AWS DMS replication instance uses memory to run the replication engine. This engine is responsible for running SELECT statements on the source engine during the full load phase. Also, the replication engine reads from the source engine's transaction log during the change data capture (CDC) phase. These records are migrated to the target, and then compared against the corresponding records on the target database as part of the validation process. This is how generic migration flow works in AWS DMS.AWS DMS also uses memory for task configuration and for the flow of data from source to target.ResolutionTasks with limited LOB settingsWhen you migrate data using an AWS DMS task with limited LOB settings, memory is allocated in advance based on the LobMaxSize for each LOB column. If you set this value too high, then your task might fail. This fail happens due to an Out of Memory (OOM) error, depending on the number of records that you are migrating and the CommitRate.So, if you configure your task with high values, make sure that the AWS DMS instance has enough memory.{ "TargetMetadata": { "SupportLobs": true, "FullLobMode": false, "LobChunkSize": 0, "LimitedSizeLobMode": true, "LobMaxSize": 63, "InlineLobMaxSize": 0, }For more information, see Setting LOB support for source databases in an AWS DMS task.Tasks with ValidationEnabledWhen you migrate using an AWS DMS task that has ValidationEnabled=true, you might see additional memory usage. This happens because AWS DMS retrieves ThreadCount * PartitionSize records from both the source and target databases. It then compares the corresponding data on the replication instance. So, you observe additional memory usage on the replication instance, source database, and target database during migration.To limit the amount of memory in use, ignore LOB columns by using SkipLobColums. You can also perform validation separately from the migration task by using a separate replication instance or AWS DMS task. To do this, use the ValidationOnly setting:"ValidationSettings": { "EnableValidation": true, "ThreadCount": 5, "PartitionSize": 10000, "ValidationOnly": false, "SkipLobColumns": false, },For more information, see AWS DMS data validation.Tasks with parallel threads in full load and CDC phasesWhen you use a non-RDBMS target, then ParallelLoadThreads * ParallelLoadBufferSize determines the number of threads and the size of data transfer to the target. Similarly, ParallelApplyThreads * ParallelApplyBufferSize determines the number of threads and the size of data transfer during the CDC phase. AWS DMS holds the data that's pulled from the source in ParallelLoadQueuesPerThread and ParallelApplyQueuesPerThread. When tuning these settings, make sure that the AWS DMS instance and target have the capacity to handle the workload.{ "TargetMetadata": { "ParallelLoadThreads": 0, "ParallelLoadBufferSize": 0, "ParallelLoadQueuesPerThread": 0, "ParallelApplyThreads": 0, "ParallelApplyBufferSize": 0, "ParallelApplyQueuesPerThread": 0 },For more information on these settings, see Target metadata task settings.Tasks with batch apply settingsWhen you use an AWS DMS task with batch apply settings, use these best practices:The default batch configuration is always enough to handle the normal workload. 
In batch process, the size of the batch and frequency at which it's applied on the target are determined by the BatchApplyTimeoutMin, BatchApplyTimeoutMax, and BatchApplyMemoryLimit settings. These settings work together to apply changes in batch. If you need to tune these settings because of heavy workload on the source, be sure that the AWS DMS instance has enough memory. Otherwise, an OOM error might occur.Don't set BatchApplyMemoryLimit to more than the size of the replication instance memory or an OOM error might occur. Be aware of other tasks running simultaneously with the AWS DMS task that you're using for migration when you set BatchApplyMemoryLimit.Long-running transactions are retained in memory if BatchApplyPreserveTransaction = true across multiple batches. This can also cause OOM errors, depending on the next section's memory settings.Use the BatchSplitSize setting to set the number of changes to include in every batch, and to limit memory consumption:{ "TargetMetadata": { "BatchApplyEnabled": false, },}, "ChangeProcessingTuning": { "BatchApplyPreserveTransaction": true, "BatchApplyTimeoutMin": 1, "BatchApplyTimeoutMax": 30, "BatchApplyMemoryLimit": 500, "BatchSplitSize": 0, },For more information on using batch apply mode, see Change processing tuning settings.Other memory-related task settingsDuring CDC, MinTransactionSize determines how many changes happen in each transaction. The size of transactions on the replication instance is controlled by MemorylimitTotal. Use this setting when you run multiple CDC tasks that need a lot of memory. Be sure to apportion this setting based on each task's transactional workload.Set MemoryKeepTime to limit the memory that is consumed by long-running transactions on the source. Or, if large batch of INSERT or UPDATE statements are running on the source, then increase this time. Increase this time to retain the changes from processing in the net changes table.Set StatementCacheSize to control the number of prepared statements that are stored on the replication instance.If your AWS DMS replication instance contains a large volume of free memory, then tune the settings in this example. This means that AWS DMS handles the workload in memory itself, rather flushing frequently to AWS DMS storage."ChangeProcessingTuning": { "MinTransactionSize": 1000, "CommitTimeout": 1, "MemoryLimitTotal": 1024, "MemoryKeepTime": 60, "StatementCacheSize": 50 },For more information on these settings, see Change processing tuning settings.Monitor the memory usage of your replication instanceThere are several ways to monitor the memory usage of your replication instance. To isolate the single task that is consuming the most memory, sort your tasks by MemoryUsage. To learn why the task is holding memory, compare CDCChangesMemorySource and CDCChangesMemoryTarget, and then troubleshoot the respective endpoint.The replication instance itself uses minimal memory to run the replication engine. To check if additional AWS DMS tasks can run on the replication instance, review the AvailableMemory metric in Amazon CloudWatch. Then, create a new task to use the amount of FreeMemory available. When you run the AWS DMS task, monitor FreeMemory and SwapUsage to see if resource contention is a concern. 
For more information, see Replication instance metrics.Avoid memory issuesTo gauge how much memory that your AWS DMS task is using, test an instance with the same configuration in a lower environment (dev and staging).Also, perform a proof of concept migration before working with production data.Related informationChoosing the right AWS DMS replication instance for your migrationChoosing the best size for a replication instanceFollow"
https://repost.aws/knowledge-center/dms-memory-optimization
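To track the memory metrics mentioned above from a script, you can query CloudWatch directly. The Python (boto3) sketch below uses a placeholder replication instance name; the AWS/DMS namespace and the ReplicationInstanceIdentifier dimension are assumptions to confirm against your own metrics.

# Sketch: pull the FreeMemory and SwapUsage metrics referenced above for the last hour.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
instance_id = "my-replication-instance"   # placeholder

for metric in ("FreeMemory", "SwapUsage"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DMS",
        MetricName=metric,
        Dimensions=[{"Name": "ReplicationInstanceIdentifier", "Value": instance_id}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,                 # 5-minute data points
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric, point["Timestamp"], round(point["Average"]))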
How can I fix the connection to my Amazon EC2 instance or elastic network interface that has an attached Elastic IP address?
"I tried to connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance using the attached Elastic IP address. However, I received a "Connection timed out" error. How can I fix the connection to my Amazon EC2 instance or elastic network interface that has an attached Elastic IP address?"
"I tried to connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance using the attached Elastic IP address. However, I received a "Connection timed out" error. How can I fix the connection to my Amazon EC2 instance or elastic network interface that has an attached Elastic IP address?Short descriptionIf you can't connect to an Amazon EC2 instance or an elastic network interface that has an attached Elastic IP address, make sure of the following:Security group rules for inbound traffic allow connection to the port or protocol.Inbound and outbound network access control list (network ACL) rules allow connection to the port or protocol.The route table for the subnet of the elastic network interface has a route to send and receive traffic from the internet.The OS firewall on the Amazon EC2 instance allows traffic to the port or protocol.ResolutionOpen the Amazon EC2 console.In the navigation pane, choose Instances. Then, select the instance that you're trying to connect to.On the Security tab, select the security group associated with the Amazon EC2 instance that has an Elastic IP address attached to it.On the Inbound rules tab, confirm that you have a security group rule that allows traffic from your source to your port or protocol. You can add an inbound rule if you don’t have one.Choose Instances, and then select the instance that you're trying to connect to.On the Networking tab, select the elastic network interface that has the attached Elastic IP address. Then, select the elastic network interface ID.On the Details tab, select the associated subnet ID.On the Network ACL tab, confirm that the inbound and outbound rules of the network ACL allow traffic to your port or protocol. You can add inbound and outbound rules if you don't have them.On the Route Table tab, confirm that you have a default route to an internet gateway to send traffic to the internet. If you don't have such a route in your route table, then add a 0.0.0.0/0 route to an internet gateway.Note: Be sure that your default route points to an internet gateway (not a NAT gateway). A NAT gateway doesn't allow inbound connections from the internet, except for the response traffic for an outgoing connection.Important: When you add a 0.0.0.0/0 route to an internet gateway, subnets associated with the route table are made public. Resources with public IP addresses in the associated subnets (for example, your Amazon EC2 instances) will be publicly accessible if they allow such traffic.Retry connecting to your instance.If you're still receiving connection timeout errors after completing the troubleshooting steps, do the following:Review the flow logs for your instance's elastic network interface. Check to confirm that the traffic to and from your source IP is recognized on the elastic network interface.Confirm that the traffic to and from your source IP is recognized on the elastic network interface.Confirm that the instance's OS-level firewall allows traffic.Follow"
https://repost.aws/knowledge-center/vpc-fix-connection-with-elastic-ip
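If you'd rather run these checks from a script than the console, the following Python (boto3) sketch, using a placeholder instance ID, prints the inbound security group rules, network ACL entries, and subnet routes so you can verify the port, protocol, and internet gateway route described above.

# Sketch: gather security group rules, network ACL entries, and routes for one instance.
import boto3

ec2 = boto3.client("ec2")
instance = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"][0]["Instances"][0]
subnet_id = instance["SubnetId"]

# 1. Inbound security group rules
for sg in instance["SecurityGroups"]:
    rules = ec2.describe_security_groups(GroupIds=[sg["GroupId"]])["SecurityGroups"][0]["IpPermissions"]
    print(sg["GroupId"], "inbound rules:", rules)

# 2. Network ACL associated with the instance's subnet
acls = ec2.describe_network_acls(
    Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
)["NetworkAcls"]
for acl in acls:
    print(acl["NetworkAclId"], "entries:", acl["Entries"])

# 3. Route table for the subnet (look for a 0.0.0.0/0 route to an internet gateway).
#    An empty result means the subnet uses the VPC's main route table.
routes = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
)["RouteTables"]
for table in routes:
    print(table["RouteTableId"], "routes:", table["Routes"])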
I’m using an S3 REST API endpoint as the origin of my CloudFront distribution. Why am I getting 403 Access Denied errors?
I'm using an Amazon Simple Storage Service (Amazon S3) bucket as the origin of my Amazon CloudFront distribution. I'm using the S3 REST API endpoint as the origin domain name. CloudFront is returning 403 Access Denied errors from Amazon S3.
"I'm using an Amazon Simple Storage Service (Amazon S3) bucket as the origin of my Amazon CloudFront distribution. I'm using the S3 REST API endpoint as the origin domain name. CloudFront is returning 403 Access Denied errors from Amazon S3.Short descriptionTo troubleshoot Access Denied errors, determine if your distribution’s origin domain name is an S3 website endpoint or an S3 REST API endpoint. Follow these steps to find the endpoint type:1.    Open the CloudFront console.2.    Select your CloudFront distribution. Then, choose Distribution Settings.3.    Choose the Origins and Origin Groups tab.4.    Review the domain name under Origin Domain Name and Path. Then, determine the endpoint type based on the format of the domain name. REST API endpoints use these formats:DOC-EXAMPLE-BUCKET.s3.region.amazonaws.com DOC-EXAMPLE-BUCKET.s3.amazonaws.comImportant: The format bucket-name.s3.amazonaws.com doesn't work for Regions launched in 2019 or later. Static website endpoints use this format:DOC-EXAMPLE-BUCKET.s3-website-us-east-1.amazonaws.comIf your distribution is using an S3 static website endpoint, you might receive 403 Access Denied errors. For more information, see I’m using an S3 website endpoint as the origin of my CloudFront distribution. Why am I getting 403 Access Denied errors?If your distribution is using a REST API endpoint, then verify that your configurations meet the following requirements to avoid Access Denied errors:If you don't configure either origin access control (OAC) or origin access identity (OAI), then the objects must be publicly accessible. Or, you must request the objects with AWS Signature Version 4.If the S3 bucket contains objects encrypted by AWS Key Management Service (AWS KMS), then OAC should be used instead of OAI.The S3 bucket policy must allow access to s3:GetObject.If the bucket policy grants access, then the AWS account that owns the S3 bucket must also own the object.The requested objects must exist in the S3 bucket.If clients request the root of your distribution, then you must define a default root object.If you configured an OAI, then the OAI must be included in the S3 bucket policy.If you configured an OAC, then the CloudFront service principal must be included in the S3 bucket policy. If you configured an OAI, then the OAI must be included in your S3 bucket policy.If you don't configure either OAC or OAI, then Amazon S3 Block Public Access must be turned off on the bucket.ResolutionIf you don't configure either OAC or OAI, then your objects must be publicly accessible or requested with AWS Signature Version 4.To determine if the objects in your S3 bucket are publicly accessible, open the S3 object's URL in a web browser. Or, run a curl command on the URL.The following is an example URL of an S3 object:https://DOC-EXAMPLE-BUCKET.s3.amazonaws.com/index.htmlIf either the web browser or curl command returns an Access Denied error, then the object isn't publicly accessible. 
If the object isn't publicly accessible, then use one of the following configurations:Create a bucket policy that allows public read access for all objects in the bucket.Use the Amazon S3 console to allow public read access for the object.Configure either OAC or OAI for the distribution using the REST API endpoint.Authenticate requests to Amazon S3 using AWS Signature Version 4.Objects encrypted by AWS Key Management Service (AWS SSE-KMS)If the s3 bucket contains objects encrypted by AWS Key Management Service (AWS SSE-KMS), then OAC should be used instead of OAI.AWS KMS-encrypted objects can be served with CloudFront by setting up OAC. To do this, add a statement to the AWS KMS key policy that grants the CloudFront service principal the permission to use the key. To serve AWS KMS-encrypted objects without setting up OAC, serve the AWS KMS Key encrypted from an S3 bucket using Lambda@Edge.Use one of the following ways to check if an object in your bucket is AWS KMS-encrypted:Use the Amazon S3 console to view the properties of the object. Review the Encryption dialog box. If AWS KMS is selected, then the object is AWS KMS-encrypted.Run the head-object command using the AWS Command Line Interface (AWS CLI). If the command returns ServerSideEncryption as aws:kms, then the object is AWS KMS-encrypted. If you receive errors when running AWS CLI commands, then make sure that you’re using the most recent version of the AWS CLI.Note: OAI doesn't support serving AWS KMS-encrypted objects.The S3 bucket policy must allow access to s3:GetObjectTo use a distribution with an S3 REST API endpoint, your bucket policy must allow s3:GetObject either to public users or to CloudFront's OAI. Even if you have an explicit allow statement for s3:GetObject in your bucket policy, confirm that there isn't a conflicting explicit deny statement. An explicit deny statement always overrides an explicit allow statement.Follow these steps to review your bucket policy for s3:GetObject:1.    Open your S3 bucket from the Amazon S3 console.2.    Choose the Permissions tab.3.    Choose Bucket Policy.4.    Review the bucket policy for statements with "Action": "s3:GetObject" or "Action": "s3:*". The example policy below includes an allow statement that grants a CloudFront OAC access to s3:GetObject. It also includes a statement that grants CloudFront OAI access to s3:GetObject and an allow statement that grants public access to s3:GetObject. 
However, there's an explicit deny statement for s3:GetObject that blocks access unless the request is from a specific Amazon Virtual Private Cloud (Amazon VPC):{ "Version": "2012-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [{ "Sid": "Allow-OAC-Access-To-Bucket", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE" } } }, { "Sid": "Allow-OAI-Access-To-Bucket", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EAF5XXXXXXXXX" }, "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ] }, { "Sid": "Allow-Public-Access-To-Bucket", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ] }, { "Sid": "Access-to-specific-VPCE-only", "Effect": "Deny", "Principal": "*", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ], "Condition": { "StringNotEquals": { "aws:sourceVpce": "vpce-1a2b3c4d" } } } ]}5.    Modify the bucket policy to remove or edit statements that block CloudFront OAI access or public access to s3:GetObjectNote: CloudFront caches the results of an Access Denied error for up to five minutes. After removing a deny statement from the bucket policy, you can run an invalidation on your distribution to remove the object from the cache.Ownership of S3 buckets and objectsFor a bucket policy to apply to external accounts or services, the AWS account that owns the bucket must also own the objects. A bucket or object is owned by the account of the AWS Identity and Access Management (IAM) identity that created the bucket or object.Note: The object-ownership requirement applies to access granted by a bucket policy. It doesn't apply to access granted by the object's access control list (ACL).Follow these steps to check if the bucket and objects have the same owner:1.    Run this AWS CLI command to get the S3 canonical ID of the bucket owner:aws s3api list-buckets --query Owner.ID2.    Run this command to get the S3 canonical ID of the object owner:Note: This example shows a single object, but you can use the list command to check several objects.aws s3api list-objects --bucket DOC-EXAMPLE-BUCKET --prefix index.html3.    If the canonical IDs don't match, then the bucket and object have different owners.Note: You can also use the Amazon S3 console to check the bucket and object owners. The owners are found in the Permissions tab of the respective bucket or object.Follow these steps to change the object's owner to the bucket owner:1.    From the object owner's AWS account, run this command to retrieve the access control list (ACL) permissions assigned to the object:aws s3api get-object-acl --bucket DOC-EXAMPLE-BUCKET --key object-name2.    If the object has bucket-owner-full-control ACL permissions, then skip to step #3. If the object doesn't have bucket-owner-full-control ACL permissions, then run this command from the object owner's account:aws s3api put-object-acl --bucket DOC-EXAMPLE-BUCKET --key object-name --acl bucket-owner-full-control3.    
From the bucket owner's account, run this command to change the owner of the object by copying the object over itself:aws s3 cp s3://DOC-EXAMPLE-BUCKET/index.html s3://DOC-EXAMPLE-BUCKET/index.html --storage-class STANDARDNote: Make sure to change the --storage-class value in the example command to the storage class applicable to your use case.The requested objects must exist in the bucketIf a user doesn't have s3:ListBucket permissions, then the user gets Access Denied errors for missing objects instead of 404 Not Found errors. Run the head-object AWS CLI command to check if an object exists in the bucket.Note: Confirm that the object request sent to CloudFront matches the S3 object name exactly. S3 object names are case-sensitive. If the request doesn't have the correct object name, then Amazon S3 responds as though the object is missing. To identify which object CloudFront is requesting from Amazon S3, use server access logging.If the object exists in the bucket, then the Access Denied error isn't masking a 404 Not Found error. Verify other configuration requirements to resolve the Access Denied error.If the object isn’t in the bucket, then the Access Denied error is masking a 404 Not Found error. Resolve the issue related to the missing object.Note: It's not a security best practice to allow public s3:ListBucket access. Allowing public s3:ListBucket access allows users to see and list all objects in a bucket. This exposes object metadata details, such as key and size, to users even if the users don't have permissions for downloading the object.If clients request the root of your distribution, then you must define a default root objectIf your distribution doesn't have a default root object defined, and a requester doesn't have s3:ListBucket access, then the requester receives an Access Denied error. The requester gets this error instead of a 404 Not Found error when they request the root of your distribution.To define a default root object, see Specifying a default root object.Note: It's not a security best practice to allow public s3:ListBucket access. Allowing public s3:ListBucket access allows users to see and list all objects in a bucket. This exposes object metadata details, such as key and size, to users even if the users don't have permissions for downloading the object.Permissions for OAC or OAIIf you configured an OAC, then a CloudFront service principal must be included in the S3 bucket policy. If you configured an OAI, then the OAI must be included in your s3 bucket policyTo verify if your bucket policy allows the OAI, open your S3 bucket in the Amazon S3 console. Then, choose the Permissions tab and review the bucket policy. In the example policy below, the first statement is an allow statement for the CloudFront service principal when OAC is configured. The second statement is an allow statement for an OAI:{ "Sid": "Allow-OAC-Access-To-Bucket", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE" } } },{ "Sid": "1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EAF5XXXXXXXXX" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"}To update your bucket policy using the CloudFront console, follow these steps:1.    Open the CloudFront console and choose your distribution.2.    
Choose the Origins and Origin Groups tab.3.    Select the S3 origin, and then choose Edit.4.    For Restrict Bucket Access, choose Yes.5.    For Origin Access Identity, choose the existing identity or create a new one.6.    For Grant Read Permissions on Bucket, choose Yes, Update Bucket Policy.7.    Choose Yes, Edit.Allowing public access for distribution without OAC or OAIIf your distribution isn't using OAC or OAI, and objects aren't requested with AWS Signature Version 4, then you must allow public access for objects. This is because a distribution with a REST API endpoint supports only publicly readable objects. In this case, you must confirm that there aren't any Amazon S3 Block Public Access settings applied to the bucket. These settings override permissions that allow public read access. Amazon S3 Block Public Access settings can apply to individual buckets or AWS accounts.Related informationTroubleshooting error responses from your originHow do I troubleshoot 403 Access Denied errors from Amazon S3?Follow"
https://repost.aws/knowledge-center/s3-rest-api-cloudfront-error-403
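A few of the conditions above can also be checked programmatically. The Python (boto3) sketch below, using placeholder bucket and object names, compares bucket and object ownership, checks whether the object exists and is KMS-encrypted, and prints the bucket's Block Public Access settings.

# Sketch: programmatic checks for some of the 403 causes above. Names are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "DOC-EXAMPLE-BUCKET", "index.html"

# Ownership: the bucket policy applies only if the bucket owner also owns the object.
bucket_owner = s3.get_bucket_acl(Bucket=bucket)["Owner"]["ID"]
objects = s3.list_objects_v2(Bucket=bucket, Prefix=key, FetchOwner=True).get("Contents", [])
for obj in objects:
    print(obj["Key"], "same owner as bucket:", obj["Owner"]["ID"] == bucket_owner)

# Existence and encryption: a missing object surfaces as Access Denied when the caller
# lacks s3:ListBucket, and aws:kms encryption requires OAC rather than OAI.
try:
    head = s3.head_object(Bucket=bucket, Key=key)
    print("ServerSideEncryption:", head.get("ServerSideEncryption"))
except ClientError as err:
    print("head_object failed:", err.response["Error"]["Code"])

# Block Public Access: these settings override public-read permissions when no OAC/OAI is used.
try:
    print(s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"])
except ClientError:
    print("No bucket-level public access block configuration found.")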
Why isn't the CNAME record for my ACM-issued certificate resolving, and why is the DNS validation status still pending validation?
"I requested a new AWS Certificate Manager (ACM) certificate using DNS validation. However, the CNAME record isn't resolving and the status is still pending validation."
"I requested a new AWS Certificate Manager (ACM) certificate using DNS validation. However, the CNAME record isn't resolving and the status is still pending validation.Short descriptionWhen you request an ACM certificate using DNS validation, ACM gives you a CNAME record for each domain name specified in the certificate's domain scope. You must add the CNAME record to your DNS configuration. ACM uses the CNAME records to validate ownership of domains. After all domains are validated, the certificate status updates from Pending validation to Success.Certificate requests using DNS validation might remain in Pending validation if:The CNAME record isn’t added to the correct DNS configuration.The CNAME record has additional characters or is missing characters.The CNAME record added to the correct DNS configuration, but the DNS provider automatically adds the bare domain to the end of its DNS records.A CNAME record and a TXT record exist for same domain name.Note: ACM periodically checks for the DNS record. This process can't be manually checked.For more information, see DNS validation.ResolutionThe CNAME record isn’t added to the correct DNS configurationTo confirm that the CNAME record was added correctly to your DNS configuration, run a command similar to the following:Note: Replace example-cname.example.com with your ACM CNAME record.Linux and macOS:dig +short _example-cname.example.comWindows:nslookup -type=cname _example-cname.example.comThe command returns the CNAME record’s value in the output if the CNAME record was added to the correct DNS configuration and then propagated successfully.Note: Some DNS providers can take 24–48 hours to propagate DNS records.If your certificate is in the Pending validation state, then confirm that the CNAME record provided by ACM was added to the correct DNS configuration. To determine the DNS configuration to add the CNAME record, run a command similar to the following:Linux and macOS:dig NS example.comWindows:nslookup -type=ns example.comThe command provides the name servers included in the NS record of the correct DNS configuration. Be sure that the DNS configuration where the CNAME record is added includes an NS record with the name servers provided in the command's output.For information on adding CNAME records to your Amazon Route 53 Hosted Zone, see Creating records by using the Route 53 console.Note: It isn't possible to validate ownership of a domain when the corresponding CNAME record is in a Route 53 private hosted zone. The CNAME record must be in a publicly hosted zone.The CNAME record has additional characters or is missing charactersBe sure that the CNAME record added to your DNS configuration contains no additional characters or has no missing characters in the name or value.The CNAME record is added to the correct DNS configuration, but the DNS provider automatically adds the bare domain to the end of its DNS recordsSome DNS providers might automatically add the bare domain to the end of the name field of all DNS records. In this scenario, the propagated CNAME record added to your DNS configuration is similar to the following:_example-cname.example.com.example.comBecause the CNAME record name doesn't match the one provided by ACM, the validation isn't successful. 
The ACM certificate remains in Pending validation until it eventually fails after 72 hours from requesting the certificate.To determine if your DNS provider automatically added the bare domain to the end of the CNAME record, run a command similar to the following:Linux and macOS:dig +short _example-cname.example.com.example.comWindows:nslookup -type=cname _example-cname.example.com.example.comIf the output returns the value of the CNAME record, then your DNS provider added the bare domain. The bare domain was added to the end of the name field of your DNS records.To resolve this issue, edit your CNAME record to remove the bare domain from the text that you entered for the name field.After your DNS provider adds the bare domain, there will be only one bare domain present.A CNAME record and a TXT record exist for same domain nameTo confirm if the CNAME record and TXT record exist for the same domain, run a command similar to the following:Linux and macOS:dig +short CNAME <cname_record_name>dig TXT <cname_record_name>Windows:nslookup -type=CNAME <cname_record_name>nslookup -type=TXT <cname_record_name>Compare the output of the dig command for the CNAME record and TXT record types. If they're identical, then a malformed record is keeping the certificate in the pending validation state, as noted in the external document RFC 1034. To resolve this, you can delete the TXT record.For more information, see Troubleshoot DNS validation problems.Related informationTroubleshooting managed certificate renewalWhy is my certificate renewal still pending after I validated my domain names using the ACM managed renewal process?Setting up DNS validationFollow"
https://repost.aws/knowledge-center/acm-certificate-pending-validation
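You can also read the exact CNAME name and value that ACM expects directly from the certificate and compare them with what DNS returns. The Python (boto3) sketch below uses a placeholder certificate ARN and shells out to dig, which must be installed on the host.

# Sketch: print the validation CNAME records ACM expects, then check what DNS resolves.
import subprocess
import boto3

acm = boto3.client("acm", region_name="us-east-1")
cert_arn = "arn:aws:acm:us-east-1:111122223333:certificate/example-1234"   # placeholder

cert = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]
for option in cert["DomainValidationOptions"]:
    record = option.get("ResourceRecord")
    if not record:
        continue  # record details may not be populated immediately after the request
    expected_name, expected_value = record["Name"], record["Value"]
    resolved = subprocess.run(
        ["dig", "+short", "CNAME", expected_name],
        capture_output=True, text=True,
    ).stdout.strip()
    print(option["DomainName"])
    print("  expected:", expected_name, "->", expected_value)
    print("  resolved:", resolved or "(no answer yet)")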
How can I recreate an AWS Config delivery channel?
I deleted my AWS Config delivery channel. How can I recreate it?
"I deleted my AWS Config delivery channel. How can I recreate it?Short descriptionWhen you set up AWS Config using the AWS Config console, a set-up process guides you to configure AWS resources to send notifications to the delivery channel. AWS Config setup includes configuring an Amazon Simple Storage Service (Amazon S3) bucket, an Amazon Simple Notification Service (Amazon SNS) topic, an AWS Identity and Access Management (IAM) role, and the resource types to record.If you delete an AWS Config delivery channel using the AWS Command Line Interface (AWS CLI) command delete-delivery-channel, then the configuration recorder turns off. Trying to turn on the configuration recorder again returns the error "AWS Config cannot start recording because the delivery channel was not found."Note: You can't recreate the delivery channel using the AWS Config console.ResolutionFollow these instructions to manually recreate the AWS Config delivery channel and turn on the configuration recorder.Note: If you didn't delete the Amazon S3 bucket, S3 topic, and IAM role associated with the deleted AWS Config delivery channel, you can skip these steps.Create the Amazon S3 bucket1.    Open the Amazon S3 console in the same Region as your AWS Config service, and choose Create bucket.2.    In Bucket name, enter a name for the S3 bucket, and then choose Next.3.    Choose Next, Next, and then Create bucket.4.    In S3 buckets, choose the S3 bucket that you just created in step 3.5.    Choose Permissions, and then choose Bucket Policy.6.    Copy and paste the following example bucket policy, and then choose Save.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSConfigBucketPermissionsCheck", "Effect": "Allow", "Principal": { "Service": "config.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::targetBucketName", "Condition": { "StringEquals": { "AWS:SourceAccount": "sourceAccountID" } } }, { "Sid": "AWSConfigBucketExistenceCheck", "Effect": "Allow", "Principal": { "Service": "config.amazonaws.com" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::targetBucketName", "Condition": { "StringEquals": { "AWS:SourceAccount": "sourceAccountID" } } }, { "Sid": "AWSConfigBucketDelivery", "Effect": "Allow", "Principal": { "Service": "config.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID/Config/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control", "AWS:SourceAccount": "sourceAccountID" } } } ]}Create the SNS topic1.    Open the Amazon SNS console in the same Region as your AWS Config service, and then choose Topics.2.    Choose Create topic.3.    For Name, enter a name for your SNS topic. Then, choose Create topic.4.    Choose Create subscription.5.    For Protocol, choose Email.6.    For Endpoint, enter the email address that you want to associate with this SNS topic, and then choose Create subscription.7.    Check your email for the subscription confirmation, and then choose Confirm subscription.8.    You receive the message Subscription confirmed!Note: To use your SNS topic, make sure you have the required permissions.Create the IAM role1.    Open the IAM console.2.    Choose Roles, and then choose Create role.3.    In Select type of trusted entity, choose AWS service.4.    Under Use cases for other AWS services, choose Config.5.    In Select your use case, choose Config - Customizable, and then choose Next: Permissions.6.    Choose Next: Tags, and then choose Next: Review.7.    
In Role name, enter a name, and then choose Create role.8.    Choose the role that you created in step 7, choose Add inline policy, and then choose the JSON tab.9.    Copy and paste the following example policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID-WithoutHyphens/*" ], "Condition": { "StringLike": { "s3:x-amz-acl": "bucket-owner-full-control" } } }, { "Effect": "Allow", "Action": [ "s3:GetBucketAcl" ], "Resource": "arn:aws:s3:::targetBucketName" }, { "Effect": "Allow", "Action": "sns:Publish", "Resource": "arn:aws:sns:region:account_number:targetTopicName" } ]}Create the KMS KeyIt's a best practice to use AWS Key Management Service (AWS KMS) based encryption on objects delivered by AWS Config to an Amazon S3 bucket. Create a KMS key in the same Region as your AWS Config service. Be sure you have the required permissions for your KMS key.If you choose not to encrypt the objects, skip these steps and continue to the Create the delivery channel section.1.    Open AWS KMS Console.2.    In the navigation pane, choose Customer managed keys.3.    Choose Create key.4.    For Key type, choose Symmetric to create a symmetric encryption KMS key. For information about on asymmetric KMS keys, see Creating asymmetric KMS keys (console).5.    For Key usage, the Encrypt and decrypt option is selected by default. Confirm this option, and then choose Next.6.    Enter an alias for your KMS key. Then, choose Next. Note: Your alias name can't begin with aws/.7.    Select the IAM users and roles that can administer the KMS key. Then, choose Next.8.    Select the IAM users and roles that can use the key in cryptographic operations. Then, choose Next.9.    Choose Finish to create the KMS key.10.    Choose Customer managed keys in the navigation pane. Then, under Customer managed keys, select the key that you just created11.    In the Key Policy tab, choose Switch to policy view. Then, choose Edit.12.    If you are using a custom IAM role for AWS Config, then copy and paste this policy statement as additional key policy statement. Then, choose Save changes.{ "Statement": [ { "Sid": "AWSConfigKMSPolicy", "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Effect": "Allow", "Resource": "myKMSKeyARN", "Principal": { "AWS": [ "arn:aws:iam:account_id:role/my-config-role-name" ] } } ]}-or-If you are using Service Linked Roles (SLR) for AWS Config, then use the following policy statement to update the KMS key policy:{ "Statement": [ { "Sid": "AWSConfigKMSPolicy", "Effect": "Allow", "Principal": { "Service": "config.amazonaws.com" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": "myKMSKeyARN", "Condition": { "StringEquals": { "AWS:SourceAccount": "sourceAccountID" } } } ]}Create the delivery channel1.    Using a text editor, copy and paste the following example template, and then save it as a JSON file. You can change the deliveryFrequency value to match your use case. 
If you choose not to activate encryption, omit the s3KmsKeyArn value from the JSON file.Important: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.{ "name": "default", "s3BucketName": "targetBucketName", "s3KeyPrefix": "Optionalprefix", "snsTopicARN": "arn:aws:sns:region:account_ID:targetTopicName", "s3KmsKeyArn": "arn:aws:kms:region:account_ID:KmsKey", "configSnapshotDeliveryProperties": { "deliveryFrequency": "Twelve_Hours" }}Note: The s3KeyPrefix must be provided if the S3 bucket policy restricts PutObject to a certain prefix, rather than the default.2.    Run the following AWS CLI command:$ aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json3.    Run the following AWS CLI command to confirm that the Delivery Channel created:$ aws configservice describe-delivery-channelsStart the configuration recorder1.    Open the AWS Config console.2.    In the navigation pane, choose Settings.3.    In Recording is off, choose Turn on, and then choose Continue.-or-Run the following AWS CLI command:$ aws configservice start-configuration-recorder --configuration-recorder-name configRecorderNameFor more information, see Managing the configuration recorder and Managing your AWS Config rules.Related informationSetting up AWS Config with the consoleHow can I troubleshoot AWS Config console error messages?Follow"
https://repost.aws/knowledge-center/recreate-config-delivery-channel
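The delivery channel and configuration recorder can also be recreated with an SDK instead of the AWS CLI. A minimal Python (boto3) sketch, assuming placeholder bucket and topic names and omitting the optional S3 prefix and KMS key:

# Sketch: recreate the delivery channel and restart the recorder with boto3.
import boto3

config = boto3.client("config")

config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "targetBucketName",
        "snsTopicARN": "arn:aws:sns:us-east-1:111122223333:targetTopicName",
        "configSnapshotDeliveryProperties": {"deliveryFrequency": "Twelve_Hours"},
    }
)

# Confirm the channel exists, then turn the recorder back on.
print(config.describe_delivery_channels()["DeliveryChannels"])

recorder_name = config.describe_configuration_recorders()["ConfigurationRecorders"][0]["name"]
config.start_configuration_recorder(ConfigurationRecorderName=recorder_name)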
How do I use the forgot password flow in Amazon Cognito?
I need to use the forgot password flow to help users change their passwords in Amazon Cognito.
"I need to use the forgot password flow to help users change their passwords in Amazon Cognito.ResolutionAn Amazon Cognito user can invoke the ForgotPassword API to initiate a forgot password flow to reset their user password.Follow these steps to set up the forgot password flow to change a user's password in Amazon Cognito:1.    Invoke the ForgotPassword API to send a message to the user that includes a confirmation code that's required to reset the user's password.Important: In the example AWS Command Line Interface (AWS CLI) commands, replace all instances of example strings with your values. (For example, replace "example_client_id" with your client ID.)Example forgot-password command:aws cognito-idp forgot-password --client-id example_client_id --username example_user_nameOutput:{  "CodeDeliveryDetails": {    "Destination": "t***@g***",    "DeliveryMedium": "EMAIL",    "AttributeName": "email"  }}The confirmation code message is sent to the user's verified email address or verified phone number. The recovery method can be customized in the AccountRecoverySetting parameter. The confirmation code message in the example goes to the user's verified email address. Your configuration depends on your use case.Note: If neither a verified phone number nor a verified email address exists for a user, then an InvalidParameterException error is thrown. To learn more about configuring email address and phone number verification, see Configuring email or phone verification.2.    Invoke the ConfirmForgotPassword API so that the user can enter the confirmation code to reset their password.Example confirm-forgot-password command:aws cognito-idp confirm-forgot-password --client-id example_client_id --username=user@example.com --password example_password --confirmation-code example_confirmation_code3.    If necessary, customize the password recovery error responses by setting PreventUserExistenceErrors to Enabled. To learn more, see Managing error responses.4.    If your app client is configured with a client secret in the user pool, then you must provide the secret hash. To learn more, see How do I troubleshoot "Unable to verify secret hash for client <client-id>" errors from my Amazon Cognito user pools API?Follow"
https://repost.aws/knowledge-center/cognito-forgot-password-flow
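The same two API calls can be made from an SDK. The Python (boto3) sketch below uses placeholder client, user, and code values, and shows how a SecretHash is typically computed when the app client has a client secret (omit it if there is no secret).

# Sketch of the forgot-password flow with boto3. Client ID, secret, username,
# confirmation code, and new password below are placeholders.
import base64
import hashlib
import hmac
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")
client_id = "example_client_id"
client_secret = "example_client_secret"   # only needed if the app client has a secret
username = "user@example.com"

def secret_hash(user, cid, secret):
    # Base64-encoded HMAC-SHA256 of (username + client ID), keyed with the client secret.
    digest = hmac.new(secret.encode(), (user + cid).encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Step 1: send the confirmation code to the user's verified email or phone number.
cognito.forgot_password(
    ClientId=client_id,
    Username=username,
    SecretHash=secret_hash(username, client_id, client_secret),
)

# Step 2: the user supplies the code and a new password.
cognito.confirm_forgot_password(
    ClientId=client_id,
    Username=username,
    ConfirmationCode="123456",
    Password="NewExamplePassword1!",
    SecretHash=secret_hash(username, client_id, client_secret),
)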
How do I troubleshoot retry and timeout issues when invoking a Lambda function using an AWS SDK?
"When I invoke my AWS Lambda function using an AWS SDK, the function times out, the API request stops responding, or an API action is duplicated. How do I troubleshoot these issues?"
"When I invoke my AWS Lambda function using an AWS SDK, the function times out, the API request stops responding, or an API action is duplicated. How do I troubleshoot these issues?Short descriptionThere are three reasons why retry and timeout issues occur when invoking a Lambda function with an AWS SDK:A remote API is unreachable or takes too long to respond to an API call.The API call doesn't get a response within the socket timeout.The API call doesn't get a response within the Lambda function's timeout period.Note: API calls can take longer than expected when network connection issues occur. Network issues can also cause retries and duplicated API requests. To prepare for these occurrences, make sure that your Lambda function is idempotent.If you make an API call using an AWS SDK and the call fails, the AWS SDK automatically retries the call. How many times the AWS SDK retries and for how long is determined by settings that vary among each AWS SDK.Default AWS SDK retry settingsNote: Some values may be different for other AWS services.AWS SDKMaximum retry countConnection timeoutSocket timeoutPython (Boto 3)depends on service60 seconds60 secondsJavaScript/Node.jsdepends on serviceN/A120 secondsJava310 seconds50 seconds.NET4100 seconds300 secondsGo3N/AN/ATo troubleshoot the retry and timeout issues, first review the logs of the API call to find the problem. Then, change the retry count and timeout settings of the AWS SDK as needed for each use case. To allow enough time for a response to the API call, add time to the Lambda function timeout setting.ResolutionLog the API calls made by the AWS SDKUse Amazon CloudWatch Logs to get details about failed connections and the number of attempted retries for each. For more information, see Accessing Amazon CloudWatch logs for AWS Lambda. Or, see the following instructions for the AWS SDK that you're using:AWS Lambda function logging in PythonLogging AWS SDK for JavaScript callsLogging AWS SDK for Java callsLogging with the AWS SDK for .NETLogging service calls (AWS SDK for Go)Example error log where the API call failed to establish a connection (connection timeout)START RequestId: b81e56a9-90e0-11e8-bfa8-b9f44c99e76d Version: $LATEST2018-07-26T14:32:27.393Z b81e56a9-90e0-11e8-bfa8-b9f44c99e76d [AWS ec2 undefined 40.29s 3 retries] describeInstances({})2018-07-26T14:32:27.393Z b81e56a9-90e0-11e8-bfa8-b9f44c99e76d { TimeoutError: Socket timed out without establishing a connection...Example error log where the API call connection was successful, but timed out after the API response took too long (socket timeout)START RequestId: 3c0523f4-9650-11e8-bd98-0df3c5cf9bd8 Version: $LATEST2018-08-02T12:33:18.958Z 3c0523f4-9650-11e8-bd98-0df3c5cf9bd8 [AWS ec2 undefined 30.596s 3 retries] describeInstances({})2018-08-02T12:33:18.978Z 3c0523f4-9650-11e8-bd98-0df3c5cf9bd8 { TimeoutError: Connection timed out after 30sNote: These logs aren't generated if the API request doesn't get a response within your Lambda function's timeout. If the API request ends because of a function timeout, try one of the following:Change the retry settings in the SDK so that all retries are made within the timeout.Increase the Lambda function timeout setting temporarily to allow enough time to generate SDK logs.Change the AWS SDK's settingsThe retry count and timeout settings of the AWS SDK should allow enough time for your API call to get a response. 
To determine the right values for each setting, test different configurations and get the following information:Average time to establish a successful connectionAverage time that a full API request takes (until it's successfully returned)If retries should be made by the AWS SDK or codeFor more information on changing retry count and timeout settings, see the following AWS SDK client configuration documentation:Python (Boto 3)JavaScript/Node.jsJava.NETGoThe following are some example commands that change retry count and timeout settings for each runtime.Important: Before using any of the following commands, replace the example values for each setting with the values for your use case.Example Python (Boto 3) command to change retry count and timeout settings# max_attempts: retry count / read_timeout: socket timeout / connect_timeout: new connection timeoutfrom botocore.session import Sessionfrom botocore.config import Configs = Session()c = s.create_client('s3', config=Config(connect_timeout=5, read_timeout=60, retries={'max_attempts': 2}))Example JavaScript/Node.js command to change retry count and timeout settings// maxRetries: retry count / timeout: socket timeout / connectTimeout: new connection timeoutvar AWS = require('aws-sdk');AWS.config.update({ maxRetries: 2, httpOptions: { timeout: 30000, connectTimeout: 5000 }});Example Java command to change retry count and timeout settings// setMaxErrorRetry(): retry count / setSocketTimeout(): socket timeout / setConnectionTimeout(): new connection timeoutClientConfiguration clientConfig = new ClientConfiguration(); clientConfig.setSocketTimeout(60000); clientConfig.setConnectionTimeout(5000);clientConfig.setMaxErrorRetry(2);AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider,clientConfig);Example .NET command to change retry count and timeout settings// MaxErrorRetry: retry count / ReadWriteTimeout: socket timeout / Timeout: new connection timeoutvar client = new AmazonS3Client( new AmazonS3Config { Timeout = TimeSpan.FromSeconds(5), ReadWriteTimeout = TimeSpan.FromSeconds(60), MaxErrorRetry = 2});Example Go command to change retry count settings// Create Session with MaxRetry configuration to be shared by multiple service clients.sess := session.Must(session.NewSession(&aws.Config{ MaxRetries: aws.Int(3),})) // Create S3 service client with a specific Region.svc := s3.New(sess, &aws.Config{ Region: aws.String("us-west-2"),})Example Go command to change request timeout settingsctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)defer cancel()// SQS ReceiveMessageparams := &sqs.ReceiveMessageInput{ ... }req, resp := s.ReceiveMessageRequest(params)req.HTTPRequest = req.HTTPRequest.WithContext(ctx)err := req.Send()(Optional) Change your Lambda function's timeout settingA low Lambda function timeout can cause healthy connections to be dropped early. 
If that's happening in your use case, increase the function timeout setting to allow enough time for your API call to get a response.Use the following formula to estimate the base time needed for the function timeout:First attempt (connection timeout + socket timeout) + Number of retries x (connection timeout + socket timeout) + 20 seconds additional code runtime margin = Required Lambda function timeoutExample Lambda function timeout calculationNote: The following calculation is for an AWS SDK that's configured for three retries, a 10-second connection timeout, and a 30-second socket timeout.First attempt (10 seconds + 30 seconds) + Number of retries [3 * (10 seconds + 30 seconds)] + 20 seconds additional code runtime margin = 180 secondsRelated informationInvoke (Lambda API reference)Error handling and automatic retries in AWS LambdaLambda quotasFollow"
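As a quick supplement to the timeout formula above, the following minimal Python sketch computes a suggested function timeout from hypothetical SDK settings. The input values and the 20-second margin are assumptions taken from the example calculation; replace them with your own measured settings:

# Hypothetical SDK settings; replace with the values you configured.
connection_timeout = 10   # seconds
socket_timeout = 30       # seconds
max_retries = 3
code_runtime_margin = 20  # extra seconds for your own code to run

# First attempt plus retries, each bounded by connection timeout + socket timeout.
required_timeout = (1 + max_retries) * (connection_timeout + socket_timeout) + code_runtime_margin
print(f"Suggested Lambda function timeout: {required_timeout} seconds")  # 180 seconds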
https://repost.aws/knowledge-center/lambda-function-retry-timeout-sdk
How do I assume an IAM role using the AWS CLI?
I want to assume an AWS Identity and Access Management (IAM) role using the AWS Command Line Interface (AWS CLI).
"I want to assume an AWS Identity and Access Management (IAM) role using the AWS Command Line Interface (AWS CLI).ResolutionTo assume an IAM role using the AWS CLI and have read-only access to Amazon Elastic Compute Cloud (Amazon EC2) instances, do the following:Note: If you receive errors when running AWS CLI commands, then confirm that you're running a recent version of the AWS CLI.Important: Running the commands the following steps shows your credentials, such as passwords, in plaintext. After you assume the IAM role, it’s a best practice to change your passwords.Create an IAM user with permissions to assume roles1.    Create an IAM user using the AWS CLI using the following command:Note: Replace Bob with your IAM user name.aws iam create-user --user-name Bob2.    Create the IAM policy that grants the permissions to Bob using the AWS CLI. Create the JSON file that defines the IAM policy using your favorite text editor. For example, you can use vim, a text editor that's commonly used in Linux, as follows:Note: Replace example with your own policy name, user name, role, JSON file name, profile name, and keys.vim example-policy.json3.    The contents of the example-policy.json file are similar to the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:Describe*", "iam:ListRoles", "sts:AssumeRole" ], "Resource": "*" } ]}For more information about creating IAM policies, see Creating IAM policies, Example IAM identity-based policies, and IAM JSON policy reference.Create the IAM policy1.    Use the following aws iam create-policy command:aws iam create-policy --policy-name example-policy --policy-document file://example-policy.jsonThe aws iam create-policy command outputs several pieces of information, including the ARN (Amazon Resource Name) of the IAM policy as follows:arn:aws:iam::123456789012:policy/example-policyNote: Replace 123456789012 with your own account.2.    Note the IAM policy ARN from the output and attach the policy to Bob using the attach-user-policy command. Then, check to make sure that the attachment is in place using the list-attached-user-policies command as follows:aws iam attach-user-policy --user-name Bob --policy-arn "arn:aws:iam::123456789012:policy/example-policy"aws iam list-attached-user-policies --user-name BobCreate the JSON file that defines the trust relationship of the IAM role1.    Create the JSON file that defines the trust relationship as follows:vim example-role-trust-policy.json2.    The contents of the example-role-trust-policy.json file are similar to the following:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": { "AWS": "123456789012" }, "Action": "sts:AssumeRole" }}This trust policy allows users and roles of account 123456789012 to assume this role if they allow the sts:AssumeRole action in their permissions policy. You can also restrict the trust relationship so that the IAM role can be assumed only by specific IAM users. You can do this by specifying principals similar to arn:aws:iam::123456789012:user/example-username. For more information, see AWS JSON policy elements: Principal.Create the IAM role and attach the policyCreate an IAM role that can be assumed by Bob that has read-only access to Amazon Relational Database Service (Amazon RDS) instances. Because of the IAM role being assumed by an IAM user, you must specify a principal that allows IAM users to assume that role. 
For example, a principal similar to arn:aws:iam::123456789012:root allows all IAM identities of the account to assume that role. For more information, see Creating a role to delegate permissions to an IAM user.1.    Create the IAM role that has read-only access to Amazon RDS DB instances. Attach the IAM policies to your IAM role according to your security requirements.The aws iam create-role command creates the IAM role and defines the trust relationship according to the JSON file that you created in the prior section. The aws iam attach-role-policy command attaches the AWS Managed Policy AmazonRDSReadOnlyAccess to the role. You can attach different policies (Managed Policies and Custom Policies) according to your security requirements. The aws iam list-attached-role-policies command shows the IAM policies that are attached to the IAM role example-role. See the following command examples:aws iam create-role --role-name example-role --assume-role-policy-document file://example-role-trust-policy.jsonaws iam attach-role-policy --role-name example-role --policy-arn "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess"aws iam list-attached-role-policies --role-name example-roleNote: Verify that Bob has read-only access to EC2 instances and can assume the example-role.2.    Create access keys for Bob with the following command:aws iam create-access-key --user-name BobThe AWS CLI command outputs an access key ID and a secret access key. Be sure to note these keys.Configure the access keys1.    To configure the access keys, use either the default profile or a specific profile. To configure the default profile, run aws configure. To create a new specific profile, run aws configure --profile example-profile-name. In this example, the default profile is configured as follows:aws configureAWS Access Key ID [None]: ExampleAccessKeyID1AWS Secret Access Key [None]: ExampleSecretKey1Default region name [None]: eu-west-1Default output format [None]: jsonNote: For Default region name, specify your AWS Region.Verify that the AWS CLI commands are invoked and then verify IAM user access1.    Run the aws sts get-caller-identity command as follows:aws sts get-caller-identityThe aws sts get-caller-identity command outputs three pieces of information including the ARN. The output shows something similar to arn:aws:iam::123456789012:user/Bob to verify that the AWS CLI commands are invoked as Bob.2.    Confirm that the IAM user has read-only access to EC2 instances and no access to Amazon RDS DB instances by running the following commands:aws ec2 describe-instances --query "Reservations[*].Instances[*].[VpcId, InstanceId, ImageId, InstanceType]"aws rds describe-db-instances --query "DBInstances[*].[DBInstanceIdentifier, DBName, DBInstanceStatus, AvailabilityZone, DBInstanceClass]"The aws ec2 describe-instances command should show you all the EC2 instances that are in the eu-west-1 Region. The aws rds describe-db-instances command must generate an access denied error message, because Bob doesn't have access to Amazon RDS.Assume the IAM roleDo one of the following:Use an IAM role by creating a profile in the ~/.aws/config file. For more information, see Using an IAM role in the AWS CLI.-or-Assume the IAM role by doing the following:1.    Get the ARN of the role by running the following command:aws iam list-roles --query "Roles[?RoleName == 'example-role'].[RoleName, Arn]"2.    The command lists IAM roles, but filters the output by role name. 
To assume the IAM role, run the following command:aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-SessionThe AWS CLI command outputs several pieces of information. Inside the credentials block you need the AccessKeyId, SecretAccessKey, and SessionToken. This example uses the environment variables RoleAccessKeyID, RoleSecretKey, and RoleSessionToken. Note that the timestamp of the expiration field is in the UTC time zone. The timestamp indicates when the temporary credentials of the IAM role expire. If the temporary credentials are expired, you must invoke the sts:AssumeRole API call again.Note: You can increase the maximum session duration expiration for temporary credentials for IAM roles using the DurationSeconds parameter.Create environment variables to assume the IAM role and verify access1.    Create three environment variables to assume the IAM role. These environment variables are filled out with the following output:export AWS_ACCESS_KEY_ID=RoleAccessKeyIDexport AWS_SECRET_ACCESS_KEY=RoleSecretKeyexport AWS_SESSION_TOKEN=RoleSessionToken        Note: For Windows systems, replace export with set in this command.2.    Verify that you assumed the IAM role by running the following command:aws sts get-caller-identityThe AWS CLI command should output the ARN as arn:aws:sts::123456789012:assumed-role/example-role/AWSCLI-Session instead of arn:aws:iam::123456789012:user/Bob to verify that you assumed the example-role.3.    Verify you created an IAM role with read-only access to Amazon RDS DB instances and no access to EC2 instances using the following commands:aws ec2 describe-instances --query "Reservations[*].Instances[*].[VpcId, InstanceId, ImageId, InstanceType]"aws rds describe-db-instances --query "DBInstances[*].[DBInstanceIdentifier, DBName, DBInstanceStatus, AvailabilityZone, DBInstanceClass]"The aws ec2 describe-instances command should generate an access denied error message. The aws rds describe-db-instances command should return the Amazon RDS DB instances. This verifies that the permissions assigned to the IAM role are working correctly.4.    To return to the IAM user, remove the environment variables as follows:unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKENaws sts get-caller-identityThe unset command removes the environment variables and the aws sts get-caller-identity command verifies that you returned as the IAM user Bob.Note: For Windows systems, set the environmental variables to empty strings to clear their contents as follows:SET AWS_ACCESS_KEY_ID=SET AWS_SECRET_ACCESS_KEY=SET AWS_SESSION_TOKEN=Related informationRoles terms and conceptscreate-roleCreating a role to delegate permissions to an AWS serviceFollow"
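If you prefer to assume the role programmatically instead of exporting environment variables, a minimal Python (Boto3) sketch looks like the following. The role ARN and session name are placeholders, and the same sts:AssumeRole permissions described above still apply:

import boto3

# Assume the role with the caller's current credentials (for example, Bob's access keys).
sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-role",  # placeholder ARN
    RoleSessionName="AWSCLI-Session",
)
creds = response["Credentials"]

# Build a session from the temporary credentials and verify the assumed identity.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])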
https://repost.aws/knowledge-center/iam-assume-role-cli
How can I troubleshoot API Gateway WebSocket API connection errors?
I tried to connect to my Amazon API Gateway WebSocket API but I received errors. How do I troubleshoot my WebSocket API connection?
"I tried to connect to my Amazon API Gateway WebSocket API but I received errors. How do I troubleshoot my WebSocket API connection?Short descriptionAPI Gateway WebSocket API connection errors might occur due to:Insufficient permissions to make the request to the backendIncorrect fields for the API ID, AWS Region, and API stageErrors in the backend integrationAWS Identity and Access Management (IAM) authentication errorsResolutionFollow these troubleshooting steps for your use case.Confirm that the WebSocket API has the required permissions to make a request to the backendAPI Gateway uses IAM roles, policies, tags, and AWS Lambda authorizers to control access to a WebSocket API. For more information, see Controlling and managing access to a WebSocket API in API Gateway.Also, make sure that the WebSocket API integration request is configured correctly.Confirm that the request is sent to the correct API ID, AWS Region, and API stageIn this example request URL, make sure that the following fields are correct:wss://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/productionThe WebSocket API ID "a1b2c3d4e5".The AWS Region "us-east-1".The API stage name "production" exists.Check CloudWatch logs for errorsFollow the instructions to turn on Amazon CloudWatch logs for troubleshooting API Gateway WebSocket APIs. If a Lambda function is integrated for the backend, check the CloudWatch logs for errors. For more information, see Accessing CloudWatch logs for AWS Lambda.Confirm that the API request is signed if the API method has IAM authentication turned onIf IAM authentication is turned on, then make sure that the API request is signed with Signature Version 4 (SigV4). For more information, see Signing AWS requests with Signature Version 4.To turn on IAM authentication for your API Gateway API, follow these steps:In the API Gateway console, choose the name of your API.In the Resources pane, choose a method (such as GET or POST) that you want to activate IAM authentication for.In the Method Execution pane, choose Method Request.Under Settings, for Authorization, choose the pencil icon (Edit). Choose AWS_IAM from the dropdown list, and then choose the check mark icon (Update).(Optional) Repeat steps 2-4 for each API method that you want to activate IAM authentication for.Deploy your WebSocket API for the changes to take effect.Related informationMonitoring WebSocket API execution with CloudWatch metricsUse API Gateway Lambda authorizersHow do I troubleshoot HTTP 403 Forbidden errors when using a Lambda authorizer with an API Gateway REST API?How do I troubleshoot issues when connecting to an API Gateway private API endpoint?Follow"
https://repost.aws/knowledge-center/api-gateway-websocket-error
How do I migrate to an Amazon RDS or Amazon Aurora DB instance using AWS DMS?
I want to migrate my database to Amazon Relational Database Service (Amazon RDS) or Amazon Aurora. How can I do this with minimal downtime?
"I want to migrate my database to Amazon Relational Database Service (Amazon RDS) or Amazon Aurora. How can I do this with minimal downtime?Short descriptionNote: If you're performing a homogeneous migration, use your engine’s native tools (like MySQL dump or MySQL replication) whenever possible.To migrate to an Amazon RDS DB instance using AWS DMS:Create a replication instanceCreate target and source endpointsRefresh the source endpoint schemasCreate a migration taskMonitor your migration taskYou can use these steps for all Amazon RDS and Amazon Aurora engine types, including Amazon RDS for Oracle and Amazon Aurora for MySQL DB instances.ResolutionNote: AWS DMS creates a table with a primary key on the target only when necessary before migrating table data. To generate a complete target schema, use the AWS Schema Conversion Tool (AWS SCT). For more information, see Converting schema.(Optional) Turn on logging with Amazon CloudWatchAmazon CloudWatch logs can alert you to potential issues when migrating. For more information, see Monitoring replication tasks using Amazon CloudWatch.Create a replication instanceOpen the AWS DMS console, and choose Replication Instances from the navigation pane.Choose Create replication instance.Enter your replication instance name, description, instance class, Amazon Virtual Private Cloud (Amazon VPC), and Multi-AZ preference.Note: -Choose an instance class that's sufficient for your migration workload. If the instance isn't sufficient for your workload, you can modify the replication instance later.From the Advanced section, choose your VPC security groups, or choose the default option.Choose Create replication instance.Create target and source endpointsOpen the AWS DMS console, and choose Endpoints from the navigation pane.Choose Create endpoint to create the source and target database.For Endpoint type, choose Source.Enter the endpoint's engine-specific information.Choose Run Test.After the test is complete, choose Save.Repeat steps 3-6, but for Endpoint type, choose Target.Note: Complete this step for both the target and the source.Refresh the source endpoint schemas<b></b>Open the AWS DMS console, and choose Endpoints from the navigation pane.Select the source endpoint, and choose Refresh schemas.Choose Refresh schemas.Note: You must refresh the source so that the source schemas appear in the table mappings when you create an AWS DMS task.Create a migration taskOpen the AWS DMS console, and choose Database migration tasks from the navigation pane.Choose Create task.Specify the Task identifier, Replication instance, Source database endpoint, Target database endpoint, and Migration type. Choose one of the following migration types:Migrate existing data only—Use this migration type for one-time migrations.Migrate existing data and replicate ongoing changes—Use this migration type to migrate large databases to the AWS Cloud with minimal downtime.Migrate ongoing replication changes—Use this migration type when you already have migrated the existing data and want to synchronize the source database with the target MySQL database hosted on the AWS Cloud.From the Task Settings section, modify the task as needed.From the Table mappings section, choose Guided UI.Choose Add new selection rule, and specify your Schema and Table name.Note: You can change or transform the source schema, table, or column name of some or all of the selected objects. To do this, expand the Transformation rules section. Choose Add new transformation rule. 
Then select the Target, Schema name, and Action.Choose Create task.Note: If you have large object (LOBs) columns, then use Limited LOB Mode. For more information, see Setting LOB support for source databases in an AWS DMS task.Monitor your migration taskUse the Task Monitoring view to monitor the migration tasks. You can see which tables have been migrated successfully and which tables are in the process of migration. Pay attention to the following message types:I - indicates an informational messageW - indicates warningsE - indicates errors that occurred when migrating the databaseVerify that the databases have been migrated successfully by connecting to the source and target instances through the terminal.Migrating OracleWhen you use Oracle as the source database, then AWS DMS migrates table to the specified target endpoint user. You can change the schema for an Oracle target by using transformation rules. For more information, see Changing the user and schema for an Oracle target.Migrating to MySQL/PostgreSQL/SQL ServerDuring migration, schemas and tables are migrated to the same name on the target. If you want to migrate tables to a different schema/table on the target, then create a mapping rule to specify the new schema/table on the target database.{ "rules": [{ "rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": { "schema-name": "test", "table-name": "%" }, "rule-action": "include" }, { "rule-type": "transformation", "rule-id": "2", "rule-name": "2", "rule-action": "rename", "rule-target": "schema", "object-locator": { "schema-name": "test" }, "value": "newtest" } ]}Check the logs to confirm that there are no errors.Monitor latency and compare data counts on the source and the target databases before switching to the new target database. For more information, see Troubleshooting migration tasks in AWS Database Migration Service.Related informationHow AWS Database Migration Service worksDatabase Migration step-by-step walkthroughsSources for data migrationTargets for data migrationFollow"
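If you script the migration task instead of using the console, the same selection and rename mapping can be passed with Python (Boto3). This is a sketch only; all ARNs are placeholders, and the mapping mirrors the example JSON above:

import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [
        {"rule-type": "selection", "rule-id": "1", "rule-name": "1",
         "object-locator": {"schema-name": "test", "table-name": "%"},
         "rule-action": "include"},
        {"rule-type": "transformation", "rule-id": "2", "rule-name": "2",
         "rule-action": "rename", "rule-target": "schema",
         "object-locator": {"schema-name": "test"}, "value": "newtest"},
    ]
}

# Create a full load + CDC task (equivalent to "Migrate existing data and replicate ongoing changes").
dms.create_replication_task(
    ReplicationTaskIdentifier="example-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)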
https://repost.aws/knowledge-center/migrate-mysql-rds-dms
How can I troubleshoot the error "InvalidPermission.NotFound" with the AWS Config rule vpc-sg-open-only-to-authorized-ports and Systems Manager Automation document AWS-DisablePublicAccessForSecurityGroup?
"I created the AWS Systems Manager Automation document AWS-DisablePublicAccessForSecurityGroup to disable SSH and RDP ports. However, auto-remediation fails with the AWS Config rule vpc-sg-open-only-to-authorized-ports. I receive an error similar to the following:"
"I created the AWS Systems Manager Automation document AWS-DisablePublicAccessForSecurityGroup to disable SSH and RDP ports. However, auto-remediation fails with the AWS Config rule vpc-sg-open-only-to-authorized-ports. I receive an error similar to the following:"An error occurred (InvalidPermission.NotFound) when calling the RevokeSecurityGroupIngress operation: The specified rule does not exist in this security group."  Short descriptionThe AWS Config rule checks that the security group allows inbound TCP or UDP traffic to 0.0.0.0/0. For example, to allow TCP ports 443 and 1020-1025 access to 0.0.0.0/0, specify the ports in the AWS Config rule parameter. The SSM Document AWS-DisablePublicAccessForSecurityGroup is limited to the default SSH 22 and RDP 3389 ports opened to all IP addresses (0.0.0.0/0), or a specified IPv4 address using the IpAddressToBlock parameter.ResolutionThe client error InvalidPermission.NotFound with the RevokeSecurityGroupIngress API action means that the target security group doesn't have an inbound rule, or isn't located in the default Amazon Virtual Private Cloud (Amazon VPC).Important: Before you begin, be sure that you installed and configured the AWS Command Line Interface (AWS CLI).To verify the error message, run the AWS CLI command describe-remediation-execution-status similar to the following:aws configservice describe-remediation-execution-status --config-rule-name vpc-sg-open-only-to-authorized-ports --region af-south-1 --resource-keys resourceType=AWS::EC2::SecurityGroup,resourceId=sg-1234567891234567891The inbound rules for the security group must specify open ports using one of the following patterns:0.0.0.0/0::/0SSH or RDP port + 0.0.0.0/0SSH or RDP port + ::/0To configure auto-remediation for other ports including 22 and 3389, you can use a custom SSM document to automate the process. For instructions, see Creating Systems Manager documents.Related informationWhy did the AWS Config auto remediation action for the SSM document AWS-ConfigureS3BucketLogging fail with the error "(MalformedXML)" when calling the PutBucketLogging API?How can I resolve the error "NoSuchRemediationConfigurationException" or "unexpected internal error" when trying to delete a remediation action in AWS Config?Follow"
https://repost.aws/knowledge-center/config-ssm-error-invalidpermission
How do I resolve the "Already Exists" error that I receive when I re-deploy my CDK code after the stack from the initial deployment is deleted?
I want to resolve the "Already Exists" error I receive when I'm re-deploying my AWS Cloud Development Kit (AWS CDK) code.
"I want to resolve the "Already Exists" error I receive when I'm re-deploying my AWS Cloud Development Kit (AWS CDK) code.Short descriptionMost stateful resources in the AWS CDK Construct Library accept the removalPolicy property with RETAIN as the default. Resources that don't have the removalPolicy set become orphan resources, and remain in the account after the stack is deleted. This occurs when the stack transitions to the DELETE_COMPLETE state. The behavior remains the same when the resource definition of similar resources is removed from the code during an update on the corresponding stack. If the retained resources are custom-named, then the "Already Exists" error appears when you re-deploy the same code.To resolve this error, complete the following actions depending on your use case:For unintentionally retained resources, manually delete the resources.For intentionally retained resources, change the name of the resource in the AWS CDK code to a unique value.Another method for intentionally retained resources is to delete the resource name from the AWS CDK code to let AWS CDK auto-generate a new name.Before deleting a stack, confirm that the removalPolicy is set to DESTROY from the resource.ResolutionNote: The following steps use an example Amazon Simple Storage Service (Amazon S3) bucket resource represented by the s3.Bucket class in AWS CDK. The removalPolicy of this resource in AWS CDK is set to RETAIN by default. This resource is retained in the account when its respective stack is deleted, or when the resource is removed during a stack update.Example:const s3Bucket = new s3.Bucket(this, 's3-bucket', { bucketName: ‘DOC-EXAMPLE-BUCKET1’, versioned: false, encryption: s3.BucketEncryption.S3_MANAGED });Manually delete the retained resource1.    Sign in to the AWS Management Console and access the corresponding service of the resources you don't want to retain.2.    Manually delete the resources that you don't want to retain.Note: For this example, delete the Amazon S3 bucket to delete the s3.bucket resource.3.    Re-deploy the AWS CDK code:cdk deployChange the name of the retained resource1.    Access the AWS CDK code of the resource that you want to change the name of.2.    Update the name of the resource to a unique value that doesn't conflict with the name of the retained resource.Note: For this example, update the bucketName parameter to change the name of the s3.Bucket resource.Example:const s3Bucket = new s3.Bucket(this, 's3-bucket', { bucketName: ‘EXAMPLE-NEW-NAME-S3-BUCKET’, versioned: false, encryption: s3.BucketEncryption.S3_MANAGED });Delete the resource name to allow AWS CDK to auto-generate a unique name1.    Remove the resource name from AWS CDK.Note: For this example, the bucketName property is removed to let AWS CDK auto-generate a new name.Example:const s3Bucket = new s3.Bucket(this, 's3-bucket', { versioned: false, encryption: s3.BucketEncryption.S3_MANAGED });2.    Re-deploy AWS CDK code:cdk deploySet the removalPolicy to DESTROY1.    Access the AWS CDK code of the resources that you don't want to retain.2.    Set the removalPolicy property to DESTROY:const s3Bucket = new s3.Bucket(this, 's3-bucket', { bucketName: ‘EXAMPLE-S3-BUCKET’, removalPolicy: RemovalPolicy.DESTROY });3.    Run cdk synth to access the AWS CloudFormation template, and then check that the DeletionPolicy and UpdateReplacePolicy is set to Delete:cdk synthFollow"
https://repost.aws/knowledge-center/cdk-already-exists-error
Why is my Amazon SNS topic not receiving EventBridge notifications?
I set up an Amazon EventBridge rule to send notifications to my Amazon Simple Notification Service (Amazon SNS) topic. Why isn't my Amazon SNS topic receiving the event notifications?
"I set up an Amazon EventBridge rule to send notifications to my Amazon Simple Notification Service (Amazon SNS) topic. Why isn't my Amazon SNS topic receiving the event notifications?ResolutionVerify that the EventBridge rule's targets are in the same AWS Region as the ruleThe targets you associate with a rule must be in the same Region as the rule.Note: To see the Region that an AWS resource is in, review the resource's Amazon Resource Name (ARN).Verify the cause of the issue by reviewing your EventBridge rule's "Invocations" and "FailedInvocations" metricsIn the CloudWatch console, review your EventBridge rule's Invocations and FailedInvocations metrics.If there are data points for both metrics, then the EventBridge rule notification tried to invoke the target but the invocation failed. To resolve the issue, you must grant EventBridge the required permissions to publish messages to your topic. For instructions, see the Confirm that you've granted EventBridge the required permissions to publish messages to your topic section of this article.If there are data points for the Invocations metric only, then the EventBridge rule notification didn't reach the target. To resolve the issue, correct the misconfiguration on the target.For more information, see View available metrics in the CloudWatch User Guide.Confirm that you've granted EventBridge the required permissions to publish messages to your topicYour Amazon SNS topic's resource-based policy must allow EventBridge to publish messages to the topic. Review your topic's AWS Identity and Access Management (IAM) policy to confirm that it has the required permissions, and add them if needed.Important: "events.amazonaws.com" must be listed as the "Service" value. "sns:Publish" must be listed as the "Action" value.To add the required permissions, see My rule runs, but I don't see any messages published into my Amazon SNS topic.Example IAM permissions statement that allows EventBridge to publish messages to an Amazon SNS topic{ "Sid": "AWSEvents_ArticleEvent_Id4950650036948", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": "sns:Publish", "Resource": "arn:aws:sns:us-east-1:123456789012:My_SNS_Topic"}(For topics with server-side encryption (SSE) activated) Confirm that your topic has the required AWS Key Management Service (AWS KMS) permissionsYour Amazon SNS topic must use an AWS KMS key that is customer managed. This AWS KMS key must include a custom key policy that gives EventBridge sufficient key usage permissions.To set up the required AWS KMS permissions, do the following:1.    Create a new AWS KMS key that is customer managed and includes the required permissions for EventBridge (events.amazonaws.com).2.    Configure SSE for your Amazon SNS topic using the custom AWS KMS key you just created.3.    Configure AWS KMS permissions that allow EventBridge to publish messages to your encrypted topic (events.amazonaws.com).Example IAM policy statement that allows EventBridge to publish messages to an encrypted Amazon SNS topic{ "Sid": "Allow CWE to use the key", "Effect": "Allow", "Principal": { "Service": "events.amazonaws.com" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey*" ], "Resource": "*"}Related informationGetting started with Amazon EventBridgeFollow"
https://repost.aws/knowledge-center/sns-not-getting-eventbridge-notification
How do I troubleshoot Application Load Balancer health check failures for Amazon ECS tasks on Fargate?
I want to resolve Application Load Balancer health check failures when running Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.
"I want to resolve Application Load Balancer health check failures when running Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.Short descriptionWhen Amazon ECS tasks fail Application Load Balancer health checks, you might receive one of the following errors from your Amazon ECS service event message:Request timed outHealth checks failed with no error codesHealth checks failed with 404 or 5xx error codesTarget is in an Availability Zone that is not turned on for the load balancerFor failed container health checks, see How do I troubleshoot the container health check failures for Amazon ECS tasks?If you're using Amazon ECS with Amazon Elastic Compute Cloud (Amazon EC2) container instances, then see the following documentation:How can I get my Amazon ECS tasks running using the Amazon EC2 launch type to pass the Application Load Balancer health check in Amazon ECS?For Amazon ECS tasks that stopped, see Checking stopped tasks for errors.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, confirm that you're running a recent version of the AWS CLI. In the following AWS CLI commands, replace the example values with your values.Request timed out errorCheck the security groups to make sure that the load balancer can make health check requests to the Fargate task. The Fargate task security group must allow inbound and outbound traffic on the container port that's specified in the task definition. The source must be the Application Load Balancer security group. The Application Load Balancer security group must allow outbound traffic to the Fargate task security group.Note: It's a best practice to configure different security groups for your Fargate task and load balancer to allow traffic between them.If the security groups allow communication between your Fargate task and Application Load Balancer, then check your HealthCheckTimeoutSeconds in your health check settings. Slightly increase the timeout seconds, if necessary.Note: Increase HealthCheckTimeoutSeconds only if your application takes a long time to respond to a health check.To check the average response time, run the following command:$ time curl -Iv http://<example-task-pvt-ip>:<example-port>/<example_healthcheck_path>Note: High resource utilization on tasks might cause slowness or a hung process and results in a health check failure.Health checks failed with no error codesExample health check failed error message:(service AWS-service) (port 80) is unhealthy in (target-group arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/aws-targetgroup/123456789) due to (reason Health checks failed)If you receive a similar error message, then check that the task quickly responds after it starts in Amazon ECS. Also, check that the application replies with the correct response code.Make sure that the task has time to respond after it starts in Amazon ECSTo make sure that the task has sufficient time to respond after starting, increase the healthCheckGracePeriodSeconds. 
This allows Amazon ECS to retain the task for a longer time period, and ignore unhealthy Elastic Load Balancing target health checks.Note: If you're creating a new service, then you can configure the health check grace period on the load balancer configuration page.To update the healthCheckGracePeriodSeconds for your existing Amazon ECS service, run the following command:$ aws ecs update-service --cluster <EXAMPLE-CLUSTER-NAME> --service <EXAMPLE-SERVICE-NAME> --region <EXAMPLE-REGION> --health-check-grace-period-seconds <example-value-in-seconds>Check that the application replies with the correct response codeTo confirm the response code that your application sent on the health check path, use the following methods.If you configured access logging on your application, then use ELB-HealthChecker/2.0 to check the response. If you're using AWS CloudWatch Logs, then use CloudWatch Logs Insights and run the following command:fields @timestamp, @message | sort @timestamp desc | filter @message like /ELB-HealthChecker/For Amazon EC2 instances in the same Amazon Virtual Private Cloud (Amazon VPC), run the following commands to confirm that your tasks respond to manual checks. To launch a new Amazon EC2 instance, see Tutorial: Get started with Amazon EC2 Linux instances.HTTP health checks$ curl -Iv http://<example-task-pvt-ip>:<example-port>/<example_healthcheck_path>HTTPS health checks$ curl -Iv https://<example-task-pvt-ip>:<example-port>/<example_healthcheck_path>If tasks quickly stop and you can't get the private IP addresses, launch a standalone task outside Amazon ECS to troubleshoot the issue. Launch the standalone task with the same task definition, and then run a curl command against its IP address. The standalone task doesn't stop because of a health check failure.Also, use Amazon ECS Exec to check listening ports on the container level. Using netstat, confirm that the application is listening on the appropriate port:$ netstat -tulpn | grep LISTENHealth checks failed with 404 or 5xx error codesReceiving health check failures with 404 or 5xx error codes indicates that the health check request was acknowledged, but received an invalid response code. The codes also indicate that the response code that the application sent doesn't match the success code that's configured on the target group level (parameter: Matcher).A 404 error code can occur when a health check path doesn't exist, or there's a typo in the configuration of the health check path. A 5xx error code can occur when the application that's inside the task isn't correctly replying to the request, or there's a processing error.To determine whether your application is starting successfully, check your application logs.Target is in an Availability Zone that is not turned on for the load balancerWhen an Availability Zone is turned on for your load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. If you register targets in an Availability Zone and don't turn on the Availability Zone, then the registered targets don't receive traffic. 
For more information, see Availability Zones and load balancer nodes.To identify the Availability Zones that your load balancer is configured for, run the following command:aws elbv2 describe-load-balancers --load-balancer-arns <EXAMPLE-ALB-ARN> --query 'LoadBalancers[*].AvailabilityZones[].{Subnet:SubnetId}'To identify the Availability Zones that your Fargate task is configured for, run the following command:aws ecs describe-services --cluster <EXAMPLE-CLUSTER-NAME> --service <EXAMPLE-SERVICE-NAME> --query 'services[*].deployments[].networkConfiguration[].awsvpcConfiguration.{Subnets:subnets}'Note: Use the update-service AWS CLI command to change the subnet configuration of an Amazon ECS service. Use the enable-availability-zones-for-load-balancer AWS CLI command to add an Availability Zone to an existing Application Load Balancer.Related informationTroubleshooting service load balancersHealth checks for your target groupsFollow"
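To see why individual Fargate task targets are failing, you can also inspect the target group's health descriptions with Python (Boto3); the target group ARN below is the placeholder from the example error message:

import boto3

elbv2 = boto3.client("elbv2")
target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/aws-targetgroup/123456789"  # placeholder

for desc in elbv2.describe_target_health(TargetGroupArn=target_group_arn)["TargetHealthDescriptions"]:
    health = desc["TargetHealth"]
    if health["State"] != "healthy":
        # Reason and Description explain failures such as response code mismatches or timeouts.
        print(desc["Target"]["Id"], health["State"], health.get("Reason"), health.get("Description"))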
https://repost.aws/knowledge-center/fargate-alb-health-checks
How can I resolve the error "NoSuchRemediationConfigurationException" or "unexpected internal error" when trying to delete a remediation action in AWS Config?
"When I delete a remediation action associated with an AWS config, I receive an error similar to one of the following:Using the AWS Command Line Interface (AWS CLI) command delete-remediation-configuration:"An error occurred (NoSuchRemediationConfigurationException) when calling the DeleteRemediationConfiguration operation: No RemediationConfiguration for rule exists."-or-Using the AWS Management Console:"An unexpected internal error occurred with AWS Config. Try again or contact AWS Support if the error persists."How do I resolve this error?"
"When I delete a remediation action associated with an AWS config, I receive an error similar to one of the following:Using the AWS Command Line Interface (AWS CLI) command delete-remediation-configuration:"An error occurred (NoSuchRemediationConfigurationException) when calling the DeleteRemediationConfiguration operation: No RemediationConfiguration for rule exists."-or-Using the AWS Management Console:"An unexpected internal error occurred with AWS Config. Try again or contact AWS Support if the error persists."How do I resolve this error?Short descriptionThis error message occurs because the PutRemediationConfiguration API call ResourceType parameter was specified in creation but not in deletion. If you use the ResourceType parameter in the PutRemediationConfiguration API, you must also use the ResourceType parameter in the DeleteRemediationConfiguration API.Note: If no resource type is provided for PutRemediationConfiguration, the default is ResourceType=*.ResolutionFollow these instructions to delete the resource type associated with your AWS Config rule.Important: Before you begin, be sure that you have the latest version of the AWS CLI installed and configured.1.    Run the AWS CLI command describe-remediation-configurations to identify the resource type that is used with PutRemediationConfiguration:Note: Replace example-config-rule-name with your AWS Config rule name.aws configservice describe-remediation-configurations --config-rule-names example-config-rule-name2.    You receive an output similar to the following:{ "RemediationConfigurations": [ { "TargetType": "SSM_DOCUMENT", "MaximumAutomaticAttempts": 5, "Parameters": { "AutomationAssumeRole": { "StaticValue": { "Values": [ "arn:aws:iam::example-accoun-Id:role/example-IAM-role" ] } }, "BucketName": { "ResourceValue": { "Value": "RESOURCE_ID" } }, "SSEAlgorithm": { "StaticValue": { "Values": [ "AES256" ] } } }, "Config-rule-name": "example-Config-rule-name", "ResourceType": "AWS::S3::Bucket", "TargetId": "AWS-EnableS3BucketEncryption", "RetryAttemptSeconds": 60, "Automatic": true, "Arn": "arn:aws:config:example-region:example-account-ID:remediation-configuration/example-config-rule-name/7467e289-f789-4b99-a848-deeeb3e90a0e" } ]}Note: In this example, the resource type is AWS::S3::Bucket.3.    Run the AWS CLI command delete-remediation-configuration:Note: Replace example-config-rule-name, example-resource-type, and example-region with your AWS Config rule name, resource type, and AWS Region.aws configservice delete-remediation-configuration --config-rule-name example-config-rule-name --resource-type example-resource-type --region example-regionThe remediation action associated with your AWS Config rule deletes successfully. You can now delete the AWS Config rule.Related informationHow can I troubleshoot failed remediation executions in AWS Config?delete-config-ruleDelete remediation action (console)Follow"
https://repost.aws/knowledge-center/config-remediation-error
Why is my Amazon Aurora DB instance in an incompatible parameter status?
"I have an Amazon Aurora DB instance that's in an incompatible-parameters status. Why is my DB instance in an incompatible-parameter status, and how can I resolve this issue?"
"I have an Amazon Aurora DB instance that's in an incompatible-parameters status. Why is my DB instance in an incompatible-parameter status, and how can I resolve this issue?Short descriptionThe incompatible-parameters status occurs when a parameter in the associated parameter group has a value that isn't compatible with your engine version. Or, the value isn't compatible with the current DB instance class and size.A DB instance might be in the incompatible-parameters state for one of these reasons:The sum of memory used by parameters in the cluster and the instance parameter groups exceeds the available memory on the instance.The database engine is incompatible with one or more of the parameter settings in the custom DB parameter group.The workload on the DB instance is memory intensive, and results in an out of memory (OOM) state. This happens even when memory related parameters are not set or are set to default values.If an Amazon Aurora for MySQL DB instance is in the incompatible-parameters status, then you can only reboot or delete your DB instance. You can't modify the DB instance or the engine version.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.To identify the root cause of the issue, first copy the incompatible parameter group. Then, compare the differences between the custom parameter values and the default values. For more information, see Working with parameter groups.After you identify the issue, resolve an incompatible-parameters status using one of these methods.Reset incompatible parameter valuesFollow these steps to reset only the incompatible parameter values:Open the Amazon RDS console, and then choose Parameter groups from the navigation pane.Select the incompatible parameter groups.Choose Parameter group actions, and then choose Edit.Enter valid (lower memory usage) parameter values, and then choose Save changes.Reboot the DB instance to apply the new settings.Reset all the parameters in the parameter group to their default valuesFollow these steps using the Amazon RDS console to reset all parameters in the parameter group to default values:Open the Amazon RDS console, and then choose Parameter groups from the navigation pane.Choose the parameter group that you want to reset.Choose Parameter group actions, and then choose Reset.Reduce memory for heavy workloadsFor memory intensive workloads, reduce the buffer pool size from the default value (75% of memory) to a smaller value. For example, you might use DBInstanceClassMemory*5/8 or DBInstanceClassMemory *1/2. To do this, modify the innodb_buffer_pool_size parameter.Note: If you modified or reset any static parameters, wait for the modification to be applied. Then, trigger a reboot of the DB instance.Related informationViewing Amazon RDS DB instance statusHow do I resolve issues with an Amazon RDS database that is in an incompatible-network stateFollow"
https://repost.aws/knowledge-center/aurora-mysql-incompatible-parameter
How can I automatically evaluate and remediate the increasing volume on an Amazon EC2 instance when free disk space is low?
"I want to see if the volumes attached to my Amazon Elastic Compute Cloud (Amazon EC2) instances need to be extended. Also, extending partitions and file systems at the operating system (OS) level is a time-consuming operation. How can I automate the whole process?"
"I want to see if the volumes attached to my Amazon Elastic Compute Cloud (Amazon EC2) instances need to be extended. Also, extending partitions and file systems at the operating system (OS) level is a time-consuming operation. How can I automate the whole process?Short descriptionYou can use a set of AWS Systems Manager Automation documents to evaluate and extend Amazon Elastic Block Store (Amazon EBS) volumes. The Automation documents work in unison, allowing you to investigate and optionally remediate low disk usage on an Amazon EC2 instance.The AWSPremiumSupport-TroubleshootEC2DiskUsage Automation document orchestrates the run of the other Systems Manager documents, based on the OS type.The first set of documents performs basic diagnostics and evaluation whether it’s possible to migrate by expanding the volume size:AWSPremiumSupport-DiagnoseDiskUsageOnWindowsAWSPremiumSupport-DiagnoseDiskUsageOnLinuxThe second set of documents takes the output of the first document and runs Python code to perform the volume modification. Then, the automation accesses the instance and extends the partition and file system of the volumes:AWSPremiumSupport-ExtendVolumesOnWindowsAWSPremiumSupport-ExtendVolumesOnLinuxUse the following steps to set up the required permissions and run the Automation document.ResolutionGrant permissionsYou must grant the following permissions to use the Automation documents.If you haven't already done so, create an AWS Identity and Access Management (IAM) instance profile for Systems Manager. Then, attach it to the target instance.To set up the AssumeRole, which is required to specify the AutomationAssumeRole parameter during the Automation document configuration process, follow these steps:1.    Create a policy on the JSON tab using the following JSON policy document:{ "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:DescribeVolumes", "ec2:DescribeVolumesModifications", "ec2:ModifyVolume", "ec2:DescribeInstances", "ec2:CreateImage", "ec2:DescribeImages", "ec2:DescribeTags", "ec2:CreateTags", "ec2:DeleteTags" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "iam:PassRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ssm:StartAutomationExecution", "ssm:GetAutomationExecution", "ssm:DescribeAutomationStepExecutions", "ssm:DescribeAutomationExecutions" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ssm:SendCommand", "ssm:DescribeInstanceInformation", "ssm:ListCommands", "ssm:ListCommandInvocations" ], "Resource": "*", "Effect": "Allow" } ]}2.    Create the assume role and attach the policy created in the previous step.3.    Modify this statement and replace "Resource": "*", with your ARN for the assume role.{ "Action": [ "iam:PassRole" ], "Resource": "*", "Effect": "Allow" },Run the Automation documentTo use the set of Systems Manager Automation documents, you need to run only the initial AWSPremiumSupport-TroubleshootEC2DiskUsage document. Follow these steps:1.    Open the Systems Manager console, and then choose Automation from the navigation pane.2.    Choose Execute automation.3.    Select the radio button for AWSPremiumSupport-TroubleshootEC2DiskUsage, and then choose Next.4.    For Execute automation document, select Simple execution.5.    Under Input parameters:For InstanceId, enter your Amazon EC2 instance ID.For AutomationAssumeRole, enter the ARN of the role that allows the Automation to perform the actions on your behalf. This is the assume role that you created when granting permissions.6.    
(Optional) Under Input parameters, specify the following inputs if your requirements differ from the default values:VolumeExpansionEnabled: Controls whether the document will extend the affected volumes and partitions (default: True)VolumeExpansionUsageTrigger: Minimum percentage of used partition space required to trigger expansion (default: 85)VolumeExpansionCapSize: Maximum size in GiB that the EBS volume will increase to (default: 2048)VolumeExpansionGibIncrease: Volume increase in GiB (default: 20)VolumeExpansionPercentageIncrease: Volume increase in percentage (default: 20)7.    Choose Execute.The console displays the Automation status.ExampleYour current volume is 30 GB and has 4 GB free, which means that you have 26 GB of used space. You specify the following input parameters:VolumeExpansionUsageTrigger: 85VolumeExpansionGibIncrease: 10VolumeExpansionPercentageIncrease: 15VolumeExpansionCapSize: 2048Outcome:The increase triggers, because 26 GB of used space (about 87% of the 30 GB volume) is above the 85% threshold specified for VolumeExpansionUsageTrigger.The volume increases by 10 GB. This is because you specified that the volume should increase by either 10 GB or by 15% of the current volume size (15% of 30 GB is 4.5 GB). The Automation document uses the larger net increase between VolumeExpansionGibIncrease and VolumeExpansionPercentageIncrease.The new volume size is 40 GB, which is within the specified VolumeExpansionCapSize of 2048.Related informationExtend a Linux file system after resizing a volumeExtend a Windows file system after resizing a volumeUse IAM to configure roles for AutomationFollow"
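The sizing logic from the example can be sanity-checked with a short Python sketch. Taking the larger of the two increase options and applying the cap are assumptions based on the behavior described above:

# Inputs from the example above.
current_size_gib = 30
used_gib = 26
usage_trigger_pct = 85      # VolumeExpansionUsageTrigger
gib_increase = 10           # VolumeExpansionGibIncrease
pct_increase = 15           # VolumeExpansionPercentageIncrease
cap_size_gib = 2048         # VolumeExpansionCapSize

if used_gib / current_size_gib * 100 >= usage_trigger_pct:
    # The document applies the larger of the fixed and percentage-based increases.
    increase = max(gib_increase, current_size_gib * pct_increase / 100)  # max(10, 4.5) = 10
    new_size = min(current_size_gib + increase, cap_size_gib)
    print(f"New volume size: {new_size} GiB")  # 40 GiB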
https://repost.aws/knowledge-center/ec2-volume-disk-space
How do I prevent "ThrottlingException" or "Rate exceeded" errors when I use AWS Systems Manager Parameter Store?
"When I use AWS Systems Manager Parameter Store, I receive a "ThrottlingException" error message indicating "Rate exceeded" similar to one of the following:An error occurred (ThrottlingException) when calling the GetParameters operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the GetParameter operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the GetParametersByPath operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the DescribeParameters operation (reached max retries: 2): Rate exceededWhy am I receiving "Rate exceeded" errors, and how can I resolve the issue?"
"When I use AWS Systems Manager Parameter Store, I receive a "ThrottlingException" error message indicating "Rate exceeded" similar to one of the following:An error occurred (ThrottlingException) when calling the GetParameters operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the GetParameter operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the GetParametersByPath operation (reached max retries: 4): Rate exceededAn error occurred (ThrottlingException) when calling the DescribeParameters operation (reached max retries: 2): Rate exceededWhy am I receiving "Rate exceeded" errors, and how can I resolve the issue?Short descriptionParameter Store API calls can't exceed the maximum allowed API request rate per account and per Region. This includes API calls from both the AWS Command Line Interface (AWS CLI) and the AWS Management Console. If API requests exceed the maximum rate, then you receive a "Rate exceeded" error, and further API calls are throttled.Parameter Store requests are throttled for each Amazon Web Services (AWS) account on a per-Region basis to help service performance. For more information about Parameter Store API maximum throughput quotas, see AWS Systems Manager endpoints and quotas.ResolutionTroubleshootingTo prevent or mitigate "ThrottlingException" or "Rate exceeded" errors, try the following troubleshooting steps:Reduce the frequency of the API calls.Stagger the intervals of the API calls so that they don’t all run at the same time.Use APIs that return more than one value. For example, GetParameters and GetParametersByPath support the call of 10 parameters with one API call.Implement error retries and exponential backoff when you make API calls.Increase Parameter Store throughput. Important: Increasing throughput incurs a charge on your AWS account. For more information, see AWS Systems Manager pricing.Note: You can increase throughput using the AWS Systems Manager console, the AWS CLI, or AWS Tools for Windows PowerShell. If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Request a service quota increaseIf you’ve tried all the above troubleshooting steps but still receive "Rate exceeded" errors, then you can request a service quota increase.Note: Before submitting a request, identify the API call and call rate.To request a service quota increase for Parameter Store, follow these steps:Open the AWS Support Center, and then choose Create case.Choose Service limit increase.For Limit type, choose EC2 Systems Manager.For Region, choose your AWS Region.For Resource Type, choose Parameter Store.Choose the Limit that you want to increase, and then enter the new limit value.In the Use case description text box, include the time frame related to the throttling issue and the reason for the quota increase request.Choose your preferred contact options, and then choose Submit.Related informationExponential backoff and jitterTroubleshooting Parameter StoreFollow"
https://repost.aws/knowledge-center/ssm-parameter-store-rate-exceeded
How do I configure Network Firewall standard rule groups and domain list rule group rules together?
I want to configure AWS Network Firewall standard rule group rules and domain list rule group rules to work together to control traffic as expected.
"I want to configure AWS Network Firewall standard rule group rules and domain list rule group rules to work together to control traffic as expected.Short descriptionYou can configure standard rule group rules to drop established TCP traffic. You can then configure domain list rule group rules to allow the TCP (TLS) flows sent to allowed domains from the domain list rule group. You do this by configuring a domain list rule group and standard rule group rules with a "flow" keyword.Note: The Amazon Virtual Private Cloud (Amazon VPC) console displays only previously configured rule options. It doesn't allow you to add rule options. For more information, see Standard stateful rule groups in AWS Network Firewall.You can use AWS CloudFormation or an API to specify rule options for your standard rule group rules. AWS Command Line Interface (AWS CLI) is used for the examples in this article:Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.ResolutionPrerequisitesBefore you configure your Network Firewall rules, review the following information:This article presents a method of configuring standard rule group rules that allows for entry of rule options. In this example, adding rule options to a standard rule group rule allows it to work with a domain list rule group. This allows you to better control your traffic.This article uses one of the distributed deployment models. This model protects traffic between a workload public subnet with a client Amazon Elastic Compute Cloud (Amazon EC2) instance and an internet gateway. The firewall policy rule order is set to default action order.The Amazon EC2 instance is allowed to send traffic on TCP port 443 in VPC Security Group and network ACL.The firewall rules used in the article are examples to refer to. You must make sure that the rules you configure for your firewall are suited to your specific need and work as expected.Note: In the code examples in this article, ellipses (...) have been used to shorten outputs.Common configuration of domain list rule group and standard rule group without flow keywordWhen you configure a domain list rule group and standard rule group, your configuration can be similar to the one detailed in this example. If you create a domain list rule group and standard rule group, then you must use a flow keyword. If you don't use a flow keyword, then you might encounter problems such as the one detailed in this example.In this example, the Amazon VPC console is used to create a domain list rule group. The rule allows HTTPS traffic to example.com.Domain name source: example.comSource IPs type: DefaultProtocol: HTTPsAction: AllowNote: A domain list rule group with the action set to Allow generates another rule. The rule is set to deny traffic of the specified protocol type that doesn't match the domain specifications. 
For more information, see Domain filtering.Using the Amazon VPC console to create a common configuration of a standard rule group rule produces output similar to the following table:ProtocolSourceDestinationSource portDestination portDirectionActionTCPAnyAnyAnyAnyForwardDropWhen sending a request to an allowed domain to test your rule configuration, the traffic is blocked and you receive a "connection timed out" error:$ curl -kv -so /dev/null https://example.com* Trying 93.184.216.34:443...* connect to 93.184.216.34 port 443 failed: Connection timed out* Failed to connect to example.com port 443 after 129180 ms: Connection timed out* Closing connection 0The configuration causes all TCP traffic to be dropped and for the connection to time out. This includes blocking TCP-based traffic to the allowed domain example.com.The domain list rule group rule to allow example.com over HTTPS fails because the TCP protocol is the first protocol to appear in the initial flow. The flow starts with the lower-layer TCP handshake and the deny rule is evaluated. However, there is still no TLS protocol to match, and so the drop rule matches. This causes all traffic to example.com to drop.Note: You can configure logging levels for your firewall's stateful engine, allowing you to access detailed information about filtered traffic. For more information, see Logging network traffic from AWS Network Firewall.Domain list rule group and standard rule group rule with flow keywordYou can use the Network Firewall describe-rule-group and update-rule-group commands to update your standard rule group rules to include an additional flow keyword.1.    Run the describe-rule-group command against the original Stateful rule group object. You need the UpdateToken value in order to run the update-rule-group command.Note: Part of the output from the following command is used as a JSON template for other adjustments later.$ aws network-firewall describe-rule-group --rule-group-arn "arn:aws:network-firewall:us-east-1:XXXXXXXX0575:stateful-rulegroup/stateful-rg-5-tuple" --output json{ "UpdateToken": "40b87af5-a20c-4f8c-8afd-6777c81add3c", (...) "RulesSource": { "StatefulRules": [{ "Action": "DROP", "Header": { "Protocol": "TCP", "Source": "Any", "SourcePort": "Any", "Direction": "FORWARD", "Destination": "Any", "DestinationPort": "Any" }, "RuleOptions": [{ "Keyword": "sid", "Settings": [ "5" ] }] }] } (...)}2.    Create a JSON rule file with a modified rule configuration. Run a command similar to the following to verify JSON rule file content:$ cat tcp-drop-rule-updated.json{ "RulesSource": { "StatefulRules": [ { "Action": "DROP", "Header": { "Direction": "FORWARD", "Protocol": "TCP", "Destination": "Any", "Source": "Any", "DestinationPort": "Any", "SourcePort": "Any" }, "RuleOptions": [ { "Keyword": "sid", "Settings": [ "5" ] }, { "Keyword": "flow", "Settings": [ "established, to_server" ] } ] } ] }}In this example, the flow keyword allows the TCP handshake to complete before evaluating the TCP drop rule when sending a request to example.com. After this, the rule default action order takes precedence. The domain list allow HTTPS rule for example.com matches, allowing the rest of the traffic through for that flow. Any traffic to not allowed domains is blocked, as well as any other established TCP traffic.3.    
Run the update-rule-group command using the UpdateToken value and JSON rule file to update standard rule group:$ aws network-firewall update-rule-group --rule-group-arn "arn:aws:network-firewall:us-east-1:XXXXXXXX0575:stateful-rulegroup/stateful-rg-5-tuple" --update-token 40b87af5-a20c-4f8c-8afd-6777c81add3c --rule-group file://tcp-drop-rule-updated.json --output jsonThe result looks similar to this output:{ "UpdateToken": "bf8fe6d4-f13e-406c-90c1-9e3bad2118a7", "RuleGroupResponse": {(...)}, "LastModifiedTime": "2023-02-07T14:12:14.993000+11:00" }}4.    Run the describe-rule-group command to verify the changes to the Stateful rule group:$ aws network-firewall describe-rule-group --rule-group-arn "arn:aws:network-firewall:us-east-1:XXXXXXXX0575:stateful-rulegroup/stateful-rg-5-tuple" --output jsonThe output looks similar to the following message:{(...) "RulesSource": { "StatefulRules": [ { "Action": "DROP", "Header": { "Protocol": "TCP", "Source": "Any", (...) }, "RuleOptions": [ { "Keyword": "sid", "Settings": [ "5" ] }, { "Keyword": "flow", "Settings": [ "established, to_server" ] (...) } }, "RuleGroupResponse": {(...) }, "LastModifiedTime": "2023-02-07T14:12:14.993000+11:00" }}Note: In the preceding example, "established, to_server" reflects the change from the update-rule-group command.5.    Verify that both the domain list rule group and standard rule group filter traffic correctly:$ curl -kv -so /dev/null https://example.com* Trying 93.184.216.34 :443...* Connected to example.com ( 93.184.216.34 ) port 443 (#0)(...)> GET / HTTP/1.1> Host: example.com(...)< HTTP/1.1 200 OK(...)In the previous example, the output shows that HTTPS traffic to the allowed domain example.com succeeds, as configured.In this next example, HTTPS traffic to domains that aren't allowed is blocked, as expected:$ curl -m 5 -kv -so /dev/null https://www.amazon.com* Trying 93.184.216.34 :443...* Connected to www.amazon.com ( 93.184.216.34 ) port 443 (#0)(...)* TLSv1.2 (OUT), TLS handshake, Client hello (1):(...)* Operation timed out after 5000 milliseconds with 0 out of 0 bytes received* Closing connection 0Other types of TCP traffic are also blocked based off the configuration:$ aws s3 ls --cli-read-timeout 30 --debug(...)Read timeout on endpoint URL: "https://s3.amazonaws.com/"Related informationHands-on walkthrough of the AWS Network Firewall flexible rules engine – Part 1Follow"
https://repost.aws/knowledge-center/network-firewall-configure-rule-groups
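If you prefer an SDK over the AWS CLI commands shown in the article above, the boto3 sketch below performs the same describe-rule-group and update-rule-group sequence, adding the flow keyword to the stateful drop rule. The rule group ARN and sid value follow the article's example and should be replaced with your own; treat this as a sketch rather than a drop-in script.
import boto3

nfw = boto3.client("network-firewall")
rule_group_arn = "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/stateful-rg-5-tuple"  # example ARN

# Fetch the current rule group and the UpdateToken required for the update call.
described = nfw.describe_rule_group(RuleGroupArn=rule_group_arn, Type="STATEFUL")
update_token = described["UpdateToken"]

# Same drop rule as before, with the additional "flow" option so the TCP handshake
# completes before the drop rule is evaluated.
rule_group = {
    "RulesSource": {
        "StatefulRules": [
            {
                "Action": "DROP",
                "Header": {
                    "Protocol": "TCP",
                    "Source": "Any",
                    "SourcePort": "Any",
                    "Direction": "FORWARD",
                    "Destination": "Any",
                    "DestinationPort": "Any",
                },
                "RuleOptions": [
                    {"Keyword": "sid", "Settings": ["5"]},
                    {"Keyword": "flow", "Settings": ["established, to_server"]},
                ],
            }
        ]
    }
}

updated = nfw.update_rule_group(
    UpdateToken=update_token,
    RuleGroupArn=rule_group_arn,
    RuleGroup=rule_group,
)
print(updated["UpdateToken"])  # a new token is returned after the update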
Why can't I connect to my EC2 instance?
I'm getting an error when trying to connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance. Why can't I connect to my EC2 instance?
"I'm getting an error when trying to connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance? Why can't I connect to my EC2 instance?ResolutionPrerequisitesTo connect to your EC2 instance, complete the following prerequisites:Verify that the security group attached to your instance allows access to port 22 for Linux and port 3389 for Windows.Verify that your network access control list (ACL) allows access to the instance.Verify that your routing table has a route for the connection.TroubleshootingIf the preceding prerequisites are met and you can't connect to the instance, do the following:1.    Verify that your EC2 instance is passing status checks. For more information. see the following:Why is my EC2 Linux instance unreachable and failing one or both of its status checks?Why is my EC2 Windows instance down with an instance status check failure?Why is my EC2 Windows instance down with a system status check failure or status check 0/2?2.    If your instance passed the status checks and you're getting connection errors on your instance when trying to connect, see the following:How do I troubleshoot problems connecting to my Amazon EC2 Linux instance using SSH?Troubleshoot connecting to your instance (Linux)Troubleshoot connecting to your Windows instanceHow do I troubleshoot authentication errors when I use RDP to connect to an EC2 Windows instance?Related informationTroubleshoot EC2 Linux instancesTroubleshoot EC2 Windows instancesFollow"
https://repost.aws/knowledge-center/ec2-instance-connection-troubleshooting
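To script the first troubleshooting step in the article above, checking the instance status checks, you could use a boto3 call similar to the following sketch. The instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# IncludeAllInstances=True also returns stopped instances instead of only running ones.
response = ec2.describe_instance_status(InstanceIds=[instance_id], IncludeAllInstances=True)
for status in response["InstanceStatuses"]:
    print("State:", status["InstanceState"]["Name"])
    print("System status:", status["SystemStatus"]["Status"])      # underlying host and network
    print("Instance status:", status["InstanceStatus"]["Status"])  # OS-level reachability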
How do I achieve path-based routing on an Application Load Balancer?
I'm running several microservices behind my Application Load Balancer. I want to forward requests to specific target groups based on the URL path.
"I'm running several microservices behind my Application Load Balancer. I want to forward requests to specific target groups based on the URL path.Short descriptionAn Application Load Balancer allows you to create a listener with rules that forwards requests to target groups based on the URL. This feature isn't available for other load balancer types, such as Classic Load Balancer, Network Load Balancer, and Gateway Load Balancer. The path pattern rules are applied only to the path of the URL and not to URL's query parameters. For more information on path patterns, see Path conditions.To establish path-based routing on your Application Load Balancer, do the following:Create a target group.Configure listener rules.Before creating the target groups, be sure that the following prerequisites are met:You launched the Amazon Elastic Compute Cloud (Amazon EC2) instances in an Amazon Virtual Private Cloud (Amazon VPC). For more information, see Tutorial: Get started with Amazon EC2 Linux instances.The security groups of these Amazon EC2 Instances allow access on the listener port and health check port.The application is deployed on the EC2 Instances that you intend to register with target groups. For example, see Tutorial: Install a LAMP web server on Amazon Linux 2022.You created an Application Load Balancer.ResolutionSuppose that you have two services, service A and service B, with applications running on these services on port 80. For example, service A runs an application on the path /svcA, and service B runs an application on path /svcB.Create a target groupAfter your instances are created, register these instances with a target group. Based on the listener rules configured on the load balancer, requests are forwarded to registered targets using the port and protocol that you specified for the target group. However, you can override the port information when you individually register targets. For more information, see Create a target group.For example, suppose that you create two target groups with Protocol as HTTP and Port as 80, each with an application deployed. For example, you register the EC2 instance that's running service A with target-group-A. For this target group, you can set HealthCheckProtocol as HTTP and HealthCheckPath as /svcA.You register the EC2 Instance that's running service B with target-group-B. For this target group, you can set HealthCheckProtocol as HTTP and HealthCheckPath as /svcB.You can add or remove targets to or from your target groups at any time. For more information, see Register targets with your target group.After you specify a target group in a rule for a listener, the load balancer continually monitors the health of all targets registered with the target group that are in the Availability Zone enabled for the load balancer. The load balancer routes requests to the registered targets that are healthy. For more information, see Health checks for your target groups.Configure listener rulesWhen you create a listener for an Application Load Balancer, you can define one or more rules in addition to the default rule. A rule consists of a priority, an action, and one or more conditions. You can't define conditions for the default rule. 
If conditions for none of the non-default rules are met, then the action for the default rule is performed.To learn more about rule priority, see Reorder rules.To learn more about rule actions, see Rule action types.To learn more about rule conditions, see Rule condition types.To implement path-based routing on an Application Load Balancer, you must configure listener rules. You must configure one rule for each path pattern based on which you want to route your requests.For example:Listener rule 1: If your request URL path contains the string /svcA, then forward the request to target-group-A. This is because target-group-A includes service A that runs an application on the given path.Listener rule 2: If your request URL path contains the string /svcB, then forward that request to target-group-B. This is because target-group-B includes service B that runs an application on the given path.To create a new HTTP listener, see Create an HTTP listener.To create a new HTTPS listener, see Create an HTTPS listener.To update listener rules with conditions and actions, do the following:Open the Amazon EC2 console.In the navigation pane, under Load Balancing, choose Load Balancers.Select the load balancer, and then choose Listeners.To update the listener, choose View/edit rules.Choose the Add rules icon (the plus sign) in the menu bar. This adds Insert Rule icons at the locations where you can insert a rule in the priority order.Choose one of the Insert Rule icons added in the previous step.To add a path-based rule for /svcA, choose Add condition, Path, and then enter the path pattern /svcA. To save the condition, choose the checkmark icon.To add a forward action, choose Add action, Forward to, and then choose the target group target-group-A.Choose Save.Repeat the preceding steps for the path /svcB with the following changes:For step 6, enter the path pattern /svcB.For step 7, choose the target group target-group-B.For more information, see Listener rules for your Application Load Balancer.Note: Path-based routing rules look for an exact match. In this example, path-based routing uses the path definitions /svcA and /svcB. If your application requires requests to be routed further down these paths, for example, /svcA/doc or /svcB/doc, then include a wildcard when you write the condition for the path-based routing rule. Use path patterns similar to /svcA* or /svcB* to be sure that any documents on these paths are accounted for when routing requests.Test path-based routingTo test this routing, copy the DNS name of your Application Load Balancer in a web browser and add the URL path /svcA or /svcB. When the Application Load Balancer listener receives the request, the listener forwards that request to the appropriate target group based on the path condition.For example, suppose that the DNS name of your Application Load Balancer is alb-demo-1234567890.us-west-2.elb.amazonaws.com:http://alb-demo-1234567890.us-west-2.elb.amazonaws.com/svcA must return service A.http://alb-demo-1234567890.us-west-2.elb.amazonaws.com/svcB must return service B.With path-based routing, your Application Load Balancer allows you to host multiple microservices behind a single load balancer using listener rules and target groups. Therefore, you can set up complex rules to route client requests to your applications. In addition to path-based rules, you can also route requests to particular applications based on host header, user-agent header, and query parameter values. 
For more information, see Advanced request routing for AWS Application Load Balancers.Related informationHow do I troubleshoot and fix failing health checks for Application Load Balancers?Troubleshoot your Application Load BalancersFollow"
https://repost.aws/knowledge-center/elb-achieve-path-based-routing-alb
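The listener rules described in the article above can also be created programmatically. The boto3 sketch below adds the two path-based rules from the example, using wildcard patterns as the article recommends; the listener and target group ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
listener_arn = "arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/alb-demo/EXAMPLE/EXAMPLE"  # placeholder
target_groups = {
    "/svcA*": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/target-group-A/EXAMPLE",  # placeholder
    "/svcB*": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/target-group-B/EXAMPLE",  # placeholder
}

priority = 10
for path_pattern, target_group_arn in target_groups.items():
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,  # each rule needs a unique priority on the listener
        Conditions=[{"Field": "path-pattern", "Values": [path_pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )
    priority += 10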
How do I create the correct EFS access point configuration to mount my file system using a Lambda function?
I want to create an Amazon Elastic File System (Amazon EFS) access point mount using AWS Lambda. How can I do this and what are the working and non-working access point configurations?
"I want to create an Amazon Elastic File System (Amazon EFS) access point mount using AWS Lambda. How can I do this and what are the working and non-working access point configurations?ResolutionPrerequisitesThe following are pre-requisites for mounting EFS access points with Lambda:The Lambda function's execution role must have the following elasticfilesystem permissions:elasticfilesystem:ClientMountelasticfilesystem:ClientWrite (not required for read-only connections)Your AWS Identify and Access Management (IAM) user must have the following permissions:elasticfilesystem:DescribeMountTargetsFor more information, see Configuring a file system and access point.Inbound NFS traffic (port 2049) must be allowed in the security group for the EFS file system.Creating an access point using LambdaOpen the Functions page of the Lambda console.Choose a function.Choose Configuration, File systems.Under File system, choose Add file system.Configure the following properties:EFS file system: The access point for a file system in the same VPC.Local mount path: The location where the file system is mounted on the Lambda function, starting with /mnt/. For example, /mnt/lambda.Amazon EFS automatically creates the root directory with configurable ownership and permissions only if you provide the following:OwnUidOwnGIDPermissions for the directory (creation info)Note: Amazon EFS treats a user or group ID set to 0 in the access point as the root user.For more information, see Creating the root directory for an access point.EFS access point configuration examplesNote: The root directory of the access point is /efsaccesspoint. Mounting file system fs-12345678:/ using this access point is the same as mounting fs-12345678:/efsaccesspoint without this access point.Working ConfigurationsConfiguration 1:Root directory Path: /efs ( /efs doesn’t exist)POSIX user: EMPTYCreation Info: 1000:1000(777)Configuration 2:Root directory Path: /efs ( /efs doesn’t exist)POSIX user: 1000:1000Creation Info: 1000:1000 (777,775,755)Configuration 3:Root directory Path: /efs ( /efs exists)POSIX user: 1000:1000Creation Info: EMPTYConfiguration 4:Root directory Path: /efs (/efs doesn’t exist)POSIX user: 0:0Creation Info: 1000:1000 (755)Configuration 5:Root directory Path: /efs ( /efs doesn’t exist)POSIX user: 0:0Creation Info: 1000:1000 (775)Configuration 6:Root directory Path: /efs ( /efs doesn’t exist)POSIX user: 0:0Creation Info: 1000:1000 (777)Non-working configurationsThe following access point configuration results in an error when accessing EFS using a Lambda function:Root directory Path: /efs ( /efs doesn’t exist)POSIX user: 1000:1000Creation Info: EMPTYYou must provide POSIX user information if your use case requires performing write operations from an AWS Lambda function to the EFS mounted path. If POSIX user information isn't provided, write operations fail with Permission denied errors. For instructions on adding POSIX user information, see the preceding section, Creating an access point using Lambda.Related informationWorking with Amazon EFS access pointsFollow"
https://repost.aws/knowledge-center/efs-mount-with-lambda-function
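As a sketch of one of the working configurations listed in the article above (a root directory /efs that doesn't yet exist, POSIX user 1000:1000, and creation info 1000:1000 with 755 permissions), the following boto3 code creates the access point and attaches it to a Lambda function. The file system ID, function name, and mount path are placeholders, and the function must already run in a VPC that can reach the file system's mount targets.
import boto3

efs = boto3.client("efs")
lambda_client = boto3.client("lambda")

access_point = efs.create_access_point(
    FileSystemId="fs-12345678",  # placeholder file system ID
    PosixUser={"Uid": 1000, "Gid": 1000},
    RootDirectory={
        "Path": "/efs",
        "CreationInfo": {"OwnerUid": 1000, "OwnerGid": 1000, "Permissions": "755"},
    },
)

# Attach the access point to the function; the local mount path must start with /mnt/.
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder function name
    FileSystemConfigs=[
        {"Arn": access_point["AccessPointArn"], "LocalMountPath": "/mnt/lambda"}
    ],
)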
How can I use the AWS CLI to create an AWS Backup plan or run an on-demand job?
I want to use the AWS Command Line Interface (AWS CLI) to create an AWS Backup plan.-or-I want to use the AWS CLI to run an on-demand job on AWS Backup.
"I want to use the AWS Command Line Interface (AWS CLI) to create an AWS Backup plan.-or-I want to use the AWS CLI to run an on-demand job on AWS Backup.ResolutionCreate an AWS Backup planNote: The following example AWS Backup plan is set up with a copy job configuration in the backup rule. With this configuration, you create a primary backup vault in the source AWS Region. The primary vault hosts the source recovery points. Then, you create a secondary vault in the destination Region. The secondary vault stores the recovery points that AWS Backup creates as part of the copy configuration in the backup plan.1.    Run the create-backup-vault command to create a primary vault in the source Region. Then, run the command again to create a secondary vault in the destination Region. In the following example commands, eu-west-1 is the source Region and eu-west-2 is the destination Region.aws backup create-backup-vault --backup-vault-name primary --region eu-west-1aws backup create-backup-vault --backup-vault-name secondary --region eu-west-2Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.2.    Create a JSON file with the options and parameters for your backup plan. Example:{ "BackupPlanName": "testplan", "Rules": [{ "RuleName": "HalfDayBackups", "TargetBackupVaultName": "primary", "ScheduleExpression": "cron(0 5/12 ? * * *)", "StartWindowMinutes": 480, "CompletionWindowMinutes": 10080, "Lifecycle": { "DeleteAfterDays": 30 }, "CopyActions": [{ "DestinationBackupVaultArn": "arn:aws:backup:eu-west-2:123456789:backup-vault:secondary", "Lifecycle": { "DeleteAfterDays": 30 } }] }]}Note: For the ScheduleExpression field, set the value based on the recovery point objective of your organization. For the Lifecycle field, which is optional, you can enter a value based on the retention policy of your backup strategy.3.    After you create the JSON file, run the create-backup-plan command. Then, pass the JSON file as an input parameter:aws backup create-backup-plan --backup-plan file://4.    In the output of the command, note the value for BackupPlanId.5.    Create a JSON file that sets the parameters for assigning resources to the backup plan, similar to the following:Note: You can use Amazon Resource Names (ARNs), tags, or both, to specify resources for a backup plan. The following example uses both an ARN and tags.{ "SelectionName": "Myselection", "IamRoleArn": "arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole", "Resources": ["arn:aws:ec2:eu-west-1:123456789:volume/vol-0abcdef1234"], "ListOfTags": [{ "ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "yes" }]}6.    After you create the JSON file, run the create-backup-selection command. Then, pass the JSON file as an input parameter:Note: For the value of --backup-plan-id, enter the BackupPlanId that you got in step 4.aws backup create-backup-selection --backup-plan-id abcd-efgh-ijkl-mnop --backup-selection file://Run an on-demand job on AWS BackupTo run an on-demand backup job, run the start-backup-job command. 
The following example command runs a backup job for the resource vol-0abcdef1234:aws backup start-backup-job --backup-vault-name primary --resource-arn arn:aws:ec2:eu-west-1:123456789:volume/vol-0abcdef1234 --iam-role-arn arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole --idempotency-token 623f13d2-78d2-11ea-bc55-0242ac130003 --start-window-minutes 60 --complete-window-minutes 10080 --lifecycle DeleteAfterDays=30 --region eu-west-1Note: The preceding command includes a value for --idempotency-token. This value is a unique string that you provide to distinguish between StartBackupJob calls. On a Linux operating system, you can generate a unique identifier by running the uuid command:uuid -rTo run an on-demand copy job, run the start-copy-job command. The following example command runs a job that copies the recovery point for snap-0abcdaf2247b33dbc from the source vault named primary to a destination vault called secondary:aws backup start-copy-job --recovery-point-arn arn:aws:ec2:eu-west-1::snapshot/snap-0abcdaf2247b33dbc --source-backup-vault-name primary --destination-backup-vault-arn arn:aws:backup:eu-west-2:123456789:backup-vault:secondary --iam-role-arn arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole --idempotency-token 5aac8974-78d2-11ea-bc55-0242ac130003 --lifecycle DeleteAfterDays=30 --region eu-west-1To initiate a restore job, run the start-restore-job command. To initiate a restore job for an Amazon Elastic Block Store (Amazon EBS) volume, follow these steps:1.    Run the get-recovery-point-restore-metadata command on the recovery point that you want to restore:aws backup get-recovery-point-restore-metadata --backup-vault-name primary --recovery-point-arn arn:aws:ec2:eu-west-1::snapshot/snap-0abcdaf2247b33dbc2.    In the output of the command, note the values for volume ID and encryption.3.    Create a JSON file that sets the parameters for the required --metadata option of the start-restore-job command. For encrypted and volumeId, enter the values that you got in step 2.{ "availabilityZone":"eu-west-1a", "encrypted":"false", "volumeId":"vol-0ck270d4c0b2e44c9", "volumeSize":"100", "volumeType":"gp2"}4.    After you create the JSON file, run the start-restore-job command. Then, pass the JSON file as an input parameter:aws backup start-restore-job --recovery-point-arn arn:aws:ec2:eu-west-1::snapshot/snap-0abcdaf2247b33dbc --metadata file:// --iam-role-arn arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole --idempotency-token 52e602ce-78d2-11ea-bc55-0242ac130003 --resource-type EBS --region eu-west-1To initiate a restore for an Amazon Elastic File System (Amazon EFS), see How do I restore an Amazon EFS file system from an AWS Backup recovery point using the AWS CLI?Follow"
https://repost.aws/knowledge-center/aws-backup-cli-create-plan-run-job
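The same on-demand backup job can be started from an SDK. The boto3 sketch below mirrors the start-backup-job AWS CLI example in the article above, reusing its vault name, resource ARN, and role ARN, and generating an idempotency token with uuid.
import uuid
import boto3

backup = boto3.client("backup", region_name="eu-west-1")

response = backup.start_backup_job(
    BackupVaultName="primary",
    ResourceArn="arn:aws:ec2:eu-west-1:123456789:volume/vol-0abcdef1234",
    IamRoleArn="arn:aws:iam::123456789:role/service-role/AWSBackupDefaultServiceRole",
    IdempotencyToken=str(uuid.uuid4()),  # unique string that distinguishes StartBackupJob calls
    StartWindowMinutes=60,
    CompleteWindowMinutes=10080,
    Lifecycle={"DeleteAfterDays": 30},
)
print(response["BackupJobId"])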
Why do I receive an Access Denied error when I try to access Amazon S3 using an AWS SDK?
"I can access my Amazon Simple Storage Service (Amazon S3) resources when I use the AWS Command Line Interface (AWS CLI). But, I get an Access Denied error when I use an AWS SDK. How can I fix this?"
"I can access my Amazon Simple Storage Service (Amazon S3) resources when I use the AWS Command Line Interface (AWS CLI). But, I get an Access Denied error when I use an AWS SDK. How can I fix this?ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Verify your AWS CLI and the AWS SDK credentialsFirst, check that the AWS CLI and the AWS SDK that you're using are configured with the same credentials. To do this, follow these steps:To get the credentials configured on AWS CLI, run this command:aws iam list-access-keysIf you're using an AWS Identity and Access Management (IAM) role associated with the AWS CLI, run this command to get the role:aws sts get-caller-identityTo get the credentials configured on the AWS SDK that you're using, run a GetCallerIdentity call using your AWS Security Token Service (AWS STS) client. For example, if you're using AWS SDK for Python (Boto3), run get_caller_identity.If the AWS CLI and the AWS SDK use different credentials, then use the AWS SDK with the credentials that are stored on the AWS CLI.Troubleshoot AWS CLI and SDK requests to Amazon S3If the credentials used by the CLI and the AWS SDK are the same, then continue to troubleshoot by asking these questions:Are the CLI and SDK requests to S3 coming from the same source? That is, check if the requests are from the same Amazon Elastic Compute Cloud (Amazon EC2) instance.If requests are coming from the same source, is SDK using the intended credentials? For example, if you use AWS SDK for Python (Boto3), the SDK allows you to configure credentials using multiple methods. This means that Boto3 looks in multiple locations for credentials in a specific order. If any incorrect credentials are specified early on, these credentials are used. For more information about the order that Boto3 follows when looking for credentials, see Credentials on the Boto3 SDK website.Check that your VPC endpoints allow requests to S3If requests are sent from different sources, check whether the source using the SDK is sending requests through a VPC endpoint. Then, verify that the VPC endpoint allows the request that you're trying to send to Amazon S3.The VPC endpoint policy in this example allows download and upload permissions for DOC-EXAMPLE-BUCKET. If you're using this VPC endpoint, then you're denied access to any other bucket.{ "Statement": [ { "Sid": "Access-to-specific-bucket-only", "Principal": "*", "Action": [ "s3:GetObject", "s3:PutObject" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ] } ]}If you don't find any issues in your credentials or source, then review some of the reasons why an Access Denied error might be returned by S3. For more information, see How do I troubleshoot 403 Access Denied errors from Amazon S3?Related informationIdentity and access management in Amazon S3Follow"
https://repost.aws/knowledge-center/s3-access-denied-aws-sdk
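To perform the credential comparison described in the article above from the SDK side, a short boto3 check like the following prints the identity that the SDK resolves through its credential chain, so you can compare it with the output of aws sts get-caller-identity from the CLI.
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()

# Compare these values with `aws sts get-caller-identity` run from the same machine.
print("Account:", identity["Account"])
print("ARN:", identity["Arn"])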
How do I redirect an apex domain to its subdomain or any other domain using S3 and Route 53?
"I want to redirect internet traffic from my root (apex) domain (for instance, example.com) to its subdomain (for instance, www.example.com) using Amazon Simple Storage Service (Amazon S3). Or, I want to redirect internet traffic from my apex domain to another domain (for instance, example.net) using Amazon S3."
"I want to redirect internet traffic from my root (apex) domain (for instance, example.com) to its subdomain (for instance, www.example.com) using Amazon Simple Storage Service (Amazon S3). Or, I want to redirect internet traffic from my apex domain to another domain (for instance, example.net) using Amazon S3.ResolutionPrerequisitesYou have a hosted zone for your apex domain in Amazon Route 53.You have permissions to create records in the hosted zone for the apex domain.You have permissions to create S3 buckets.An S3 bucket with the exact same name as your apex domain doesn't already exist.Note: Amazon S3 website endpoints don't support HTTPS. So, redirection works for HTTP requests only. To redirect both HTTP and HTTPS requests, use other methods, such as redirecting requests using an Application Load Balancer or using Amazon CloudFront.Use the following procedure to redirect your domain using Amazon S3. For example, to redirect requests for the apex domain example.com to its subdomain www.example.com, use following steps:In the Amazon S3 console, create an S3 bucket with the exact name of your apex domain. For instance example.com.Note: S3 bucket names are globally unique. If the bucket name that you need is already in use, then you can't use Amazon S3 for redirection. In this case, consider other methods, such as configuring redirection using an Application Load Balancer or using Amazon CloudFront with an edge function.Select the bucket that you created, and then choose Properties.Under Static website hosting, choose Edit.Choose Redirect requests for an object.For Host name, enter the website that you want to redirect to. For instance, www.example.com.For Protocol, choose the protocol for the redirected requests (none, HTTP, or HTTPS).Note: If you don't specify a protocol, then the default option is none.Choose Save changes.In the Route 53 console, select the hosted zone for your apex domain. For instance, example.com.Create an A-Alias record for the apex domain in the selected hosted zone with the following values:Record name: Leave this field blank.Record Type: Choose A – IPv4 address.Route traffic to: Choose Alias to S3 website endpoint.Region: Choose the Region where your S3 bucket is located.Enter S3 Endpoint: From the dropdown list, choose the S3 bucket that you created. For instance example.com. Make sure that the S3 bucket name exactly matches the name of the hosted zone for your apex domain.Routing policy: Choose Simple.Evaluate Health Target: Choose No, and then choose Create Records.To validate the redirection, open your apex domain in a browser. Or, use the following curl command to check the HTTP status code for the response and the value of the Location header in the response. A successful redirection returns the HTTP 301 Moved Permanently status code and the Location header value has a URL for the domain that you're redirecting to.curl -i -s example.com | grep -E "HTTP|Location" HTTP/1.1 301 Moved Permanently Location: http://www.example.com/Related informationHow can I redirect one domain to another in Route 53?Redirect requests for your bucket's website endpoint to another bucket or domainFollow"
https://repost.aws/knowledge-center/redirect-domain-route-53
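The console steps in the article above can also be expressed with boto3, as in the sketch below: the bucket website configuration redirects all requests to www.example.com, and an alias A record points the apex domain at the S3 website endpoint. The hosted zone ID of your domain is a placeholder, and both the website endpoint DNS name and its fixed alias hosted zone ID vary by Region, so verify them in the Amazon S3 endpoints table for the Region your bucket is in (the us-east-1 values shown are assumptions for the example).
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="example.com",  # bucket name must exactly match the apex domain
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": "www.example.com", "Protocol": "http"}
    },
)

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",  # placeholder: hosted zone ID for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    # Region-specific S3 website endpoint values; verify for your bucket's Region.
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)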
How do I view CPU and memory usage for my Amazon Aurora DB cluster?
I want to view the CPU and memory usage for my Amazon Aurora DB cluster. How can I do this using Enhanced Monitoring?
"I want to view the CPU and memory usage for my Amazon Aurora DB cluster. How can I do this using Enhanced Monitoring?ResolutionAmazon Aurora automatically sends metric data for your DB instance to Amazon CloudWatch. You can view the metric data in the Amazon Relational Database Service (Amazon RDS) console. To view Aurora metrics in the CloudWatch console, see Overview of monitoring metrics in Amazon Aurora.To view Enhanced Monitoring metrics, you must first turn on Enhanced Monitoring.You can view the available metrics for your DB cluster in the Amazon RDS by doing the following:Open the Amazon RDS console.Choose Databases from the navigation pane.Select your DB instance.Choose the Monitoring tab.From the Monitoring menu, choose CloudWatch, Enhanced Monitoring, or OS process list.In the operating system (OS) process list section of Enhanced Monitoring, review the OS processes and RDS processes. Confirm the percentage of CPU use of a mysqld or Aurora process. These metrics can help you confirm whether any increase in CPU utilization is caused by OS or by RDS processes. Or, you can use these metrics to monitor any CPU usage increases caused by mysqld or Aurora. You can also see the division of CPU use by reviewing the metrics for cpuUtilization. For more information, see the Monitoring OS metrics with Enhanced Monitoring.Related informationMonitoring metrics in an Amazon Aurora clusterHow can I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL instance?Follow"
https://repost.aws/knowledge-center/view-cpu-memory-aurora
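In addition to the console views described in the article above, the standard CloudWatch metrics for an instance can be pulled programmatically. The sketch below reads one hour of average CPUUtilization and FreeableMemory for a DB instance; the instance identifier is a placeholder. (Enhanced Monitoring OS metrics are delivered to CloudWatch Logs rather than to these standard metrics.)
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.utcnow()
start = end - timedelta(hours=1)

for metric_name in ("CPUUtilization", "FreeableMemory"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-aurora-instance-1"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(metric_name, point["Timestamp"], round(point["Average"], 2))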
How can I redirect one domain to another domain using an Application Load Balancer?
I want to redirect one domain to another domain using an Application Load Balancer.
"I want to redirect one domain to another domain using an Application Load Balancer.Short descriptionThe Application Load Balancer service supports redirection of domain names as well as redirection from HTTP to HTTPS. If you have a domain that points to an Application Load Balancer, then it's a best practice to configure redirection using the Application Load Balancer rather than Amazon Simple Storage Service (Amazon S3).ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.If you're using an Application Load Balancer as part of your configuration, you can use it to redirect one domain to another:Open the Amazon Elastic Compute Cloud (Amazon EC2) console.On the navigation pane, choose Load Balancers under Load Balancing.Select your load balancer, and then choose Listeners.Choose View/edit rules for the load balancer listener that you want to use.Choose the Add rule icon (the plus sign).Choose Insert Rule.Choose Add condition.In the conditions section (IF), choose Add condition.Choose Host header, and then enter your hostname (for example, example.com).To save, choose the checkmark icon.In the actions section (THEN), choose Add action.Choose Redirect to.Specify the protocol and port, as required by your use case.Change Original host, path, query to Custom host, path, query.For Host, enter example2.com.For Path and Query, keep the default values (unless your use case requires you to change them).Set the Response to HTTP 301 "Permanently moved" or HTTP 302 "Found".To save, choose the checkmark icon.The THEN section now appears:Redirect to https://example2.com:443/#{path}?#{query}Status code: HTTP_301Choose Save.Note: If both domains point to the same Application Load Balancer, be sure that you:Have separate certificates for both domains, ORUse a Subject Alternative Name (SAN) certificate to validate the domainsTo confirm that the redirect is working:1.    In the AWS CLI, use the curl function.curl -Iv https://example1.com -L* Rebuilt URL to: https://example1.com/. . . * Connected to example1.com (1.2.3.4) port 443 (#0)<SSL handshake> > Host: example1.com. ———> Host name is example1.com > User-Agent: curl/7.61.1> Accept: */*> * Connection state changed (MAX_CONCURRENT_STREAMS == 128)!< HTTP/2 301 ———> ALB does redirection < server: awselb/2.0< date: Fri, 06 Mar 2020 09:18:33 GMT< content-type: text/html< content-length: 150 < location: https://example2.com:443/. ——> redirected to “example2.com” < * Issue another request to this URL: 'https://example2.com:443/‘. ———> Curl initiates another request that is to example2.com * Trying 34.195.219.169... * TCP_NODELAY set<SSL handshake> > Host: example2.com. ———> Host name has changed to example2.com > User-Agent: curl/7.61.1> Accept: */*> * Connection state changed (MAX_CONCURRENT_STREAMS == 128)!< HTTP/2 200 ——> We got a response2.    In your internet browser, enter example1.com and confirm that it redirects to example2.com.Note: Application Load Balancer supports only 301 and 302 redirects. These redirects allow the client to change the HTTP method from POST to GET in subsequent requests. If a 307 redirect is needed, then the redirect must come through the target application.Related informationHow do I redirect an apex domain to its subdomain or any other domain using S3 and Route 53?Application load balancers now support multiple TLS certificates with smart selection using SNIFollow"
https://repost.aws/knowledge-center/elb-redirect-to-another-domain-with-alb
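The same host-based redirect rule can be created with boto3, as in the sketch below. The listener ARN and rule priority are placeholders; the redirect configuration mirrors the values used in the console steps above.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/my-alb/EXAMPLE/EXAMPLE",  # placeholder
    Priority=5,  # placeholder; must be unique on the listener
    Conditions=[{"Field": "host-header", "Values": ["example1.com"]}],
    Actions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Host": "example2.com",
            "Port": "443",
            "Path": "/#{path}",
            "Query": "#{query}",
            "StatusCode": "HTTP_301",  # or HTTP_302
        },
    }],
)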
How do I troubleshoot a Windows WorkSpace that's marked as unhealthy?
My Amazon WorkSpaces Windows WorkSpace is marked as unhealthy. How can I fix this?
"My Amazon WorkSpaces Windows WorkSpace is marked as unhealthy. How can I fix this?Short descriptionThe WorkSpaces service periodically checks the health of a WorkSpace by sending the WorkSpace a status request. A WorkSpace is marked as unhealthy if the WorkSpaces service doesn't receive a response in a timely manner.Common causes for this issue are:An application on the WorkSpace is blocking the network connection between the WorkSpaces service and the WorkSpace.High CPU usage on the WorkSpace.The agent or service that responds to the WorkSpaces service isn't running.The computer name of the WorkSpace changed.The WorkSpaces service is in a stopped state or is blocked by antivirus software.ResolutionTry the following troubleshooting steps to return the WorkSpace to a healthy state:Reboot the WorkSpaceFirst, reboot the WorkSpace from the WorkSpaces console.If rebooting the WorkSpace doesn't resolve the issue, connect to the WorkSpace by using a Remote Desktop Protocol (RDP) client.If the WorkSpace is unreachable by using the RDP client, follow these steps:Restore the WorkSpace to roll back to the last known good snapshot.If the WorkSpace is still unhealthy, rebuild the WorkSpace.If you can connect to your WorkSpace using RDP, then verify the following to fix the issue:Verify CPU usageOpen the Windows Task Manager to determine if the WorkSpace is experiencing high CPU usage. If it is, try any of the following troubleshooting steps to resolve the issue:Stop any service that's consuming high CPU.Resize the WorkSpace to a compute type that's greater than what is currently used.Reboot the WorkSpace.Note: To diagnose high CPU usage, see How do I diagnose high CPU utilization on my EC2 Windows instance when my CPU is not being throttled?Verify the WorkSpace's computer nameIf you changed the computer name of the WorkSpace, change it back to the original name:Open the WorkSpaces console, and then expand the unhealthy WorkSpace to show details.Copy the Computer Name.Connect to the WorkSpace using RDP.Open a command prompt, and then enter hostname to view the current computer name.If the name matches the Computer Name from step 2, skip to the next troubleshooting section.If the names don't match, enter sysdm.cpl to open system properties. Then, follow the remaining steps in this section.Choose Change, and then paste the Computer Name from step 2.If prompted, enter your domain user credentials.Confirm that the WorkSpaces services are running and responsiveIf WorkSpaces services are stopped or aren't running, then the WorkSpace is unhealthy. Follow these steps:From Services, verify that the WorkSpaces services named SkyLightWorkspacesConfigService, WSP Agent (for the WorkSpaces Streaming Protocol [WSP] WorkSpaces), and PCoIP Standard Agent for Windows are running. Be sure that the start type for both services is set to Automatic. If any of the three services aren't running, start the service.Verify that any endpoint protection software, such as antivirus or anti-malware software, explicitly allows the WorkSpaces service components.If the status of the three services is Running, then the services might be blocked by antivirus software. To fix this, set up an allow list for the locations where the service components are installed. For more information, see Required configuration and service components for WorkSpaces.If WorkSpaces Web Access is turned on for the WorkSpace, verify that the STXHD Hosted Application Service is running. 
Make sure that the start type is set to Automatic.Verify that your management adapter isn't blocked by any application or VPN. Then, ensure that proper connectivity is in place.Note: If WorkSpaces Web Access is turned on but not in use, update the WorkSpaces directory details to turn off WorkSpaces Web Access. The STXHD Agent can cause an unhealthy WorkSpace.Verify firewall rulesImportant: The firewall must allow listed traffic on the management network interface.Confirm that Windows Firewall and any third-party firewall that's running have rules to allow the following ports:Inbound TCP on port 4172: Establish the streaming connection.Inbound UDP on port 4172: Stream user input.Inbound TCP on port 8200: Manage and configure the WorkSpace.Inbound TCP on ports 8201–8250: Establish the streaming connection and stream user input on WSP.Outbound UDP on ports 50002 and 55002: Video streaming.If your firewall uses stateless filtering, then open ephemeral ports 49152–65535 to allow for return communication.If your firewall uses stateful filtering, then ephemeral port 55002 is already open.Related informationIP address and port requirements for WorkSpacesTurn on self-service WorkSpace management capabilities for your usersRequired configuration and service components for WorkSpacesFollow"
https://repost.aws/knowledge-center/workspaces-unhealthy
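The first troubleshooting step in the article above, rebooting the WorkSpace, can also be scripted. The boto3 sketch below reboots a WorkSpace and then reads back its state; the WorkSpace ID is a placeholder.
import boto3

workspaces = boto3.client("workspaces")
workspace_id = "ws-0123456789"  # placeholder WorkSpace ID

workspaces.reboot_workspaces(RebootWorkspaceRequests=[{"WorkspaceId": workspace_id}])

# After the reboot completes, check whether the WorkSpace has returned to AVAILABLE.
described = workspaces.describe_workspaces(WorkspaceIds=[workspace_id])
print(described["Workspaces"][0]["State"])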
How do I troubleshoot Lambda synchronous invocation issues?
I’ve set up an AWS Lambda function to invoke synchronously but the destination isn’t initiating. How do I fix this issue?When I invoke a Lambda function through the Lambda console does it get invoked synchronously or asynchronously?-or-Why isn't my Lambda function retrying after I set it to retry one or two more times?
"I’ve set up an AWS Lambda function to invoke synchronously but the destination isn’t initiating. How do I fix this issue?When I invoke a Lambda function through the Lambda console does it get invoked synchronously or asynchronously?-or-Why isn't my Lambda function retrying after I set it to retry one or two more times?ResolutionWhen you synchronously invoke a Lambda function and it fails, the following are possible causes:Lambda doesn’t have permission to perform the actions included in the code.The AWS service that invokes the Lambda function doesn’t have sufficient permissions.Lambda is invoked asynchronously.Lambda supports destinations only for asynchronous invocations and stream invocations, and not for synchronous invocations.Follow these steps to troubleshoot synchronous invocation issues:1.    Determine how the Lambda function is invoked. Is the function invoked using AWS CLI? Is the function invoked through an AWS service?2.    Check whether the AWS service invokes the Lambda function synchronously or asynchronously.3.    Invoke the Lambda function synchronously by using the following command:aws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{ "key": "value" }' response.jsonSee if a 200 status code is reported or if the command returns an error.4.    Remember that Lambda function retry behavior is controlled by the client in synchronous invocations. The Retry attempts configuration from the AWS Lambda console is limited to asynchronous invocations. Make sure that the client retries the requests rather than checking the Lambda logs.5.    Remember that a Lambda function invoked in the Lambda console is always a synchronous invocation.6.    Synchronous invocation retry behavior varies between AWS services, based on each service's event source mapping.For more information, see Event-driven invocation.7.    Make sure that the Lambda function's code is idempotent and can handle the same messages multiple times.8.    Identify and resolve any errors that your Lambda function returns.For more information, see How do I troubleshoot Lambda function failures?9.    If you still can’t resolve the issue, open a case with AWS Support. Provide the following information in the case:The Lambda function ARN.The workflow on the Lambda function setup with all included services.Details about whether the issue is intermittent or continuous.Complete CloudWatch logs in .txt format from when the issue occurred. These CloudWatch logs are used to identify Lambda function errors that include timeout issues, init durations, and permissions issues.The exact timestamp of the issue with the timezone or timestamp in UTC.Note: AWS Support representatives don’t have access to customer CloudWatch logs due to security and privacy reasons.Related informationComparing Lambda invocation modesInvoking Lambda functionsIntroducing AWS Lambda DestinationsFollow"
https://repost.aws/knowledge-center/lambda-synchronous-invocation-issues
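Step 3 in the article above invokes the function synchronously with the AWS CLI; the boto3 equivalent below does the same and inspects the response, which is where errors surface for synchronous calls. The function name and payload are placeholders.
import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="my-function",        # placeholder
    InvocationType="RequestResponse",  # synchronous invocation
    Payload=json.dumps({"key": "value"}),
)

print("HTTP status:", response["StatusCode"])             # 200 for a successful synchronous call
print("Function error:", response.get("FunctionError"))   # set if the function raised an error
print("Payload:", response["Payload"].read().decode())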
Why do I get a “The provided token is malformed or otherwise invalid” error when I launch an Amazon EMR cluster using Hive and Presto in the AWS China (Beijing) Region?
"I launched an Amazon EMR cluster in the AWS China (Beijing) Region (cn-north-1). I used Presto and Apache Hive to create an external table from an Amazon Simple Storage Service (Amazon S3) bucket. When I query the table using Hive and Presto, I get an error similar to the following:presto:default> select * from mydata;Query 20200912_072348_00009_qqx96, FAILED, 1 nodeSplits: 1 total, 0 done (0.00%)0:03 [0 rows, 0B] [0 rows/s, 0B/s]Query 20200912_072348_00009_qqx96 failed: The provided token is malformed or otherwise invalid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidToken; Request ID: 811359ED1D9F8250)"
"I launched an Amazon EMR cluster in the AWS China (Beijing) Region (cn-north-1). I used Presto and Apache Hive to create an external table from an Amazon Simple Storage Service (Amazon S3) bucket. When I query the table using Hive and Presto, I get an error similar to the following:presto:default> select * from mydata;Query 20200912_072348_00009_qqx96, FAILED, 1 nodeSplits: 1 total, 0 done (0.00%)0:03 [0 rows, 0B] [0 rows/s, 0B/s]Query 20200912_072348_00009_qqx96 failed: The provided token is malformed or otherwise invalid. (Service: Amazon S3; Status Code: 400; Error Code: InvalidToken; Request ID: 811359ED1D9F8250)Short descriptionIn earlier Amazon EMR release versions, Presto doesn't automatically use the Region that the S3 bucket is in. Use one of the following options to resolve this error:Upgrade to Amazon EMR release version 5.12.0 or later.To use Amazon EMR release version 5.11.x or earlier, set the hive.s3.pin-client-to-current-region property to true.ResolutionUpgrade to Amazon EMR release version 5.12.0 or laterLaunch a new cluster and choose Amazon EMR release version 5.12.0 or later. For more information, see About Amazon EMR releases.Set hive.s3.pin-client-to-current-region property to true (version 5.11.x or earlier)1.    On each node, open the hive.properties file and then set the hive.s3.pin-client-to-current-region property to true. Example:sudo vim /etc/presto/conf/catalog/hive.propertieshive.s3.connect-timeout=2mhive.s3.max-backoff-time=10m...hive.s3.pin-client-to-current-region=true2.    Restart Presto on each node:sudo restart presto-server3.    To confirm that the new configuration works as expected, query a table using Hive and Presto in the China (Beijing) Region.Related informationApache HivePresto and TrinoFollow"
https://repost.aws/knowledge-center/emr-cluster-malformed-token
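Rather than editing hive.properties on each node, you can set the property at cluster launch through an EMR configuration classification. The boto3 sketch below is one way to do that, assuming the presto-connector-hive classification applies to your release; the cluster sizing, roles, and release label are placeholders to adjust for your environment.
import boto3

emr = boto3.client("emr", region_name="cn-north-1")

configurations = [{
    "Classification": "presto-connector-hive",  # Presto Hive connector settings
    "Properties": {"hive.s3.pin-client-to-current-region": "true"},
}]

emr.run_job_flow(
    Name="presto-hive-cluster",   # placeholder
    ReleaseLabel="emr-5.11.1",    # placeholder release that needs the property
    Applications=[{"Name": "Hive"}, {"Name": "Presto"}],
    Configurations=configurations,
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m4.large", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m4.large", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)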
Why am I getting the error "VolumeInUse" when trying to attach an Amazon EBS io2 Block Express volume to my EC2 instance?
"I'm trying to attach an Amazon Elastic Block Store (Amazon EBS) io2 Block Express volume to my Amazon Elastic Compute Cloud (Amazon EC2) instance. The volume is in the "available" state, but I get the following error:"An error occurred (VolumeInUse) when calling the AttachVolume operation: vol-xxxx is already attached to an instance"."
"I'm trying to attach an Amazon Elastic Block Store (Amazon EBS) io2 Block Express volume to my Amazon Elastic Compute Cloud (Amazon EC2) instance. The volume is in the "available" state, but I get the following error:"An error occurred (VolumeInUse) when calling the AttachVolume operation: vol-xxxx is already attached to an instance".ResolutionYou receive this error when you try to attach an io2 Block Express volume to an EC2 instance type that doesn't support this volume type. To resolve this issue, check whether the EC2 instance type that you're attaching the volume to supports io2 Block Express volumes.The following instance types support io2 Block Express volumes:c6inc7gm6inm6idnr5br6inr6idntrn1x2idnx2iednFor more information, see io2 Block Express volumes.Follow"
https://repost.aws/knowledge-center/ebs-io2-block-express-ec2-instance-error
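A small pre-check like the boto3 sketch below can confirm the instance family before calling AttachVolume. The supported-family list is copied from the article above and may change over time, and the instance, volume, and device identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2")
SUPPORTED_FAMILIES = {"c6in", "c7g", "m6in", "m6idn", "r5b", "r6in", "r6idn", "trn1", "x2idn", "x2iedn"}

instance_id = "i-0123456789abcdef0"  # placeholder
volume_id = "vol-0123456789abcdef0"  # placeholder

instance = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
family = instance["InstanceType"].split(".")[0]

if family in SUPPORTED_FAMILIES:
    ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")
else:
    print(f"{instance['InstanceType']} doesn't support io2 Block Express volumes.")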
How can I access a private Amazon Redshift cluster from my local machine?
I want to use my local computer to access an Amazon Redshift cluster that's in a private subnet of an Amazon Virtual Private Cloud (Amazon VPC). How can I do this?
"I want to use my local computer to access an Amazon Redshift cluster that's in a private subnet of an Amazon Virtual Private Cloud (Amazon VPC). How can I do this?Short descriptionUse an Amazon Elastic Compute Cloud (Amazon EC2) instance and SQL Workbench/J to create an SSH tunnel. The tunnel routes all incoming traffic from the local machine to the private Amazon Redshift cluster.ResolutionCreate the Amazon VPC, EC2 instance, and Amazon Redshift cluster1.    Create an Amazon VPC with public and private subnets.2.    Launch an EC2 instance from an Amazon Linux 2 Amazon Machine Image (AMI) into the public subnet of the Amazon VPC that you created in step 1. Choose the following options when creating the instance:In step 3, for Auto-assign Public IP, choose Enable. Or, you can assign an Elastic IP address to the instance.In step 6, create a new security group with an SSH rule. For Source, choose Custom, and then enter your IP CIDR block, or choose My IP.3.    On the Amazon Redshift console, create a cluster subnet group.For VPC ID, choose the ID of the Amazon VPC that you created in step 1.For Subnet ID, choose the ID of the private subnet.4.    Create a new security group.5.    Add a rule to the newly created security group that allows inbound traffic from the instance's security group:For Type, choose Custom TCP.For Port Range, enter 5439 (the default port for Amazon Redshift).For Source, choose Custom, and then enter the name of the security group that you created in step 2.6.    Launch a new Amazon Redshift cluster or restore a cluster from a snapshot. On the Additional Configuration page, choose the following options:For Choose a VPC, choose the Amazon VPC that you created in step 1.For Cluster subnet group, choose the group that you created in step 3.For Publicly accessible, choose No.For VPC security groups, choose the security group that you created in step 4.Wait for the cluster to reach the available state before continuing.7.    Run the following command to connect to the EC2 instance from your local machine. Replace your_key.pem and your_EC2_endpoint with your values. For more information, see Connecting to your Linux instance using SSH.ssh -i "your_key.pem" ec2-user@your_EC2_endpoint8.    Run the following command to install telnet:sudo yum install telnet9.    Use telnet to test the connection to your Amazon Redshift cluster. In the following command, replace cluster-endpoint and cluster-port with your values.telnet cluster-endpoint cluster-portOr, use dig to confirm that your local machine can reach the private IP address of the Amazon Redshift cluster. In the following command, replace cluster-endpoint with your cluster endpoint.dig cluster-endpointCreate the tunnel1.    Install SQL Workbench/J on your local machine.2.    Download the latest Amazon Redshift JDBC driver.3.    In SQL Workbench/J, create a connection profile using the JDBC driver that you downloaded in step 2.4.    To configure the SSH connection in SQL Workbench/J, choose SSH, and then enter the following:SSH hostname: the public IP address or DNS of the EC2 instanceSSH port: 22Username: ec2-userPrivate key file: the .pem file that you downloaded when you created the EC2 instancePassword: keep this field emptyLocal port: any free local port (your Amazon Redshift cluster uses port 5439 by default)DB hostname: the cluster endpoint (should not include the port number or database name)DB port: 5439Rewrite JDBC URL: select this option5.    Choose OK to save the SSH settings.6.    
Be sure that the JDBC URL and superuser name and password are entered correctly.7.    Choose Test to confirm that the connection is working. For more information, see Connecting through an SSH tunnel in the SQL Workbench/J documentation.(Optional) Modify the connection for an AWS Identity and Access Management (IAM) userTo connect to the Amazon Redshift cluster as an IAM user, modify the connection profile that you created in the previous step:1.    Confirm that the IAM user has a policy that allows the GetClusterCredentials, JoinGroup, and CreateClusterUser Amazon Redshift actions for the dbgroup, dbuser, and dbname resources. Replace these values in the following example:us-west-2: the Region that your cluster is in012345678912: your AWS account IDclustername: the name of your clustergroup_name: the database group nameuser_name: the name of the Amazon Redshift user (you can use "*" instead of specifying a specific user)database_name: the database name{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "redshift:GetClusterCredentials", "redshift:CreateClusterUser", "redshift:JoinGroup" ], "Resource": [ "arn:aws:redshift:eu-west-2:012345678912:dbgroup:clustername/group_name", "arn:aws:redshift:eu-west-2:012345678912:dbuser:clustername/user_name or * ", "arn:aws:redshift:eu-west-2:012345678912:dbname:clustername/database_name" ] } ]}2.    In SQL Workbench/J, change the first part of connection profile's JDBC URL to jdbc:redshift:iam. (For example, you can change the JDBC URL to "jdbc:redshift:iam://127.0.0.1:5439/example".)3.    Choose Extended Properties, and then create the following properties: AccessKeyID: the IAM user's access key IDSecretAccessKey: the IAM user's secret access keyDbGroups: forces the IAM user to join an existing groupDbUser: the IAM user’s nameAutoCreate: set to true ClusterID: the name of the Amazon Redshift cluster (not the database name)Region: the AWS Region that the cluster is in, such as us-east-14.    On the cluster connection profile page, choose Test.Related informationI can't connect to my Amazon Redshift clusterFollow"
https://repost.aws/knowledge-center/private-redshift-cluster-local-machine
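For the optional IAM-based connection described in the article above, the temporary credentials that the JDBC driver requests can also be generated directly with boto3, which is a quick way to verify that the GetClusterCredentials permissions are in place before configuring SQL Workbench/J. The identifiers below are placeholders that reuse the article's example names.
import boto3

redshift = boto3.client("redshift", region_name="eu-west-2")

credentials = redshift.get_cluster_credentials(
    ClusterIdentifier="clustername",  # cluster name, not the database name
    DbUser="user_name",
    DbName="database_name",
    DbGroups=["group_name"],
    AutoCreate=True,
    DurationSeconds=900,
)

# Use DbUser and DbPassword with your SQL client through the SSH tunnel before they expire.
print(credentials["DbUser"], credentials["Expiration"])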
How can I troubleshoot an AWS DMS task that failed with a foreign key constraint violation error?
I have an AWS Database Migration Service (AWS DMS) task that fails with a foreign key constraint violation.
"I have an AWS Database Migration Service (AWS DMS) task that fails with a foreign key constraint violation.Short DescriptionBy default, AWS DMS tasks load eight tables at a time during the full load phase. These tables load alphabetically by default, unless you configure the loading order for the task. For more information, see Tables load order during full load in AWS Database Migration Service improves migration speeds by adding support for parallel full load and new LOB migration mechanisms.If you don't configure the load order to load parent tables first, then a child table might load before its parent table. This causes the task to fail with a foreign key constraint violations error. In this case, you see log entries that are similar to the following examples:[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 0A000 NativeError: 1 Message: ERROR: cannot truncate a table referenced in a foreign key constraint; Error while executing the query [1022502] (ar_odbc_stmt.c:4622)[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1217 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.23-log]Cannot delete or update a parent row: a foreign key constraint fails [1022502] (ar_odbc_stmt.c:4615)Ongoing replication uses the Transactional Apply mode that applies the transactions in the same commit order as the source. When the task is in the ongoing replication phase, you can activate foreign key constraints on the target. If you use Batch Apply mode for ongoing replication, then you must deactivate the foreign keys, even during the change data capture (CDC) phase.ResolutionTo resolve this error, complete either of the following steps:Deactivate foreign key constraintsUse Drop tables on target modeDeactivate foreign key constraintsIf the target is a MySQL-compatible database, then you can use extra connection attributes to deactivate foreign key constraints:initstmt=SET FOREIGN_KEY_CHECKS=0If the target is a PostgreSQL-compatible database, then you see foreign key violation errors during the CDC phase. To resolve this error, set the session_replication_role parameter to replica. To do this, add the extra connection attribute afterConnectScript=SET session_replication_role='replica' to the endpoint. Or, use the AWS Command Line Interface to add endpoint settings to the target endpoint.For other databases engines, manually deactivate or drop foreign key constraints.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Use Drop tables on target modeWhen you use Drop tables on target mode, AWS DMS creates only objects that are necessary for a full load to succeed on the target. AWS DMS also refers to this as the DROP_AND_CREATE task setting. However, if you use Drop tables on target mode, then you must manually create other objects outside of AWS DMS. This includes objects such as secondary indexes, data defaults, and triggers.Related InformationUsing the task log to troubleshoot migration issuesFollow"
https://repost.aws/knowledge-center/dms-foreign-key-constraint-error
How do I attach backend instances with private IP addresses to my internet-facing load balancer in ELB?
I have an internet-facing load balancer. I want to attach backend Amazon Elastic Compute Cloud (Amazon EC2) instances located in a private subnet.
"I have an internet-facing load balancer. I want to attach backend Amazon Elastic Compute Cloud (Amazon EC2) instances located in a private subnet.Short descriptionTo attach Amazon EC2 instances that are located in a private subnet, first create public subnets. These public subnets must be in the same Availability Zones as the private subnets that are used by the backend instances. Then, associate the public subnets with your load balancer.Note: Your load balancer establishes a connection with its target privately. To download software or security patches from the internet, use a NAT gateway rule on the target instance's route table to allow internet access.ResolutionBefore you begin, note the Availability Zone of each Amazon EC2 Linux or Amazon EC2 Windows instance that you're attaching to your load balancer.Create public subnets for your backend instances1.    Create a public subnet in each Availability Zone that your backend instances are located. If you have more than one private subnet in the same Availability Zone, then create only one public subnet for that Availability Zone.2.    Confirm that each public subnet has a CIDR block with a bitmask of at least /27 (for example, 10.0.0.0/27).3.    Confirm that each subnet has at least eight free IP addresses.Example: Public subnet (Application Load Balancer subnet) needs a CIDR block with a bitmask of at least /27:Public subnet in AZ A: 10.0.0.0/24Private subnet in AZ A: 10.1.0.0/24Public subnet in AZ B: 10.2.0.0/24Private subnet in AZ B: 10.3.0.0/24Configure your load balancer1.    Open the Amazon EC2 console.2.    Associate the public subnets with your load balancer (see Application Load Balancer, Network Load Balancer, or Classic Load Balancer).3.    Register the backend instances with your load balancer (see Application Load Balancer, Network Load Balancer, or Classic Load Balancer).Configure your load balancer's security group and network access control list (ACL) settingsReview the recommended security group settings for Application Load Balancers or Classic Load Balancers. Be sure that:Your load balancer has open listener ports and security groups that allow access to the ports.The security group for your instance allows traffic on instance listener ports and health check ports from the load balancer.The load balancer security group allows inbound traffic from the client.The load balancer security group allows outbound traffic to the instances and the health check port.Add a rule on the instance security group to allow traffic from the security group that's assigned to the load balancer. For example, you have the following:Load Balancer security group is sg-1234567aIngress rule is HTTP TCP 80 0.0.0.0/0Instance Security group is sg-a7654321Ingress rule is HTTP TCP 80 sg-1234567aIn this case, your rule looks similar to the following:TypeProtocolPort RangeSourceHTTPTCP80sg-1234567aThen, review the recommended network ACL rules for your load balancer. These recommendations apply to both Application Load Balancers and Classic Load Balancers.If you're using Network Load Balancers, then review Troubleshoot your Network Load Balancer and Target security groups for configuration details. 
Confirm that the backend instance's security group allows traffic to the target group's port from either:Client IP addresses (if targets are specified by instance ID)Load balancer nodes (if targets are specified by IP address)Related informationHow Elastic Load Balancing worksAmazon EC2 security groups for Linux instancesAmazon EC2 security groups for Windows instancesFollow"
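Using the example security group IDs from this article, the ingress rule on the instance security group can also be added from the AWS CLI; the group IDs and port below are placeholders.

# Allow HTTP traffic to the backend instances from the load balancer's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-a7654321 \
  --protocol tcp \
  --port 80 \
  --source-group sg-1234567a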
https://repost.aws/knowledge-center/public-load-balancer-private-ec2
Why does a subnet that load balancers use in my VPC have insufficient IP addresses?
"A subnet in my virtual private cloud (VPC) ran out of available IP addresses, and I'm using this subnet with Elastic Load Balancing load balancers."
"A subnet in my virtual private cloud (VPC) ran out of available IP addresses, and I'm using this subnet with Elastic Load Balancing load balancers.Short descriptionIf subnets in your VPC run out of available IP addresses, then AWS resources, such as load balancers, might not respond to increased traffic.It's a best practice to keep at least eight IP addresses available in each subnet. There are two ways to free up or add IP addresses to use with load balancers. The following methods apply to both Application Load Balancers and Classic Load Balancers:Delete unused elastic network interfaces to free up IP addresses in the subnet.Create and add a new subnet to your VPC.Note: Load balancers can have only one subnet per Availability Zone. Review the other requirements for subnets on a load balancer.ResolutionDelete unused elastic network interfacesTo delete an unused elastic network interface, see Delete a network interface.Add a new subnet with available IP addresses for your load balancerCreate and add a new subnet to your VPC.Note: You can create a new subnet using the VPC's original CIDR blocks. You can also add CIDR blocks to your VPC to use with the new subnet.Replace your old subnet with the new subnet. For Classic Load Balancers, see Add a subnet. For Application Load Balancers, see Availability Zones for your Application Load Balancer.Check the route tables and network access control list (network ACL) rules that are associated with your subnet. Make sure that your new subnet routes traffic the same way that your previous subnet did. For example, if you configured a default route to an internet gateway in your previous subnet, then make sure that your new subnet has a similar route.(Optional) As a best practice, turn on cross-zone load balancing.Related informationTutorial: Create a Classic Load BalancerTutorial: Create an Application Load Balancer using the AWS Command Line Interface (AWS CLI)Follow"
https://repost.aws/knowledge-center/subnet-insufficient-ips
How do I access a private API Gateway API when the VPC endpoint uses an on-premises DNS?
"I'm using a virtual private cloud (VPC) that has a custom, on-premises Domain Name System (DNS) server. After creating a VPC endpoint for a private Amazon API Gateway API, I received a name-resolution error when I tried to invoke the API. How do I fix this issue?"
"I'm using a virtual private cloud (VPC) that has a custom, on-premises Domain Name System (DNS) server. After creating a VPC endpoint for a private Amazon API Gateway API, I received a name-resolution error when I tried to invoke the API. How do I fix this issue?ResolutionTo troubleshoot name-resolution errors from API Gateway when the VPC endpoint uses an on-premises DNS, do the following:1.    Create an Amazon Route 53 Resolver in the VPC. For more information, see Getting started with Route 53 Resolver.Note: Creating a Route 53 Resolver in the VPC allows the Route 53 Resolver to resolve the VPC endpoint's hostname within the VPC.2.    Add a DNS forwarder to the on-premises DNS server. When you configure the DNS forwarder, do the following:Configure the DNS forwarder so that it forwards DNS queries to the Route 53 Resolver you created in step 1.Add a rule to the DNS forwarder that allows it to forward only DNS queries that end with amazonaws.com. (The domain name of the VPC endpoint.)For more information, see Considerations when creating inbound and outbound endpoints.Note: You must have the DNS forwarder's destination IP addresses to configure the DNS forwarder.To get the DNS forwarder's destination IP addresses1.    Open the Route 53 console.2.    In the left navigation panel, in the Resolver section, choose Inbound endpoints.3.    Open the Details page of the inbound endpoint for the VPC.4.    Note the IP addresses listed in the IP addresses section of the resolver. These are the DNS forwarder's destination IP addresses.Note: The steps to create a DNS forwarder for an on-premises DNS server are different for each DNS server. For more information, consult your on-premises DNS server manual.Related informationRoute 53 Resolver for hybrid cloudsResolving DNS queries between VPCs and your networkFollow"
https://repost.aws/knowledge-center/api-gateway-private-api-on-premises-dns
Why did I get an error when changing or scaling the instance class of my Amazon Aurora DB instance?
"I have an Amazon Aurora DB instance, and I want to scale the instance class. Why can't I change the instance class, and how do I resolve errors when scaling my DB instance?"
"I have an Amazon Aurora DB instance, and I want to scale the instance class. Why can't I change the instance class, and how do I resolve errors when scaling my DB instance?Short descriptionWhen changing the instance class of an Amazon Aurora DB instance, you might receive one of the following errors:"Cannot modify the instance class because there are no instances of the requested class available in the current instance's availability zone. Please try your request again at a later time""DB Cluster <cluster> requires a database engine upgrade to support db.r4.large""RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.r5.8xlarge, Engine=aurora, EngineVersion=5.6.10a, LicenseModel=general-public-license"Before troubleshooting any errors, it's a best practice that you run your DB clusters on the latest engine version, or use long-term support (LTS) versions. Newer engine versions contain fixes for improving security, stability, and instance availability.If your DB cluster is running on a version that shows as 5.6.10a in the Amazon Relational Database Service (Amazon RDS) Console, consider testing and upgrading to 1.22.3 (preferred version) or 1.19.6 (LTS version).If your DB cluster is running on a version that shows as 5.7.12 in the Amazon RDS Console, consider testing and upgrading to 2.07.3 (preferred version) or 2.04.9 (LTS version).After you upgrade from an older version, you might also need to perform OS upgrades to the instances in your DB cluster. Apply these upgrades before proceeding.Note: You can create and test the database upgrade using the Aurora cloning feature. Also, in some Regions or Availability Zones (AZs), older instance classes like T2 or R3 might not be available. It's a best practice that you use newer instance classes like T3 and R5.ResolutionCannot modify the instance class because there are no instances of the requested class available in the current instance's availability zone. Please try your request again at a later time.This is one of the most common errors you receive when you change the instance class of your Aurora DB instance. There are two possible causes for this error:The AZ does not have capacity for the target instance class you choose. When the AZ does not have enough on-demand capacity for the target instance class, wait a few minutes, and then try modifying the instance class again.The target instance class is not supported in the AZ. You receive this error when the target instance class is not supported for the Aurora engine and engine version for the AZ the instance is running in. To check which AZ supports your engine, engine version, and instance class, run the following command:aws rds describe-orderable-db-instance-options --engine <engine_name> --engine-version <engine_version> --db-instance-class <instance_class> --query 'OrderableDBInstanceOptions[].AvailabilityZones'Example:aws rds describe-orderable-db-instance-options --engine aurora --engine-version 5.6.10a --db-instance-class db.t3.medium --query 'OrderableDBInstanceOptions[].AvailabilityZones'DB Cluster <cluster> requires a database engine upgrade to support db.r4.largeAlthough this error is rare, it occurs if the DB cluster is running on an older version of Aurora. The db.r4 instance family is only supported in Aurora version 1.14.4 and above. 
To find the exact engine version of your DB cluster, log in to the cluster and run this query:SELECT @@AURORA_VERSION;You can schedule a database engine upgrade by running the apply-pending-maintenance-action CLI command.aws rds apply-pending-maintenance-action --resource-identifier arn:aws:rds:us-east-1:123456789012:cluster:aurora-cluster --apply-action system-update --opt-in-type immediateRDS does not support creating a DB instance with the following combination: DBInstanceClass=db.r5.8xlarge, Engine=aurora, EngineVersion=5.6.10a, LicenseModel=general-public-licenseThis error occurs if you are running an older version of Aurora. The db.r5 instance family is not supported in all Aurora versions. For example, the db.r5.8xlarge instance class is supported in Aurora version 1.19.6 and above for Aurora MySQL 5.6 clusters. If the cluster is running an older version and you try to change this instance class, you receive this error.Run a CLI command similar to the following to find the engine versions that are supported for your engine and instance class combination.aws rds describe-orderable-db-instance-options --engine aurora --db-instance-class db.r5.8xlarge --query 'OrderableDBInstanceOptions[].EngineVersion'Related informationSupported DB Instance classes for Amazon AuroraFollow"
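Before opting in to the system-update action shown above, you can confirm that an engine upgrade is actually pending on the cluster. The cluster ARN below is the same placeholder used in the example command.

# List pending maintenance actions (such as system-update) for the cluster
aws rds describe-pending-maintenance-actions \
  --resource-identifier arn:aws:rds:us-east-1:123456789012:cluster:aurora-cluster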
https://repost.aws/knowledge-center/aurora-mysql-instance-class-error
Why can't I access my S3 bucket when using the Hue S3 File Browser in Amazon EMR?
"I'm using Hue (Hadoop User Experience) to access Amazon Simple Storage Service (Amazon S3) buckets on an Amazon EMR cluster. I'm getting one of the following error messages:There are no files matching the search criteria.Failed to access path "s3a://awsdoc-example-bucket.hue1": hostname u'awsdoc-example-bucket.hue1.s3.amazonaws.com' doesn't match either of '.s3.amazonaws.com', 's3.amazonaws.com'Failed to retrieve bucket: hostname u'awsdoc-example-bucket.hue1.s3.amazonaws.com' doesn't match either of '.s3.amazonaws.com', 's3.amazonaws.com'"
"I'm using Hue (Hadoop User Experience) to access Amazon Simple Storage Service (Amazon S3) buckets on an Amazon EMR cluster. I'm getting one of the following error messages:There are no files matching the search criteria.Failed to access path "s3a://awsdoc-example-bucket.hue1": hostname u'awsdoc-example-bucket.hue1.s3.amazonaws.com' doesn't match either of '.s3.amazonaws.com', 's3.amazonaws.com'Failed to retrieve bucket: hostname u'awsdoc-example-bucket.hue1.s3.amazonaws.com' doesn't match either of '.s3.amazonaws.com', 's3.amazonaws.com'Short descriptionThe default Amazon S3 calling format for Hue is https://awsdoc-example-bucket.s3.amazonaws.com. If there is a dot (.) in your S3 bucket name, part of the bucket name is included in the Amazon S3 endpoint. For example, if your bucket is named awsdoc-example-bucket.hue, then Hue treats hue.s3.amazonaws.com as the Amazon S3 endpoint instead of s3.amazonaws.com.ResolutionTo resolve this error, change the endpoint format to https://s3.amazonaws.com/awsdoc-example-bucket. When you use this format, you can have as many dots in your bucket name as you want.1.    Open the /etc/boto.cfg file.2.    Add the following lines to the boto.cfg file:[s3]calling_format=boto.s3.connection.OrdinaryCallingFormat3.    Restart the Hue service:For Amazon EMR versions earlier than 5.30:$ sudo stop hue$ sudo start hueFor Amazon EMR versions 5.30 and later:$ sudo systemctl restart hueRelated informationBoto3 configurationHueFollow"
https://repost.aws/knowledge-center/failed-retrieve-bucket-name-hue-emr
Why is my Lambda function batch size smaller than the configured batch size?
"When processing Amazon Kinesis Data Streams, my AWS Lambda function's batch size is smaller than the batch size that I configured. Why is my Lambda function receiving fewer records per invocation than the batch size that I configured in my Amazon Kinesis event source?"
"When processing Amazon Kinesis Data Streams, my AWS Lambda function's batch size is smaller than the batch size that I configured. Why is my Lambda function receiving fewer records per invocation than the batch size that I configured in my Amazon Kinesis event source?ResolutionThe batch size that you set when configuring a Kinesis event source determines the maximum batch size that your Lambda function can process. The batch size that your function processes when it's invoked by a Kinesis event source can be lower than the batch size that you configure.Four values determine the batch size that your Lambda function processes when it's invoked by a Kinesis event source:The maximum batch size limit that you set when configuring your Kinesis event source.The number of records received from the GetRecords action that Lambda makes when polling your event source.The number of records that can fit within the 6 MB Lambda invocation payload size limit.Note: A larger record size means that fewer records can fit in the payload.The amount of data in your Kinesis Data Streams.Note: If the traffic on your Kinesis Data Streams is low, then the batch size that your function processes will be smaller.To calculate the approximate batch size that your function will process, use the following formula:6000 KB Lambda invocation payload size limit ÷ The size of an individual record in your batch (in KB) = Approximate number of records processed for each batchFor example, if each record in your batch is 64 KB, you can expect your function to process about 90 records per batch.Related informationUsing AWS Lambda with Amazon KinesisWorking with streamsFollow"
https://repost.aws/knowledge-center/lambda-batch-small
How do I terminate an Amazon EMR cluster?
I want to terminate one or more Amazon EMR clusters using the Amazon EMR console.
"I want to terminate one or more Amazon EMR clusters using the Amazon EMR console.ResolutionAn Amazon EMR cluster can be configured with termination protection. For more information, see Using termination protection.To terminate a cluster with or without termination protection, do the following:Terminate a cluster with termination protection offAccess the Amazon EMR console.Select the AWS Region for your Amazon EMR cluster.Select the cluster or clusters to terminate. You can terminate multiple clusters at the same time.Choose Terminate.When prompted, choose Terminate.Terminate a cluster with termination protection onAccess the Amazon EMR console.Select the AWS Region for your Amazon EMR cluster.On the Cluster List page, select the cluster or clusters to terminate. You can terminate multiple clusters at the same time.Choose Terminate.When prompted, choose Change to turn termination protection off. If you selected multiple clusters, then choose Turn off all to turn off termination protection for all the clusters at once.In Terminate clusters, for Termination Protection, choose Off. Then, select the check box to confirm.Choose Terminate.Amazon EMR terminates the instances in the cluster and stops saving log data.You can also terminate a cluster using the AWS CLI or API, for more information, see Terminate a cluster.Follow"
https://repost.aws/knowledge-center/emr-terminate-cluster
How do I resolve data not appearing when I send test email reports in QuickSight?
"I'm trying to send a test email report in Amazon QuickSight, but no data is appearing on the visuals. How do I resolve this?"
"I'm trying to send a test email report in Amazon QuickSight, but no data is appearing on the visuals. How do I resolve this?Short descriptionYou receive the one of the following errors:Report was not sent because no data on all visuals caused by dynanic default parameter.-or-Report was not sent because no data on all visuals (User: All)), when using Static Default parameters with filters.The preceding errors occur when there's a dynamic or static default parameter on the QuickSight dashboard that causes no data to appear on the visuals. For example, if a parameter is linked to a filter and the filter doesn't match any value in the dataset, then the data doesn't appear. When there is no visible data on the dashboard visual, then QuickSight can't send the email.Note: QuickSight email reports use only the static default value for parameters. Dynamic default values are ignored.ResolutionNote: If you receive errors when running the AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.To resolve this issue, edit the dynamic or static default parameters and the filters or controls that you linked them to.Complete the following steps:1.    Open the Amazon QuickSight console.2.    Choose Dashboards, and then select the dashboard that's experiencing the issue.3.    Note the dashboard ID from the browser URL to use in step 4.For example, https://us-east-1.quicksight.aws.amazon.com/sn/dashboards/a1bc123d-abc1-abc2-abc3-abcdefef123454.    Run the following describe-dashboard CLI command to identify the analysis that the dashboard is published from:aws quicksight describe-dashboard --aws-account-id account_id --dashboard-id dashboard_idNote: Replace account_id with your AWS account ID and dashboard_id with the dashboard's ID.5.    Note the source entity (analysis) ARN from the output to use in step 7.Example output excerpt:{ "Status": 200, "Dashboard": { "DashboardId": "a1bc123d-abc1-abc2-abc3-abcdefef12345", "Arn": "arn:aws:quicksight:us-east-1:658909682992:dashboard/a1bc123d-abc1-abc2-abc3-abcdefef12345", "Name": "12345", "Version": { "CreatedTime": "2022-03-10T09:36:47.593000-06:00", "Errors": [], "VersionNumber": 1, "SourceEntityArn": "arn:aws:quicksight:us-east-1:658909682992:analysis/e87fc9ae-e7dd-41b0-98e4-b7246eddf8ba"Note: In the preceding example output excerpt, the source entity ARN is e87fc9ae-e7dd-41b0-98e4-b7246eddf8ba.6.    In the Amazon QuickSight console, choose Analyses.Add the analysis ARN to the end of the browser URL, and press Enter.For example, https://us-east-1.quicksight.aws.amazon.com/sn/analyses/e87fc9ae-e7dd-41b0-98e4-b7246eddf8ba8.    On the Controls Panel, edit each control, and note the name of the parameters.9.    In the left navigation pane, choose Filter, and then edit each filter. Check if there are any parameters that are used.10.    In the left navigation pane, choose Parameters, and then edit the parameters that you identified in steps 7 and 8. Verify if any dynamic or static default values are set.Follow"
https://repost.aws/knowledge-center/quicksight-no-data-appears-test-email
How do I allocate memory to work as swap space on an Amazon EC2 instance using a partition on my hard drive?
I want to allocate memory to work as swap space on an Amazon Elastic Compute Cloud (Amazon EC2) instance using a partition on my hard drive. How do I do that?
"I want to allocate memory to work as swap space on an Amazon Elastic Compute Cloud (Amazon EC2) instance using a partition on my hard drive. How do I do that?Short descriptionTo allocate memory as swap space, do the following:1..    Calculate the swap space size.2..    Create a partition on your hard disk as swap space.3..    Set up the swap area.You can also create a swap file to use as a swap space. For more information, see How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?Note: The instance uses swap space when the amount of RAM is full. Swap space can be used for instances that have a small amount of RAM, but isn't a replacement for more RAM. Because swap space is located on the instance's hard drive, performance is slower when compared to actual RAM. For more or faster memory, consider increasing your instance size.ResolutionCalculate the swap space sizeIt's a best practice that swap space is equal to 2 times the physical RAM, for up to 2 GB of physical RAM. For any amount above 2 GB, add an additional 1x physical RAM. It's a best practice that swap space is never less than 32 MB.Amount of system RAMRecommended swap space2 GiB of RAM or less2 times the amount of RAM but never less than 32 MBMore than 2 GiB of RAM but less than 64 GiB0.5 times the amount of RAMMore than 64 GiBDepends on the workload or use caseCreate a partition on your hard drive as swap space1..    Log in to the instance using SSH.2..    List the available volumes:$ sudo fdisk -l3..    Select a device to partition from the list. In this example, use the device /dev/xvda.$ sudo fdisk /dev/xvda4..    Create a new partition:-> n5..    Select a partition type. In this example, use primary:-> p6..    Assign the partition number. In this example, use partition 2:-> 27..    Accept the default of "First sector" by pressing Enter.8..    Enter the size of the swap file. For this example, there is 2 GB of RAM, and the partition created is 4 GB (specified as +4G).-> +4G9..    Save and exit:-> wSet up the swap area1..    Use the partprobe command to inform the OS of partition table change:$ partprobe2..    Set up a Linux swap area using the swap partition you created in the preceding steps. In this example, the swap partition is /dev/xvda2.$ mkswap /dev/xvda23..    Add the partition as swap space:$ sudo swapon /dev/xvda24..    Show the current swap space:$ sudo swapon -sOutput similar to the following appears:Filename Type Size Used Priority/dev/xvda2 partition 4194300 0 -15..    Make the swap memory allocation permanent after reboot with the following command:Note: If xvda2 isn't your swap device name, then replace this term with the swap device name in your environment.$ cp /etc/fstab /etc/fstab_$(date +%Y%m%d%H%M%S)$ cat <<EOF >> /etc/fstab`sudo blkid /dev/xvda2 | grep -Eo '[[:alnum:]]{8}(-[[:alnum:]]{4}){3}-[[:alnum:]]{12}'` swap swap defaults 0 0EOF$ rebootRelated informationSwap spaceFollow"
https://repost.aws/knowledge-center/ec2-memory-partition-hard-drive
How do I resolve the "Route did not stabilize in expected time" error in AWS CloudFormation?
"I tried to create an AWS CloudFormation stack, but the stack failed. Then, I received the following error message: "Route did not stabilize in expected time." How do I resolve this error?"
"I tried to create an AWS CloudFormation stack, but the stack failed. Then, I received the following error message: "Route did not stabilize in expected time." How do I resolve this error?Short descriptionYou must specify one of the following targets for the route assigned to your route table in your Amazon Virtual Private Cloud (Amazon VPC):Internet gateway or virtual private gatewayNAT instanceNAT gatewayVPC peering connectionNetwork interfaceEgress-only internet gatewayIf you set any property of your AWS::EC2::Route type (your target) to an incorrect value, then you receive the "Route did not stabilize in expected time" error.For example, you receive this error if you incorrectly set the value of the NatGatewayId property to the GatewayId property, as shown in the following code example:MyRoute ": { "Type": "AWS::EC2::Route", "Properties": { "DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "nat-0a12bc456789de0fg", "RouteTableId": { "Ref": "MyRouteTable" } }}ResolutionAssign the correct value to the corresponding property. See the following examples:"GatewayId" : "igw-eaad4883"-or-"NatGatewayId" : "nat-0a12bc456789de0fg"For more information on a stack failure, check the AWS CloudTrail events that correspond to the failure.Follow"
https://repost.aws/knowledge-center/cloudformation-route-did-not-stabilize
How do I allow agents in my Amazon Connect contact center to transfer contacts using the CCP?
I want agents in my Amazon Connect contact center to be able to transfer contacts using the Amazon Connect Contact Control Panel (CCP).
"I want agents in my Amazon Connect contact center to be able to transfer contacts using the Amazon Connect Contact Control Panel (CCP).ResolutionTo let agents in your Amazon Connect contact center use the CCP to transfer contacts, use quick connects. Complete the steps in the following sections to set up quick connects for your queues.Create or edit Agent, Queue, or External quick connects based on the type of transfer that you're usingNote: Configured Agent and Queue quick connects appear in the CCP only when an agent is on an active contact. External quick connects appear in the CCP at all times.Use your access URL to log in to your Amazon Connect instance (https://domain.awsapps.com/connect/login -or https://domain.my.connect.aws).Important: You must log in as a user that has Edit and Create permissions for quick connects. For more information, see Security profiles.Follow the instructions in Create quick connects.Add the quick connect to the queues that you use in your contact flowsFollow the instructions in Allow agents to see quick connects.Add the queues to your agents' routing profileConfirm that the queues that include your activated quick connects are in a routing profile assigned to the agents who will use the quick connects.For more information, see Set up routing.Important: For quick connects to appear in the CCP, confirm that you correctly configured the routing.Related informationSet up contact transfersHow routing worksCreate a routing profileCreate a new flowFollow"
https://repost.aws/knowledge-center/connect-create-enable-quick-connects
How can I troubleshoot EMR job failures when trying to connect to the Glue Data Catalog?
My Amazon EMR jobs can't connect to the AWS Glue Data Catalog.
"My Amazon EMR jobs can't connect to the AWS Glue Data Catalog.Short descriptionAmazon EMR uses the Data Catalog as a persistent meta store when using Apache Spark, Apache Hive, or Presto/Trino. You can share the Data Catalog across different clusters, services, applications, or AWS accounts.However, the connection to the Data Catalog might fail for the following reasons:Insufficient permissions to the Glue Data Catalog.Insufficient permissions to the Amazon Simple Storage Service (Amazon S3) objects specified as the table location.Insufficient permissions to the AWS Key Management Service (AWS KMS) service for encrypted objects.Insufficient permissions in AWS Lake Formation.Missing or incorrect EMR cluster parameters configuration.Incorrect query formatting.ResolutionThe EC2 instance profile doesn't have sufficient permissions for the Data Catalog or the S3 bucketTo access the Data Catalog from the same account or across accounts, the following must have permissions to AWS Glue actions and to the S3 bucket:The Amazon Elastic Compute Cloud (Amazon EC2) instance profile.The AWS Identity and Access Management (IAM) role calling the Data Catalog.If permissions are missing, then you see an error similar to the following:Unable to verify existence of default database: com.amazonaws.services.glue.model.AccessDeniedException: User: arn:aws:sts::Acct-id:assumed-role/Role/instance-id is not authorized to perform: glue:GetDatabase on resource: arn:aws:glue:region:Acct-id:catalog because no identity-based policy allows the glue:GetDatabase action (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: request-id; Proxy: nullTo troubleshoot issues when accessing the Data Catalog from the same account, check the permissions for the instance profile or the IAM user.To troubleshoot issues when accessing the Data Catalog cross accounts, check all the permissions for the calling account and configuration. Then, verify that cross account S3 access is provided.The EC2 instance profile doesn't have the necessary AWS KMS permissionsIf the Data Catalog is encrypted using a customer managed key, then the EC2 instance profile must have the necessary permissions to access the key. If permissions are missing, then you might see an error similar to the following. The error appears in your EMR console if you're using the spark-shell, Hive CLI or the Presto/Trino CLI. 
The error appears in your container logs if you're submitting your code programmatically.Caused by: MetaException(message:User: arn:aws:sts::acct-id:assumed-role/Role/instance-id is not authorized to perform: kms:GenerateDataKey on resource: arn:aws:kms:region:acct-id:key/fe90458f-beba-460e-8cae-25782ea9f8b3 because no identity-based policy allows the kms:GenerateDataKey action (Service: AWSKMS; Status Code: 400; Error Code: AccessDeniedException; Request ID: request-id; Proxy: null) (Service: AWSGlue; Status Code: 400; Error Code: GlueEncryptionException; Request ID: request-id; Proxy: null))To avoid the preceding error, add the necessary AWS KMS permissions to allow access to the key.If the AWS account calling the service isn't the same account where the Data Catalog is present, then do the following:Turn on key sharing if the calling AWS account is in the same Region as the Data Catalog.For multi-Region access, create a multi-Region key for sharing with other accounts.The instance profile doesn't have access to AWS Lake Formation or the Glue tables don't have the required grantsWhen Data Catalog permissions are managed or registered in AWS Lake Formation, the role must have Lake Formation permissions on the object. If Lake Formation permissions are missing on the role, then you might see the following error:pyspark.sql.utils.AnalysisException: Unable to verify existence of default database: com.amazonaws.services glue.model.AccessDeniedException: Insufficient Lake Formation permission(s) on default (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: request-id; Proxy: null)To resolve the preceding error, add the required grants to the EC2 instance profile role. And, provide grants to the Glue tables or to the database along with the table permissions.The EMR cluster doesn't have the correct configurations or the query string is incorrectIf the permissions are correct, but the configuration is incorrect, then you see the following error on spark-shell when attempting cross account Glue access:An error occurred (EntityNotFoundException) when calling the GetTables operation: Database db-name not found.ororg.apache.spark.sql.AnalysisException: Table or view not found: acct-id/db.table-name line 2 pos 14To resolve this error, add all the necessary parameters for the respective configurations.Follow"
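As a starting point for the instance profile permissions discussed above, a minimal identity-based policy that allows read access to the Data Catalog might look like the following sketch. The account ID, Region, and resource scope are placeholders, and the exact set of glue: actions your jobs need depends on the workload (for example, write actions for creating tables or partitions).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetPartition",
        "glue:GetPartitions"
      ],
      "Resource": [
        "arn:aws:glue:us-east-1:111122223333:catalog",
        "arn:aws:glue:us-east-1:111122223333:database/*",
        "arn:aws:glue:us-east-1:111122223333:table/*/*"
      ]
    }
  ]
}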
https://repost.aws/knowledge-center/emr-jobs-connect-to-data-catalog
How do I know which user made a particular change to my AWS infrastructure?
I want to track which users are changing my AWS resources and infrastructure.
"I want to track which users are changing my AWS resources and infrastructure.ResolutionYou can use AWS CloudTrail to track which users are changing your AWS resources and infrastructure. CloudTrail is turned on by default for your AWS account. For an ongoing record of events in your AWS account, create a trail. Using a trail, CloudTrail creates logs of API calls made on your account and then delivers those logs to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.To view your log files, do the following:Open the CloudTrail console.In the navigation pane, choose Trails.Select the S3 bucket value for the trail you want to view. The Amazon S3 console opens and shows that bucket, at the top level for the log files.Choose the folder for the AWS Region where you want to review log files.Navigate the bucket folder structure to the year, the month, and the day where you want to review logs of activity in that Region.Select the file name, and then choose Download.Unzip the file, and then use your favorite JSON file viewer to see the log.The log file contains the AWS Identity and Access Management (IAM) user, date and time of login, and if the login was successful. For additional information on the content and structure of the CloudTrail log files, see CloudTrail log event reference.For additional instructions on using CloudTrail to analyze your account activity, see Working with CloudTrail.Related informationWhat is AWS CloudTrail?How can I use CloudTrail to review what API calls and actions have occurred in my AWS account?Follow"
https://repost.aws/knowledge-center/cloudtrail-track-changes
Why doesn’t my Amazon EC2 Auto Scaling policy trigger when my CloudWatch alarm changes states?
I've configured an Amazon CloudWatch alarm to trigger my Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling policy. Why doesn't my Amazon EC2 Auto Scaling policy trigger when my CloudWatch alarm changes state?
"I've configured an Amazon CloudWatch alarm to trigger my Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling policy. Why doesn't my Amazon EC2 Auto Scaling policy trigger when my CloudWatch alarm changes state?Short descriptionWhen a CloudWatch alarm transitions to a new alarm state (OK, ALARM, or INSUFFICIENT_DATA), the alarm invokes any configured actions for that state. Amazon EC2 Auto Scaling only uses the period configured on the alarm to determine if the state should change. So, for Auto Scaling actions, the alarm continues to invoke the configured action for every minute that the alarm remains in the new state, regardless of the configured period.Common reasons why a CloudWatch alarm state change doesn't trigger an Amazon EC2 Auto Scaling policy include:The Auto Scaling action isn't enabled for the CloudWatch alarm, which prevents the scaling policy from being invoked.The scaling policy in the Auto Scaling group is disabled. A disabled policy prevents the group from being evaluated.The Auto Scaling group has conflicting simple scaling policies or step scaling policies, which prevent some of the policies from being triggered.The Auto Scaling group has an incomplete lifecycle hook, which prevents all simple scaling policies from being applied. A pending instance also causes delays in step and target tracking scaling policies. This is because Auto Scaling doesn't count the instance towards the group's capacity until after the lifecycle hook finishes and the warmup time completes (for scale out). The instance is still counted towards the group's capacity (for scale in) to prevent overly scaling. lifecycle hook completes when it times out or when a CompleteLifecycleAction API or AWS Command Line Interface (AWS CLI) call is made.ResolutionBefore you start, be sure that your CloudWatch alarm is transitioning to the ALARM state. If an alarm's configuration doesn't match the threshold of the metric that it's monitoring, the alarm might not transition to the ALARM state. If an alarm doesn't change states, it doesn't trigger Amazon EC2 Auto Scaling policies. For more information on how CloudWatch alarms are evaluated, see Evaluating an Alarm.Verify that your CloudWatch alarm enters the ALARM state when expected by checking the alarm's Threshold value. Increase or decrease the Threshold to match your expected value. Also, review the Period and Evaluation Period of the alarm. You might need to edit your alarm Period and Evaluation Period to trigger your Amazon EC2 Auto Scaling policy as expected. For more information on how to be sure your alarm triggers actions, see How can I be sure that CloudWatch alarms trigger actions?.Important: When creating or editing alarms, keep the following points in mind:Be sure that you haven't suspended scaling processes (AlarmNotification, Launch, or Terminate) for your Amazon EC2 Auto Scaling group. Be sure to resume these scaling processes if they're suspended.Never directly edit the alarms associated with target tracking policies. Editing these alarms might cause unintended effects. The threshold of these alarms is automatically determined based on the target value set in the scaling policy.Check if Amazon EC2 Auto Scaling actions are enabled for the CloudWatch alarmFor a CloudWatch alarm to invoke an Amazon EC2 Auto Scaling policy, the ActionsEnabled parameter must be enabled in the alarm's configuration. 
Be sure the ActionsEnabled parameter is true in your alarm's configuration.Note: If you create or update your alarm using the CloudWatch console, the ActionsEnabled parameter is set to true by default.To check and enable alarm actions using the AWS CLI:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    Check your current configuration using describe-alarms as follows. Be sure to replace myalarm with your alarm's ID.aws cloudwatch describe-alarms --alarm-names "myalarm" --query 'MetricAlarms[].ActionsEnabled'2.    Review the output. If the ActionsEnabled parameter is not set to true, enable alarm actions using enable-alarm-actions as follows. Be sure to replace myalarm with your alarm's ID.aws cloudwatch enable-alarm-actions --alarm-names myalarmTo check and enable alarm actions using the CloudWatch API:1.    Check your current configuration using DescribeAlarms.2.    If actions are not enabled for your alarm, enable actions using EnableAlarmActions.Check the simple scaling and step scaling policies for your Amazon EC2 Auto Scaling groupTo check your group's scaling policies using the Amazon EC2 console:1.    Sign in to the Amazon EC2 console.2.    In the navigation pane, under Auto Scaling, choose Auto Scaling Groups.3.    In the content pane, select your Auto Scaling group.4.    Choose the Automatic Scaling tab.5.    Note the Policy type (Step scaling, Simple scaling, or Target tracking).To check scaling policies using the AWS CLI, use the command describe-policies for the ID of your Auto Scaling group with the --policy-types parameter. The output lists the policies of each type (SimpleScaling, StepScaling, or Target tracking).To check scaling policies using an API, use the call DescribePolicies with the parameter PolicyTypes. The output lists the policies of each type (SimpleScaling, StepScaling, or Target tracking).If you have one simple scaling policy in effect, then any other simple scaling policies aren't invoked until the following conditions are met:The simple scaling policy currently in effect has completed.The cooldown period for the Amazon EC2 Auto Scaling policy has elapsed. A simple scaling policy honors the Amazon EC2 Auto Scaling policy's default or specified cooldown period.Note: The execution of a simple scaling policy doesn't completely block the execution of step scaling or target tracking policies. Be sure that contradictory policies aren't applied at the same time.Check for Auto Scaling lifecycle hooks in your Amazon EC2 Auto Scaling policyWhen an Auto Scaling lifecycle hook is in effect, simple scaling policies aren't executed. If you use a simple scaling policy in your Auto Scaling group, be sure to stop any lifecycle hooks.Note: Step scaling policies still trigger if there is a lifecycle hook in progress. However, the policies scale slowly because the instances don't start their warmup timer until after the lifecycle hook ends.Check that all lifecycle hooks are completed with a result of CONTINUE or ABANDON after their global timeout periods or heartbeat timeout periods expire.To check for lifecycle hook actions using the Amazon EC2 console:1.    Sign in to the Amazon EC2 console.2.    In the navigation pane, under Auto Scaling, choose Auto Scaling Groups.3.    In the content pane, select your Auto Scaling group.4.    Choose the Activity tab and then scroll to the Activity history section.5.    Review the activity for any ongoing lifecycle hook actions.6.    
For steps to end a lifecycle hook, see Complete the Lifecycle Hook. To complete lifecycle hook actions using the AWS CLI, use the command complete-lifecycle-action. To complete lifecycle hook actions using an API, make a CompleteLifecycleAction call.Related informationHow do I troubleshoot scaling issues with my Amazon EC2 Auto Scaling group?Troubleshooting Amazon EC2 Auto ScalingFollow"
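For reference, checking for configured hooks and completing a stuck lifecycle action from the AWS CLI looks similar to the following; the group name, hook name, and instance ID are placeholders.

# List the lifecycle hooks configured on the Auto Scaling group
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name my-asg

# Complete a pending lifecycle action so that scaling can proceed
aws autoscaling complete-lifecycle-action \
  --auto-scaling-group-name my-asg \
  --lifecycle-hook-name my-launch-hook \
  --instance-id i-0123456789abcdef0 \
  --lifecycle-action-result CONTINUE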
https://repost.aws/knowledge-center/autoscaling-policy-cloudwatch-alarm
How can I securely transfer my keys to CloudHSM with OpenSSL and the key_mgmt_util command line tool?
"I have local keys that I want to import to AWS CloudHSM using the unWrapKey command with the key_mgmt_util command line tool. However, I can't import or wrap plaintext keys."
"I have local keys that I want to import to AWS CloudHSM using the unWrapKey command with the key_mgmt_util command line tool. However, I can't import or wrap plaintext keys.ResolutionEncrypt your payload key with an ephemeral AES key. Encrypt the ephemeral AES with your public key from a key pair. Then, concatenate the encrypted payload key and encrypted ephemeral key into a single file. The concatenated file is sent to your CloudHSM in its encrypted format, and then decrypted by the private key from the key pair. The AES_KEY_WRAP mechanism decrypts the ephemeral AES key and uses the key to decrypt your payload key.Create the following keys:Payload AES or RSA key. This is the key that you import and use with your CloudHSM.Temporary AES key required by AES_KEY_WRAP to encrypt the payload. It's a best practice to use AES because there are no size limits on what can be encrypted.RSA key pair used to securely wrap and unwrap these keys into your CloudHSM.Before you begin, make sure that you have a patched version of OpenSSL to allow envelope wrapping. For instructions, see How can I patch OpenSSL to activate use with the CloudHSM CKM_RSA_AES_KEY_WRAP mechanism?Create, encrypt, and import the local keys1.    Run these commands to create the payload, ephemeral, and RSA keys.Tip: Create these keys in their own directory to track your files.openssl rand -out payload_aes 32openssl rand -out ephemeral_aes 32openssl genrsa -out private.pem 2048openssl rsa -in private.pem -out public.pem -pubout -outform PEM2.    Output the raw hex values of the ephemeral AES key into a variable with this command.EPHEMERAL_AES_HEX=$(hexdump -v -e '/1 "%02X"' < ephemeral_aes)Note: Make sure that you have the hexdump utility installed, or this command returns an error. Refer to your OS documentation on how to install the hexdump utility.3.    Use the OpenSSL enc command to wrap the payload with the ephemeral AES key. The -id-aes256-wrap-pad cipher is the RFC 3394 compliant wrapping mechanism that coincides with CKM_RSA_AES_KEY_WRAP. The -iv values are set by RFC 5649 (an extension to RFC 3394).OPENSSL_V111 enc -id-aes256-wrap-pad -K $EPHEMERAL_AES_HEX -iv A65959A6 -in payload_aes -out payload_wrapped4.    Encrypt the AES key with the public key from the RSA key pair that you created in step 1.OPENSSL_V111 pkeyutl -encrypt -in ephemeral_aes -out ephemeral_wrapped -pubin -inkey public.pem -pkeyopt rsa_padding_mode:oaep -pkeyopt rsa_oaep_md:sha1 -pkeyopt rsa_mgf1_md:sha15.    From the local machine, concatenate the encrypted payload key and ephemeral AES key into a single file named rsa_aes_wrapped.cat ephemeral_wrapped payload_wrapped > rsa_aes_wrapped6.    Import the RSA private key into the CloudHSM from your local machine. Create a persistent AES key in the HSM to manage the import using importPrivateKey.Note: Replace user-name and user-password with your CloudHSM user name and password.Note: If you created the RSA key pair on the HSM and exported the public key using exportPubKey, then you can skip steps 6-9./opt/cloudhsm/bin/key_mgmt_util Cfm3Util singlecmd loginHSM -u CU -s user-name -p user-password genSymKey -t 31 -s 32 -l aes256Warning: The command can record details of your user name and password locally. After transferring your keys, it’s a best practice to change your password. Instead of specifying the crypto-user password, you can also write a shell script to avoid having the password recorded in the shell history. 
This shell script will receive all the arguments for key_mgmt_util, and send them to this command. This allows you to use the shell script to run the command above as well as the other key_mgmt_util commands below.7.    You receive an output similar to the following. Note the AES key handle—it's used to import the private RSA key. In this example, the key handle is 7.Command: genSymKey -t 31 -s 32 -l aes256 Cfm3GenerateSymmetricKey returned: 0x00 : HSM Return: SUCCESS Symmetric Key Created. Key Handle: 7 Cluster Error Status Node id 0 and err state 0x00000000 : HSM Return: SUCCESS8.    Import the private key and wrap it into the HSM. The import is secured with the persistent AES key you created in step 6.Note: Replace option -w 7 with your key handle./opt/cloudhsm/bin/key_mgmt_util Cfm3Util singlecmd loginHSM -u CU -s user-name -p user-password importPrivateKey -l private -f private.pem -w 79.    You receive an output similar to the following. Note the imported RSA private key handle. In this example, the imported RSA Private Key is 8.Cfm3WrapHostKey returned: 0x00 : HSM Return: SUCCESS Cfm3CreateUnwrapTemplate2 returned: 0x00 : HSM Return: SUCCESS Cfm3UnWrapKey returned: 0x00 : HSM Return: SUCCESS Private Key Imported. Key Handle: 8 Cluster Error Status Node id 0 and err state 0x00000000 : HSM Return: SUCCESS10.    Unwrap the concatenated payload key into the HSM using the imported RSA private key with the unWrapKey command. This example uses -w 8 as the key handle of the imported RSA private key.Note: Replace -w 8 with your private key handle./opt/cloudhsm/bin/key_mgmt_util Cfm3Util singlecmd loginHSM -u CU -s user-name -p user-password unWrapKey -f rsa_aes_wrapped -w 8 -m 7 -noheader -l secretkey -kc 4 -kt 31Note: you must use -kc 4 -kt 31 to unwrap AES keys and -kc 3 -kt 0 to unwrap RSA private keys. For more information on using the -m, -kc and -kt parameters, see the unWrapKey example.11.    You receive a successful import of the payload AES key similar to the following output:Note: In this example, key handle 10 of the new unwrapped key can be used in the CloudHSM.Cfm3CreateUnwrapTemplate2 returned: 0x00 : HSM Return: SUCCESS Cfm2UnWrapWithTemplate3 returned: 0x00 : HSM Return: SUCCESS Key Unwrapped. Key Handle: 10 Cluster Error Status Node id 0 and err state 0x00000000 : HSM Return: SUCCESSVerify that you imported the payload AES key1.    Export the payload AES Key back to disk using the wrapping key -w 7. Replace payload key handle 10 with your own value of your imported payload AES key./opt/cloudhsm/bin/key_mgmt_util Cfm3Util singlecmd loginHSM -u CU -s user-name -p user-password exSymKey -k 10 -w 7 -out HSM.key2.    Run this command to compare the imported payload key with the payload_aes key.diff HSM.key payload_aes --report-identical-files3.    If the HSM.key and payload_aes keys are identical, you receive the following output:Files HSM.key and payload_aes are identicalImport the RSA payload1.    If you want to unwrap an RSA private key into the HSM, run these commands to change the payload key to an RSA private key.openssl genrsa -out payload_rsa.pem 2048openssl rand -out ephemeral_aes 32openssl genrsa -out private.pem 2048openssl rsa -in private.pem -out public.pem -pubout -outform PEM2.    RSA Keys created in step 1 from the Steps required for Import RSA payload section using OpenSSL are in PKCS #1 format. However, the key_mgmt_util tool assumes that the private key is in PKCS #8 DER format. 
View the keys in plaintext using your favorite text editor to confirm the format similar to the following:PKCS1 format: -----BEGIN RSA PRIVATE KEY----- - PKCS8 format: -----BEGIN PRIVATE KEY-----3.    To convert the payload_rsa.pem key into pkcs8 format and DER encoded, run this command:openssl pkcs8 -topk8 -inform PEM -outform DER -in payload_rsa.pem -out payload_rsa_pkcs8.der -nocrypt4.    Follow steps 2-9 from the Create, encrypt, and import the local keys section.Note: replace payload_aes with payload_rsa_pkcs8.der.5.    Run this command to unwrap the payload RSA private key into the CloudHSM, and note the output key handle:/opt/cloudhsm/bin/key_mgmt_util singlecmd loginHSM -u CU -s user-name -p user-password unWrapKey -f rsa_aes_wrapped -kc 3 -kt 0 -w 8 -l private_key -m 7 -noheaderNote: you must use -kc 4 -kt 31 to unwrap AES keys and -kc 3 -kt 0 to unwrap RSA private keys.You now have the payload RSA key unwrapped into the HSM.Verify that you imported the payload RSA private key1.    Export the payload RSA private key back to disk using the wrapping key you created earlier. Replace payload key handle 25 with your own value of your imported payload RSA private key./opt/cloudhsm/bin/key_mgmt_util Cfm3Util singlecmd loginHSM -u CU -s user-name -p user-password exportPrivateKey -k 25 -w 7 -out HSM_rsa_private.key2.    Run this command to convert your payload_rsa key into PKCS #8 format without converting to DER.openssl pkcs8 -topk8 -inform PEM -outform PEM -in payload_rsa.pem -out payload_rsa_pkcs8.pem -nocrypt3.    Run this command to compare the imported payload key with the payload_rsa key.diff HSM_rsa_private.key payload_rsa_pkcs8.pem --report-identical-files4.    If the HSM_rsa_private.key and payload_rsa_pkcs8.pem keys are identical, you receive the following output:Files HSM_rsa_private.key and payload_rsa_pkcs8.pem are identicalRelated informationSupported PKCS #11 mechanismsOpenSSLOasis requirement with the PKCS #11 specificationRFC 3394RFC 5649Follow"
https://repost.aws/knowledge-center/cloudhsm-import-keys-openssl
How can I migrate virtual Interfaces to Direct Connect connections or LAG bundles?
I want to migrate or associate my existing virtual interfaces with AWS Direct Connect connections or LAG bundles. How can I do this?
"I want to migrate or associate my existing virtual interfaces with AWS Direct Connect connections or LAG bundles. How can I do this?Short descriptionIf you have a new AWS Direct Connect connection or link aggregation group (LAG) bundle, you might want to migrate or associate it with your virtual interfaces.Note:When you migrate or associate an existing virtual interface to a new Direct Connect connection, the configuration parameters that are associated with the virtual interfaces are the same. You can pre-stage the configuration on the Direct Connect connection and then update the BGP configuration.You can only migrate or associate a virtual interface with a Direct Connect connection or LAG within the same AWS account.ResolutionFollow these instructions to:Migrate an existing virtual interface associated with a Direct Connect connection to a new LAG bundleMigrate an existing virtual interface associated with a LAG bundle to another LAG bundleMigrate an existing virtual interface associated with a Direct Connect connection to another Direct Connect connectionNote: It is best to perform these steps during a scheduled maintenance window to minimize downtime.Migrate an existing virtual interface associated with a Direct Connect connection to a new LAG bundleIn this example, you have a virtual interface (dxvif-A) that is associated with a Direct Connect connection (dxcon-A) that needs to be migrated or associated with a LAG bundle (LAG-B).From the Direct Connect console, in the navigation pane, choose LAGs, and select the LAG bundle (LAG-B).Navigate to the Actions tab and choose Associate Virtual Interface.In Connection or LAG, choose the bundle that you want to migrate (LAG-B).In Virtual Interface, choose the virtual interface (dxvif-A), select the agreement checkbox, and then choose Continue.Note: The virtual interface will go down for a brief period.The virtual Interface (dxvif-A) will migrate to the new LAG bundle (LAG-B).Migrate an existing virtual interface associated with a LAG bundle to another LAG bundleIn this example, the virtual interface (dxvif-B) is active and associated with a LAG bundle (LAG-B) that is associated with another LAG bundle (LAG-C).From the Direct Connect console, in the navigation pane, choose Virtual Interfaces, and select the virtual interface you want to migrate.Navigate to the Actions tab and choose Associate Connection or LAG.In Connection or LAG, change the value to the LAG bundle (LAG-C) that you want to associate with the virtual interface (dxvif-B).In Virtual Interface, verify that the virtual interface that you want to migrate (dxvif-B) is selected. Select the agreement check box, and then choose Continue.Note: The virtual interface will go down for a brief period.The virtual interface (dxvif-B) will migrate to the new LAG bundle (LAG-C).Migrate an existing virtual interface associated with a Direct Connect connection to another Direct Connect connectionIn this example, the virtual interface (dxvif-C) is active and associated with a Direct Connect connection (DX-C). You have another Direct Connect connection (DX-D) that is UP and operational. 
From the Direct Connect console, in the navigation pane, choose Virtual Interfaces, and select the virtual interface that you want to migrate.Navigate to the Actions tab and choose Associate Connection or LAG.In Connection or LAG, choose the Direct Connect connection that you want to migrate to (DX-D).In Virtual Interface, verify that the virtual interface (dxvif-C) is associated with the current Direct Connect connection (DX-C). Select the agreement check box, and then choose Continue.Note: The virtual interface will go down for a brief period.The virtual interface (dxvif-C) will migrate to the new Direct Connect connection.Follow"
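Each of the migrations above can also be performed with a single AWS CLI call; the virtual interface ID and the target connection or LAG ID below are placeholders, and the interface still goes down briefly during the association.

# Associate an existing virtual interface with another connection or LAG in the same account
aws directconnect associate-virtual-interface \
  --virtual-interface-id dxvif-ffhhk74f \
  --connection-id dxlag-fgsu9erb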
https://repost.aws/knowledge-center/migrate-virtual-interface-dx-lag
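The console steps in the preceding article can also be scripted. The following AWS CLI sketch uses the associate-virtual-interface command to move a virtual interface to another connection or LAG and then checks its state. The IDs shown are placeholders, not values from the article, and the interface still goes down briefly during the association.

# Associate the virtual interface with the target connection or LAG (placeholder IDs)
aws directconnect associate-virtual-interface \
    --virtual-interface-id dxvif-ffabcdef \
    --connection-id dxlag-ffabcdef

# Confirm that the interface returns to the "available" state on the new connection or LAG
aws directconnect describe-virtual-interfaces \
    --virtual-interface-id dxvif-ffabcdef \
    --query 'virtualInterfaces[].[virtualInterfaceId,connectionId,virtualInterfaceState]'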
Can I use an Amazon S3 bucket as an AWS Database Migration Service (AWS DMS) target?
How can I use an Amazon Simple Storage Service (Amazon S3) bucket as the target for AWS Database Migration Service (AWS DMS) for resources that are in the same account?
"How can I use an Amazon Simple Storage Service (Amazon S3) bucket as the target for AWS Database Migration Service (AWS DMS) for resources that are in the same account?Short descriptionAfter you create a replication instance, you can use an S3 bucket as your target endpoint for AWS DMS by following these steps:Create an S3 bucketCreate an AWS Identity and Access Management (IAM) policyCreate a roleCreate your target endpointFor more information, see Using Amazon S3 as a target for AWS Database Migration Service.ResolutionCreate an S3 bucket1.    Open the Amazon S3 console, and then create a bucket.2.    Select the bucket that you created, and then choose Create folder.3.    Enter a folder name, and then choose Save.Create an IAM policy1.    Open the IAM console, and then choose Policies from the navigation pane.2.    Choose Create policy, and then choose JSON.3.    Add an IAM policy similar to the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::bucketname*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::bucketname*" ] } ]}Note: Update the policy to refer to your bucket name.4.    Choose Review policy, enter a Name and Description, and then choose Create policy.Create a role1.    Open the IAM console, and then choose Roles from the navigation pane.2.    Choose Create role, choose DMS, and then choose Next: Permissions.3.    In the Create role pane, in the Search field, choose the policy that you created, and then choose Next: tags.4.    Choose Next: Review.5.    Enter a Role name and a Role description.6.    Choose Create role.Create your target endpoint1.    Open the AWS DMS console, and then choose Endpoints from the navigation pane.2.    Choose Create endpoint, and then select Target endpoint.3.    Enter the Endpoint identifier, and then choose S3 as the Target engine.4.    Paste the Role ARN that you copied into the Service Access Role ARN field.5.    Enter a Bucket name and Bucket folder.6.    Under Endpoint-specific settings, add your Extra connection attributes, if you have any.7.    (Optional) Under Test endpoint connection, select your VPC and Replication instance, and then choose Run test.8.    Choose Create endpoint.Related informationUsing Amazon S3 as a source for AWS DMSWorking with an AWS DMS replication instanceTroubleshooting migration tasks in AWS Database Migration ServiceFollow"
https://repost.aws/knowledge-center/s3-bucket-dms-target
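For readers who prefer the AWS CLI over the console, the following is a minimal sketch of the same target-endpoint setup. The endpoint identifier, role ARN, instance and endpoint ARNs, and bucket names are placeholders, not values from the article.

# Create the S3 target endpoint (placeholder role ARN, bucket name, and folder)
aws dms create-endpoint \
    --endpoint-identifier s3-target-endpoint \
    --endpoint-type target \
    --engine-name s3 \
    --s3-settings ServiceAccessRoleArn=arn:aws:iam::111122223333:role/dms-s3-target-role,BucketName=bucketname,BucketFolder=dmsfolder

# Optional: test connectivity from the replication instance to the new endpoint
aws dms test-connection \
    --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:EXAMPLEINSTANCE \
    --endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:EXAMPLEENDPOINT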
"How can I monitor the opens, clicks, and bounces from emails that I send using Amazon SES?"
"I want to be notified about the following events from emails that I send using Amazon Simple Email Service (Amazon SES):When a recipient opens an email, and how many times recipients open my emailsWhen a recipient clicks a link in the email, and how many times recipients click links in my emailsWhen emails that I send result in a bounce, and how many bounces my emails get"
"I want to be notified about the following events from emails that I send using Amazon Simple Email Service (Amazon SES):When a recipient opens an email, and how many times recipients open my emailsWhen a recipient clicks a link in the email, and how many times recipients click links in my emailsWhen emails that I send result in a bounce, and how many bounces my emails getShort descriptionUse Amazon Simple Notification Service (Amazon SNS) to notify you when one of the following occurs:A recipient opens your email.A recipient clicks a link in your email.Your email results in a bounce.Use Amazon CloudWatch to track the following:How many times recipients open your emails or click links in emails.How many bounces your emails get.Follow these steps to configure Amazon SES, Amazon SNS, and CloudWatch for monitoring email opens, clicks, and bounces:Note: If you copied multiple recipients in an email, then the following configuration doesn't show which recipient opened the email.Create an Amazon SNS topic.Configure Amazon SES to send information about email clicks, opens, and bounces to the Amazon SNS topic.Configure Amazon SES to send information about email clicks, opens, and bounces to CloudWatch.Send a test email to verify the notifications for email opens, clicks, and bounces.Check your Amazon SNS notifications and CloudWatch metrics.Specify the configuration set in the headers of your email.Note: With this configuration, you receive notifications for every email that's opened and every link that's clicked in an email.ResolutionBefore you begin, make sure that you verified your domain with Amazon SES.Create an Amazon SNS topicOpen the Amazon SNS console.Choose Topics.On the Topics page, choose Create topic.In the Details section of the Create topic page, do the following:For Type, choose Standard.For Topic name, enter a name.(Optional) For Display name, enter a display name for your topic.Choose Create topic.From the Topic details of the topic that you created, choose Create subscription.For Protocol, select Email-JSON.For Endpoint, enter the email address where you want to receive notifications.Choose Create subscription.From the email address that you specified in step 8, open the subscription confirmation email from Amazon SNS with the subject line "AWS Notification - Subscription Confirmation".In the subscription confirmation email, open the URL that's specified as SubscribeURL to confirm your subscription.Configure Amazon SES to send information about email clicks, opens, and bounces to the Amazon SNS topicOpen the Amazon SES console, and navigate to the appropriate AWS Region.In the navigation pane, under Configuration, choose Configuration Sets.Choose Create Set.For Configuration Set Name, enter a name for your configuration set.Choose Create Set.Select the Event Destinations tab, then choose Add Destination.For Event types, select Hard Bounces, Opens and Clicks. Then, choose Next.For Destination type, select Amazon SNS.For Name, enter a name for the SNS destination.For SNS Topic, choose the Amazon SNS topic that you created. Then choose Next.Choose Add Destination.Configure Amazon SES to send information about email clicks, opens, and bounces to CloudWatchOpen the Amazon SES console, and navigate to the appropriate Region.In the navigation pane, under Configuration, choose Configuration Sets.Choose the configuration set that you created.Choose the Event Destinations tab, and then choose Add Destination.For Event types, choose Hard Bounces, Opens and Clicks. 
Then choose Next.For Destination type, select Amazon CloudWatch.For Name, enter a name for the CloudWatch destination.For Value Source, choose Message Tag.For Dimension Name, enter the name that you want to use for this metric in CloudWatch.For Default Value, you can enter any value, such as Null.Choose Next, and then choose Add Destination.Send a test email to verify the notifications for email opens, clicks, and bouncesAmazon SES has a mailbox simulator that you can use to test email opens, clicks, and bounces.1.    Open the Amazon SES console.2.    In the navigation pane, under Configuration, choose Verified Identities.3.    Select one of your verified domains.4.    Choose Send Test Email.5.    For Message details, choose the email format Raw.6.    For From-address, enter an email address with your verified domain.7.    For Scenario, choose Custom to verify opens and clicks or Bounce to verify bounces.8.    Enter an email address that you want to use as a test recipient.Note: For the Custom scenario: If you're still in the Amazon SES sandbox, make sure that the address in the Custom recipient field is a verified email address.9.    For Message, enter text that's similar to the following examples:Custom scenarioX-SES-CONFIGURATION-SET: myConfigsetX-SES-MESSAGE-TAGS: Email=NULLFrom: test-verified-domain@example.comTo: test-recipient@example.comSubject: Test emailContent-Type: multipart/alternative; boundary="----=_boundary"------=_boundaryContent-Type: text/html; charset=UTF-8Content-Transfer-Encoding: 7bitThis is a test email.<a href="https://aws.amazon.com/">Amazon Web Services</a>------=_boundaryBounce scenarioX-SES-CONFIGURATION-SET: myConfigsetX-SES-MESSAGE-TAGS: Email=NULLFrom: test-verified-domain@example.comTo: bounce@simulator.amazonses.comSubject: Test emailContent-Type: multipart/alternative; boundary="----=_boundary"------=_boundaryContent-Type: text/html; charset=UTF-8Content-Transfer-Encoding: 7bitThis is a test email.<a href="https://aws.amazon.com/">Amazon Web Services</a>------=_boundaryNote: Replace myConfigset with the name of the configuration set that you created. Replace Email=Null with the Dimension Name and Default Value (Dimension Name=Default Value) that you entered for the CloudWatch destination in your configuration set.9.    Choose Send Test Email.10.    From your test recipient email address, open the test email and click the link.Check your SNS notifications and CloudWatch metricsOpen the inbox of the email address that you used as the endpoint for your Amazon SNS topic subscription. Confirm that you received open, click, and bounce notifications.Open the CloudWatch console.In the navigation pane, choose Metrics.From the All metrics view, choose SES.Choose the metric that you created.Verify that the graph shows the test emails that you sent to simulate opens, clicks, and bounces.Specify the configuration set in the headers of your emailTo apply the configuration set that you created to your email, you must pass the configuration set in the headers of your email. For more information, see Specifying a configuration set when you send email.Related informationAmazon SES email sending metrics FAQsFollow"
https://repost.aws/knowledge-center/ses-email-opens-clicks
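The configuration set and its Amazon SNS event destination can also be created with the AWS CLI. This is a sketch only: the topic ARN is a placeholder, and the event types open, click, and bounce are assumed to correspond to the Opens, Clicks, and Hard bounces options shown in the console steps above.

# Create the configuration set
aws ses create-configuration-set --configuration-set Name=myConfigset

# Add an SNS event destination for opens, clicks, and bounces (placeholder topic ARN)
aws ses create-configuration-set-event-destination \
    --configuration-set-name myConfigset \
    --event-destination '{"Name":"my-sns-destination","Enabled":true,"MatchingEventTypes":["open","click","bounce"],"SNSDestination":{"TopicARN":"arn:aws:sns:us-east-1:111122223333:my-ses-events"}}'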
How do I port my phone number that's registered in the US or Canada (+1 country code) in Amazon Connect?
"I am trying to port my US (+1 country code) numbers in Amazon Connect, but I am unsure of how to do this. How do I start the process for porting my numbers?"
"I am trying to port my US (+1 country code) numbers in Amazon Connect, but I am unsure of how to do this. How do I start the process for porting my numbers?ResolutionCreate a support case to port your numbers into Amazon Connect. Before you submit your case, determine if your number is a toll-free number (TFN) or a direct inward dial (DID) number. Toll-free numbers have an area code that begins with an "8" and that repeats the next two numbers. These TFNs include the following area codes:800833844855866877If your number doesn't follow this pattern, your phone number is a DID number.Note: This process is for phone numbers registered in the US or Canada. For all other numbers, see How do I port my phone number that's not registered in the US or Canada (+1 country code) in Amazon Connect?Porting toll-free numbersIf you are porting toll-free numbers, include the following information when submitting your case to AWS Support:Connect instance ARN where the number will resideToll-free number or numbers in E.164 format(Optional) Exact name of the contact flow where the numbers must be mapped to after receiving porting approvalNote: Specifying the contact flow information allows AWS Support to map the number to a contact flow when adding the number to the Amazon Connect instance. This helps to avoid downtime during the porting process. For information on mapping the number to a contact flow, see Associate a phone number with a flow.(Optional) Copy of an invoice or bill with your current carrier, with sensitive information redactedPreferred porting window timeframeNote: AWS Support will attempt to meet the window timeframe, but can't guarantee to meet it.After AWS Support collects all additional information and completed forms from you, the information is submitted to our partner carrier. Then, the partner carrier submits the port request to the current carrier. If the current carrier accepts the request to port, your preferred porting window timeframe is considered.The porting process takes from two to four weeks after all completed forms are submitted.The porting process is completed on the scheduled date and time that AWS Support provides to you.AWS Support informs you when the process is complete and successful porting has been verified. It's best practice for you to test the ported numbers by placing test calls to make sure that the attached contact flow is initiated.Note: If there are any discrepancies between the information submitted by you and the current carrier's records, the current carrier will reject the porting request. AWS Support will reach out to you for the correct information, but this means that the porting process will be delayed.Porting DID numbersIf you are porting DID numbers, include the following information when submitting your case to AWS Support:Connect instance ARN where the number will resideList of DID number or numbers in E.164 format(Optional) Exact name of the contact flow where the numbers must be mapped to after receiving porting approvalNote: Specifying the contact flow information allows AWS Support to map the number to a contact flow when adding the number to the Amazon Connect instance. This helps to avoid downtime during the porting process. 
For information on mapping the number to a contact flow, see Associate a phone number with a flow.(Optional) Copy of an invoice or bill with your current carrier, with sensitive information redactedPreferred porting window timeframeNote: AWS Support will attempt to meet the window timeframe, but can't guarantee to meet it.After AWS Support collects all additional information and completed forms from you, the information is submitted to our partner carrier. Then, the partner carrier submits the port request to the current carrier. If the current carrier accepts the request to port, your preferred porting window timeframe is considered.The porting process takes from two to four weeks after all completed forms are submitted.The porting process is completed on the scheduled date and time that AWS Support provides to you.AWS Support will inform you when the process is complete and successful porting is verified. It's best practice for you to test the ported numbers by placing test calls to make sure that the attached contact flow is initiated.Note: If there are any discrepancies between the information submitted by you and the current carrier's records, the current carrier will reject the porting request. AWS Support will reach out to you for the correct information, but this means that the porting process will be delayed.Follow"
https://repost.aws/knowledge-center/connect-port-us-number
How do I list all of my Amazon EBS snapshots with or without a specified key tag using the AWS CLI?
"I want to use the AWS Command Line Interface (AWS CLI) to list all my Amazon Elastic Block Store (Amazon EBS) snapshots. What commands are a best practice to use when I want to list all snapshots, with or without a specified tag key?"
"I want to use the AWS Command Line Interface (AWS CLI) to list all my Amazon Elastic Block Store (Amazon EBS) snapshots. What commands are a best practice to use when I want to list all snapshots, with or without a specified tag key?ResolutionNote: Before beginning this resolution, install and configure the AWS CLI.If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.List all EBS snapshots in a particular RegionThe following example command lists all EBS snapshots using the describe-snapshots operation in the Region us-east-1:aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[]' --region=us-east-1The following is example output for the describe-snapshots command:Created for policy: policy-08843cf0d7f6189ae schedule: Default Schedule False 111122223333 100% snap-091e33a177cb2e49b 2020-09-10T19:27:07.882Z completed vol-03b223394ea08e690 8TAGS instance-id i-0919c4d810b9c3695TAGS dlm:managed trueTAGS timestamp 2020-09-10T19:27:07.548ZTAGS aws:dlm:lifecycle-policy-id policy-08843cf0d7f6189aeTAGS aws:dlm:lifecycle-schedule-name Default Scheduletest one hellop False 111122223333 100% snap-02faf8ffc48e512f4 2020-09-10T19:17:34.974Z completed vol-03b223394ea08e690 8 TAGS ec2-console falseCreated for policy: policy-08843cf0d7f6189ae schedule: Default Schedule False 111122223333 100% snap-007e74c24d8f3aaf1 2020-09-10T17:28:31.993Z completed vol-03b223394ea08e690 8TAGS instance-id i-0919c4d810b9c3695TAGS dlm:managed trueTAGS aws:dlm:lifecycle-schedule-name Default ScheduleTAGS timestamp 2020-09-10T17:28:31.650ZTAGS aws:dlm:lifecycle-policy-id policy-08843cf0d7f6189aetest one False 111122223333 100% snap-00f20d2d2c17bbea0 2020-09-08T07:47:47.660Z completed vol-062b2c633c981f99e 8 TAGS ec2-console trueFilter the list of EBS snapshots for a specified tag keyThe following command lists EBS snapshots using the describe-snapshots operation with a specified tag key:aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?(Tags[?Key == `name`].Value)]'The following command lists all snapshots with the tag key ec2-console:$ aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?(Tags[?Key == `ec2-console`].Value)]'The following is an example output for the preceding command:test one hellop False 111122223333 100% snap-02faf8ffc48e512f4 2020-09-10T19:17:34.974Z completed vol-03b223394ea08e690 8 TAGS ec2-console falsetest one False 111122223333 100% snap-00f20d2d2c17bbea0 2020-09-08T07:47:47.660Z completed vol-062b2c633c981f99e 8 TAGS ec2-console trueFilter the list of EBS snapshots for snapshots that don't have a specified tag keyThe following command lists EBS snapshots that don't have a specified tag key:aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?!not_null(Tags[?Key == `name`].Value)]'The following example command filters the list of EBS snapshots for all snapshots that don't have the tag key ec2-console:$ aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?!not_null(Tags[?Key == `ec2-console`].Value)]'The following is an example output for the preceding command:Created for policy: policy-08843cf0d7f6189ae schedule: Default Schedule False 111122223333 100% snap-091e33a177cb2e49b 2020-09-10T19:27:07.882Z completed vol-03b223394ea08e690 8TAGS instance-id i-0919c4d810b9c3695TAGS dlm:managed trueTAGS timestamp 2020-09-10T19:27:07.548ZTAGS aws:dlm:lifecycle-policy-id policy-08843cf0d7f6189aeTAGS aws:dlm:lifecycle-schedule-name Default ScheduleCreated for policy: policy-08843cf0d7f6189ae 
schedule: Default Schedule False 111122223333 100% snap-007e74c24d8f3aaf1 2020-09-10T17:28:31.993Z completed vol-03b223394ea08e690 8TAGS instance-id i-0919c4d810b9c3695TAGS dlm:managed trueTAGS aws:dlm:lifecycle-schedule-name Default ScheduleTAGS timestamp 2020-09-10T17:28:31.650ZTAGS aws:dlm:lifecycle-policy-id policy-08843cf0d7f6189aeRelated informationTagging AWS resourcesFollow"
https://repost.aws/knowledge-center/ebs-snapshots-list
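The --query examples above filter on the client side with JMESPath. As an alternative sketch for the "has this tag" case, the --filters option of describe-snapshots filters on the server side; note that these filters can't express "does not have the tag," which is why the not_null query form above is used for that case. The tag key ec2-console is taken from the article's example.

# Snapshots that have the tag key ec2-console (any value), filtered server-side
aws ec2 describe-snapshots --owner-ids self \
    --filters Name=tag-key,Values=ec2-console \
    --region us-east-1

# Snapshots with the exact tag ec2-console=true
aws ec2 describe-snapshots --owner-ids self \
    --filters Name=tag:ec2-console,Values=true \
    --region us-east-1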
Why did I get an email from AWS stating that my Amazon SNS subscription was manually disabled?
"I received an email from AWS saying that one of my Amazon Simple Notification Service (Amazon SNS) email subscriptions was manually disabled. Why was my Amazon SNS topic subscription deactivated, and how do I resolve the issue?"
"I received an email from AWS saying that one of my Amazon Simple Notification Service (Amazon SNS) email subscriptions was manually disabled. Why was my Amazon SNS topic subscription deactivated, and how do I resolve the issue?Short descriptionIf messages are published to an SNS topic at a higher rate than the Delivery rate for email messages quota, then Amazon SNS deactivates the subscription.Amazon SNS automatically deactivates subscriptions when TPS quotas are breached to do the following:Prevent spamming the destination inbox with events.Protect the recipient mail server from being flooded with messages.Avoid Internet Service Providers (ISPs) identifying elevated traffic as spam and blocking messages from delivery.Note: Amazon SNS supports an email message delivery rate of 10 transactions per second (TPS) to SNS topics, for each AWS account. For more information, see Amazon SNS endpoints and quotas.ResolutionIt's a best practice to avoid subscribing email addresses to high-volume SNS topics. Common use cases for SNS topic email subscriptions include monitoring Amazon CloudWatch alarms and sending usage reports to multiple email addresses.For high-volume SNS topics, it's a best practice to subscribe only high-throughput, system-to-system endpoints instead. For example: Amazon Simple Queue Service (Amazon SQS) queues, AWS Lambda functions, and HTTP endpoints. These types of subscription endpoints support a higher TPS quota.Related informationCommon Amazon SNS scenariosSending an email in Amazon PinpointSubscribing to an Amazon SNS topicPublishing to a topicFollow"
https://repost.aws/knowledge-center/sns-email-subscription-disabled
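As a sketch of the recommended system-to-system pattern, the following AWS CLI command subscribes an Amazon SQS queue to a high-volume topic instead of an email address. The ARNs are placeholders, and the queue's access policy must separately allow the SNS topic to send messages to it.

# Subscribe an SQS queue (higher-throughput endpoint) instead of an email address
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:111122223333:my-high-volume-topic \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:111122223333:my-queue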
My Amazon SES sending account is under review or its ability to send email is paused. How can I fix this?
"I received an email that my Amazon Simple Email Service (Amazon SES) sending account is under review. Or, my Amazon SES sending account's ability to send email is paused. Why is this happening? How can I fix this?"
"I received an email that my Amazon Simple Email Service (Amazon SES) sending account is under review. Or, my Amazon SES sending account's ability to send email is paused. Why is this happening? How can I fix this?Short descriptionIf your sending account is under review, you can still send emails through Amazon SES, but you must resolve the issue that led to the review. Additionally, you must implement measures to prevent the issue from occurring again. Issues that might lead to an account review include:A bounce rate of 5% or greaterA complaint rate of 0.1% or greaterProblems detected by a manual investigation of the accountIf your account's ability to send email is paused, then you can't use Amazon SES to send emails until you resolve the problem with the account. Your Amazon SES account uses a set of IP addresses that are shared with other users. Because of this, Amazon SES might pause your account's ability to send additional emails to protect the reputation of the shared IP addresses. Issues that might lead to a sending pause include:The Amazon SES account was already placed under review and the issue wasn't corrected before the end of the review period.A bounce rate of 10% or greaterA complaint rate is 0.5% or greaterThe account was placed under review several times for the same issue.Emails sent from the account violated AWS Service Terms.Important: An Amazon SES sending pause doesn't impact other AWS services in your AWS account. However, service quota increases for any AWS service that sends outbound communications, such as Amazon Simple Notification Service (Amazon SNS), might be denied until the sending pause is lifted.ResolutionIf Amazon SES is reviewing your sending account or has paused your account's sending ability, you must:Identify the problem.Request a new review and respond to follow-up questions from the Amazon SES team.Identify the problemRead the email from the Amazon SES team that summarizes the problems that led to the review or pause on your sending account. The issues that can lead to a review or pause on your account include bounces, complaints, and spam traps.Review the following FAQs for more information on reducing risk of these actions:Bounce FAQComplaint FAQSpamtrap FAQRequest a new review and respond to follow-up questions from the Amazon SES teamAfter you identify the problems with your sending account and take steps to fix them, request a new review from the Amazon SES team.To request a new review on your sending account, open the AWS Support Center. Then, reply to the case that Amazon SES opened on your behalf. In your reply, include your responses to these items:A list of changes that you made to fix the problemAn explanation of how these changes will prevent the problem from happening againAfter the Amazon SES team receives your request for a new review, they decide whether to uphold or remove the review or pause on your sending account. If your new review is unsuccessful, the Amazon SES team contacts you with follow-up questions. Be sure to respond to the Amazon SES team with the requested follow-up information.Related informationTips and best practices (Amazon SES Developer Guide)Four tips to help you send higher-quality emailFollow"
https://repost.aws/knowledge-center/ses-resolve-account-review-pause
How do I purchase an Amazon EC2 Reserved Instance?
"I have an Amazon Elastic Compute Cloud (Amazon EC2) instance that I plan to run continuously, with few interruptions, over a long period of time. How do I purchase a Reserved Instance?"
"I have an Amazon Elastic Compute Cloud (Amazon EC2) instance that I plan to run continuously, with few interruptions, over a long period of time. How do I purchase a Reserved Instance?ResolutionYou can purchase an EC2 Reserved Instance through the Amazon EC2 console. Note that a Reserved Instance purchase can't be canceled, so it's important to purchase the Reserved Instance that's right for your needs.Choosing the right Reserved InstanceTo get the maximum benefit from your Reserved Instance purchase, the Reserved Instance must match the attributes for running On-Demand Instances on your account. For information about the attributes that must match, see Amazon EC2 RI types.Note that Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account.For an overview of the On-Demand Instance hours you're currently using by instance type, use the Amazon EC2 usage reports in Cost Explorer. For more information, see How do I view my Reserved Instance utilization and coverage?Trusted Advisor can also recommend Reserved Instances for you based on your usage.Purchasing a Reserved Instance from the EC2 consoleIn the Amazon EC2 console, you can purchase an EC2 Reserved Instance either from AWS directly or from a third-party seller on the Reserved Instance Marketplace. Follow the instructions at How to purchase Reserved Instances.When you purchase a Reserved Instance that includes an upfront cost, your default payment method is billed immediately, with a separate invoice. The upfront payment must be charged successfully before your Reserved Instance will activate.Note: AWS credits can't be applied to the upfront costs associated with Reserved Instances. If your credits include Amazon EC2 as an applicable service, they're applied to the discounted hourly rate for any Reserved Instances.Related informationReserved InstancesAmazon EC2 Reserved InstancesBuying Reserved InstancesFollow"
https://repost.aws/knowledge-center/purchase-ec2-ri
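If you prefer the AWS CLI to the console, the following sketch lists Reserved Instance offerings that match an example instance type and then purchases one. The instance type, offering type, and offering ID are placeholders; as noted above, the purchase can't be canceled.

# List offerings that match your On-Demand usage (placeholder values)
aws ec2 describe-reserved-instances-offerings \
    --instance-type m5.large \
    --product-description "Linux/UNIX" \
    --offering-class standard \
    --offering-type "No Upfront" \
    --max-results 5

# Purchase a specific offering by its ID (placeholder ID); this can't be canceled
aws ec2 purchase-reserved-instances-offering \
    --reserved-instances-offering-id 4b2293b4-EXAMPLE \
    --instance-count 1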
How can I troubleshoot the Classic Load Balancer session stickiness feature?
"I have a Classic Load Balancer with duration-based or application-controlled session stickiness. The load balancer is configured to route client HTTP or HTTPS sessions to the same registered instance, but client requests are being routed to different registered instances. How can I troubleshoot problems with Elastic Load Balancing (ELB) session stickiness?"
"I have a Classic Load Balancer with duration-based or application-controlled session stickiness. The load balancer is configured to route client HTTP or HTTPS sessions to the same registered instance, but client requests are being routed to different registered instances. How can I troubleshoot problems with Elastic Load Balancing (ELB) session stickiness?Short descriptionBy default, a Classic Load Balancer routes each request to the registered instance with the fewest outstanding requests. Using sticky sessions (session affinity) configures a load balancer to bind user sessions to a specific instance, so all requests from a user during a session are sent to the same instance. A sticky session can fail if:1.    The registered instance is not inserting an application cookie.2.    The client is not returning the cookies in the request header.3.    The cookies inserted by a registered instance are not formatted correctly.4.    The expiration period specified when using a duration-based sticky session has passed.5.    The HTTP or HTTPS request is passing through more than one load balancer that has stickiness enabled.Note: The load balancer session stickiness feature is supported only with HTTP or HTTPS listeners.ResolutionApplication-controlled session stickiness1.    Check for HTTP errors with your Classic Load Balancer. For more information, see Troubleshoot a Classic Load Balancer: HTTP Errors.2.    Run a command similar to the following to the load balancer DNS name. Check for the these two cookies in the response: An application cookie is inserted by your application running on a backend instance.An AWSELB cookie is inserted by the load balancer.Note: Before you run this command, be sure to replace the DNS name with the name of your load balancer.[ec2-user@ip-172-31-22-85 ~]$ curl -vko /dev/null internal-TESTELB-1430759361.eu-central-1.elb.amazonaws.comNote: The curl utility is native to Linux, but can also be installed on Windows.The following is an example response to the command:...< Set-Cookie: PHPSESSID=k0qu6t4e35i4lgmsk78mj9k4a4; path=/< Set-Cookie: AWSELB=438DC7A50C516D797550CF7DE2A7DBA19D6816D5E6FB20329CD6AEF2B40030B12FF2839757A60E2330136A2182D27D049FB9D887FBFE9E80FB0724130FB3A86A4B0BAC296FDEB9E943EC9272FF52F5A8AEF373DF33;PATH=/...Note: If there is no AWSELB cookie in the response, then there is no stickiness between the client and the backend instance. 3.    Verify that the registered instance is inserting an application cookie by sending an HTTP request directly to the registered instance IP by running a command similar to the following: [ec2-user@ip-172-31-22-85 ~]$ curl -vko /dev/null 172.31.30.168The command returns a response similar to the following example:...< Set-Cookie: PHPSESSID=5pq74110nuir60kpapj04mglg4; path=/...4.    Verify that the cookie name generated by the registered instance is the same as the cookie name configured on the load balancer. 5.    If the registered instances insert an application cookie in their responses and the load balancer inserts an AWSELB cookie, be sure that the client sends both of these cookies in subsequent requests. If the client fails to include either the application or the AWSELB cookie, then stickiness won't work. To verify that the client sends both the application and AWSELB cookies, take a packet capture on the client or use the browser’s web-debugging tools to retrieve the cookie information in the request header.6.    
If you want to know which backend instance that the load balancer routed the request to, you can configure the backend instance to display the instance ID using Instance metadata by running a script similar to the following:<?php $instance_id =file_get_contents('http://169.254.169.254/latest/meta-data/instance-id');echo "instance id = " . $instance_id . "\\xA";?>7.    Review your ELB access log to check if requests from the same user were routed to different registered instances. For more information about reviewing ELB access logs, see Access logs for your Classic Load Balancer. 8.    Verify that the cookie inserted by the application is formatted correctly for HTTP.9.    If your request is passing through multiple load balancers, verify that stickiness is enabled on only one load balancer. If more than one load balancer inserts a cookie, it replaces the original cookie and stickiness fails. Duration-based session stickiness1.    Run a command similar to the following to the load balancer DNS name to check for an AWSELB cookie in the response: [ec2-user@ip-172-31-22-85 ~]$ curl -vko /dev/null internal-TESTELB-1430759361.eu-central-1.elb.amazonaws.comThe command returns a response similar to the following example:...< Set-Cookie: AWSELB=438DC7A50C516D797550CF7DE2A7DBA19D6816D5E6FB20329CD6AEF2B40030B12FF2839757A60E2330136A2182D27D049FB9D887FBFE9E80FB0724130FB3A86A4B0BAC296FDEB9E943EC9272FF52F5A8AEF373DF33;PATH=/...Note: If there is no AWSELB cookie in the response, then there is no stickiness between the client and the backend instance.2.    If the load balancer inserts an AWSELB cookie, be sure that the client sends this cookie in subsequent requests. If the client fails to include the AWSELB cookie, then stickiness won't work. To verify that the client sends the AWSELB cookie, take a packet capture on the client or use the browser’s web-debugging tools to retrieve the cookie information in the request header.3.    Check the duration configured on the load balancer. If the cookie expiration period has passed, client sessions no longer stick to the registered instance until a new cookie is issued by the load balancer.4.    If your request is passing through multiple load balancers, verify that stickiness is enabled on only one load balancer. If more than one load balancer inserts a cookie, it replaces the original cookie and stickiness fails. Related informationConfigure sticky sessions for your Classic Load Balancerdescribe-load-balancersFollow"
https://repost.aws/knowledge-center/troubleshoot-classic-elb-stickiness
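To reproduce a "sticky" client from the command line, you can have curl store and replay the cookies, as a sketch of steps 2 and 5 above. The load balancer DNS name is the example name used in the article.

# First request: save any Set-Cookie values (AWSELB and/or the application cookie)
curl -c cookies.txt -s -v -o /dev/null internal-TESTELB-1430759361.eu-central-1.elb.amazonaws.com

# Subsequent requests: replay the saved cookies, as a well-behaved sticky client would
curl -b cookies.txt -s -v -o /dev/null internal-TESTELB-1430759361.eu-central-1.elb.amazonaws.com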
Why isn't my CloudWatch GetMetricStatistics API call returning data points?
"My Amazon CloudWatch "GetMetricStatistics" API call isn't returning any data points. However, the data points are available on the CloudWatch console."
"My Amazon CloudWatch "GetMetricStatistics" API call isn't returning any data points. However, the data points are available on the CloudWatch console.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.CloudWatch is a Regional service, so make sure that the API call uses the correct AWS Region. CloudWatch issues the GetMetricStatistics API call with multiple arguments, and those arguments must match the properties of the metric. These arguments are case sensitive, so be sure that the names and cases match in the CloudWatch console configuration. Errors are most often the result of incorrect arguments.DimensionsIf the metric is created with multiple dimensions, then you can retrieve the data points for that only if you specify all the configured dimensions. For example, suppose you publish a metric named ServerStats in the DataCenterMetric namespace with these properties:Dimensions: Server=Prod, Domain=Frankfurt, Unit: Count, Timestamp: 2016-10-31T12:30:00Z, Value: 105To retrieve data points for this metric, specify the following dimensions:Server=Prod,Domain=FrankfurtHowever, if you specify just one of the two dimensions, then you can't retrieve the data points. See the following example:Server=ProdWith the AWS CLI, the format for specifying dimensions in the get-metric-statistics command is different from the put-metric-data command. Be sure to use a format similar to the following:"Name"=string, "Value"=stringNote: In this case, a format of Name=Value is unsuccessful.See the following example of a get-metric-statistics call:aws cloudwatch get-metric-statistics --metric-name "MyMetric" --start-time 2018-04-08T23:18:00Z --end-time 2018-04-09T23:18:00Z --period 3600 --namespace "MyNamespace" --statistics Maximum --dimensions Name=Server,Value=ProdSee the following example of a put-metric-data call:aws cloudwatch put-metric-data --namespace "MyNamespace" --metric-name "MyMetric" --dimensions Server=Prod --value 10PeriodIf the metric isn't pushed for the specified period value, then no data points are returned.For example, if you activate basic monitoring for an instance, then Amazon Elastic Compute Cloud (Amazon EC2) pushes data points every five minutes. For example, suppose that Amazon EC2 pushes the data points at timestamps 12:00, 12:05, 12:10, and so on. Your start time and end time are 12:01 and 12:04, and then you try to retrieve the data points with a period of 60 seconds. In this case, you don't see any data points. It's a best practice to have your start time and end time extend beyond the minimum granularity that's offered by the metric. (For this use case, the granularity is 5 minutes.) Or, use a period that's greater than or equal to the minimum granularity that's offered by the metric.StatisticsTo retrieve percentile statistics for a metric, use ExtendedStatistic.CloudWatch uses raw data points to calculate percentiles. 
When you publish data using a statistic set, you can retrieve percentile statistics for this data only if one of the following conditions is true:The SampleCount of the statistic set is 1.The Min and the Max of the statistic set are equal.UnitIf the specified unit is different from the one that's configured for the metric, then no data points are returned.If you don't specify the unit argument, then data points for all units are returned.Start time and end timeFormat the start time and the end time arguments as specified in the GetMetricStatistics documentation.If no data points are pushed for the metric between the start time and end time, then no data points are returned.Note: Data points with timestamps from 24 hours ago or longer can take at least 48 hours to become available for get-metric-statistics. For more information, see put-metric-data.Related informationAWS services that publish CloudWatch metricsFollow"
https://repost.aws/knowledge-center/cloudwatch-getmetricstatistics-data
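To retrieve a percentile statistic as described in the Statistics section, use --extended-statistics in place of --statistics. The sketch below reuses the example namespace, metric, dimension, and time range from the article; p99 is an assumed percentile.

# Retrieve the p99 percentile for the example custom metric
aws cloudwatch get-metric-statistics \
    --namespace "MyNamespace" \
    --metric-name "MyMetric" \
    --dimensions Name=Server,Value=Prod \
    --start-time 2018-04-08T23:18:00Z \
    --end-time 2018-04-09T23:18:00Z \
    --period 3600 \
    --extended-statistics p99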
How am I billed for AWS Support?
"I want to sign up for a paid AWS Support plan, and I want to know how I'm charged for it."
"I want to sign up for a paid AWS Support plan, and I want to know how I'm charged for it.ResolutionWhen you sign up for a paid AWS Support plan, your payment method is charged a prorated minimum monthly service charge on your first monthly bill. If applicable, usage charges that exceed the monthly minimum charge are also billed at the end of the month. You can see your monthly charges, including your prorated charges, by expanding the Support section on your Bills page.Related informationAWS pricingAWS Support plan pricingCompare AWS Support plansAWS Support FAQsFollow"
https://repost.aws/knowledge-center/support-billing
I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?
I incorrectly configured my bucket policy to deny all users access to my Amazon Simple Storage Service (Amazon S3) bucket.
"I incorrectly configured my bucket policy to deny all users access to my Amazon Simple Storage Service (Amazon S3) bucket.ResolutionConditions for the bucket policy can't be metIf the conditions for the bucket policy can't be met, then you can still regain access of your Amazon S3 bucket. To regain access to your bucket, sign in to the Amazon S3 console as the AWS account root user. Then, delete the bucket policy.Important: Don't use the root user for everyday tasks. Limit the use of these credentials to only the tasks that require you to sign in as the root user. Root credentials aren't the same as an AWS Identity Access Management (IAM) user or role with full administrator access. You can't attach IAM policies with allow or deny permissions to the root account.Sign in to the AWS Management Console as the account root user.Open the Amazon S3 console.Navigate to the incorrectly configured bucket.Choose the Permissions tab.Choose Bucket Policy.Choose Delete.On the Delete bucket policy page, enter delete into the text field to confirm the deletion of the bucket policy.Choose Delete.Sign out of the AWS Management Console.(Optional) It's a best practice for the account administrator to rotate the root user password.After the root user deletes the bucket policy, an IAM user with bucket access can apply a new bucket policy with the correct permissions. For more information, see Bucket policy examples and Adding a bucket policy by using the Amazon S3 console.Conditions for the bucket policy can be metIf you can't use the root user account, then you can delete the policy if you meet the bucket policy conditions.To regain access to your bucket, complete the following steps:Review the bucket policy to determine the conditions that are set that can be fulfilled.Take the steps to meet the bucket policy conditions.After regaining access, update the bucket policy to remove or modify the restrictive conditions to prevent future lockouts.Test the changes and make sure that the level of access control is correct.If you're unsure of the policy applied to a bucket prior to a lockout, then use AWS CloudTrail to review the event. To search for recent PutBucketPolicy actions in the account using CloudTrail, complete the following steps:Open the CloudTrail console.In the navigation pane, choose Event history.On the Event history page, under Lookup attributes, choose Event name.In the Enter an event name field, choose PutBucketPolicy and press enter.Choose the most recent event and review the details of the event. The event displays the request and response parameters. This includes the bucket name and the full bucket policy.Follow"
https://repost.aws/knowledge-center/s3-accidentally-denied-access
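If you have CLI credentials for a principal that the policy's conditions still allow (or for the account root user), the same recovery can be sketched with the AWS CLI. The bucket name below is a placeholder.

# Remove the misconfigured bucket policy
aws s3api delete-bucket-policy --bucket DOC-EXAMPLE-BUCKET

# Review recent PutBucketPolicy events to see the policy that caused the lockout
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketPolicy \
    --max-results 5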
How can I grant access to the AWS Management Console for on-premises Active Directory users?
I want to grant access to the AWS Management Console using my Active Directory domain credentials.
"I want to grant access to the AWS Management Console using my Active Directory domain credentials.Short descriptionManage Amazon Web Services (AWS) resources with AWS Identity and Access Management (IAM) role-based access to the AWS Management Console. Use either AD Connector or AWS Directory Service for Microsoft Active Directory. The IAM role defines the services, resources, and level of access that your Active Directory users have.ResolutionChoose either AD Connector or AWS Managed Microsoft ADCreate a VPN connection and configure an AD Connector between your on-premises domain with the following minimum port requirements:TCP/UDP 53 for DNSTCP/UDP 88 for Kerberos authenticationTCP/UDP 389 for LDAP authenticationFor more information, see AD Connector prerequisites.- or -Use an existing trust relationship between your on-premises domain and AWS Managed Microsoft AD with the following minimum port requirements:TCP/UDP 53 for DNSTCP/UDP 88 for Kerberos authenticationTCP/UDP 389 for LDAP authenticationTCP 445 for SMBFor more information, see Create a trust relationship between your AWS Managed Microsoft AD and your self-managed Active Directory domain.Set up authenticationCreate an access URL for the directory.Activate AWS Management Console access for your AD Connector or AWS Managed Microsoft AD.Create an IAM role that grants access to the AWS Management Console for services that you want your Active Directory users to have access to.Note: Be sure that the IAM role has a trust relationship with AWS Directory Service.Assign Active Directory users or groups to the IAM role.Verify that users can access the AWS Management Console. Open the directory access URL in a private browsing session and sign in with a user account that's assigned to the IAM role. Then, check the AWS service consoles to confirm that you're permitted or denied access to services as specified by the IAM role.Related informationCreating a role to delegate permissions to an IAM userFollow"
https://repost.aws/knowledge-center/enable-active-directory-console-access
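A sketch of the IAM role creation step, assuming the AWS CLI: the trust policy lets AWS Directory Service (ds.amazonaws.com) assume the role, and the attached ReadOnlyAccess managed policy is only an example of the access level you might grant. The role name and file name are hypothetical.

# Trust policy that allows AWS Directory Service to assume the role
cat > ds-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ds.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach an example permissions policy
aws iam create-role \
    --role-name ADConsoleReadOnly \
    --assume-role-policy-document file://ds-trust-policy.json

aws iam attach-role-policy \
    --role-name ADConsoleReadOnly \
    --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess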
How can I perform write operations to my Amazon RDS for MariaDB or MySQL DB instance read replica?
I want to perform both read and write operations in my Amazon Relational Database Service (Amazon RDS) for MariaDB or MySQL DB instance read replica. How can I do this?
"I want to perform both read and write operations in my Amazon Relational Database Service (Amazon RDS) for MariaDB or MySQL DB instance read replica. How can I do this?Short descriptionAmazon RDS DB instance read replicas are read-only by design. In some scenarios, you might need to configure a DB instance read replica so that the replica is modifiable.ResolutionIf you're using Amazon RDS for MySQL or MariaDB, configure a DB instance read replica to be read/write. You can do this by setting the read_only parameter to false for the DB parameter group that is associated with your DB instance. The read_only parameter can't be modified when using other Amazon RDS engines, such as Amazon Aurora.Note: Automations, such as backups, restore, and failover, are not impacted when you enable writes on the replica. But, if you perform writes without understanding the impact of the writes, this can cause inconsistency or replication failures.To configure your Amazon RDS DB instance read replica to be read/write, follow these steps:Create a DB parameter group for your MySQL or MariaDB instance.Modify the parameter group.Associate your RDS DB instance with the DB parameter group.Note: If you create a DB instance without specifying a DB parameter group, a default DB parameter group is created. This means that default parameter groups can't be modified. If you already have a custom parameter group that is associated with the instance, you don't need to create a new parameter group. For more information about DB parameter groups, see Working with parameter groups.Create a DB parameter groupOpen the Amazon RDS console.In the navigation pane, from Parameter groups, choose Create parameter group.For Parameter group family, choose the parameter group family.For Type, choose DB Parameter Group.For Group name, enter the name of the new DB parameter group.For Description, enter a description for the new DB parameter group.Choose Create.Modify the parameter groupOpen the Amazon RDS console.In the navigation pane, from Parameter groups, select the parameter group that you want to modify.Choose Parameter group actions, and then choose Edit.Edit the following parameter: read_only = 0Choose Save changes.Associate your RDS DB instance with the DB parameter groupOpen the Amazon RDS console.In the navigation pane, from Databases, select the DB instance that you want to associate with the modified DB parameter group.Choose Modify.Note: The instance status is Modifying, and the parameter group is Applying.From Database options, choose the parameter group that you want to associate with the DB instance.After the instance status is Available and the parameter group is Pending-reboot, reboot the instance without failover.Note: The parameter group name changes immediately, but changes to the parameter aren't applied until you reboot the instance without failover.Related informationOverview of Amazon RDS read replicasHow do I modify the values of an Amazon RDS DB parameter group?Follow"
https://repost.aws/knowledge-center/rds-read-replica
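The same parameter group workflow can be sketched with the AWS CLI. The group name, parameter group family, and replica identifier below are placeholders; match the family to your engine and version.

# 1) Create a custom parameter group (placeholder name; family is an assumption)
aws rds create-db-parameter-group \
    --db-parameter-group-name writable-replica-pg \
    --db-parameter-group-family mysql8.0 \
    --description "Read replica with read_only disabled"

# 2) Set read_only to 0
aws rds modify-db-parameter-group \
    --db-parameter-group-name writable-replica-pg \
    --parameters "ParameterName=read_only,ParameterValue=0,ApplyMethod=immediate"

# 3) Associate the group with the replica, then reboot (without failover) to apply it
aws rds modify-db-instance \
    --db-instance-identifier my-read-replica \
    --db-parameter-group-name writable-replica-pg

aws rds reboot-db-instance --db-instance-identifier my-read-replica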
How do I install SSM Agent on an Amazon EC2 Linux instance at launch?
I want to install the AWS Systems Manager Agent (SSM Agent) on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance and have it start before launch.
"I want to install the AWS Systems Manager Agent (SSM Agent) on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance and have it start before launch.Short descriptionFor a list of Amazon Machine Images (AMIs) with SSM Agent preinstalled, see Amazon Machine Images (AMIs) with SSM Agent preinstalled.You must manually install SSM Agent on Amazon EC2 instances created from other versions of Linux AMIs. You can install SSM Agent by adding user data to an Amazon EC2 Linux instance before the launch. You can keep the SSM Agent up to date by activating SSM Agent auto update under Fleet Manager settings.Important: Before installing SSM Agent, make sure that the following requirements are met:AWS System Manager - Supported operating systemsSystems Manager prerequisitesResolution1.    Create an AWS Identity and Access Management (IAM) instance profile to use with SSM Agent.2.    Launch a new Amazon EC2 instance. Then, configure your instance parameters, such as application and OS images, instance type, key pair, network settings, and storage.3.    Expand the Advanced Details section. In the IAM Instance Profile dropdown list, select the instance profile that you created in step 1.4.    In the User data box, enter the following information.Amazon Linux 2023, Amazon Linux 2, RHEL 7, and CentOS 7 (64 bit)#!/bin/bashcd /tmpsudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpmsudo systemctl enable amazon-ssm-agentsudo systemctl start amazon-ssm-agentRHEL 9, RHEL 8, and CentOS 8#!/bin/bashcd /tmpsudo dnf install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpmsudo systemctl enable amazon-ssm-agentsudo systemctl start amazon-ssm-agentNote: Python 2 or Python 3 must be installed on your RHEL 9, RHEL 8 or CentOS 8 instance for SSM Agent to work correctly. To verify that Python is installed, add the following command to the preceding command examples:sudo dnf install python3Amazon Linux, CentOS 6 (64 bit)#!/bin/bashcd /tmpsudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpmsudo start amazon-ssm-agentUbuntu 22 and Ubuntu 16 (Deb Installer), Debian 8 and 9#!/bin/bashmkdir /tmp/ssmcd /tmp/ssmwget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.debsudo dpkg -i amazon-ssm-agent.debsudo systemctl enable amazon-ssm-agentUbuntu 14 (Deb installer)#!/bin/bashmkdir /tmp/ssmcd /tmp/ssmwget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.debsudo dpkg -i amazon-ssm-agent.debsudo start amazon-ssm-agentSuse 15, Suse 12#!/bin/bashmkdir /tmp/ssmcd /tmp/ssmwget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpmsudo rpm --install amazon-ssm-agent.rpmsudo systemctl enable amazon-ssm-agentsudo systemctl start amazon-ssm-agentFor more information, see User data and the console.6.    Enter the number of instances to be launched.7.    Launch your instance(s).For Windows, see How do I install AWS Systems Manager Agent (SSM Agent) on an Amazon EC2 Windows instance at launch?Activate SSM Agent auto update1.    Open the AWS Systems Manager console.2.    In the navigation pane, choose Fleet Manager.3.    Choose the Settings tab, and then choose Auto update SSM Agent under Agent auto update.Note: The Auto update SSM Agent setting applies to all the managed nodes in the Region where this setting is configured.4.    
Then, configure your SSM Agent fleet:To change the version of SSM Agent your fleet updates to, choose Edit under Agent auto update on the Settings tab. Then, enter the version number of SSM Agent you want to update to in Version under Parameters. If the version number isn't specified, then the agent updates to the latest version.To change the defined schedule (the default is to run every 14 days), choose Edit under Agent auto update on the Settings tab. Then, configure your preferred schedule using the On Schedule option under Specify schedule based on Cron and rate expressions for associations.To stop automatically deploying updated versions of SSM Agent to managed nodes in your account, choose Delete under Agent auto update on the Settings tab. This deletes the State Manager association that automatically updates SSM Agent on your managed nodes.Related informationAutomating updates to SSM AgentWorking with SSM Agent on EC2 instances for LinuxSetting up AWS Systems ManagerWorking with SSM Agent on EC2 instances for Windows ServerWhy is my EC2 instance not displaying as a managed node or showing a "Connection lost" status in Systems Manager?Follow"
https://repost.aws/knowledge-center/install-ssm-agent-ec2-linux
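As a sketch of launching with user data from the AWS CLI, save one of the scripts above as a local file (here the hypothetical install-ssm-agent.sh) and pass it with --user-data. The AMI ID and instance profile name are placeholders. After a few minutes, the instance should appear as a managed node.

# Launch an instance with the user data script and the instance profile from step 1
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --iam-instance-profile Name=MySSMInstanceProfile \
    --user-data file://install-ssm-agent.sh \
    --count 1

# Confirm that the instance registered with Systems Manager
aws ssm describe-instance-information \
    --query 'InstanceInformationList[].[InstanceId,PingStatus,AgentVersion]'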
How can I configure a Site-to-Site VPN connection with dynamic routing between AWS and Microsoft Azure?
I want to configure AWS Site-to-Site VPN connectivity between AWS and Microsoft Azure with Border Gateway Protocol (BGP).
"I want to configure AWS Site-to-Site VPN connectivity between AWS and Microsoft Azure with Border Gateway Protocol (BGP).ResolutionNote: For more information on optimizing performance, see AWS Site-to-Site VPN, choosing the right options to optimize performance.PrerequisitesBefore configuring your connection, check the following:Make sure that you have an Amazon Virtual Private Cloud (Amazon VPC) CIDR associated to a virtual private gateway or attached to a transit gateway. Make sure that the Amazon VPC CIDR doesn't overlap with the Microsoft Azure network CIDR.AWS configuration1.    Create a customer gateway.2.    Verify the Autonomous System Number (ASN). You can add your own, or use the default option (65000). If you choose the default, then AWS provides an ASN for your customer gateway.3.    For Customer Gateway IP address, enter the Microsoft Azure public IP address. You're given this address when you configure the virtual network gateway in the Microsoft Azure portal. Refer to step 2 of the Microsoft Azure Configuration section in this article for more information.4.    Create an AWS Site-to-Site VPN.5.    Choose an address from the Microsoft Azure reserved APIPA address range for your Site-to-Site VPN. This is necessary because you're setting up BGP Site-to-Site VPN for Microsoft Azure and because AWS Site-to-Site VPN devices use APIPA addresses for BGP. This range is from 169.254.21.0 to 169.254.22.255 for tunnels inside the IPv4 CIDR address. See the following example:Example address: 169.254.21.0/30   BGP IP address (AWS): 169.254.21.1BGP Peer IP address (Microsoft Azure): 169.254.21.26.    For Gateway, choose either the virtual private gateway or transit gateway, and then for Routing options, choose Dynamic.7.    Choose your VPN ID, and then for Vendor choose Generic. 8.    Download the AWS configuration file.If you're building a Site-to-Site VPN connection to a transit gateway, then make sure that you have the correct transit gateway attachments. Do this for both the Amazon VPC and your Site-to-Site VPN. Also, turn on route propagation. Initially, only the Amazon VPC routes are propagated. The Microsoft Azure virtual network CIDR isn't propagated in the transit gateway route tables until BGP is established.Microsoft Azure configuration1.    Follow the instructions on the Microsoft website to create a virtual network in Microsoft Azure.2.    Follow the instructions on the Microsoft website to create a virtual network gateway with a public IP address assigned to it. Use the following details:Region: Choose the region that you want to deploy the virtual network gateway in.Gateway type: VPNVPN Type: Route-basedSKU: Choose the SKU that meets your requirements for workloads, throughputs, features, and SLAs.Virtual Network: A virtual network is associated with your virtual network gateway (similar to a VPC in the AWS environment).Enable active-active mode: Choose Disabled. This creates a new public IP address that's used as the customer gateway IP address in the AWS Management Console.Configure BGP: Choose Enabled.Custom Azure APIPA BGP IP address: (169.254.21.2).Note: The ASN that you specify for the virtual network gateway must be the same as the customer gateway ASN in the AWS Management Console (65000).3.    Follow the instructions on the Microsoft website to create a local network gateway. Use the following details:IP Address: Enter the public IP address of Tunnel 1 that you received when you created AWS Site-to-Site VPN. 
You can view this in the configuration file that you downloaded from the AWS Management Console.Address space: Enter the Amazon VPC CIDR block.Autonomous System Number (ASN): Enter the AWS ASN.BGP peer IP address: Enter the AWS BGP IP (as seen in step 5 of the AWS configuration).4.    Follow the instructions on the Microsoft website to create a Site-to-Site VPN connection in the Microsoft Azure portal with BGP turned on. Note: The cryptographic algorithms and PSK are the same on both the Microsoft Azure side and the AWS side.Phase 1 (IKE):    Encryption: AES256 Authentication: SHA256 DH Group: 14Phase 2 (IPSEC): Encryption: AES256 Authentication: SHA256 DH Group: 14 (PFS2048) The Diffie-Hellman group used in Quick Mode or Phase 2 is the PFS group specified in Azure. Lifetime: 3600s (Default on the Azure portal is set to 27000s. AWS supports a maximum of 3600s for the IPSEC lifetime.)Set up Active/Active BGP failover with AWS Site-to-Site VPN between AWS and Microsoft Azure1.    Follow the instructions on the Microsoft website to create a virtual network gateway. For Enable active-active mode, choose Enabled. This provides two public IP addresses.2.    Open the AWS Site-to-Site VPN console. Use the two public IP addresses from the Microsoft Azure portal for the virtual network gateway to create two customer gateways. Use the following details:IP address: Enter the Azure public node 1 IP address for the first customer gateway and the Azure public node 2 IP address for the second customer gateway.BGP ASN: Enter the ASN that you configured on the Microsoft Azure side.Routing type: Choose Dynamic.3.    In the AWS Management Console, create two Site-to-Site VPN connections that connect to either a virtual private gateway or a transit gateway. For Tunnel 1 of both Site-to-Site VPN connections, enter the following for the BGP peer IP address:Site-to-Site VPN 1: 169.254.21.0/30Site-to-Site VPN 2: 169.254.22.0/30The first IP address inside the /30 address is assigned to the AWS Site-to-Site VPN BGP IP address (169.254.21.1 or 169.254.22.1), and the second is assigned to the Microsoft Azure BGP IP (169.254.21.2 or 169.254.22.2).4.    Using the Microsoft Azure portal, create two Microsoft Azure local network gateways. For the IP addresses, use the Tunnel 1 public IP addresses from your AWS Site-to-Site VPN tunnels. Also, make sure that the ASN matches the virtual private gateway or transit gateway.5.    Using the Microsoft Azure portal, create two Microsoft Azure Site-to-Site VPN connections. Make sure that each connection has a Microsoft Azure virtual network gateway pointing towards the local network gateways that you created in the previous step.Turn on Transit gateway ECMP SupportFor an Active/Active setup, where two Site-to-Site VPN connections terminate on transit gateways, both Site-to-Site VPNs have a single tunnel configured. So, there are two active Site-to-Site VPN tunnels out of a possible four. When the transit gateway has ECMP Support turned on, traffic can be load balanced across both Site-to-Site VPN connections. If one Site-to-Site VPN connection goes into the DOWN status, then failover to the redundant link happens automatically through BGP.Verify VPN Connection StateAfter your Site-to-Site VPN configuration is established, check that the VPN Tunnel State is in the UP status. To do this, choose the Tunnel Details tab on the Site-to-Site VPN console.On the Microsoft Azure portal, verify the VPN connection.
Confirm that the status shows Succeeded, and that it changes to Connected after a successful connection is made.Create an Amazon Elastic Compute Cloud (Amazon EC2) instance in your Amazon VPC to verify the connectivity between AWS and Microsoft Azure. Then, follow the instructions on the Microsoft website to connect to the Microsoft Azure VM private IP address and confirm that the Site-to-Site VPN connection is established.For more information, see Testing the Site-to-Site VPN connection and How do I check the current status of my VPN tunnel?Follow"
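For reference, the following is a minimal AWS CLI sketch of the AWS-side steps described above: create the customer gateway from the Azure public IP, then create the BGP-based Site-to-Site VPN with the APIPA inside tunnel CIDR. The IP address, ASN, and gateway IDs are placeholders for illustration only.
# Create a customer gateway that represents the Azure virtual network gateway public IP (placeholder IP and ASN)
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000
# Create the Site-to-Site VPN connection with dynamic (BGP) routing and an inside tunnel CIDR from the Azure-reserved APIPA range
# (use --transit-gateway-id instead of --vpn-gateway-id if you terminate the VPN on a transit gateway)
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-0abc1234example --vpn-gateway-id vgw-0abc1234example --options '{"StaticRoutesOnly":false,"TunnelOptions":[{"TunnelInsideCidr":"169.254.21.0/30"}]}'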
https://repost.aws/knowledge-center/vpn-azure-aws-bgp
My Lambda rotation function called a second function to rotate a Secrets Manager secret but it failed. Why did the second Lambda function fail?
My AWS Lambda function failed to rotate an AWS Secrets Manager secret for another function.
"My AWS Lambda function failed to rotate an AWS Secrets Manager secret for another function.Short descriptionIf a rotation function calls a second Lambda function to rotate the secret, the rotation fails with a message similar to the following:Pending secret version EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE for Secret MySecret was not created by Lambda MyRotationFunction. Remove the AWSPENDING staging label and restart rotation.ResolutionTo resolve this issue, make sure that the code to rotate your secret is contained in a single Lambda function that's set as the rotation function for the secret.Related informationTroubleshoot AWS Secrets Manager rotationFollow"
https://repost.aws/knowledge-center/secrets-manager-function-rotate
Why do I still see charges for Amazon S3 after I deleted all the files from my Amazon S3 buckets?
"I deleted all the files from my Amazon Simple Storage Service (Amazon S3) buckets, but I still have charges for S3 on my bill."
"I deleted all the files from my Amazon Simple Storage Service (Amazon S3) buckets, but I still have charges for S3 on my bill.ResolutionHere are some things you can check to minimize your S3 charges:Disable logging for any S3 buckets before deleting them. Otherwise, logs might be immediately written to your bucket after you delete your bucket's objects.Delete any file versions (backups of S3 objects) in versioning-enabled buckets.Delete empty S3 buckets that you don't need. Some programs (including the AWS Management Console) initiate LIST requests to pull the names of your buckets. These requests are charged at standard rates.Cancel any incomplete multipart upload requests.Related informationWorking with Amazon S3 bucketsWorking with Amazon S3 objectsFollow"
https://repost.aws/knowledge-center/stop-deleted-s3-charges
How can I troubleshoot AWS DMS endpoint connectivity failures?
"I can't connect to my AWS Database Migration Service (AWS DMS) endpoints. Why is my test connection failing, and how can I troubleshoot these connectivity issues?"
"I can't connect to my AWS Database Migration Service (AWS DMS) endpoints. Why is my test connection failing, and how can I troubleshoot these connectivity issues?Short descriptionThere are two types of errors typically seen when testing the connection from the replication instance to the source or target endpoint:1.    If the error occurred because of connectivity issue between replication instance and source or target, then you see errors similar to the following:"Application-Status: 1020912, Application-Message: Failed to connect Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message: [unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout expired ODBC general error.""Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: [unixODBC]timeout expired ODBC general error.""Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider ODBC general error., Application-Detailed-Message: RetCode: SQL_ERROR SqlState: HY000 NativeError: 2005 Message: [unixODBC][MySQL][ODBC 5.3(w) Driver]Unknown MySQL server host 'mysql1.xxxxx.us-east-1.rds.amazonaws.com' (22) ODBC general error."2.    If the error occurred because of a native database error, such as database permission or authentication error, then you see errors similar to the following:"Application-Status: 1020912, Application-Message: Cannot connect to ODBC provider Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: 08001 NativeError: 101 Message: [unixODBC]FATAL: password authentication failed for user "dmsuser" ODBC general error."Based on the type of error you receive and your network configuration, see the appropriate resolution section.Note: It's a best practice to test the connectivity from the AWS DMS replication instance to the endpoints after creating your AWS DMS source and target endpoints. Do this before starting the AWS DMS migration task. Otherwise, the task can fail because of connectivity issues with the endpoint.ResolutionResolve connectivity issues for AWS hosted resourcesVerify that connectivity can be established between the source or target database and the replication instance. Depending on your use case and network infrastructure, connect your source or target database to a replication instance in public subnet or private subnet. For more information, see Setting up a network for a replication instance.Note: AWS DMS versions 3.4.7 and later require that you configure AWS DMS to use VPC endpoints or to use public routes to all your source and target endpoints that interact with certain Amazon Web Services. If your DMS endpoint tests are successful on earlier versions but failing on later, see Configuring VPC endpoints as AWS DMS source and target endpoints.Check your replication instance configurationIn your replication instance, confirm that your configuration includes the following:An Outbound Rule for the IP address with the port of the source or target database in the security group. By default, the Outbound Rule of a security group allows all traffic. Security groups are stateful, so you don't need to modify the Inbound Rule from the default.An Outbound Rule for the IP address with the port of the source or target database in the network ACL. 
By default, the Outbound Rule of a network access control list (ACL) allows all traffic.An Inbound Rule for the IP address with the ephemeral ports of the source or target database in the network ACL. By default, the Inbound Rule of a network ACL allows all traffic.Check your source or target database configurationIn your source or target database, confirm that your configuration includes the following:An Inbound Rule for the IP address of the replication instance or the CIDR of the subnet group of the replication instance with the port of the source or target database in the security group. Security groups are stateful, so you don't need to modify the Outbound Rule from the default.Note: To find the IP addresses and CIDRs, see the Determine the IP addresses and CIDR of a subnet group section.An Inbound Rule for the IP address of the replication instance or the CIDR of the subnet group of the replication instance with the port of the source or target database in the network ACL. Confirm that there is no explicit deny rule for the IP address and port allowed.An Outbound Rule for the IP address or the CIDR of the subnet group of the replication instance with ephemeral ports in the network ACL. By default, the Outbound Rule of a network ACL allows all traffic.As a best practice, configure your network to allow the CIDR of the subnet group of the replication instance. The IP address of the replication instance changes during a failover or host replacement event.Determine the IP addresses and CIDR of a subnet groupTo determine the IP addresses and CIDR of the subnet group to set up inbound and outbound rules, use the AWS DMS console or the CLI.Using the AWS console:Access the AWS DMS console.From the navigation pane, choose Replication instances.Choose the name of your replication instance.Under Details, note the Public IP address, Private IP address, and the Replication subnet group of your replication instance.Under Replication subnet group, choose the link to access the subnet group page. Note the name of each subnet in the subnet group.To verify the CIDR of each subnet, access the Amazon Virtual Private Cloud (Amazon VPC) console.In the Subnets tab, search for the subnets noted in step 5. For each subnet, note the CIDR.Using the AWS CLI:Run the describe-subnets command to determine the CIDR of each subnet. For replication-instance-name, enter the name of your replication instance.aws ec2 describe-subnets --filters Name=subnet-id,Values="$(aws dms describe-replication-instances --filters "Name=replication-instance-id,Values=replication-instance-name" --query "ReplicationInstances[*].ReplicationSubnetGroup.Subnets[*].SubnetIdentifier" --output text | sed -e 's/\t/,/g')" --query "Subnets[*].{SubnetId:SubnetId,CidrBlock:CidrBlock}" --output tableRun the describe-replication-instances command to determine the IP addresses of the replication instance.
For replication-instance-name, enter the name of your replication instance.aws dms describe-replication-instances --filters "Name=replication-instance-id,Values=replication-instance-name" --query "ReplicationInstances[*].{ReplicationInstancePublicIpAddresses:ReplicationInstancePublicIpAddresses,ReplicationInstancePrivateIpAddresses:ReplicationInstancePrivateIpAddresses}" --output tableResolve connectivity issues (on-premises resources)If your source or target database is hosted on-premises, then confirm the following:Check with your network administrator to confirm that the database allows incoming connections from the AWS DMS replication instance.Confirm that a firewall isn't blocking communication to the source or target database.Confirm that the DNS configuration is set up correctly. If you require DNS resolution, then use the Amazon Route 53 Resolver. For information on using an on-premises name server to resolve endpoints using the Amazon Route 53 Resolver, see Using your own on-premises name server.-or-Create a new DMS instance through the AWS CLI to use a customer DNS name server (--dns-name-servers) to resolve DNS issues. By default, DMS instances use Amazon-provided DNS for resolutions. DMS endpoints can fail if the source or target is configured to use custom DNS. For more information, see create-replication-instance.Be sure that your Amazon EC2 instance has the same network configurations as the AWS DMS replication instance with the connectivity issues. Run the following commands on the new EC2 instance to troubleshoot network connectivity:telnet <database_IP_address_or_DNS> <port_number>nslookup <domain_name>For database_IP_address_or_DNS, use the IP address or domain name of the database specified for the DMS source or target endpoint. For port_number, use the port number of the database specified for the DMS source or target endpoint. For domain_name, use the domain name of the database specified for the DMS source or target endpoint.Resolve native database errorsTo resolve native database errors, confirm that the following endpoint configurations are set correctly:UsernamePasswordSet the ServerName to the DNS or IP of the on-premises database, or Amazon Relational Database Service (Amazon RDS) endpointPortDatabase nameNote: Don't specify a Database name for MySQL source or target.Note: If any of these fields are specified with AWS Secrets Manager, then see Using secrets to access AWS Database Migration Service endpoints.For native database errors related to the source or target database, refer to the resolution from the specific database documentation. Use the error code and the error message on the DMS console.For more information, check error, trace, alert, or other logs from the source or target database.For database access errors, confirm the permissions required by DMS for the specific source or target.For more information on encrypting connections for source and target endpoints using Secure Sockets Layer (SSL), see Using SSL with AWS Database Migration Service.Related informationHow can I troubleshoot Amazon S3 endpoint connection test failures when using AWS DMS?How can I troubleshoot connectivity failures and errors for an AWS DMS task that uses Amazon Redshift as the target endpoint?How can I troubleshoot connectivity failures between AWS DMS and a MongoDB source endpoint?Migrate an on-premises Oracle database to Amazon RDS for PostgreSQL using an Oracle bystander and AWS DMSMigrating Microsoft SQL Server databases to the AWS CloudFollow"
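After adjusting the security group, network ACL, or DNS settings, you can re-run the endpoint test from the AWS CLI as a quick check. The ARNs below are placeholders.
# Start a connection test between the replication instance and the endpoint
aws dms test-connection --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE
# Review the status and last failure message of recent connection tests for that endpoint
aws dms describe-connections --filters "Name=endpoint-arn,Values=arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE"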
https://repost.aws/knowledge-center/dms-endpoint-connectivity-failures
How do I access the Kubernetes Dashboard on a custom path in Amazon EKS?
I want to access the Kubernetes dashboard on a custom path in Amazon Elastic Kubernetes Service (Amazon EKS).
"I want to access the Kubernetes dashboard on a custom path in Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionTo access the Kubernetes dashboard, you must complete the following:1.    Create or use an existing self-signed certificate, and then upload the certificate to the AWS Certificate Manager (ACM).2.    Deploy the NGINX Ingress Controller, and then expose it as a NodePort service.3.    Create an Ingress object for the Application Load Balancer Ingress Controller. Have the Ingress object forward all the requests from the Application Load Balancer to the NGINX Ingress Controller that you deploy using a manifest file.4.    Deploy the Kubernetes dashboard.5.    Create an Ingress for the NGINX Ingress Controller.Here's how the resolution works:1.    The Application Load Balancer forwards all incoming traffic to the NGINX Ingress Controller.2.    The NGINX Ingress Controller evaluates the path-pattern of the incoming request (for example, /custom-path/additionalcustompath).3.    The NGINX Ingress Controller rewrites the URL to /additionalcustompath before forwarding the request to the kubernetes-dashboard service.Note: This solution doesn't work on clusters running kubernetes versions earlier than 1.19.ResolutionCreate or use an existing self-signed certificate, and then upload the certificate to ACMIf your Application Load Balancer uses an existing ACM certificate, then skip to "Deploy the NGINX Ingress Controller and expose it as a NodePort service".Note: The following steps apply to the Amazon Linux Amazon Machine Image (AMI) release 2018.03.1.    Generate a self-signed certificate using OpenSSL:openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout kube-dash-private.key -out kube-dash-public.crtImportant: Provide a fully qualified domain for Common Name. Application Load Balancer allows only ACM certificates with fully qualified domain names to be attached to the listener 443.The output looks similar to the following:Country Name (2 letter code) [XX]:State or Province Name (full name) []:Locality Name (eg, city) [Default City]:Organization Name (eg, company) [Default Company Ltd]:Organizational Unit Name (eg, section) []:Common Name (eg, your name or your server's hostname) []:kube-dashboard.com ==>This is importantEmail Address []:3.    Install the AWS Command Line Interface (AWS CLI) and set up the credentials.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.4.    Upload the private key and the certificate to the ACM in your AWS Region:aws acm import-certificate --certificate fileb://kube-dash-public.crt --private-key fileb://kube-dash-private.key --region us-east-1Note: Replace us-east-1 with your AWS Region.The output looks similar to the following:{"CertificateArn": "arn:aws:acm:us-east-1:your-account:certificate/your-certificate-id"}5.    Open the ACM console, and then verify that the domain name appears in your imported ACM certificate.Note: If the domain name doesn't appear in the ACM console, it's a best practice to recreate the certificate with a valid fully qualified domain name.Deploy the NGINX Ingress Controller and expose it as a NodePort service1.    Create the namespace ingress-nginx:kubectl create ns ingress-nginx2.    Install Helm version 3.3.    
Use Helm to deploy the NGINX Ingress Controller:helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginxhelm repo updatehelm install nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.service.type=NodePortNote: For nginx-ingress controller to run on Fargate nodes, set allowPrivilegeEscalation: false in "nginx-ingress-nginx-controller" deploymentCreate an Ingress object for the Application Load Balancer Ingress ControllerCreate an Ingress object using a manifest file. Have the Ingress object forward all requests from the Application Load Balancer Ingress Controller to the NGINX Ingress Controller that you deployed earlier.1.    Deploy the Application Load Balancer Ingress Controller.2.    Create an Ingress object for the Application Load Balancer Ingress Controller based on the following alb-ingress.yaml file:---apiVersion: networking.k8s.io/v1kind: Ingressmetadata: name: "alb-ingress" namespace: "ingress-nginx" annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:your-region:your-account-id:certificate/XXXX-XXXX-XXXX-XXXX-XXXXX alb.ingress.kubernetes.io/healthcheck-path: /dashboard/ alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]' alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' labels: app: dashboardspec: ingressClassName: alb rules: - http: paths: - path: / pathType: Prefix backend: service: name: ssl-redirect port: name: use-annotation - path: / pathType: Prefix backend: service: name: "nginx-ingress-nginx-controller" port: number: 80Note: Replace alb.ingress.kubernetes.io/certificate-arn with the Amazon Resource Name (ARN) of your ACM certificate. For Fargate, add "alb.ingress.kubernetes.io/target-type: ip" in annotationsThe preceding manifest file uses the following annotations:The "alb.ingress.kubernetes.io/scheme" annotation creates an internet-facing Application Load Balancer. The "alb.ingress.kubernetes.io/certificate-arn" annotation associates the ARN of your ACM certificate with the 443 listener of the Application Load Balancer. The "alb.ingress.kubernetes.io/listen-ports" annotation creates the listeners for ports 80 and 443. The "alb.ingress.kubernetes.io/actions.ssl-redirect" annotation redirects all the requests coming to ports 80 to 443. The "alb.ingress.kubernetes.io/healthcheck-path" annotation sets the health check path to /dashboard/.3.    Apply the manifest file from the preceding step 2:kubectl apply -f alb-ingress.yamlDeploy the Kubernetes dashboardTo deploy the Kubernetes dashboard, see Tutorial: Deploy the Kubernetes dashboard (web UI).Create an Ingress for the NGINX Ingress Controller1.    Create an Ingress for the NGINX Ingress Controller based on the following ingress-dashboard.yaml file:---apiVersion: networking.k8s.io/v1kind: Ingressmetadata: name: dashboard namespace: kubernetes-dashboard annotations: nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^(/dashboard)$ $1/ redirect;spec: ingressClassName: nginx rules: - http: paths: - path: /dashboard(/|$)(.*) pathType: Prefix backend: service: name: kubernetes-dashboard port: number: 443Note: The "nginx.ingress.kubernetes.io/rewrite-target" annotation rewrites the URL before forwarding the request to the backend pods. 
In /dashboard(/|$)(.*) for path, (.*) stores the dynamic URL that's generated while accessing the Kubernetes dashboard. The "nginx.ingress.kubernetes.io/rewrite-target" annotation replaces the captured data in the URL before forwarding the request to the kubernetes-dashboard service. The "nginx.ingress.kubernetes.io/configuration-snippet" annotation rewrites the URL to add a trailing slash ("/") only if ALB-URL/dashboard is accessed.2.    Apply the manifest file ingress-dashboard.yaml:kubectl apply -f ingress-dashboard.yaml3.    Check the Application Load Balancer URL in the ADDRESS of the alb-ingress that you created earlier:kubectl get ingress alb-ingress -n ingress-nginxYou can now access the Kubernetes dashboard using ALB-URL/dashboard/. If you access ALB-URL/dashboard, then a trailing slash ("/") is automatically added to the URL.Clean up the resources that you created earlier1.    Uninstall the NGINX Ingress Controller Helm release:helm uninstall nginx -n ingress-nginx2.    Delete the Kubernetes dashboard components and the Metrics Server:kubectl delete -f eks-admin-service-account.yamlkubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yamlkubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml3.    Delete the alb-ingress:kubectl delete -f alb-ingress.yamlNote: If you created AWS Identity and Access Management (IAM) resources, then you can delete the IAM role and IAM policy.4.    Delete the AWS Load Balancer Controller:helm uninstall aws-load-balancer-controller -n kube-system5.    Delete the ingress-nginx namespace:kubectl delete ns ingress-nginx6.    To delete the ACM certificate that you created, run the following command:aws acm delete-certificate \ --certificate-arn arn:aws:acm:us-east-1:your-account-id:certificate/XXXX-XXXX-XXXX-XXXX-XXXXX \ --region us-east-1Note: Replace certificate-arn with your certificate ARN. Replace us-east-1 with your AWS Region. Replace your-account-id with your account ID.Follow"
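Before the cleanup steps, you can verify the deployment with a short kubectl sketch like the following. It assumes the eks-admin service account from the dashboard tutorial; on clusters that no longer auto-create service account token secrets (Kubernetes 1.24 and later), request a token directly instead, as noted in the comments.
# Confirm that the NGINX Ingress Controller is exposed as a NodePort service
kubectl get svc -n ingress-nginx
# Retrieve a sign-in token for the eks-admin service account (older clusters with auto-created token secrets)
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
# On Kubernetes 1.24 and later, request a short-lived token directly
kubectl -n kube-system create token eks-admin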
https://repost.aws/knowledge-center/eks-kubernetes-dashboard-custom-path
Why did my publicly trusted ACM certificate fail managed renewal?
My AWS Certificate Manager (ACM) certificate failed to renew. Why didn't my ACM certificate renew?
"My AWS Certificate Manager (ACM) certificate failed to renew. Why didn't my ACM certificate renew?Short descriptionACM provides managed renewal for your AWS issued SSL/TLS certificates. This means that ACM either renews your certificates automatically if you are using DNS validation, or sends you an email notification when expiration is approaching. ACM tries to validate each domain name included in the certificate. After all domain names associated with the certificate are validated, the ACM certificate is renewed. For more information, see Troubleshooting managed certificate renewal.Managed renewal can fail for email and DNS validated certificates if:The certificate was imported into ACM. Imported certificates aren't renewed automatically.The ACM certificate that's being renewed is not in use—the ACM certificate isn't associated with any of the services integrated with ACM.Renewal for domains validated by email require manual action. ACM begins sending email renewal notices 45 days before expiration using the domain's WHOIS mailbox addresses and to five common administrator addresses. The notifications contain a link that the domain owner can select for renewal. After all listed domains are validated, ACM issues a renewed certificate with the same ARN.Managed renewal for domains validated by DNS can fail if ACM was unable to find the appropriate CNAME record in the DNS database.ResolutionEmail and DNS validated certificatesBe sure that the ACM certificate is in use with one of the services integrated with ACM.Email validated certificatesFor email-validated certificates, ACM must be able to send to the WHOIS mailbox addresses and the five common administrator addresses for each domain listed in your certificate. After all listed domains are validated, ACM issues a renewed certificate with the same ARN. For more information, see Email validation.DNS validated certificatesUpdate your DNS configuration to include the CNAME records provided by ACM. ACM looks for the CNAME record in the DNS configuration for the domain names included in the DNS-validated certificates.After the certificate is renewed, the Amazon Resource Name (ARN) of the renewed ACM certificate remains the same. Renewed ACM certificates are automatically updated to the integrated, in-use AWS resources.For more information, see Troubleshooting managed certificate renewal.Related informationACM best practicesHow does the ACM managed renewal process work with email-validated certificates?Check a certificate's renewal statusFollow"
https://repost.aws/knowledge-center/certificate-fails-to-auto-renew
How do I resolve SQL exception errors with custom SQL data sources in QuickSight?
"I'm trying to use custom SQL data sources in Amazon QuickSight, but I get the error message "Your database generated a SQL exception." How do I resolve this?"
"I'm trying to use custom SQL data sources in Amazon QuickSight, but I get the error message "Your database generated a SQL exception." How do I resolve this?Short descriptionYou receive the following error message when Amazon QuickSight is querying or refreshing your SQL data source:"Your database generated a SQL exception. This can be caused by query timeouts, resource constraints, unexpected DDL alterations before or during a query, and other database errors. Check your database settings and your query, and try again."For more detailed information on what caused the error, choose Show Details under the error message.Common reasons for receiving the error message include:The query times out.There's an issue with the VPC connection to your data source.Your QuickSight account doesn't have permission to access the data.Your QuickSight service role doesn't have permission to access the AWS managed Key Management Service (AWS KMS) key.You're using unsupported data types or functions.ResolutionIf you receive errors when running AWS Command Line Interface (AWS CLI) commands,make sure that you’re using the most recent AWS CLI version.The query times outIf the custom SQL query times out, simplify the query to optimize runtime. For other query timeout solutions, see How do I resolve query timeout issues in QuickSight?There's an issue with the VPC connection to your data sourceThe details of your error message include the following:Communications link failure The last packet successfully received from the server was nnnn milliseconds ago. The last packet sent successfully to the server was nnnn milliseconds ago.-or-Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.If you're experiencing VPC connection issues to your data sources, check the network security group in the VPC that's associated with the resource. For more information see, Connecting to a VPC with Amazon QuickSight.Your QuickSight account doesn't have permission to access the dataIf you experience an SQL exception error when trying to access data in an AWS service, check your QuickSight security and permissions settings.Open the Amazon QuickSight console.Choose Manage QuickSight.Choose Security & Permissions.Configure access to the supported services that you use.If you use AWS Organizations, you can receive the error when you don't have the necessary service control policies (SCPs) assigned to you. Ask the AWS Organizations administrator to check your SCP settings to verify the permissions that are assigned to you. 
If you're an AWS Organizations administrator, see Creating, updating, and deleting service control policies.Your QuickSight service role doesn't have permission to access the AWS managed KMS keyYou receive the following error:If you are encrypting query results with KMS key, please ensure you are allowed to access your KMS key.Make sure that the QuickSight service role has the correct AWS KMS key permissions.Use the AWS Identity and Access Management (IAM) console to locate the QuickSight service role ARN.Use the Amazon Simple Storage Service (Amazon S3) console to find the AWS KMS key ARN.Go to the bucket that contains your data file.Choose the Overview tab, and locate KMS key ID.Add the QuickSight service role ARN to the KMS key policy.Run the AWS CLI create-grant command:aws kms create-grant --key-id aws_kms_key_arn --grantee-principal quicksight_role_arn --operations DecryptNote: Replace aws_kms_key_arn with the ARN of your AWS KMS key and quicksight_role_arn with the ARN of your QuickSight service role.You're using unsupported data types or functionsIf you try to import an unsupported data type or use an unsupported SQL function, you receive an SQL exception error. To resolve this issue, check the SQL data source to determine if the data type or SQL function is supported.To see what's supported, check the following resources:Supported data types from other data sourcesSupported data types and valuesFunctions by categoryRelated informationQuotas for direct SQL queriesHow can I create a private connection from Amazon QuickSight to an Amazon Redshift cluster or an Amazon Relational Database Service (Amazon RDS) DB instance that's in a private subnet?Actions, resources, and condition keys for Amazon QuickSightFollow"
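If you aren't sure of the QuickSight service role ARN, the following AWS CLI sketch is one way to look it up before running create-grant. The JMESPath filter assumes the role name contains "quicksight"; adjust it if your role is named differently.
# List IAM roles whose names contain "quicksight" to find the service role ARN
aws iam list-roles --query "Roles[?contains(RoleName, 'quicksight')].{RoleName:RoleName,Arn:Arn}" --output table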
https://repost.aws/knowledge-center/quicksight-custom-sql-data-source-error
Why is it taking a long time to replicate Amazon S3 objects using Cross-Region Replication between my buckets?
I'm using Cross-Region Replication between two Amazon Simple Storage Service (Amazon S3) buckets in different AWS Regions. Why is object replication taking longer than I expect?
"I'm using Cross-Region Replication between two Amazon Simple Storage Service (Amazon S3) buckets in different AWS Regions. Why is object replication taking longer than I expect?ResolutionCross-Region Replication is an asynchronous process, and the objects are eventually replicated. Most objects replicate within 15 minutes, but sometimes replication can take a couple hours or more. There are several factors that can affect the replication time, including:The size of the objects to replicate.The number of objects to replicate. For example, if Amazon S3 is replicating more than 3,500 objects per second, then there might be some latency while the destination bucket scales up for the request rate.You can check the replication status on the source object. If the object replication status is PENDING, then Amazon S3 hasn't completed the replication.Note: To monitor replication metrics such as the maximum replication time to the destination bucket, you can enable S3 Replication Time Control (S3 RTC).Follow"
https://repost.aws/knowledge-center/s3-crr-replication-time