Columns: Question, Description, Answer, Link
A user with permission to add objects to my Amazon S3 bucket is getting Access Denied errors. Why?
"An AWS Identity and Access Management (IAM) user has permission to the s3:PutObject action on my Amazon Simple Storage Service (Amazon S3) bucket. However, when they try to upload an object, they get an HTTP 403: Access Denied error. How can I fix this?"
"An AWS Identity and Access Management (IAM) user has permission to the s3:PutObject action on my Amazon Simple Storage Service (Amazon S3) bucket. However, when they try to upload an object, they get an HTTP 403: Access Denied error. How can I fix this?Short descriptionIf the IAM user has the correct permissions to upload to the bucket, then check the following policies for settings that are preventing the uploads:IAM user permission to s3:PutObjectAclConditions in the bucket policyAccess allowed by an Amazon Virtual Private Cloud (Amazon VPC) endpoint policyAWS KMS encryptionResolutionIAM user permission to s3:PutObjectAclIf the IAM user must update the object's access control list (ACL) during the upload, then the user also must have permissions for s3:PutObjectAcl in their IAM policy. For instructions on how to update a user's IAM policy, see Changing permissions for an IAM user.Conditions in the bucket policyReview your bucket policy for the following example conditions that restrict uploads to your bucket. If the bucket policy has a condition and the condition is valid, then the IAM user must meet the condition for the upload to work.Important: When you review conditions, be sure to verify that the condition is associated with an Allow statement ("Effect": "Allow") or a Deny statement ("Effect": "Deny"). For the upload to work, the user must comply with the condition of an Allow statement, or avoid the condition of a Deny statement.Check for a condition that allows uploads only from a specific IP address, similar to the following:"Condition": { "IpAddress": { "aws:SourceIp": "54.240.143.0/24" }}If your bucket policy has this condition, the IAM user must access your bucket from the allowed IP addresses.Check for a condition that allows uploads only when the object is a specific storage class, similar to the following:"Condition": { "StringEquals": { "s3:x-amz-storage-class": [ "STANDARD_IA" ] }If your policy has this condition, then the user must upload objects with the allowed storage class. For example, the previous condition statement requires the STANDARD_IA storage class. This means that the user must upload the object with an AWS Command Line Interface (AWS CLI) command similar to the following:aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplefile.jpg --body c:\examplefile.jpg --storage-class STANDARD_IANote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Check for a condition that allows uploads only when the object is assigned a specific access control list (ACL), similar to the following:"Condition": { "StringEquals": { "s3:x-amz-acl":["public-read"] } }If your policy has this condition, then users must upload objects with the allowed ACL. 
For example, the previous condition requires the public-read ACL, so the user must upload the object with a command similar to the following:aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplefile.jpg --body c:\examplefile.jpg --acl public-readCheck for a condition that requires that uploads grant full control of the object to the bucket owner (canonical user ID), similar to the following:"Condition": { "StringEquals": { "s3:x-amz-grant-full-control": "id=AccountA-CanonicalUserID" }}If your policy has this condition, then the user must upload objects with a command similar to the following:aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplefile.jpg --body c:\examplefile.jpg --grant-full-control id=CanonicalUserIDCheck for a condition that allows uploads only when objects are encrypted by an AWS Key Management System (AWS KMS) key, similar to the following:"Condition": { "StringEquals": { "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/abcdabcd-abcd-abcd-abcd-abcdabcdabcd" }}If your policy has this condition, then the user must upload objects with a command similar to the following:aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplefile.jpg --body c:\examplefile.jpg --server-side-encryption aws:kms --ssekms-key-id arn:aws:kms:us-east-1:111122223333:key/abcdabcd-abcd-abcd-abcd-abcdabcdabcdCheck for a condition that allows uploads only when objects use a certain type of server-side encryption, similar to the following:"Condition": { "StringEquals": { "s3:x-amz-server-side-encryption": "AES256" }}If your policy has this condition, the user must upload objects with a command similar to the following:aws s3api put-object --bucket DOC-EXAMPLE-BUCKET --key examplefile.jpg --body c:\examplefile.jpg --server-side-encryption "AES256"Access allowed by a VPC endpoint policyIf the IAM user is uploading objects to Amazon S3 using an Amazon Elastic Compute Cloud (Amazon EC2) instance, and that instance is routed to Amazon S3 using a VPC endpoint, you must check the VPC endpoint policy. Be sure that the endpoint policy allows uploads to your bucket.For example, the following VPC endpoint policy allows access only to DOC-EXAMPLE-BUCKET. If your bucket isn't listed as an allowed resource, then users can't upload to your bucket using the instance in the VPC.{ "Statement": [{ "Sid": "Access-to-specific-bucket-only", "Principal": "*", "Action": [ "s3:PutObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" }]}Additionally, if users upload objects with an ACL, then the VPC endpoint policy must also grant access to the s3:PutObjectAcl action, similar to the following:{ "Statement": [{ "Sid": "Access-to-specific-bucket-only", "Principal": "*", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Effect": "Allow", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" }]}AWS KMS encryptionBased on the error message that you receive, update the AWS KMS permissions of your IAM user or role. To resolve these Access Denied errors, see Why am I getting an Access Denied error message when I upload files to my Amazon S3 bucket that has AWS KMS default encryption?Important: If the AWS KMS key and IAM role belong to different AWS accounts, then the IAM policy and KMS key policy must be updated. Make sure to add the KMS permissions to both the IAM policy and KMS key policy. Also, an AWS KMS key with an "aws/s3" alias can't be used for default bucket encryption if cross-account IAM principals are uploading the objects. 
Object uploads and copies to a bucket that is configured to use an S3 Bucket Key for SSE-KMS also require the kms:Decrypt permission. For more information about AWS KMS keys and policy management, see AWS managed KMS keys and customer managed keys.Related informationAmazon S3 condition key examplesSetting default server-side encryption behavior for Amazon S3 buckets"
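If you upload with an AWS SDK instead of the AWS CLI, the same condition-compliant upload can be expressed with boto3. The following is a minimal sketch, not a drop-in solution: the bucket name, object key, and KMS key ARN are the placeholder values from the examples above, and you should pass only the parameters that your own bucket policy conditions actually require.

import boto3

s3 = boto3.client("s3")

# Placeholders -- replace with your own bucket, key, and KMS key ARN.
BUCKET = "DOC-EXAMPLE-BUCKET"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/abcdabcd-abcd-abcd-abcd-abcdabcdabcd"

with open("examplefile.jpg", "rb") as body:
    s3.put_object(
        Bucket=BUCKET,
        Key="examplefile.jpg",
        Body=body,
        StorageClass="STANDARD_IA",      # satisfies an s3:x-amz-storage-class condition
        ACL="public-read",               # satisfies an s3:x-amz-acl condition
        ServerSideEncryption="aws:kms",  # satisfies an s3:x-amz-server-side-encryption condition
        SSEKMSKeyId=KMS_KEY_ARN,         # satisfies an s3:x-amz-server-side-encryption-aws-kms-key-id condition
    )
    # If the bucket policy requires s3:x-amz-grant-full-control, pass
    # GrantFullControl="id=AccountA-CanonicalUserID" instead of ACL;
    # canned ACLs and explicit grant headers can't be combined in one request.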
https://repost.aws/knowledge-center/s3-403-upload-bucket
How do I deploy an AWS CloudFormation stack in a different account using CodePipeline?
I want to use AWS CodePipeline to deploy an AWS CloudFormation stack in a different AWS account.
"I want to use AWS CodePipeline to deploy an AWS CloudFormation stack in a different AWS account.Short descriptionTo deploy a CloudFormation stack in a different AWS account using CodePipeline, do the following:Note: Two accounts are used to create the pipeline and deploy CloudFormation stacks in. Account 1 is used to create the pipeline and account 2 is used to deploy CloudFormation stacks in.1.    (Account 1) Create a customer-managed AWS Key Management Service (AWS KMS) key that grants key usage permissions to the following:Account 1 CodePipeline service roleAccount 22.    (Account 1) Create an Amazon Simple Storage Service (Amazon S3) bucket with a bucket policy that grants account 2 access to the bucket.3.    (Account 2) Create a cross-account AWS Identity and Access Management (IAM) role that allows the following:CloudFormation API actionsAccess to the Amazon S3 bucket in account 1Decryption with the customer-managed AWS KMS key in account 14.    (Account 1) Add the AssumeRole permission for the account 1 CodePipeline service role to allow it to assume the cross-account role in account 2.5.    (Account 2) Create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack.6.    (Account 1) Update the CodePipeline configuration in account 1 to include the resources associated with account 2.Resolution(Account 1) Create a customer-managed AWS KMS key that grants usage permissions to account 1's CodePipeline service role and account 21.    In account 1, open the AWS KMS console.2.    In the navigation pane, choose Customer managed keys.3.    Choose Create key. Then, choose Symmetric.Note: In the Advanced options section, leave the origin as KMS.4.    For Alias, enter a name for your key.5.    (Optional) Add tags based on your use case. Then, choose Next.6.    On the Define key administrative permissions page, for Key administrators, choose your AWS Identity and Access Management (IAM) user. Also, add any other users or groups that you want to serve as administrators for the key. Then, choose Next.7.    On the Define key usage permissions page, for This account, add the IAM identities that you want to have access to the key. For example: The CodePipeline service role.8.    In the Other AWS accounts section, choose Add another AWS account. Then, enter the Amazon Resource Name (ARN) of the IAM role in account 2.9.    Choose Next. Then, choose Finish.10.    In the Customer managed keys section, choose the key that you just created. Then, copy the key's ARN.Important: You must have the AWS KMS key's ARN when you update your pipeline and configure your IAM policies.(Account 1) Create an Amazon S3 bucket with a bucket policy that grants account 2 access to the bucket1.    In account 1, open the Amazon S3 console.2.    Choose an existing Amazon S3 bucket or create a new S3 bucket to use as the ArtifactStore for CodePipeline.Note: Artifacts can include a stack template file, a template configuration file, or both. CodePipeline uses these artifacts to work with CloudFormation stacks and change sets. In your template configuration file, you must specify template parameter values, a stack policy, and tags.3.    On the Amazon S3 details page for your bucket, choose Permissions.4.    Choose Bucket Policy.5.    In the bucket policy editor, enter the following policy:Important: Replace codepipeline-source-artifact with the SourceArtifact bucket name for CodePipeline. 
Replace ACCOUNT_B_NO with the account 2 account number.{ "Id": "Policy1553183091390", "Version": "2012-10-17", "Statement": [{ "Sid": "", "Action": [ "s3:Get*", "s3:Put*" ], "Effect": "Allow", "Resource": "arn:aws:s3:::codepipeline-source-artifact/*", "Principal": { "AWS": [ "arn:aws:iam::ACCOUNT_B_NO:root" ] } }, { "Sid": "", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": "arn:aws:s3:::codepipeline-source-artifact", "Principal": { "AWS": [ "arn:aws:iam::ACCOUNT_B_NO:root" ] } } ]}6.    Choose Save.(Account 2) Create a cross-account IAM roleCreate an IAM policy that allows the following:The pipeline in account 1 to assume the cross-account IAM role in account 2CloudFormation API actionsAmazon S3 API actions related to the SourceArtifact1.    In account 2, open the IAM console.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. Then, enter the following policy into the JSON editor:Important: Replace codepipeline-source-artifact with your pipeline's Artifact store's bucket name.{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "cloudformation:*", "iam:PassRole" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:Get*", "s3:Put*", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-source-artifact/*" ] } ]}4.    Choose Review policy.5.    For Name, enter a name for the policy.6.    Choose Create policy.Create a second IAM policy that allows AWS KMS API actions1.    In account 2, open the IAM console.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. Then, enter the following policy into the JSON editor:Important: Replace arn:aws:kms:REGION:ACCOUNT_A_NO:key/key-id with your AWS KMS key's ARN that you copied earlier.{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:REGION:ACCOUNT_A_NO:key/key-id" ] }]}4.    Choose Review policy.5.    For Name, enter a name for the policy.6.    Choose Create policy.Create the cross-account IAM role using the policies that you created1.    In account 2, open the IAM console.2.    In the navigation pane, choose Roles.3.    Choose Create role.4.    Choose Another AWS account.5.    For Account ID, enter the account 1 account ID.6.    Choose Next: Permissions. Then, complete the steps to create the IAM role.7.    Attach the cross-account role policy and KMS key policy to the role that you created. For instructions, see Adding and removing IAM identity permissions.(Account 1) Add the AssumeRole permission to the account 1 CodePipeline service role to allow it to assume the cross-account role in account 21.    In account 1, open the IAM console.2.    In the navigation pane, choose Roles.3.    Choose the IAM service role that you're using for CodePipeline.4.    Choose Add inline policy.5.    Choose the JSON tab. Then, enter the following policy into the JSON editor:Important: Replace ACCOUNT_B_NO with the account 2 account number.{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": [ "arn:aws:iam::ACCOUNT_B_NO:role/*" ] }}6.    Choose Review policy, and then create the policy.(Account 2) Create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stackNote: This service role is configured directly on the CloudFormation stack in account 2. 
The role must include the permissions for the services deployed by the stack.1.    In account 2, open the IAM console.2.    In the navigation pane, choose Roles.3.    Create a role for AWS CloudFormation to use when launching services on your behalf.4.    Apply permissions to your role based on your use case.Important: Make sure that your trust policy is for AWS CloudFormation and that your role has permissions to access services that are deployed by the stack.(Account 1) Update the CodePipeline configuration to include the resources associated with account 2Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, confirm that you're running a recent version of the AWS CLI.You can't use the CodePipeline console to create or edit a pipeline that uses resources associated with another account. However, you can use the console to create the general structure of the pipeline. Then, you can use the AWS CLI to edit the pipeline and add the resources associated with the other account. Or, you can update a current pipeline with the resources for the new pipeline. For more information, see Create a pipeline in CodePipeline.1.    Get the pipeline JSON structure by running the following AWS CLI command:aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json2.    In your local pipeline.json file, confirm that the encryptionKey ID under artifactStore contains the ID with the AWS KMS key's ARN.Note: For more information on pipeline structure, see create-pipeline in the AWS CLI Command Reference.3.    In the pipeline.json file, update the AWS CloudFormation action configuration.Note: The RoleArn inside the action configuration JSON structure for your pipeline is the role for the CloudFormation stack (CFN_STACK_ROLE). The roleArn outside the action configuration JSON structure is the cross-account role that the pipeline assumes to operate a CloudFormation stack (CROSS_ACCOUNT_ROLE).4.    Verify that the role is updated for both of the following:The RoleArn inside the action configuration JSON structure for your pipeline.The roleArn outside the action configuration JSON structure for your pipeline.Note: In the following code example, RoleArn is the role passed to AWS CloudFormation to launch the stack. CodePipeline uses roleArn to operate an AWS CloudFormation stack.{ "name": "Prod_Deploy", "actions": [{ "inputArtifacts": [{ "name": "MyApp" }], "name": "test-cfn-x", "actionTypeId": { "category": "Deploy", "owner": "AWS", "version": "1", "provider": "CloudFormation" }, "outputArtifacts": [], "configuration": { "ActionMode": "CHANGE_SET_REPLACE", "ChangeSetName": "test", "RoleArn": "ARN_FOR_CFN_STACK_ROLE", "Capabilities": "CAPABILITY_IAM", "StackName": "test-cfn-sam", "TemplatePath": "MyApp::template.yaml" }, "roleArn": "ARN_FOR_CROSS_ACCOUNT_ROLE", "runOrder": 1 }]}5.    Remove the metadata configuration from the pipeline.json file. For example:"metadata": { "pipelineArn": "arn:aws:codepipeline:REGION:ACC:my_test", "updated": 1551216777.183, "created": 1551207202.964}Important: To align with proper JSON formatting, remove the comma before the metadata section.6.    (Optional) To create a pipeline and update the JSON structure, run the following command to update the pipeline with the new configuration file:aws codepipeline update-pipeline --cli-input-json file://pipeline.json7.    
(Optional) To create a new pipeline instead of updating the current one, run the following command with the updated JSON structure:aws codepipeline create-pipeline --cli-input-json file://pipeline.jsonImportant: In your pipeline.json file, make sure that you change the name of your new pipeline.Related informationCreate a pipeline in CodePipeline that uses resources from another AWS accountCodePipeline pipeline structure reference"
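If you script the last steps with an SDK instead of the AWS CLI, the same round trip can be done with boto3: read the pipeline structure, set the two role ARNs on the CloudFormation deploy action, and write the structure back. This is a sketch that uses placeholder pipeline and role names; adjust the provider check if your deploy action is configured differently.

import boto3

cp = boto3.client("codepipeline")

PIPELINE_NAME = "MyFirstPipeline"                                          # placeholder
CROSS_ACCOUNT_ROLE = "arn:aws:iam::ACCOUNT_B_NO:role/cross-account-role"   # role CodePipeline assumes
CFN_STACK_ROLE = "arn:aws:iam::ACCOUNT_B_NO:role/cfn-stack-role"           # role passed to CloudFormation

# get_pipeline returns the structure plus a separate metadata block;
# update_pipeline accepts only the structure, so take the "pipeline" key.
pipeline = cp.get_pipeline(name=PIPELINE_NAME)["pipeline"]

for stage in pipeline["stages"]:
    for action in stage["actions"]:
        if action["actionTypeId"]["provider"] == "CloudFormation":
            # roleArn (outside the configuration) is the cross-account role.
            action["roleArn"] = CROSS_ACCOUNT_ROLE
            # RoleArn (inside the configuration) is the CloudFormation stack role.
            action["configuration"]["RoleArn"] = CFN_STACK_ROLE

cp.update_pipeline(pipeline=pipeline)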
https://repost.aws/knowledge-center/codepipeline-deploy-cloudformation
How can I troubleshoot signature mismatch errors when making SigV4 signed requests with IAM authentication to API Gateway?
The Signature Version 4 (SigV4) signed request to Amazon API Gateway failed with a 403 response and an error. The error is similar to the following: "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method."
"The Signature Version 4 (SigV4) signed request to Amazon API Gateway failed with a 403 response and an error. The error is similar to the following: "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method."Short descriptionAPI Gateway API endpoints using AWS Identity and Access Management (IAM) authentication might return 403 errors if:The API request isn't signed and the API request uses IAM authentication.The IAM credentials used to sign the request are incorrect or don't have permissions to invoke the API.The signature of the signed API request doesn't match the signature for the API Gateway API endpoint.The API request header is incorrect.ResolutionIAM authenticationMake sure that the API request using IAM authentication is signed with SigV4. If the API request isn't signed, then you might receive the following error: "Missing Authentication Token."IAM credentialsVerify that the authentication credentials for the access key and secret key are correct. If the access key is incorrect, then you might receive the following error: "The security token included in the request is invalid."Make sure that the IAM entity used to sign the request has execute-api:Invoke permissions. If the IAM entity doesn't have execute-api:Invoke permissions, then you might receive the following error: "User: arn:aws:iam::xxxxxxxxxxxx:user/username is not authorized to perform: execute-api:Invoke on resource"Signature mismatchIf the secret access key is incorrect, then you might receive the following error: "The request signature we calculated does not match the signature you provided."The secret access key must match the access key ID in the Credential parameter. For instructions, follow the Send a request to test the authentication settings section in How do I activate IAM authentication for API Gateway REST APIs?Make sure that you followed the instructions for the SigV4 signing process. If any values in the signature calculation are incorrect, then you might receive the following error: "The request signature we calculated does not match the signature you provided."When API Gateway receives a signed request, it recalculates the signature. If there are differences in the values, then API Gateway gets a different signature. Compare the canonical request and string to your signed request with the value in the error message. Modify the signing process if there are any differences.Example canonical request:GET -------- HTTP method/ -------- Path. For API stage endpoint, it should be /{stage-name}/{resource-path} -------- Query string key-value pair. Leave it blank if the request doesn't have any query stringcontent-type:application/json -------- header key-value pair. One header per linehost:0123456789.execute-api.us-east-1.amazonaws.com -------- host and x-amz-date are required headers for all signed request x-amz-date:20220806T024003Z content-type;host;x-amz-date -------- A list of signed headersd167e99c53f15b0c105101d468ae35a3dc9187839ca081095e340f3649a04501 -------- hash of the payloadExample canonical error response:<ErrorResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/"> <Error> <Type>Sender</Type> <Code>SignatureDoesNotMatch</Code> <Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. 
Consult the service documentation for details.The canonical string for this request should have been 'GET / Action=ListGroupsForUser&MaxItems=100&UserName=Test&Version=2010-05-08&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIOSFODNN7EXAMPLE%2F20120223%2Fus-east-1%2Fiam%2Faws4_request&X-Amz-Date=20120223T063000Z&X-Amz-SignedHeaders=hosthost:iam.amazonaws.comhost<hashed-value>'The String-to-Sign should have been'AWS4-HMAC-SHA25620120223T063000Z20120223/us-east-1/iam/aws4_request<hashed-value>'</Message> </Error> <RequestId>4ced6e96-5de8-11e1-aa78-a56908bdf8eb</RequestId></ErrorResponse>Note: For API gateway headers, only the host and x-amz-date headers are required.API request headerMake sure that the SigV4 authorization header includes the correct credential key similar to the following:Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, SignedHeaders=host;range;x-amz-date,Signature=example-generated-signatureIf the credential key is missing or incorrect, you might receive the following error: "Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter."Make sure that the SigV4 authorization request also includes the request date using either HTTP Date or the x-amz-date header.Related informationCode examples in the AWS SDKsHow do I troubleshoot HTTP 403 errors from API Gateway?Follow"
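One way to isolate a signing bug is to sign the same request with botocore's built-in SigV4 signer and compare its Authorization header and canonical values against your own implementation. The sketch below assumes a REST API endpoint with IAM authentication; the API ID, stage, and Region are placeholders.

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholder endpoint -- replace with your API ID, Region, and stage.
url = "https://0123456789.execute-api.us-east-1.amazonaws.com/prod/"
region = "us-east-1"

credentials = boto3.Session().get_credentials()

# Build the request and sign it; SigV4Auth adds the Authorization and
# X-Amz-Date headers based on the canonical request it computes.
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", region).add_auth(request)

# Send the signed headers unchanged; modifying the method, path, query string,
# body, or signed headers after signing produces a signature mismatch.
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)

If the request has a query string or body, pass them to AWSRequest (params and data) so they are included in the canonical request, and send exactly the same values.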
https://repost.aws/knowledge-center/api-gateway-iam-sigv4-mismatch
How do I use ISM to manage low storage space in Amazon OpenSearch Service?
My Amazon OpenSearch Service cluster is running low on storage space.
"My Amazon OpenSearch Service cluster is running low on storage space.Short descriptionIndex State Management (ISM) allows you to automate routine tasks and then apply them to indices and index patterns in OpenSearch Service. With ISM, you can define custom management policies that help you maintain issues such as low disk space. For example, you can use a rollover operation and an ISM policy to automate deletion of old indices based on conditions such as index size. The rollover operation rolls over a target to a new index when an existing index meets the defined condition.To create an ISM policy for an index pattern using an operation like rollover, perform the following steps:1.    Set up your rollover index.2.    Create an ISM policy.3.    Attach the policy to an index.4.    Add the template.After you attach your policy to an index, your index begins to initialize and then transitions into different states until the rollover operation completes. For more information about the rollover operation, see rollover on the Open Distro for OpenSearch website.ResolutionSet up your rollover indexCreate an index and alias where the index format matches the index pattern:^.*-\d+$Important: Make sure to correctly configure your rollover alias. Otherwise, you receive an error message.In the following example, "test-index-000001" is created and populated with several documents. Because this example uses a rollover index, the index format must match the pattern.PUT test-index-000001/_doc/1{ "user": "testuser", "post_date": "2020-05-08T14:12:12", "message": "ISM testing"}A rollover index requires an alias that points towards the latest index. This means that you must create an alias ("test-index") using the following query:POST /_aliases{ "actions": [ { "add": { "index": "test-index-000001", "alias": "test-index" } } ]}Note: If a rollover operation is included in the ISM policy, you must include rollover alias. For more information, see Why does the rollover index action in my ISM policy keep failing in Amazon OpenSearch Service?Create an ISM policyIn OpenSearch Dashboards, choose the Index Management tab, and then create an ISM policy for your rollover operation.For example:Rollover to warm state{ "policy": { "policy_id": "Roll_over_policy", "description": "A test policy. DO NOT USE FOR PRODUCTION!", "schema_version": 1, "error_notification": null, "default_state": "hot", "states": [ { "name": "hot", "actions": [ { "rollover": { "min_size": "10mb" } } ], "transitions": [ { "state_name": "warm" } ] }, { "name": "warm", "actions": [ { "replica_count": { "number_of_replicas": 2 } } ], "transitions": [] } ] }}In this ISM policy, there are two defined states: "hot" and "warm." By default, your index is in the "hot" state. The index transitions into the "warm" state as soon as the size of the index reaches 10 MB and a new rollover index is created. In the "warm" state, you can perform various actions on the index such as changing the replica count to two or performing a force_merge operation.Rollover to delete after few days{ "policy": { "policy_id": "Roll_over_policy", "description": "A test policy. 
DO NOT USE FOR PRODUCTION!", "schema_version": 1, "error_notification": null, "default_state": "hot", "states": [ { "name": "hot", "actions": [ { "rollover": { "min_size": "10mb" } } ], "transitions": [ { "state_name": "delete", "conditions": { "min_index_age": "30d" } } ] }, { "name": "delete", "actions": [ { "delete": {} } ], "transitions": [] } ] }}In this ISM policy, there are two defined states: "hot" and "delete" By default, index is in the "hot" state. After the index reaches 10 MB, a new rollover index is created. Then, after 30 days the index transitions to the "delete" state and the index is deleted.Attaching the policy to an indexTo attach your ISM policy to an index, perform the following steps:1.    Open OpenSearch Dashboards from the OpenSearch Service console. You can find a link to OpenSearch Dashboards in the domain summary of your OpenSearch Service console.2.    Choose the Index Management tab.3.    Select the index that you want to attach your ISM policy to (for example: "test-index-000001").4.    Choose Apply policy.5.    (Optional) If your policy specifies any actions that require an alias, provide the alias, and then choose Apply. Your index appears under the list of Policy Managed Indices.Updating the policy of an existing indexNote : Any update made in the existing policy doesn't apply to existing indices automatically, it requires a reapplication of the same policy to the indices.To reapply your ISM policy to any existing index, perform the following steps:1.    Open OpenSearch Dashboards from the OpenSearch Service console.2.    Choose the Index Management tab.3.    From the Policy Managed Indices section, choose Change Policy.4.    Choose the indices that you want to apply change to (for example: "test-index-000001").5.    Choose the current state of the indices.6.    From the Choose New Policy section, choose update policy name.7.    (Optional) If you want to switch the indices to another state after the policy's updated, choose Switch indices to the following state after the policy takes effect. Then, choose the state from dropdown list.Adding the templateAttach the policy to a specific index such as "test-index-000002," which was created as an outcome of the ISM policy. With this attachment, the indices also rollover after the required condition (such as index size) is met.You can create and use an ISM template like this one:PUT _plugins/_ism/policies/test_policy{ "policy": { "description": "A test policy. 
DO NOT USE FOR PRODUCTION!", "last_updated_time": 1642027350875, "schema_version": 1, "error_notification": null, "default_state": "hot", "states": [ { "name": "hot", "actions": [ { "rollover": { "min_size": "10mb" } } ], "transitions": [ { "state_name": "warm" } ] }, { "name": "warm", "actions": [ { "replica_count": { "number_of_replicas": 2 } } ], "transitions": [] } ], "ism_template": { "index_patterns": [ "test*" ], "priority": 100 } }}In this example, the explain index API verifies that the "test_policy" template that you created is attached to the newly created index:GET _plugins/_ism/explain/test-index-000002{ "test-index-000002": { "index.plugins.index_state_management.policy_id": "test_policy", "index.opendistro.index_state_management.policy_id": "test_policy", "index": "test-index-000002", "index_uuid": "CZrQ-RzRS8SmiWIuyqFmVg", "policy_id": "test_policy", "enabled": true }, "total_managed_indices": 1}Note: This index also populates under the Managed Indices section in the OpenSearch Dashboards Index Management tab.ISM policy statesWhen an ISM policy is attached to an index, the index enters an "Initializing" state. From the "Initializing" state, the index moves into the "Default" state, which is defined in the policy. This "Initializing" operation, and every subsequent operation, can take 30 to 48 minutes. ISM uses this time to perform policy actions, and then checks for any conditions and transitions the index into different states. A random jitter of 0-60% is also added to make sure that there aren't surges of activity coming from all indices at the same time.Note: For a rollover operation, an index is "complete" after the index rolls over, transitions into a "warm" state, and the replica count is updated.If you're using an ISM policy and the index isn't properly migrating, then check the status of the ISM.To check the status of the migration for particular index, use the following syntax:GET _ultrawarm/migration/<put_index_name_here>/_statusTo get a summary migration of all indices, use the following syntax:GET _ultrawarm/migration/_status?Related informationExample policiesFollow"
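The same policy can be created and verified outside OpenSearch Dashboards by calling the ISM REST API directly. The following Python sketch assumes a domain with fine-grained access control and a master user; the endpoint, credentials, policy name, and index name are placeholders.

import json
import requests

# Placeholders -- replace with your domain endpoint and master user credentials.
ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"
AUTH = ("master-user", "master-password")

policy = {
    "policy": {
        "description": "Roll over at 10mb, then raise replicas in the warm state. Test only.",
        "default_state": "hot",
        "states": [
            {"name": "hot",
             "actions": [{"rollover": {"min_size": "10mb"}}],
             "transitions": [{"state_name": "warm"}]},
            {"name": "warm",
             "actions": [{"replica_count": {"number_of_replicas": 2}}],
             "transitions": []},
        ],
        "ism_template": {"index_patterns": ["test*"], "priority": 100},
    }
}

# Create (or overwrite) the policy, then check which policy manages an index.
r = requests.put(f"{ENDPOINT}/_plugins/_ism/policies/test_policy", auth=AUTH, json=policy)
print(r.status_code, r.text)

r = requests.get(f"{ENDPOINT}/_plugins/_ism/explain/test-index-000002", auth=AUTH)
print(json.dumps(r.json(), indent=2))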
https://repost.aws/knowledge-center/opensearch-low-storage-ism
How can I resend the validation email to verify my domain for AWS Certificate Manager (ACM)?
"I requested a certificate from AWS Certificate Manager (ACM) to verify my domain using email validation, but I didn't receive the validation email."
"I requested a certificate from AWS Certificate Manager (ACM) to verify my domain using email validation, but I didn't receive the validation email.Short descriptionWhen requesting a certificate for a domain, you might not receive the validation email if:You don't have DNS MX records configured for the domain.Your registrar doesn't support domain email forwarding.First, try these troubleshooting steps to help you receive validation email.If that doesn't work, you can configure your domain to receive the validation email. To do this, use Amazon WorkMail or Amazon Simple Email Service (Amazon SES) with Amazon Simple Notification Service (Amazon SNS).ResolutionOption 1: Resend the Validation Email Using WorkMailCreate a WorkMail user using one of the five common system administration addresses for your domain. For more information, see MX record.Open the WorkMail console, and then follow the instructions for Creating an organization.Follow the instructions for Adding a domain.Choose the organization that you created in step 1, and then choose Create user.Enter the User name and Display name for "admin", and then choose Next Step.Note: You can also use "hostmaster", "postmaster", and "webmaster" for the user name. You can't use "administrator", because this is the AWS Organizations default system user account.Enter your primary email address and password for the new user.In the dropdown list next to Email address, choose the domain that you created in step 2, and then choose Add user.Follow the instructions to resend the validation email.Follow the instructions for signing into the Amazon WorkMail web client for the user name created in step 4.You receive a validation email in your WorkMail web client inbox. Follow the instructions for using email to validate domain ownership.For more information, see How do I add and verify a domain to use with WorkMail?Option 2: Resend the Validation Email Using Amazon SES and Amazon SNSCreate an Amazon SNS topic.Open the Amazon SNS console, expand the menu from the left navigation pane, choose Topics, and then choose Create Topic.Enter the Topic name and Display name. Here are some suggested names:Topic name: Validation-EmailDisplay name: ValidationChoose Create topic, and then choose Create subscription.Use the default Topic ARN, and for Protocol, choose Email.Enter your email address for the Endpoint, and then choose Create subscription.Note: A confirmation email is sent to the subscribed endpoint.From the confirmation email, choose Confirm subscription. You receive the message "Subscription confirmed!".Verify your domain.Open the Amazon SES console and choose Domains from the left navigation pane.Choose Verify a New Domain, enter your domain name, and then choose Verify This Domain.If your domain is hosted with Amazon Route 53, choose Use Route 53. Copy the Email Receiving Record MX Value, and then choose Close.Note: If your domain isn't hosted by Amazon Route 53, enter the record set manually in your domain registrar's DNS settings.(Optional) If you choose Use Route 53, you can choose the records to import by selecting Domain Verification Record or Email Receiving Record. Select the hosted zones that you want to update, and then choose Create Record Sets.Note: This option replaces all existing MX records for your domain. Don't use this option unless you are setting up your domain to receive email through Amazon SES. 
For more information, see Email receiving with Amazon SES.Open the Amazon Route 53 console, and then choose Hosted zones from the left navigation pane.Select your Domain Name from step 2, and then choose Create Record Set.Select your MX Record Set, enter your domain or subdomain name, and then choose the MX --Mail exchange record type.In Value: paste the Email Receiving Record MX Value from step 3, and then choose Create.Create Amazon SES rules.Open the Amazon SES console, and then choose Rule Sets from the left navigation pane.If you don't have an existing rule, choose Create a Rule Set. In Rule set name, enter a name, and then choose Create a Rule Set.In Rule set name, choose your rule set, and then choose Create Rule.For Recipient, enter your recipient email address, Add Recipient, and then choose Next Step. You can select any of the following validation email addresses:administrator@your_domainhostmaster@your_domainpostmaster@your_domainwebmaster@your_domainadmin@your_domainNote: Receipt rule sets have two states—active or disabled. Only one receipt rule set can be active at any time. For more information, see Creating rule sets and receipt rules.Choose the Add action menu**,** and then select SNS.From the SNS topic menu, choose the SNS topic that you created earlier (for example, Validation-Email). For Encoding, choose UTF-8.Select the Add action menu, choose Stop Rule Set, and then choose Next Step.In Rule Details, for Rule name, enter "Validation-Rule-Set", choose Next Step, and then choose Create Rule.Choose Rule Sets from the left navigation pane, choose your rule set, choose Set as Active Rule Set, and then choose Set Active.Resend the validation email and verify the domain.Open the AWS Certificate Manager console.Select the Domain name, choose the Actions menu, choose Resend validation email, and then choose Resend.You receive an email message for each domain listed with the subject " Amazon SES Email Receipt Notification".Note: If the email isn't properly formatted, search the email for \r\nTo approve this request, go to Amazon Certificate Approvals at\r\n. This is the certificate validation link.Follow the instructions for using email to validate domain ownership.After validating your ACM certificate, you can use the certificate with supported AWS resources in the same Region as the certificate. If you have AWS resources in multiple Regions, request a certificate from each Region.Note: If you intend to use an ACM certificate with Amazon CloudFront, you must request or import the certificate in the US East (N. Virginia) Region. For more information, see AWS Region for AWS Certificate Manager.Related information(Optional) Configure email for your domainHow does the ACM managed renewal process work with email-validated certificates?Why can't I resend the validation email from ACM when renewing a certificate?Follow"
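After mail delivery to the domain is working, the validation email can also be resent with an AWS SDK instead of the console. The following boto3 sketch uses a placeholder certificate ARN and domain and assumes the certificate was requested in the Region shown.

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Placeholders -- replace with your certificate ARN and domain.
CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"
DOMAIN = "example.com"

# Check which domains still have a pending validation.
cert = acm.describe_certificate(CertificateArn=CERT_ARN)["Certificate"]
for option in cert["DomainValidationOptions"]:
    print(option["DomainName"], option.get("ValidationStatus"))

# Resend the validation email. ValidationDomain controls which base domain
# receives the admin@/postmaster@/webmaster@ messages.
acm.resend_validation_email(
    CertificateArn=CERT_ARN,
    Domain=DOMAIN,
    ValidationDomain=DOMAIN,
)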
https://repost.aws/knowledge-center/ses-sns-domain-validation-email
How do I use and manage Amazon Redshift WLM memory allocation?
I'm trying to check the concurrency and Amazon Redshift workload management (WLM) allocation to the queues. How does WLM allocation work and when should I use it?
"I'm trying to check the concurrency and Amazon Redshift workload management (WLM) allocation to the queues. How does WLM allocation work and when should I use it?Short descriptionAmazon Redshift workload management (WLM) allows you to manage and define multiple query queues. It routes queries to the appropriate queues with memory allocation for queries at runtime. Some of the queries might consume more cluster resources, affecting the performance of other queries.You can configure workload management to manage resources effectively in either of these ways:Automatic WLM: Allows Amazon Redshift to manage the concurrency level of the queues and memory allocation for each dispatched query. The dispatched query allows users to define the query priority of the workload or users to each of the query queues.Manual WLM: Allows you to have more control over concurrency level and memory allocation to the queues. As a result, queries with more resource consumption can run in queues with more resource allocation.Note: To define metrics-based performance boundaries, use a query monitoring rule (QMR) along with your workload management configuration.To check the concurrency level and WLM allocation to the queues, perform the following steps:1.FSPCheck the current WLM configuration of your Amazon Redshift cluster.2.FSPCreate a test workload management configuration, specifying the query queue's distribution and concurrency level.3.FSP(Optional) If you are using manual WLM, then determine how the memory is distributed between the slot counts.ResolutionChecking your WLM configuration and memory usageUse the STV_WLM_SERVICE_CLASS_CONFIG table to check the current WLM configuration of your Amazon Redshift cluster:[ { "query_concurrency": 2, "memory_percent_to_use": 30, "query_group": [], "query_group_wild_card": 0, "user_group": [ "user_group1" ], "user_group_wild_card": 0, "rules": [ { "rule_name": "BlockstoDiskSpill", "predicate": [ { "metric_name": "query_temp_blocks_to_disk", "operator": ">", "value": 50 } ], "action": "abort" } ] }, { "query_concurrency": 5, "memory_percent_to_use": 40, "query_group": [], "query_group_wild_card": 0, "user_group": [ "user_group2" ], "user_group_wild_card": 0 }, { "query_concurrency": 5, "memory_percent_to_use": 10, "query_group": [], "query_group_wild_card": 0, "user_group": [], "user_group_wild_card": 0, "rules": [ { "rule_name": "abort_query", "predicate": [ { "metric_name": "scan_row_count", "operator": ">", "value": 1000 } ], "action": "abort" } ] }, { "query_group": [], "query_group_wild_card": 0, "user_group": [], "user_group_wild_card": 0, "auto_wlm": false }, { "short_query_queue": false }]Note: In this example, the WLM configuration is in JSON format and uses a query monitoring rule (Queue1).In the WLM configuration, the “memory_percent_to_use” represents the actual amount of working memory, assigned to the service class.Note that Amazon Redshift allocates memory from the shared resource pool in your cluster. Therefore, Queue1 has a memory allocation of 30%, which is further divided into two equal slots. Each slot gets an equal 15% share of the current memory allocation. Meanwhile, Queue2 has a memory allocation of 40%, which is further divided into five equal slots. Each slot gets an equal 8% of the memory allocation. 
The default queue uses 10% of the memory allocation with a queue concurrency level of 5.Use the following query to check the service class configuration for Amazon Redshift WLM:select rtrim(name) as name,num_query_tasks as slots,query_working_mem as mem,max_execution_time as max_time,user_group_wild_card as user_wildcard,query_group_wild_card as query_wildcardfrom stv_wlm_service_class_configwhere service_class > 4;Here's an example output: name | slots | mem | max_time | user_wildcard | query_wildcard----------------------------------------------------+-------+-----+----------+---------------+---------------- Service class for super user | 1 | 297 | 0 | false | false Queue 1 | 2 | 522 | 0 | false | false Queue 2 | 5 | 278 | 0 | false | false Default queue | 5 | 69 | 0 | false | false Service class for vacuum/analyze | 0 | 0 | 0 | false | falseQueue 1 has a slot count of 2 and the memory allocated for each slot (or node) is 522 MB. The memory allocation represents the actual amount of current working memory in MB per slot for each node, assigned to the service class.Note: If all the query slots are used, then the unallocated memory is managed by Amazon Redshift. The unallocated memory can be temporarily given to a queue if the queue requests additional memory for processing. For example, if you configure four queues, then you can allocate your memory like this: 20 percent, 30 percent, 15 percent, 15 percent. The remaining 20 percent is unallocated and managed by the service. For more information about unallocated memory management, see WLM memory percent to use.Identifying high-level tuning parametersHere is an example query execution plan for a query:Use the SVL_QUERY_METRICS_SUMMARY table to check the detailed execution and “query_queue_time” column to see which queries are getting queued. 
The "query_queue_time" column indicates that the query is waiting in the queue for a WLM slot to execute.Use the SVL_QUERY_SUMMARY table to check the memory consumption for the query even if the query ran in-memory.dev=# select userid, query, service_class, query_cpu_time, query_blocks_read, query_execution_time, query_cpu_usage_percent, query_temp_blocks_to_disk, query_queue_time from SVL_QUERY_METRICS_SUMMARY where query=29608; userid | query | service_class | query_cpu_time | query_blocks_read | query_execution_time | query_cpu_usage_percent | query_temp_blocks_to_disk | query_queue_time--------+-------+---------------+----------------+-------------------+----------------------+-------------------------+---------------------------+------------------ 100 | 29608 | 8 | 18 | 942 | 64 | 10.05 | |(1 row)ev=# select query, step, rows, workmem, label, is_diskbasedfrom svl_query_summarywhere query = 29608order by workmem desc; query | step | rows | workmem | label | is_diskbased-------+------+----------+----------+-----------------------------------------+-------------- 29608 | 3 | 49999 | 54263808 | hash tbl=714 | f 29608 | 2 | 49999 | 0 | project | f 29608 | 0 | 49999 | 0 | scan tbl=255079 name=part | f 29608 | 1 | 49999 | 0 | project | f 29608 | 6 | 1561938 | 0 | return | f 29608 | 4 | 1561938 | 0 | project | f 29608 | 5 | 1561938 | 0 | project | f 29608 | 2 | 29995220 | 0 | project | f 29608 | 1 | 1561938 | 0 | return | f 29608 | 1 | 29995220 | 0 | project | f 29608 | 0 | 1561938 | 0 | scan tbl=4893 name=Internal Worktable | f 29608 | 3 | 1561938 | 0 | hjoin tbl=714 | f 29608 | 0 | 29995220 | 0 | scan tbl=255087 name=lineorder | f(13 rows)Use the SVL_QUERY_SUMMARY table to obtain a detailed view of resource allocation during each step of the query. Check the is_diskbased and workmem columns to view the resource consumption. For more information, see Analyzing the query summary.Updating to WLM dynamic configuration propertiesYou can also use WLM dynamic configuration properties to adjust to changing workloads. You can apply dynamic properties to the database without a cluster reboot. However, WLM static configuration properties require a cluster reboot for changes to take effect.Here's an example of a cluster that is configured with two queues:Queue Concurrency % Memory to Use 1 5 60%2 5 40%If the cluster has 200 GB of available memory, then the current memory allocation for each of the queue slots might look like this:Queue 1: (200 GB * 60% ) / 5 slots = 24 GBQueue 2: (200 GB * 40% ) / 5 slots = 16 GBTo update your WLM configuration properties to be dynamic, modify your settings like this:Queue Concurrency % Memory to Use1 3 75%2 4 25%As a result, the memory allocation has been updated to accommodate the changed workload:Queue 1: (200 GB * 75% ) / 3 slots = 50 GBQueue 2: (200 GB * 25% ) / 4 slots = 12.5 GBNote: If there are any queries running in the WLM queue during a dynamic configuration update, Amazon Redshift waits for the queries to complete. After the query completes, Amazon Redshift updates the cluster with the updated settings.Use the STV_WLM_SERVICE_CLASS_CONFIG table while the transition to dynamic WLM configuration properties is in process. When the num_query_tasks (concurrency) and query_working_mem (dynamic memory percentage) columns become equal in target values, the transition is complete.Identifying insufficient memory allocated to the queryIf a query execution plan in SVL_QUERY_SUMMARY has an is_diskbased value of "true", then consider allocating more memory to the query. 
You can allocate more memory by increasing the number of query slots used. For more information, see Step 1: Override the concurrency level using wlm_query_slot_count.Note: It's a best practice to first identify the step that is causing a disk spill. Then, decide if allocating more memory to the queue can resolve the issue. Or, you can optimize your query."
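The per-slot figures above are just (available memory * queue memory percent) / slot count. A small helper like the following, using the article's example value of 200 GB rather than a measured number, makes it easy to recheck an allocation before and after a dynamic WLM change.

def slot_memory_gb(total_memory_gb, memory_percent, slots):
    """Approximate working memory per slot for a manual WLM queue."""
    return (total_memory_gb * memory_percent / 100.0) / slots

# Example queues from the article: (memory percent, concurrency slots).
before = {"Queue 1": (60, 5), "Queue 2": (40, 5)}
after = {"Queue 1": (75, 3), "Queue 2": (25, 4)}

for label, queues in (("before", before), ("after", after)):
    for name, (pct, slots) in queues.items():
        print(f"{label}: {name}: {slot_memory_gb(200, pct, slots):.1f} GB per slot")

# Output matches the worked example: 24 and 16 GB before, 50 and 12.5 GB after.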
https://repost.aws/knowledge-center/redshift-wlm-memory-allocation
How do I troubleshoot errors related to Lambda triggers in Amazon Cognito?
I want to resolve the errors I encounter while configuring AWS Lambda functions as triggers in Amazon Cognito.
"I want to resolve the errors I encounter while configuring AWS Lambda functions as triggers in Amazon Cognito.ResolutionThe following are common errors to troubleshoot when you use Lambda triggers in Amazon Cognito."PreSignUp invocation failed due to error AccessDeniedException."Note: The trigger type is mentioned in the error message. For example, a Lambda function that's attached as a PreSignUp trigger to your user pool responds with the preceding error.Reason for the errorWhen you add a Lambda function as a trigger to your user pool from the Amazon Cognito console, Amazon Cognito performs the following actions:Adds the required permission to the function's resource policy. This resource policy allows Amazon Cognito to invoke the function in case of certain event trigger types.Displays the following message: "Permission to invoke Lambda function - You are granting Amazon Cognito permission to invoke this Lambda function on your behalf. Amazon Cognito will add a resource-based policy statement to the function."This error also occurs when you delete the function that you added as a trigger. If you delete a Lambda trigger, then you must update the corresponding trigger in the user pool. For example, when you delete the post authentication trigger, you must set the Post authentication trigger in the corresponding user pool to none.Resolving the errorWhen you create a trigger outside the Amazon Cognito console, you must explicitly add permissions as you assign the trigger to the user pool. To add permissions, use an AWS SDK, AWS Command Line Interface (AWS CLI), or Amazon CloudFormation.When you add permissions, Amazon Cognito invokes the function only on behalf of your user pool and account. To add permissions from the Lambda console, follow the steps in Using resource-based policies for Lambda. You can also use the AddPermission operation.The following is an example Lambda resource-based policy that allows Amazon Cognito to invoke a function. The user pool is in the aws:SourceArn condition and the account is in the aws:SourceAccount condition.Note: Replace example_lambda_function_arn, example_account_number, and example_user_pool_arn with your own values.{ "Version": "2012-10-17", "Id": "default", "Statement": [ { "Sid": "lambda-allow-cognito", "Effect": "Allow", "Principal": { "Service": "cognito-idp.amazonaws.com" }, "Action": "lambda:InvokeFunction", "Resource": "example_lambda_function_arn", "Condition": { "StringEquals": { "AWS:SourceAccount": "example_account_number" }, "ArnLike": { "AWS:SourceArn": "example_user_pool_arn" } } } ]}"Error in authentication, please contact the app owner."Reason for the errorThis error occurs for two reasons:Except for custom sender Lambda triggers, Amazon Cognito invokes Lambda functions synchronously. The function must respond within 5 seconds. If the function doesn't respond, then Amazon Cognito retries the call. After three unsuccessful attempts, the function times out. You can't change the 5-second timeout value.If Amazon Cognito doesn't get a response from the trigger in 5 seconds, then after three unsuccessful attempts, Amazon Cognito returns the error.Resolving the errorIf the function times out, then apply best practices for working with Lambda functions to optimize the function. You can have the Lambda function that's associated with the user pool asynchronously call a second Lambda function. 
With this setup, the functions can perform all required actions without timing out."PreSignUp failed with error Syntax error in module lambda_function."Reason for the errorAmazon Cognito returns this error when there are any syntax errors in your Lambda function.Resolving the errorRecheck the function code, and correct any syntax errors."PreSignUp failed with error Handler 'lambda_handler'; missing on module; lambda_function."Reason for the errorThe function's runtime settings include a handler parameter. If incorrect information or syntax is set for HandlerInfo, then the function can't run and results in this error.Resolving the errorConfigure the handler parameter in your function's configuration to tell the Lambda runtime which handler method to invoke. When you configure a function in Python, the handler setting value is the file name and the handler module name separated by a dot. For example, main.Handler calls the handler method that's defined in main.py.For more information about handler syntax, see Modifying the runtime environment.Related informationImportant considerationsCustom AWS Lambda runtimesFollow"
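When the trigger is assigned outside the Amazon Cognito console, the resource-based policy statement shown in the resolution can be added with an SDK as well as the AWS CLI. The following boto3 sketch uses placeholder function, account, and user pool values.

import boto3

lambda_client = boto3.client("lambda")

# Placeholders -- replace with your function name, account ID, and user pool ARN.
FUNCTION_NAME = "example-presignup-function"
ACCOUNT_ID = "111122223333"
USER_POOL_ARN = "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"

# Adds the same statement as the example resource-based policy: Amazon Cognito
# may invoke the function, but only on behalf of this account and user pool.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="lambda-allow-cognito",
    Action="lambda:InvokeFunction",
    Principal="cognito-idp.amazonaws.com",
    SourceAccount=ACCOUNT_ID,
    SourceArn=USER_POOL_ARN,
)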
https://repost.aws/knowledge-center/cognito-lambda-trigger-errors
How do I purchase an Amazon EC2 Reserved Instance that reserves capacity in a specific Availability Zone?
I want to make sure that an instance type in a particular Availability Zone is available when I need it. How do I purchase an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance that does that?
"I want to make sure that an instance type in a particular Availability Zone is available when I need it. How do I purchase an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance that does that?ResolutionBy default, newly purchased Reserved Instances don't reserve capacity.To purchase a Reserved Instance that reserves capacity for a specific Availability Zone, do the following:Open the Amazon EC2 console.Choose Reserved Instances from the navigation pane.Choose Purchase Reserved Instances.In the purchasing menu, select Only show offerings that reserve capacity.Choose the desired instance characteristics from the dropdown list.Choose Search.A column for Availability Zone appears in the search results. These Reserved Instances reserve the capacity for your use in that Availability Zone.Related informationBuying Reserved InstancesFollow"
https://repost.aws/knowledge-center/ri-reserved-capacity
How can I troubleshoot restoring my ElastiCache cluster from S3?
"When restoring my Amazon ElastiCache for Redis backup from Amazon Simple Storage Service (Amazon S3), cluster creation fails. I receive a "Create-failed" or "Permission denied" error message. How can I troubleshoot this?"
"When restoring my Amazon ElastiCache for Redis backup from Amazon Simple Storage Service (Amazon S3), cluster creation fails. I receive a "Create-failed" or "Permission denied" error message. How can I troubleshoot this?Short descriptionThe following are common reasons why restoring an ElastiCache backup from Amazon S3 fails:You're attempting to restore a backup outside of the backup constraints.ElastiCache couldn't retrieve the file from Amazon S3.The ElastiCache backup file is located in an Amazon S3 bucket in another Region.You're restoring an .rdb file containing multiple databases to an ElastiCache (cluster mode enabled) cluster.ResolutionYou're attempting to restore a backup outside of the backup constraintsWhen restoring an ElastiCache for Redis backup, it's important to note the following backup constraints:You can't restore from a backup created using a Redis (cluster mode enabled) cluster to a Redis (cluster mode disabled) cluster.When restoring a backup made from an ElastiCache (cluster mode enabled) cluster, you can't select the cluster mode disabled option in the ElastiCache console. Only the cluster mode enabled option is available.When you export an ElastiCache (cluster mode enabled) cluster backup to Amazon S3, multiple .rdb files are created (one for each shard). If you try to seed the backup from Amazon S3, you can reference only one backup (.rdb). This results in seeding a single shard's keys. Trying to circumvent this by including a wildcard results in the following error:Error: Object or bucket does not exist for S3 object: examplebucket/cluster-mode-enabled-*.rdb.You can't restore a backup from a cluster that uses data tiering. For example, you can't restore a r6gd node into a cluster that doesn't use data tiering, such as a r6g node.Due to limitations of ElastiCache Redis data tiering, you can't export the backup to Amazon S3. So, you can't restore an ElastiCache data tiering backup from S3.You can't restore from a Redis (cluster mode disabled) cluster to a Redis (cluster mode enabled) cluster if the .rdb file references more than one database. Attempting to do this results in the following error:Error: To restore a snapshot in cluster mode, all keys in the RDB file should reside in DB 0.ElastiCache couldn't retrieve the file from Amazon S3This error occurs when ElastiCache doesn't have the necessary permissions to access the ElastiCache backup stored in the S3 bucket. You can confirm the permissions issue by reviewing ElastiCache Events.The following example ElastiCache Event shows that Redis replication group "test" creation failed because ElastiCache couldn't retrieve the backup file from S3:Restore from snapshot failed for node group 0001 in replication group test. 
Failed to retrieve file from S3After determining that the cause of the error is that ElastiCache couldn't retrieve the file from Amazon S3, confirm that your Region is one of the following:An opt-in RegionChina (Beijing) and China (Ningxia)AWS GovCloud (US-West)A default RegionAn opt-in Region requires a bucket policy allowing ElastiCache to retrieve the backup file from Amazon S3.If your S3 bucket is located in one of the following Regions, you must allow the ElastiCache service access to the backup file in S3:China (Beijing) and China (Ningxia)AWS GovCloud (US-West)A default RegionNote: The canonical ID for the China (Beijing), China (Ningxia), and AWS GovCloud (US-West) Regions are different from the default AWS Regions:China (Beijing) and China (Ningxia)Canonical ID: b14d6a125bdf69854ed8ef2e71d8a20b7c490f252229b806e514966e490b8d83AWS GovCloud (US-West) RegionCanonical ID: 40fa568277ad703bd160f66ae4f83fc9dfdfd06c2f1b5060ca22442ac3ef8be6AWS default RegionsCanonical ID: 540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353After granting ElastiCache access to the .rdb backup object in Amazon S3 using a canonical ID or bucket policy, restore the Redis cluster.The ElastiCache backup file is located in an Amazon S3 bucket in another RegionThe following error message indicates that you're trying to restore an ElastiCache backup that's located in an Amazon S3 bucket within another Region:"Permission denied to access S3 object. Please use the S3 object in the same region."To resolve this issue, do the following:1.    Copy the backup (.rdb) from the S3 bucket that contains the backup to an S3 bucket located in the region where the Redis cluster is being restored.The following is an example AWS Command Line Interface (AWS CLI) command that you can use to copy between Amazon S3 buckets in different Regions:aws s3 cp s3://SourceBucketName/BackupName.rdb s3://DestinationBucketName/BackupName.rdb --acl bucket-owner-full-control --source-region SourceRegionName --region DestinationRegionNameNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.2.    After the copy is completed, confirm that the backup (.rdb) object has the correct permissions assigned to it in the form of a canonical ID or a bucket policy. See the previous section for the correct canonical IDs.Now that the backup object is copied to the correct Region and the correct permissions are applied, you can continue to restore the cluster.You're restoring an .rdb file containing multiple databases to an ElastiCache (cluster mode enabled) clusterYou can't restore a Redis backup (.rdb) file containing multiple databases to an ElastiCache (cluster mode enabled) cluster. ElastiCache (cluster mode enabled) doesn't support multiple databases. All the keys should reside in DB0. You can confirm if this is the cause for the restore failure by reviewing ElastiCache Events.The following example ElastiCache Event shows that Redis replication group "test" creation failed due to the .rdb file containing multiple databases.Restore from snapshot failed for node group 0001 in replication group test. To restore a snapshot in cluster mode, all keys in the RDB file should reside in DB 0. Snapshot ID: arn:aws:s3:::example-bucket/multidb.rdbTo correct this issue, do the following:1.    Make sure that that all the keys are migrated to a single database.Note: If the source database is located on ElastiCache Redis, the migrate command isn't supported.2.    
After all keys are on the same database, you can create a local backup of your Redis database, upload the backup to Amazon S3, and then restore it to an ElastiCache (cluster mode enabled) cluster.Follow"
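A minimal sketch of the object-level grant described above, using the AWS CLI: the following command grants the default-Region ElastiCache canonical ID the read and ACL permissions it needs on a hypothetical backup object (the bucket name, key, and YOUR_CANONICAL_ID are placeholders). Because the --grant options replace the object's existing ACL, the example also re-grants full control to your own canonical user ID.

aws s3api put-object-acl \
    --bucket DOC-EXAMPLE-BUCKET \
    --key backup.rdb \
    --grant-full-control id=YOUR_CANONICAL_ID \
    --grant-read id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353 \
    --grant-read-acp id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353 \
    --grant-write-acp id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353

After the grants are in place, you can seed the new cluster by passing the object ARN, for example with aws elasticache create-replication-group and --snapshot-arns arn:aws:s3:::DOC-EXAMPLE-BUCKET/backup.rdb.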
https://repost.aws/knowledge-center/elasticache-restore-cluster-from-s3
How do I configure my Network Firewall rules to block or allow specific domains?
I want to filter outbound web traffic from resources in my Amazon Virtual Private Cloud (Amazon VPC) using AWS Network Firewall.
"I want to filter outbound web traffic from resources in my Amazon Virtual Cloud (Amazon VPC) using AWS Network Firewall.Short descriptionNetwork Firewall policies and rule groups are defined by their rule evaluation order, having either default action order or strict evaluation order. A firewall policy configured for default action order evaluates rules in the following order: pass, drop, reject, and alert. The stateful default action for a default policy is pass.With strict evaluation order, rule groups are evaluated in order of priority from lowest to highest. Rules within a group are then evaluated in the order that they are configured as. The stateful default action for a strict order policy is configurable, such as configuring Drop established. Default rule groups are associated with default action order policies, while strict order rule groups are associated with strict evaluation order policies.You can configure Network Firewall to allow or block access to specific domains. This can be done for policies or rules using either default action order or strict evaluation order by using one of the following:Stateful domain list rule groupSuricata compatible IPS rulesStateful domain name inspection can be configured for HTTP and HTTPS protocols. For HTTP, the request is unencrypted and allows Network Firewall to see the hostname value in the HTTP host header. For HTTPS, Network Firewall uses the Server Name Indication (SNI) extension in the TLS handshake to determine the hostname. The firewall then compares the hostname (or domain name) against the configured HTTP or TLS rules.With a domain allowlist, the firewall passes HTTP or HTTPS requests only to specified domains. Requests to all other domains are dropped.ResolutionDefault action orderFor policies with default action order, configure a domain list rule group to allow HTTP and HTTPS requests to specific domains.1.    Open the Amazon VPC console.2.    Create a firewall.3.    In the navigation pane, under Network Firewall, choose Firewall policies.4.    Choose the default action order firewall policy that you want to edit.5.    Under Stateful rule groups, choose Actions, and then choose Create stateful rule group.6.    Enter a unique rule group name.7.    For Capacity reservation, enter an estimated number of domains that the list will include. This value can't be changed after the rule group is created.8.    For Stateful rule group options, choose Domain list.Note: Stateful rule order can't be changed because it's inherited from the policy. The Rule order appears as Default.9.    Under Domain list, for Domain name source, enter the domain names that you want to match. Domains can be defined as an exact match, such as abc.example.com. They can also be defined as a wildcard, such as .example.com.10.    For Source IPs type, choose Default if the source IP exists in the same VPC as the firewall. Choose Defined if the source IP exists in a remote VPC. When choosing Defined, enter the source subnets that you want the firewall to inspect under Source IP CIDR ranges.11.    For Protocols, select HTTP and HTTPS.12.    For Action, choose Allow.13.    Choose Create and add to policy.To manually define Suricata compatible IPS rules for HTTP and HTTPS, configure a default action order Suricata compatible IPS rule in the rule group.1.    Open the Amazon VPC console.2.    Create a firewall.3.    In the navigation pane, under Network Firewall, choose Firewall policies.4.    Choose the default action order firewall policy that you want to edit.5.    
Under Stateful rule groups, choose Actions, then choose Create stateful rule group.6.    Enter a unique rule group Name.7.    For Capacity reservation, enter an estimated number of rules that the list will include. This value can't be changed after the rule group is created.8.    For Stateful rule group options, choose Suricata compatible IPS rules.Note: Stateful rule order can't be changed because it's inherited from the policy. The Rule order appears as Default.9.    (Optional) Define custom Rule variables for use in the Suricata signatures.10.    (Optional) Define IP set references for use in the Suricata signatures.11.    Under Suricata compatible IPS rules, enter the following rules. Change the domains to the specific domains that you want to address.pass http $HOME_NET any -> $EXTERNAL_NET any (http.host; dotprefix; content:".amazonaws.com"; endswith; msg:"matching HTTP allowlisted FQDNs"; flow:to_server, established; sid:1; rev:1;)pass http $HOME_NET any -> $EXTERNAL_NET any (http.host; content:"example.com"; startswith; endswith; msg:"matching HTTP allowlisted FQDNs"; flow:to_server, established; sid:2; rev:1;)pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; dotprefix; content:".amazonaws.com"; nocase; endswith; msg:"matching TLS allowlisted FQDNs"; flow:to_server, established; sid:3; rev:1;)pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; content:"example.com"; startswith; nocase; endswith; msg:"matching TLS allowlisted FQDNs"; flow:to_server, established; sid:4; rev:1;)drop http $HOME_NET any -> $EXTERNAL_NET any (http.header_names; content:"|0d 0a|"; startswith; msg:"not matching any HTTP allowlisted FQDNs"; flow:to_server, established; sid:5; rev:1;)drop tls $HOME_NET any -> $EXTERNAL_NET any (msg:"not matching any TLS allowlisted FQDNs"; flow:to_server, established; sid:6; rev:1;)12.    Choose Create and add to policy.Note: The established flow keyword is commonly used in domain rules, but it might not account for all out of flow packet exchange edge cases. Before using any example rule listing, test the rule to verify that it works as expected.Strict evaluation orderFor policies with strict evaluation order, configure a domain list rule group to allow HTTP and HTTPS requests to specific domains.1.    Open the Amazon VPC console.2.    Create a firewall.3.    In the navigation pane, under Network Firewall, choose Firewall policies.4.    Choose the strict evaluation order firewall policy that you want to edit.5.    Under Stateful rule groups, choose Actions, then choose Create stateful rule group.6.    Enter a unique rule group Name.7.    For Capacity reservation, enter an estimated number of domains the list will include. This value can't be changed after the rule group is created.8.    For Stateful rule group options, choose Domain list.Note: Stateful rule order can't be changed because it's inherited from the policy. The Rule order appears as Strict.9.    Under Domain list, for Domain name source, enter the domain names that you want to match. Domains can be defined as an exact match, such as abc.example.com. They can also be defined as a wildcard, such as .example.com.10.    For Source IPs type, choose Default if the source IP exists in the same VPC as the firewall. Choose Defined if the source IP exists in a remote VPC. When choosing Defined, enter the source subnets that you want the firewall to inspect under Source IP CIDR ranges.11.    For Protocols, choose HTTP and HTTPS.12.    For Action, choose Allow.13.    Choose Create and add to policy.14.    
In the navigation pane, under Network Firewall, choose Firewall policies.15.    Choose the strict order policy that you added this rule group to.16.    For Stateful rule evaluation order and default actions, choose Edit.17.    For Default actions, choose Drop established. Then, choose Save.To manually define Suricata compatible IPS rules for HTTP and HTTPS, configure a strict evaluation order Suricata compatible IPS rule in the rule group.1.    Open the Amazon VPC console.2.    Create a firewall.3.    In the navigation pane, under Network Firewall, choose Firewall policies.4.    Choose the strict evaluation order firewall policy that you want to edit.5.    Under Stateful rule groups, choose Actions, then choose Create stateful rule group.6.    Enter a unique rule group Name.7.    For Capacity reservation, enter an estimated number of rules that the list will include. This value can't be changed after the rule group is created.8.    For Stateful rule group options, choose Suricata compatible IPS rules.Note: Stateful rule order can't be changed because it's inherited from the policy. The Rule order appears as Strict.9.    (Optional) Define custom Rule variables for use in the Suricata signatures that you define.10.    (Optional) Define IP set references for use in the Suricata signatures that you define.11.    Under Suricata compatible IPS rules, enter the following rules. Change the domains to the specific domains that you want to address.pass http $HOME_NET any -> $EXTERNAL_NET any (http.host; dotprefix; content:".amazonaws.com"; endswith; msg:"matching HTTP allowlisted FQDNs"; flow:to_server, established; sid:1; rev:1;)pass http $HOME_NET any -> $EXTERNAL_NET any (http.host; content:"example.com"; startswith; endswith; msg:"matching HTTP allowlisted FQDNs"; flow:to_server, established; sid:2; rev:1;)pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; dotprefix; content:".amazonaws.com"; nocase; endswith; msg:"matching TLS allowlisted FQDNs"; flow:to_server, established; sid:3; rev:1;)pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; content:"example.com"; startswith; nocase; endswith; msg:"matching TLS allowlisted FQDNs"; flow:to_server, established; sid:4; rev:1;)12.    Choose Create and add to policy.13.    In the navigation pane, under Network Firewall, choose Firewall policies.14.    Choose the strict order policy that you added this rule group to.15.    For Stateful rule evaluation order and default actions, choose Edit.16.    For Default actions, choose Drop established. Then, choose Save.Note: The established flow keyword is commonly used in domain rules, but it might not account for all out of flow packet exchange edge cases. 
Before using any example rule listing, test the rule to verify that it works as expected.VerificationYou can verify that the domains are handled correctly based on your configuration by running test commands to the specified domains.In the following example, the domain https://example.com is allowed and a 200 OK response is returned to the client:curl -v --silent https://example.com --stderr - | grep 200< HTTP/2 200In the following example, the HTTP domain http://www.google.com is blocked:curl -v http://www.google.com* Trying 172.253.115.99:80...* Connected to www.google.com (172.253.115.99) port 80 (#0)> GET / HTTP/1.1> Host: www.google.com> User-Agent: curl/7.79.1> Accept: */*In the following example, the HTTPS domain https://www.google.com is blocked:curl -v https://www.google.com* Trying 172.253.115.147:443...* Connected to www.google.com (172.253.115.147) port 443 (#0)* ALPN, offering h2* ALPN, offering http/1.1* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH* successfully set certificate verify locations:* CAfile: /etc/pki/tls/certs/ca-bundle.crt* CApath: none* TLSv1.2 (OUT), TLS header, Certificate Status (22):* TLSv1.2 (OUT), TLS handshake, Client hello (1):Related informationFirewall policies in Network FirewallCreate a stateful rule groupExamples of stateful rules for Network FirewallEvaluation order for stateful rule groupFollow"
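If you prefer the AWS CLI over the console, the following sketch creates a stateful domain list rule group roughly equivalent to the allowlist steps above; the rule group name, capacity, and domain targets are placeholder values.

aws network-firewall create-rule-group \
    --rule-group-name allow-example-domains \
    --type STATEFUL \
    --capacity 100 \
    --rule-group '{"RulesSource":{"RulesSourceList":{"Targets":[".amazonaws.com","example.com"],"TargetTypes":["HTTP_HOST","TLS_SNI"],"GeneratedRulesType":"ALLOWLIST"}}}'

You can then associate the new rule group with your firewall policy in the console or with the update-firewall-policy command.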
https://repost.aws/knowledge-center/network-firewall-configure-domain-rules
How can I monitor daily EstimatedCharges and trigger a CloudWatch alarm based on my usage threshold?
How can I monitor daily EstimatedCharges and trigger an Amazon CloudWatch alarm based on my usage threshold?
"How can I monitor daily EstimatedCharges and trigger an Amazon CloudWatch alarm based on my usage threshold?Short descriptionThe EstimatedCharges metric is published at approximately six-hour intervals. The results reset every month. The MetricMath RATE expression is calculated by dividing the difference between the latest data point value and the previous data point value by the time difference in seconds between the two values. You can use this expression to calculate the EstimatedCharges value for each day.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent version of the AWS CLI.Test the RATE expression and verify the outputBefore creating a CloudWatch alarm, call the GetMetricData API to test the RATE expression:$ cat metric_data_queries.json{ "MetricDataQueries": [ { "Id": "m1", "MetricStat": { "Metric": { "Namespace": "AWS/Billing", "MetricName": "EstimatedCharges", "Dimensions": [ { "Name": "Currency", "Value": "USD" } ] }, "Period": 86400, "Stat": "Maximum" } }, { "Id": "e1", "Expression": "IF(RATE(m1)>0,RATE(m1)*86400,0)", "Label": "DailyEstimatedCharges", "Period": 86400 } ], "StartTime": "2020-06-01T00:00:00Z", "EndTime": "2020-06-05T00:00:00Z", "ScanBy": "TimestampAscending"}$ aws cloudwatch get-metric-data --cli-input-json file://metric_data_queries.json{ "MetricDataResults": [ { "Id": "m1", "Label": "EstimatedCharges", "Timestamps": [ "2020-06-01T00:00:00Z", "2020-06-02T00:00:00Z", "2020-06-03T00:00:00Z", "2020-06-04T00:00:00Z" ], "Values": [ 0.0, 22.65, 34.74, 46.91 ], "StatusCode": "Complete" }, { "Id": "e1", "Label": "DailyEstimatedCharges", "Timestamps": [ "2020-06-02T00:00:00Z", "2020-06-03T00:00:00Z", "2020-06-04T00:00:00Z" ], "Values": [ 22.65, 12.090000000000003, 12.169999999999995 ], "StatusCode": "Complete" } ], "Messages": []}Output table:Timestamps2020-06-01T00:00:00Z2020-06-02T00:00:00Z2020-06-03T00:00:00Z2020-06-04T00:00:00ZEstimatedCharges0.022.6534.7446.91DailyEstimatedCharges---22.6512.0912.17In the output table, confirm that DailyEstimatedCharges is correctly calculated as the difference between the latest data point and the previous data point. You use this expression to create your CloudWatch alarm.Create a CloudWatch alarm using the AWS Management Console1.    Follow the steps in Creating a CloudWatch alarm based on a metric math expression.2.    Paste the following code in the Source tab of the CloudWatch Metrics page. This code creates a metric math expression [ IF(RATE(m1)>0,RATE(m1)*86400,0) ] using EstimatedCharges as the base metric with the label "m1".{ "metrics": [ [ { "expression": "IF(RATE(m1)>0,RATE(m1)*86400,0)", "label": "Expression1", "id": "e1", "period": 86400, "stat": "Maximum" } ], [ "AWS/Billing", "EstimatedCharges", "Currency", "USD", { "id": "m1" } ] ], "view": "timeSeries", "stacked": false, "region": "us-east-1", "stat": "Maximum", "period": 86400}3.    Create a CloudWatch alarm for the MetricMath expression. To do this, select Graphed metrics. In the Actions column for the DailyEstimatedCharges metric, choose the Alarm icon.4.    In the CloudWatch Alarm Creation Wizard: Confirm the details of the metric configuration. Add an appropriate threshold value to receive notifications when the threshold is breached (for example, 50 USD). Configure your alarm actions. Add an alarm name and description.Create a CloudWatch alarm using the AWS CLI1.    
Create an alarm configuration as a JSON file:$ cat alarm_config.json{ "AlarmName": "DailyEstimatedCharges", "AlarmDescription": "This alarm would be triggered if the daily estimated charges exceeds 50$", "ActionsEnabled": true, "AlarmActions": [ "arn:aws:sns:<REGION>:<ACCOUNT_ID>:<SNS_TOPIC_NAME>" ], "EvaluationPeriods": 1, "DatapointsToAlarm": 1, "Threshold": 50, "ComparisonOperator": "GreaterThanOrEqualToThreshold", "TreatMissingData": "breaching", "Metrics": [{ "Id": "m1", "MetricStat": { "Metric": { "Namespace": "AWS/Billing", "MetricName": "EstimatedCharges", "Dimensions": [{ "Name": "Currency", "Value": "USD" }] }, "Period": 86400, "Stat": "Maximum" }, "ReturnData": false }, { "Id": "e1", "Expression": "IF(RATE(m1)>0,RATE(m1)*86400,0)", "Label": "DailyEstimatedCharges", "ReturnData": true }]}Note: Be sure to update the REGION, ACCOUNT_ID, and SNS_TOPIC_NAME with your corresponding values.2.    Call the PutMetricAlarm API:aws cloudwatch put-metric-alarm --cli-input-json file://alarm_config.json(Optional) Find the previous day's top ten contributors to the Maximum DailyEstimatedCharges valueUse the following query:$ cat top_contributors_query.json{ "MetricDataQueries": [{ "Id": "e1", "Expression": "SORT(RATE(SEARCH('{AWS/Billing,Currency,ServiceName} AWS/Billing ServiceName', 'Maximum', 86400))*86400, MAX, DESC, 10)", "Label": "DailyEstimatedCharges", "Period": 86400 }], "ScanBy": "TimestampAscending"}$ aws cloudwatch get-metric-data --cli-input-json file://top_contributors_query.json --start-time `date -v -2d '+%Y-%m-%dT%H:%M:%SZ'` --end-time `date '+%Y-%m-%dT%H:%M:%SZ'` --region us-east-1Note: The supported DateTime format for StartTime and EndTime is '2020-01-01T00:00:00Z'.Important: The preceding command incurs charges based on the number of metrics retrieved by each GetMetricData API call. For more information, see Amazon CloudWatch pricing.Follow"
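The AlarmActions value above references an SNS topic ARN. If you don't have a topic yet, the following hedged example creates one and subscribes an email address to it; the topic name and email address are placeholders. Billing metrics are published only in the US East (N. Virginia) Region, so create the alarm, and typically the topic, in us-east-1.

aws sns create-topic --name billing-alarm-topic --region us-east-1
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:111122223333:billing-alarm-topic \
    --protocol email \
    --notification-endpoint admin@example.com \
    --region us-east-1

Confirm the subscription from the email message before relying on the alarm for notifications.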
https://repost.aws/knowledge-center/cloudwatch-estimatedcharges-alarm
Why can't I delete an Amazon S3 bucket that Elastic Beanstalk created?
My AWS Elastic Beanstalk application created an Amazon Simple Storage Service (Amazon S3) bucket. Why can't I delete the bucket?
"My AWS Elastic Beanstalk application created an Amazon Simple Storage Service (Amazon S3) bucket. Why can't I delete the bucket?ResolutionWhen you use an Elastic Beanstalk application to create an S3 bucket, a policy is applied to the bucket that protects it from accidental deletion. To delete the bucket, you must delete the bucket policy first. For instructions, see Deleting the Elastic Beanstalk Amazon S3 Bucket.Warning: If you delete the bucket, then any other AWS resources or applications that depend on the bucket might stop working correctly.Related informationDeleteBucketPolicydelete-bucket-policyFollow"
https://repost.aws/knowledge-center/elastic-beanstalk-delete-s3
How do I resolve the "Unable to verify/create output bucket" error in Amazon Athena?
"When I run Amazon Athena queries in SQL Workbench/J, in AWS Lambda, or with an AWS SDK, I get the error: "Unable to verify/create output bucket.""
"When I run Amazon Athena queries in SQL Workbench/J, in AWS Lambda, or with an AWS SDK, I get the error: "Unable to verify/create output bucket."Short descriptionHere are some common causes of this error:The Amazon Simple Storage Service (Amazon S3) bucket that you specified for the query result location doesn't exist.The AWS Identity and Access Management (IAM) policy for the user or role that runs the query doesn't have the required Amazon S3 permissions, such as s3:GetBucketLocation.ResolutionIf you manually set the query result location, you must confirm that the S3 bucket exists. Then, check the IAM policy for the user or role that runs the query:Confirm that the permissions in the following example policy, such as s3:GetBucketLocation are allowed.Be sure that the IAM policy does not contain a Deny statement that uses aws:SourceIp or aws:SourceVpc to restrict S3 permissions.Note: If the bucket already exists, then the s3:CreateBucket permission isn't required. If you manually set the query result location, then don't include arn:aws:s3:::aws-athena-query-results-* in the policy. The policy must include arn:aws:s3:::query-results-custom-bucket and arn:aws:s3:::query-results-custom-bucket/* only if you manually set the query result location.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload", "s3:CreateBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::aws-athena-query-results-*", "arn:aws:s3:::query-results-custom-bucket", "arn:aws:s3:::query-results-custom-bucket/*" ] } ]}Related informationAccess to Amazon S3Bucket policy examplesControlling access from VPC endpoints with bucket policiesExample - object operationsFollow"
https://repost.aws/knowledge-center/athena-output-bucket-error
How can I disable the API Gateway default endpoint for REST or HTTP APIs?
I want to allow clients to invoke my APIs only using the custom domain name. How can I deactivate the default API execute-api endpoint URL for Amazon API Gateway REST or HTTP APIs?
"I want to allow clients to invoke my APIs only using the custom domain name. How can I deactivate the default API execute-api endpoint URL for Amazon API Gateway REST or HTTP APIs?Short descriptionAPI Gateway REST APIs and HTTP APIs use a default API endpoint in the following format: "https://{api_id}.execute-api.{region}.amazonaws.com". If you use a custom domain name for your API Gateway REST or HTTP APIs, you can deactivate the default endpoint. This allows all traffic to route to your APIs through the custom domain name.ResolutionFollow these steps to disable the default endpoint using the API Gateway console, AWS Command Line Interface (AWS CLI), or AWS CloudFormation.Note:If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.After you activate or deactivate the default endpoint, a deployment is required for the update to take effect.API Gateway consoleREST APIOpen the API Gateway console.In the navigation pane, choose APIs, and then choose your REST API.In the navigation pane, choose Settings.For the Default Endpoint, choose Disabled, and then choose Save Changes.In the navigation pane, choose Resources, Actions, and then choose Deploy API.HTTP APIOpen the API Gateway console.In the navigation pane, choose APIs, and then choose your HTTP API.In the navigation pane, choose Settings.For the Default Endpoint, choose Disabled, and then choose Save Changes.In the navigation pane, choose Resources, Actions, and then choose Deploy API.AWS CLIREST APIRun the AWS CLI command update-rest-api similar to the following:aws apigateway update-rest-api --rest-api-id abcdef123 --patch-operations op=replace,path=/disableExecuteApiEndpoint,value='True'To deploy the updated API, run the AWS CLI command create-deployment similar to the followingaws apigateway create-deployment --rest-api-id abcdef123 --stage-name devNote: Replace api_id abcdef123 and stage_name dev with your REST API ID and respective stage.HTTP APIRun the AWS CLI commandupdate-api similar to the following:aws apigatewayv2 update-api --api-id abcdef123 --disable-execute-api-endpointTo deploy the updated API, run the AWS CLI command create-deployment similar to the following:aws apigatewayv2 create-deployment --api-id abcdef123 --stage-name devNote: Replace api_id abcdef123 and stage_name dev with your HTTP API ID and respective stage.CloudFormation templateTo disable the default endpoint from a CloudFormation template, you can set the DisableExecuteApiEndpoint parameter to True. Update the CloudFormation template for REST API or HTTP API.Important: Disabling the default endpoint results in HTTP 403 Forbidden errors if the API is invoked using the default endpoint URL.Related informationHow do I troubleshoot HTTP 403 errors from API Gateway?Follow"
https://repost.aws/knowledge-center/api-gateway-disable-endpoint
How do I troubleshoot primary node failure with error “502 Bad Gateway” or “504 Gateway Time-out” in Amazon EMR?
My Amazon EMR primary node is failing with a "502 Bad Gateway" or "504 Gateway Time-out" error.
"My Amazon EMR primary node is failing with a "502 Bad Gateway" or "504 Gateway Time-out" error.Short descriptionAn EMR primary node might fail with one of the following errors:The master failed: Error occurred:<html>?? <head><title>502 Bad Gateway</title></head> <body>?? <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.20.0</center>?? </body>?? </html>??-or-The master failed: Error occurred: <html>??<head><title>504 Gateway Time-out</title></head>??<body>??<center><h1>504 Gateway Time-out</h1></center>??<hr><center>nginx/1.16.1</center>??</body>??</html>??The following are common reasons for these errors:The instance-controller daemon is in the stopped state or is down on the primary node instance.The primary node runs out of memory or disk space.The Amazon Elastic Compute Cloud (Amazon EC2) instance status checks fail.ResolutionTroubleshoot primary node instance-controller daemon failuresThe primary node's instance controller (I/C) is the daemon that communicates with the EMR control plane and the rest of the cluster. If the instance controller can't communicate with the EMR control plane, then the primary node is classified as unhealthy and the cluster is terminated.To resolve this, analyze the instance-controller logs to determine why the process failed. The instance-controller logs are located at /emr/instance-controller/log/.If termination protection is turned on, SSH into the primary node and restart the instance-controller process.In Amazon EMR 5.30.0 and later release versions:1.    Use the following command to check the status of the I/C:sudo systemctl status instance-controller.service2.    Use the following command to restart the I/C if the status is down:sudo systemctl start instance-controller.serviceIn Amazon EMR 4.x-2.x release versions:1.    Use the following command to check the status of I/C:sudo /etc/init.d/instance-controller status2.    Use the following command to restart the I/C if the status is down:sudo /etc/init.d/instance-controller startAnalyze log files to troubleshoot memory and disk issuesIf termination protection is turned on, use SSH to connect into the primary node. Then, review the instance-state log file.Analyze instance metrics such as memory and disk listed in the instant-state log. You can analyze these metrics using Linux commands such as free -m and df -h.Use the log file results to determine why the primary node is using a high amount of disk or memory.Troubleshoot primary node EC2 instance status check failuresDetermine if the primary instance status check failed by viewing the instance status check metrics.Troubleshoot the instance status check failure. Be aware that starting and stopping your EC2 instance results in EMR cluster termination.Troubleshoot primary nodes that have termination protection turned off and the cluster is already terminatedTurn on termination protection while launching a new EMR cluster.Switch to a larger instance type. For more information, see Supported instance types in Amazon EMR.Turn on Amazon CloudWatch alarms for EMR primary node memory and disk usageFollow"
https://repost.aws/knowledge-center/emr-fix-primary-node-failure
How can I resolve asymmetric routing issues when I create a VPN as a backup to Direct Connect in a transit gateway?
"I have an AWS Direct Connect connection. The Direct Connect gateway is associated with an AWS Transit Gateway. I created a Site to Site VPN as a backup to the Direct Connect connection, but have asymmetric routing issues. How can I resolve the asymmetric routing issues and maintain automatic failover with the AWS VPN?"
"I have an AWS Direct Connect connection. The Direct Connect gateway is associated with an AWS Transit Gateway. I created a Site to Site VPN as a backup to the Direct Connect connection, but have asymmetric routing issues. How can I resolve the asymmetric routing issues and maintain automatic failover with the AWS VPN?Short descriptionUsing a VPN connection as a backup to Direct Connect can result in asymmetric routing issues. Asymmetric routing occurs when network traffic enters through one connection and exits through another connection. Some network devices such as firewalls drop packets if the traffic received isn't logged in your stateful table.ResolutionFollow these best practices for configuring outbound and inbound network traffic.Best Practices for outbound traffic from AWS to your networkConfigure the VPN with dynamic routing using Border Gateway Protocol (BGP).Make sure your devices advertise the same prefixes from on-premises to AWS with the VPN and Direct Connect, or less specific VPN prefixes. For example, 10.0.0.0/16 is less specific compared to 10.0.0.0/24.AWS sets a higher preference value for Direct Connect over the VPN connection when sending on-premises traffic to your network if the prefix length received is the same value.For the AWS Transit Gateway, a static route pointing to a VPN attachment is more preferred to a dynamically propagated Direct Connect gateway route if the prefix length is the same value.For Direct Connect deployed with dynamic VPN as backup, AS PATH prepending isn't recommended. This is because if the prefixes are the same, Direct Connect routes are preferred regardless of the AS PATH prepend length.For more information, see Route tables and VPN priority.Best practices for inbound traffic from your network to AWSMake sure that your network device is configured to prefer sending return traffic through the Direct Connect connection.If the prefixes advertised from AWS to your network device are the same for Direct Connect and VPN, then the BGP local preference attribute can be used to force your device to send outbound traffic through the Direct Connect connection towards AWS. Set the Direct Connect path with a higher local preference value and a lower preference for VPN. For example, Local preference 200 for Direct Connect and 100 for VPN.Important:If the Direct Connect allowed prefix is summarized and routes advertised through VPN are more specific, then your network devices prefer the routes received through VPN.For example:The transit gateway propagated routes are VPC-A CIDR 10.0.0.0/16, VPC-B CIDR 10.1.0.0/16, and VPC-C 10.2.0.0/16.The summarized prefix on the Direct Connect gateway allowed prefixes is 10.0.0.0/14 in order to accommodate the 20 prefixes limit.Direct Connect advertises the Direct Connect gateway prefix 10.0.0.0/14, and the VPN transit gateway advertises the /16 CIDRs for each VPC over VPN.To resolve this issue, insert the summarized Direct Connect gateway route into the transit gateway route table. For example, add a static route 10.0.0.0/14 pointing to a VPC attachment. This makes sure that the transit gateway advertises the summarized network over VPN. Your network devices receive the same prefix from Direct Connect and VPN. Then, configure your gateway to filter out the specific prefixes received to make sure that only the summarized prefix is installed in the routing table from the VPN peer. There are different options available to filter out routes depending on the vendor specifications. 
For example, route-maps, prefix-lists, router-filter-lists, and so on.Traffic from your network to AWS reaches the transit gateway route table and the gateway does a lookup to select the most specific routes from each VPC attachment. For example:Attachment A pointing to VPC-A CIDR is 10.0.0.0/16.Attachment B pointing to VPC-B CIDR is 10.1.0.0/16.Attachment C pointing to VPC-C CIDR is 10.2.0.0/16.Related informationAWS Site to Site VPN routingAmazon VPC route table priorityHow do I configure Direct Connect and VPN failover with Transit Gateway?Follow"
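A hedged sketch of the static-route step described above: the following AWS CLI command adds the summarized 10.0.0.0/14 route to a transit gateway route table and points it at a VPC attachment. The route table ID and attachment ID are placeholders for your own resources.

aws ec2 create-transit-gateway-route \
    --destination-cidr-block 10.0.0.0/14 \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0

The static route only makes sure that the summary is advertised over the VPN; return traffic is still forwarded using the more specific propagated VPC routes.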
https://repost.aws/knowledge-center/direct-connect-asymmetric-routing
I'm receiving a "Kernel panic" error after I've upgraded the kernel or tried to reboot my EC2 Linux instance. How can I fix this?
"I completed a kernel or system upgrade or after a system reboot on my Amazon Elastic Compute Cloud (Amazon EC2) instance. Now the instance fails to boot and the following message appears:"VFS: Cannot open root device XXX or unknown-block(0,0)Please append a correct "root=" boot option; here are the available partitions:Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)""
"I completed a kernel or system upgrade or after a system reboot on my Amazon Elastic Compute Cloud (Amazon EC2) instance. Now the instance fails to boot and the following message appears:"VFS: Cannot open root device XXX or unknown-block(0,0)Please append a correct "root=" boot option; here are the available partitions:Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"Short descriptionYour instance might fail to boot and show the kernel panic error message for the following reasons:The initramfs or initrd image is missing from the newly updated kernel configuration in /boot/grub/grub.conf. Or, the initrd or initramfs file itself is missing from the /boot directory.The kernel or system packages weren't fully installed during the upgrade process due to insufficient space.Third-party modules are missing from the initrd or initramfs image. For example, NVMe, LVM, or RAID modules.ResolutionThe initramfs or initrd image is missing from the /boot/grub/grub.conf or /boot directoryUse one of the following methods to correct this:Method 1: Use the EC2 Serial ConsoleIf you turned on EC2 Serial Console for Windows, then you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to it at the account level. Then, create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven't configured access to the serial console, follow the instructions in Method 2: Use a rescue instance. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Method 2: Use a rescue instanceWarning:This procedure requires a stop and start of your EC2 instance. Be aware that if your instance is instance store-backed or has instance store volumes containing data, the data is lost when you stop the instance. For more information, see Determine the root device type of your instance.If you launch instances using EC2 Auto Scaling, stopping the instance might terminate the instance. Some AWS services use EC2 Auto Scaling to launch instances, such as Amazon EMR, AWS CloudFormation, and AWS Elastic Beanstalk. Check the instance scale-in protection settings for your Auto Scaling group. If your instance is part of an Auto Scaling group, temporarily remove the instance from the Auto Scaling group before starting the resolution steps.When you stop and start an instance, the public IP address of your instance changes. It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance.1.    Open the Amazon EC2 console.2.    Choose Instances from the navigation pane, and then select the impaired instance.3.    Choose Actions, Instance State, Stop instance.4.    In the Storage tab, select the Root device, and then select the Volume ID.Note: You can create a snapshot of the root volume as a backup before proceeding to step 5.5.    
Choose Actions, Detach Volume (/dev/sda1 or /dev/xvda), and then choose Yes, Detach.6.    Verify that the State is Available.7.    Launch a new EC2 instance in the same Availability Zone and with the same operating system and same kernel version as the original instance. You can install the appropriate kernel version after the initial launch and then perform a reboot. This new instance is your rescue instance.8.    After the rescue instance launches, choose Volumes from the navigation pane, and then select the detached root volume of the original instance.9.    Choose Actions, Attach Volume.10.    Select the rescue instance ID (i-xxxx) and then enter /dev/xvdf.11.    Run the following command to verify that the root volume of the impaired instance is attached to the rescue instance successfully:$ lsblkThe following is an example of the output:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 15G 0 disk└─xvda1 202:1 0 15G 0 part /xvdf 202:80 0 15G 0 disk└─xvdf1 202:0 0 15G 0 part12.    Create a mount directory and then mount under /mnt.$ mount -o nouuid /dev/xvdf1 /mnt13.    Invoke a chroot environment by running the following command:$ for i in dev proc sys run; do mount -o bind /$i /mnt/$i; done14.    Run the chroot command on the mounted /mnt file system:$ chroot /mntNote: The working directory is changed to "/".15.    Run the following commands based on your operating system.RPM-based operating systems:$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg$ sudo dracut -f -vvvvvDebian-based operating systems:$ sudo update-grub && sudo update-grub2$ sudo update-initramfs -u -vvvvv16.    Verify that the initrd or initramfs image is present in the /boot directory and that the image has a corresponding kernel image. For example, vmlinuz-4.14.138-114.102.amzn2.x86_64 and initramfs-4.14.138-114.102.amzn2.x86_64.img.17.    After verifying that the latest kernel has a corresponding initrd or initramfs image, run the following commands to exit and clean up the chroot environment:$ exit$ umount /mnt/{dev,proc,run,sys,}18.    Detach the root volume from the rescue instance and attach the volume to the original instance.19.    Start the original instance.The kernel or system package wasn't fully installed during an updateRevert to a previous kernel version. For instructions, see How do I revert to a known stable kernel after an update prevents my Amazon EC2 instance from rebooting successfully?Third-party modules are missing from the initrd or initramfs imageInvestigate to determine what module or modules are missing from the initrd or initramfs image. Then verify if you can add the module back to the image. In many cases, it's easier to rebuild the instance.The following is example console output from an Amazon Linux 2 instance running on the Nitro platform. The instance is missing the nvme.ko module from the initramfs image:dracut-initqueue[1180]: Warning: dracut-initqueue timeout - starting timeout scriptsdracut-initqueue[1180]: Warning: Could not boot.[ OK ] Started Show Plymouth Boot Screen.[ OK ] Reached target Paths.[ OK ] Reached target Basic System.dracut-initqueue[1180]: Warning: /dev/disk/by-uuid/55da5202-8008-43e8-8ade-2572319d9185 does not existdracut-initqueue[1180]: Warning: Boot has failed. To debug this issue add "rd.shell rd.debug" to the kernel command line.Starting Show Plymouth Power Off Screen...To determine if the kernel panic error is caused by a missing third-party module or modules, do the following:1.    
Use Method 1: Use the EC2 Serial Console in the preceding section to create a chroot environment in the root volume of the non-booting instance.-or-Follow steps 1-14 in Method 2: Use a rescue instance in the preceding section to create a chroot environment in the root volume of the non-booting instance.2.    Use one of the following three options to determine which module or modules are missing from the initramfs or initrd image:Option 1: Run the dracut -f -v command in the /boot directory to determine if rebuilding the initrd or initramfs image fails. Also use the dracut -f -v command to list which module or modules are missing.Note: The dracut -f -v command might add any missing modules to the initrd or initramfs image. If the command doesn't find errors, try to reboot the instance. If the instance reboots successfully, then the command resolved the error.Option 2: Run the lsinitrd initramfs-4.14.138-114.102.amzn2.x86_64.img | less command to view the contents of the initrd or initramfs file. Replace initramfs-4.14.138-114.102.amzn2.x86_64.img with the name of your image.Option 3: Inspect the /usr/lib/modules directory.3.    If you find a missing module, you can try to add it back to the kernel. For information on how to obtain and add modules into the kernel, see the documentation specific to your Linux distribution.Follow"
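For Method 1, you can grant and open serial console access from the AWS CLI. The following is a minimal sketch, assuming a supported Nitro-based instance in us-east-1 and an existing key pair; the instance ID, key file names, and Region are placeholders.

# Allow serial console access for the account (one-time setting).
aws ec2 enable-serial-console-access --region us-east-1
# Push a temporary public key for the serial console (valid for 60 seconds).
aws ec2-instance-connect send-serial-console-ssh-public-key \
    --instance-id i-0abcd1234example \
    --serial-port 0 \
    --ssh-public-key file://my_key.pub \
    --region us-east-1
# Connect to the instance's serial port over SSH.
ssh -i my_key i-0abcd1234example.port0@serial-console.ec2-instance-connect.us-east-1.aws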
https://repost.aws/knowledge-center/ec2-linux-kernel-panic-unable-mount
Why can't I see or play call recordings after setting up the Amazon Connect CTI Adapter for Salesforce?
"I can't see or play call recordings in Salesforce after setting up the Amazon Connect CTI Adapter. I configured the AmazonConnectSalesforceLambda Serverless Application Repository package, but it's not working as expected. How do I troubleshoot the issue?"
"I can't see or play call recordings in Salesforce after setting up the Amazon Connect CTI Adapter. I configured the AmazonConnectSalesforceLambda Serverless Application Repository package, but it's not working as expected. How do I troubleshoot the issue?Short descriptionIf the AmazonConnectSalesforceLambda Serverless Application Repository package isn't configured correctly, then calls recorded in Amazon Connect won't display or play in Salesforce.Two types of call recording and playback issues can occur when the AmazonConnectSalesforceLambda Serverless Application Repository package is misconfigured.The Contact Channel Analytics object isn't being created in the Salesforce dashboard.The Contact Channel Analytics object is created in Salesforce, but recordings are either not displaying or not playing.To troubleshoot call recordings not displaying or playing in Salesforce after setting up the Amazon Connect CTI Adapter, do the following:Verify that you've deployed the correct AmazonConnectSalesforceLambda Serverless Application Repository package for the Amazon Connect CTI Adapter version that you're using.Verify that you've deployed the AmazonConnectSalesforceLambda Serverless Application Repository package with the correct parameters.Verify that call recording streaming is activated in your AWS CloudFormation stack.Verify that call recording streaming is activated in your Amazon Connect contact flow.Verify that non-admin users are added to the AC_CallRecording permission set in Salesforce.Verify that the agent cleared the After Contact Work (ACW) state before they tried to playback the call recording.Verify that the Lambda functions in your AmazonConnectSalesforceLambda Serverless Application Repository package are invoking.Review the network calls made on the Salesforce dashboard to identify and troubleshoot any networking errors.For more information, see the following sections in the Amazon Connect CTI Adapter for Salesforce Lightning Installation Guide on GitHub:Setting up the Salesforce Lambdas manuallyCall recording streamingResolutionVerify that you've deployed the correct AmazonConnectSalesforceLambda Serverless Application Repository package for the Amazon Connect CTI Adapter version that you're usingThe Serverless Application Repository package won't work as expected if the version is different than the Amazon Connect CTI Adapter version that you're using.To upgrade from an earlier Amazon Connect CTI Adapter version, see Upgrading from an earlier version.Note: It's a best practice to upgrade the Amazon Connect CTI Adapter version, rather than installing earlier versions. If you choose to install an earlier version, make sure that you refer to the specific documentation for that version.Verify that you've deployed the AmazonConnectSalesforceLambda Serverless Application Repository package with the correct parametersIf the Serverless Application Repository package is deployed with incorrect parameters, it can cause its associated AWS Lambda functions to fail or not invoke as expected.To review and confirm your required parameters, follow the instructions in Setting up the Salesforce Lambdas manually.Verify that call recording streaming is activated in your AWS CloudFormation stackMake sure that the PostcallRecordingImportEnabled parameter is set to true in your AWS CloudFormation stack.For instructions, see Viewing stack information in the CloudFormation User Guide.Verify that call recording streaming is activated in your Amazon Connect contact flow1.    
Be sure that the Set recording and analytics behavior contact block in your Amazon Connect contact flow has the Recording setting turned to On. For instructions, see How to set up recording behavior.2.    Make sure that the Set contact attributes contact block has the postcallRecordingImportEnabled setting configured as true.Note: You can verify the recordings appear in your Amazon Connect instance by reviewing the Contact Search page in the Amazon Connect console.Verify that non-admin users are included on the AC_CallRecording permission set in SalesforceNon-admin users must be added to the AC_CallRecording permission set in Salesforce to use call recording streaming.For instructions, see Adding users to the AC_CallRecording permission set.If you're using Amazon Connect CTI Adapter version 5.16+Also verify the following:The non-admin users are logged in to the Amazon Connect instance.The non-admin users have the required security profile permissions to access the recordings.Verify that the agent cleared ACW state before they tried to playback the call recordingAgents must clear ACW state before a Contact Trace Record (CTR) can be added to your Kinesis data stream.To view agents' past statuses, review your Amazon Connect instance's real-time metrics report.Verify that the Lambda functions in your AmazonConnectSalesForceLambda Serverless Application Repository package are invokingTo view the aggregate metrics for the resources in your Serverless Application Repository package, do the following:1.    Open the Lambda console Applications page.2.    Choose serverlessrepo-AmazonConnectSalesforceLambda.3.    Choose Monitoring.If you don't see invocations for any of the associated Lambda functions1.    Verify that you're exporting contact records from Amazon Connect using the right Kinesis data stream. For instructions, see Activate data streaming for your instance.2.    Be sure that the correct Kinesis stream Amazon Resource Name (ARN) is configured in your CloudFormation stack. For instructions, see Viewing stack information in the CloudFormation User Guide.3.    Verify that the Kinesis trigger is activated for the serverlessrepo-xxxx-sfCTRTrigger-xxxx Lambda function.For more information, see Activate data streaming for your instance in the Amazon Connect Administrator Guide.If you do see invocations for the associated Lambda functionsReview each function's Amazon CloudWatch Logs to identify and resolve any Lambda function errors.For more information, see How do I troubleshoot Lambda function failures?Note: If one of your Lambda functions returns an Invalid credentials error, do the following:Verify that the correct Salesforce credentials are stored in AWS Secrets Manager. For instructions, see Store Salesforce credentials in AWS Secrets Manager.Verify that the SalesforceUsername and SalesforceHost parameters are configured correctly in the AWS CloudFormation stack. For instructions, see Viewing stack information in the CloudFormation User Guide.The following are the Lambda functions associated with creating the Contact Channel Analytics object:serverlessrepo-xxxx-sfCTRTrigger-xxxx is invoked by the Kinesis stream and processes the incoming CTRs. 
Based on its configuration, it calls other Lambda functions in the package.serverlessrepo-xxxx-sfContactTraceRecord-xxxx processes the CTR event.serverlessrepo-xxxx-sfExecuteTranscriptionSt-xxxx checks the CTR to see if the recording import or transcription is activated.serverlessrepo-xxxx-sfInvokeAPI-xxxx creates and accesses objects in the Salesforce dashboard by calling the Salesforce API.Review the network calls made on the Salesforce dashboard to identify and troubleshoot any networking errorsCreate an HTTP Archive (HAR) file that reproduces the call recording or playback issue. Then, use the HAR file from your browser to identify and troubleshoot any potential networking issues.Related informationManaging applications in the AWS Lambda consoleIntelligent case management using Amazon Connect and Amazon Kinesis Data Streams (AWS Blog)Follow"
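To review the stack parameters mentioned above (for example, PostcallRecordingImportEnabled, SalesforceUsername, and SalesforceHost) without clicking through the console, a query similar to the following can help; the stack name is a placeholder and typically starts with serverlessrepo-.

aws cloudformation describe-stacks \
    --stack-name serverlessrepo-AmazonConnectSalesforceLambda \
    --query "Stacks[0].Parameters"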
https://repost.aws/knowledge-center/connect-salesforce-call-recording-issues
How do I troubleshoot the error "java.sql.SQLException: No more data to read from socket" when I'm trying to connect to my Amazon RDS for Oracle instance?
I get the error "javasqlSQLException: No more data to read from socket" when I try to connect to my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance.
"I get the error "javasqlSQLException: No more data to read from socket" when I try to connect to my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance.ResolutionYou get the error “javasqlSQLException: No more data to read from socket“ because of a connectivity issue between the Oracle server and the client JDBC driver. The most common reasons and troubleshooting options for these connection failures are the following:The connection is abruptly terminated due to network interruptions: To troubleshoot this issue, check the alert.log file of the instance for any TNS timeout errors posted during the time when the connection timed out from the application end. For more information, see Oracle documentation for TNS timeout errors. For more information on accessing the alert log for RDS instances, see Oracle database log files.The connection is terminated because of Oracle errors on the server side: Check the alert.log file for ORA-0600 or ORA-07445 errors. Collect the trace dump for specific Oracle errors. Check if these errors have a known fix provided by Oracle support.The client-server connection is not active: To troubleshoot this issue, set the parameter SQLNET.EXPIRE_TIME to a specified interval, in minutes, to send a probe that verifies that the client-server connections are active. For more information, see Oracle documentation for SQLNET. EXPIRE_TIME.The RDS for Oracle instance is not available or was restarted when the JDBC client was trying to use an existing connection to the Oracle server: To troubleshoot this issue, retrieve events for the RDS instance and check if the instance was restarted or stopped when the connections were established from the JDBC client.The JDBC drivers used for connecting to the RDS for Oracle Instance are incompatible: To troubleshoot this issue, confirm that the version of JDBC driver is compatible with that of the DB instance. For the list of compatible JDBC drivers, see Oracle documentation for Compatibility matrix for Java machines and JDBC drivers used with ODI. If the JDBC driver is incompatible, download the latest JAR file in your source code. Then, include this file in your classpath when you compile the class that creates connections to the database. For more information, see Downloading the JDBC driver.The memory components on the client side cause timeouts: To troubleshoot this issue, check if the Oracle Data Integrator has memory components on the client side that cause unwanted timeouts. Be sure that you set the correct values for these components on the client side. For more information, see Oracle documentation for How to define Java options (such as the limits of memory heap, the location of non-Java libraries, etc.) in ODI.Related informationOracle documentation for A "No More Data to Read From Socket" error has been signaled from an ODI integration interfaceFollow"
https://repost.aws/knowledge-center/rds-oracle-error-no-more-data
How do I resolve the CloudHSM error "InitializeCluster request failed: CloudHsmInvalidRequestException - TrustAnchor provided is not a valid x509 certificate"?
"I tried to initialize an AWS CloudHSM cluster, and received the error "InitializeCluster request failed: CloudHsmInvalidRequestException - TrustAnchor provided is not a valid x509 certificate.""
"I tried to initialize an AWS CloudHSM cluster, and received the error "InitializeCluster request failed: CloudHsmInvalidRequestException - TrustAnchor provided is not a valid x509 certificate."ResolutionYou must use a self-signed root certificate (customerCA.crt) to sign the cluster certificate signing request (CSR). Verify that the certificate is an issuing certificate or trust anchor root certificate with the following AWS CLI command:Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.$ openssl x509 -in customerCA.crt -text -nooutIf the certificate customerCA.crt is a root certificate, then the issuer and subject are the same.For more information, see Sign the CSR.Related informationWhat is AWS CloudHSM?Follow"
https://repost.aws/knowledge-center/cloudhsm-error-request-failed
What are some best practices for implementing Lambda-backed custom resources with CloudFormation?
I want to follow best practices when implementing AWS Lambda-backed custom resources with AWS CloudFormation.
"I want to follow best practices when implementing AWS Lambda-backed custom resources with AWS CloudFormation.ResolutionConsider the following best practices when implementing AWS Lambda-backed custom resources with AWS CloudFormation.Build your custom resources to report, log, and handle failureExceptions can cause your function code to exit without sending a response. CloudFormation requires an HTTPS response to confirm whether the operation is a success or a failure. An unreported exception causes CloudFormation to wait until the operation times out before starting a stack rollback. If the exception reoccurs during rollback, then CloudFormation waits again for a timeout before it ends in a rollback failure. During this time, your stack is unusable.To avoid timeout issues, include the following in the code that you create for your Lambda function:Logic to handle exceptionsThe ability to log the failure for troubleshooting scenariosThe ability to respond to CloudFormation with an HTTPS response confirming that an operation failedA dead-letter queue that lets you capture and deal with incomplete runsA cfn-response module to send a response to CloudFormationSet reasonable timeout periods, and report when they're about to be exceededIf an operation doesn't run within its defined timeout period, then the function raises an exception and no response is sent to CloudFormation.To avoid this issue, consider the following:Set the timeout value for your Lambda functions high enough to handle variations in processing time and network conditions.Set a timer in your function to respond to CloudFormation with an error when a function is about to time out. A timer can help prevent delays for custom resources.Build around Create, Update, and Delete eventsDepending on the stack action, CloudFormation sends your function a Create, Update, or Delete event. Because each event is handled differently, make sure that there are no unintended behaviors when any of the three event types is received.For more information, see Custom resource request types.Understand how CloudFormation identifies and replaces resourcesWhen an update initiates the replacement of a physical resource, CloudFormation compares the PhysicalResourceId that your Lambda function returns to the previous PhysicalResourceId. If the IDs differ, then CloudFormation assumes that the resource is replaced with a new physical resource.However, to allow for potential rollbacks, the old resource isn't implicitly removed. When the stack update is successfully completed, a Delete event request is sent with the old physical ID as an identifier. If the stack update fails and a rollback occurs, then the new physical ID is sent in the Delete event.Use PhysicalResourceId to uniquely identify resources so that when a Delete event is received, only the correct resources are deleted during a replacement.Design your functions with idempotencyAn idempotent function can be repeated numerous times with the same inputs, and the result is the same as doing it only once. Idempotency makes sure that retries, updates, and rollbacks don't create duplicate resources or introduce errors.For example, CloudFormation invokes your function to create a resource, but doesn't receive a response that the resource is successfully created. CloudFormation might invoke the function again, and create a second resource. 
The first resource can then become orphaned.Implement your handlers to correctly handle rollbacksWhen a stack operation fails, CloudFormation attempts to roll back and revert all resources to their prior state. This results in different behaviors depending on whether the update caused a resource replacement.To make sure that rollbacks are successfully completed, consider the following:Avoid implicitly removing old resources until a Delete event is received.Use accustom or the Custom Resource Helper on the GitHub website to help you follow best practices when using custom resources in CloudFormation.Related informationCustom resourcesFollow"
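For reference, the HTTPS response that your handler (or the cfn-response module) sends to the pre-signed S3 URL in the request event is a small JSON document similar to the following sketch. The values shown are placeholders: StackId, RequestId, and LogicalResourceId are echoed back from the event, and PhysicalResourceId is the identifier discussed above.

{
  "Status": "SUCCESS",
  "Reason": "See CloudWatch Logs for details",
  "PhysicalResourceId": "my-custom-resource-001",
  "StackId": "arn:aws:cloudformation:us-east-1:111122223333:stack/my-stack/example-guid",
  "RequestId": "request-id-from-the-event",
  "LogicalResourceId": "MyCustomResource",
  "Data": { "ExampleOutput": "value" }
}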
https://repost.aws/knowledge-center/best-practices-custom-cf-lambda
What are the best practices for using multiple Network Load Balancers with an endpoint service?
I want to use multiple Network Load Balancers with an endpoint service.
"I want to use multiple Network Load Balancers with an endpoint service.ResolutionWhen you associate multiple Network Load Balancers with an endpoint service, the endpoint interface connects to only one Network Load Balancer per Availability Zone. To make sure that all endpoint consumers have a consistent service experience, use the same configuration across your Network Load Balancers.It's a best practice to identically configure the following items for each of your Network Load Balancers:Listener port and protocolTarget groups and targetsThe application that runs on the target instancesRelated informationWhat is a Network Load Balancer?Access an AWS service using an interface VPC endpointFollow"
https://repost.aws/knowledge-center/vpc-network-load-balancers-with-endpoint
Why did Amazon EC2 terminate my Spot Instance?
"I launched a Spot Instance but now I can't find it in the Amazon Elastic Compute Cloud (Amazon EC2) console. Or, my Amazon EMR node Spot Instance was terminated."
"I launched a Spot Instance but now I can't find it in the Amazon Elastic Compute Cloud (Amazon EC2) console. Or, my Amazon EMR node Spot Instance was terminated.ResolutionAmazon EC2 can interrupt your Spot Instance at any time with a two-minute notice for the following reasons:Lack of Spot capacity: Amazon EC2 can interrupt your Spot Instance when its capacity is required. Usually, Amazon EC2 reclaims your instance to repurpose capacity. Amazon EC2 might also terminate your Spot Instance for issues such as host maintenance or hardware decommission.Amazon EC2 can't meet your Spot Instance request constraints: Some Spot requests include a constraint, such as a launch group or a specific Availability Zone group. The Spot Instances are terminated as a group when the constraint can no longer be met.The Spot Price is higher than the maximum price that you set: When you request a Spot Instance, you can specify a maximum price for the instance. By default, this maximum price is equal to the On-Demand pricing for that instance type. When the Spot Price increases beyond your set maximum price, your Spot Instance is interrupted. If the interruption behavior is "stop" or "hibernate", your Spot Instance starts again when the Spot price reduces to under your maximum price. Setting a high maximum price doesn't mean that a Spot Instance is available. For more information, see How Spot Instances work.Because of these interruptions, it's a best practice to use Spot Instances for workloads that are stateless, fault-tolerant, and flexible enough to withstand interruptions.Note: When Amazon EC2 interrupts a Spot Instance, the Spot Instance is terminated by default. You can change this default behavior to hibernate, or you can stop the instance instead of terminating it. For more information, see Spot Instance interruptions.To determine why Amazon EC2 interrupted your Spot Instance, do the following:Open the Amazon EC2 console, and then select Spot Requests.Select the Request ID of the terminated Spot Instance.View the Status field under the Description section to see the reason code for why the instance was terminated. For example, if Amazon EC2 didn’t have enough Spot Capacity, the Status field says "instance-terminated-no-capacity". For a complete list of reason codes, see Spot request status codes.You can use Spot Instance interruption notices to work around potential interruptions. For more information, see Taking advantage of Amazon EC2 Spot Instance interruption notices.Related informationSpot request statusSpot Instance best practicesWhy is my Spot Instance terminating even though the maximum price is higher than the Spot price?When should you use Spot Instances - Amazon EMR documentationFollow"
https://repost.aws/knowledge-center/ec2-spot-instance-unexpected-termination
How do I use table statistics to monitor an AWS DMS task?
How do I use table statistics to monitor an AWS Database Migration Service (AWS DMS) task?
"How do I use table statistics to monitor an AWS Database Migration Service (AWS DMS) task?ResolutionOpen the AWS DMS console, and then choose Database migration tasks.Choose the name of the task that you want to monitor.Note: If your table had an issue that you corrected, select the table, and then choose Reload table data. This action reloads one or more tables, so you don't have to restart the task.From the Table statistics section, you can view column details for your table.The Load state column has the following states:Table does not exist: AWS DMS can't find the table on the source endpoint.Before load: The full load process is enabled, but it hasn't started yet.Full load: The full load process is in progress.Table completed: Full load is completed.Table cancelled: Loading of the table is canceled.Table error: An error occurred when loading the table.The Inserts, Deletes, Updates, and DDLs columns show the number of these statements that were replicated during the change data capture (CDC) phase.The Full load rows column shows the total number of rows that were migrated during the full load phase.The Total column shows the total number of rows that were migrated during the full load and the applied Insert, Update, or Delete statements during the CDC phase.The Validation state column shows the state of the validation, such as Not enabled, Pending records, Validated, or Error.The Validation pending, Validation failed, and Validation suspended columns show the number of transactions for each transaction type. For example, this value represents the number of rows that failed validation, are suspended, or are in pending state. For more information, see Validating AWS DMS tasks.The Last updated column shows the date and time that the information contained in the table statistics tab was last updated.If you stop and restart a task, the table statistics are reset. For example, if you perform a CDC with entries for inserts, updates, and deletes, stopping and resuming a task resets the count to 0.Related informationMonitoring AWS DMS tasksFollow"
https://repost.aws/knowledge-center/table-statistics-aws-dms-task
How do I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL or Amazon Aurora MySQL instance?
I'm experiencing high CPU utilization on my Amazon Relational Database Service (Amazon RDS) for MySQL DB instances or my Amazon Aurora MySQL-Compatible Edition instances. How can I troubleshoot and resolve high CPU utilization?
"I'm experiencing high CPU utilization on my Amazon Relational Database Service (Amazon RDS) for MySQL DB instances or my Amazon Aurora MySQL-Compatible Edition instances. How can I troubleshoot and resolve high CPU utilization?Short descriptionIncreases in CPU utilization can be caused by several factors, such as user-initiated heavy workloads, multiple concurrent queries, or long-running transactions.To identify the source of the CPU usage in your Amazon RDS for MySQL instance, review the following approaches:Enhanced MonitoringPerformance InsightsQueries that detect the cause of CPU utilization in the workloadLogs with activated monitoringAfter you identify the source, you can analyze and optimize your workload to reduce CPU usage.ResolutionUsing Enhanced MonitoringEnhanced Monitoring provides a view at the operating system (OS) level. This view can help identify the cause of a high CPU load at a granular level. For example, you can review the load average, CPU distribution (system% or nice%), and OS process list.Using Enhanced Monitoring, you can check the loadAverageMinute data in intervals of 1, 5, and 15 minutes. A load average that's greater than the number of vCPUs indicates that the instance is under a heavy load. Also, if the load average is less than the number of vCPUs for the DB instance class, CPU throttling might not be the cause for application latency. Check the load average to avoid false positives when diagnosing the cause of CPU usage.For example, if you have a DB instance that's using a db.m5.2xlarge instance class with 3000 Provisioned IOPS that reaches the CPU limit, you can review the following example metrics to identify the root cause of the high CPU usage. In the following example, the instance class has eight vCPUs associated with it. For the same load average, exceeding 170 indicates that the machine is under heavy load during the timeframe measured:Load Average MinuteFifteen170.25Five391.31One596.74CPU UtilizationUser (%)0.71System (%)4.9Nice (%)93.92Total (%)99.97Note: Amazon RDS gives your workload a higher priority over other tasks that are running on the DB instance. To prioritize these tasks, workload tasks have a higher Nice value. As a result, in Enhanced Monitoring, Nice% represents the amount of CPU being used by your workload against the database.After turning on Enhanced Monitoring, you can also check the OS process list that's associated with the DB instance. Enhanced monitoring shows a maximum of 100 processes. This can help you identify which processes have the largest impact on performance based on CPU and memory use.In the operating system (OS) process list section of Enhanced Monitoring, review the OS processes and RDS processes. Confirm the percentage of CPU utilization of a mysqld or Aurora process. These metrics can help you confirm whether the increase in CPU utilization is caused by OS or by RDS processes. Or, you can use these metrics to monitor any CPU usage increases caused by mysqld or Aurora. You can also see the division of CPU utilization by reviewing the metrics for cpuUtilization. For more information, see Monitoring OS metrics with Enhanced Monitoring.Note: If you activate Performance Schema, then you can map the OS thread ID to the process ID of your database. For more information, see Why is my Amazon RDS DB instance using swap memory when I have sufficient memory?Using Performance InsightsYou can use Performance Insights to identify the exact queries that are running on the instance and causing high CPU usage. 
First, activate Performance Insights for MySQL. Then, you can use Performance Insights to optimize your workload. Be sure to consult with your DBA.To see database engines that you can use with Performance Insights, see Monitoring DB load with Performance Insights on Amazon RDS.Using queries to detect the cause of CPU utilization in the workloadBefore you can optimize your workload, you must identify the problematic query. You can run the following queries while the high CPU issue is occurring to identify the root cause of the CPU utilization. Then, optimize your workload to reduce your CPU usage.The SHOW PROCESSLIST command shows you the threads that are running currently on your MySQL instance. Sometimes, the same set of statements might continue running without completion. When this happens, the subsequent statements must wait for the first set of statements to finish. This is because InnoDB row-level locking might be updating the same rows. For more information, see SHOW PROCESSLIST statement on the MySQL website.SHOW FULL PROCESSLIST;Note: Run the SHOW PROCESSLIST query as the primary system user. If you're not the primary system user user, then you must have MySQL PROCESS server administration privileges to see all the threads running on a MySQL instance. Without admin privileges, SHOW PROCESSLIST shows only the threads associated with the MySQL account that you're using.The INNODB_TRX table provides information about all currently running InnoDB transactions that aren't read-only transactions.SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX;The INNODB_LOCKS table provides information about locks that an InnoDB transaction has requested but hasn't received.For MySQL 5.7 or earlier:SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCKS;For MySQL 8.0:SELECT * FROM performance_schema.data_locks;The INNODB_LOCK_WAITS table provides one or more rows for each blocked InnoDB transaction.For MySQL 5.7 or earlier:SELECT * FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS;For MySQL 8.0:SELECT * FROM performance_schema.data_lock_waits;You can run a query similar to the following to see the transactions that are waiting, and the transactions that are blocking the waiting transactions. For more information, see Using InnoDB transaction and locking information on the MySQL website.For MySQL 5.7 or earlier:SELECT r.trx_id waiting_trx_id, r.trx_mysql_thread_id waiting_thread, r.trx_query waiting_query, b.trx_id blocking_trx_id, b.trx_mysql_thread_id blocking_thread, b.trx_query blocking_queryFROM information_schema.innodb_lock_waits wINNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_idINNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;For MySQL 8.0:SELECT r.trx_id waiting_trx_id, r.trx_mysql_thread_id waiting_thread, r.trx_query waiting_query, b.trx_id blocking_trx_id, b.trx_mysql_thread_id blocking_thread, b.trx_query blocking_queryFROM performance_schema.data_lock_waits wINNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_engine_transaction_idINNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_engine_transaction_id;The SHOW ENGINE INNODB STATUS query provides information from the standard InnoDB monitor about the state of the InnoDB storage engine. For more information, see SHOW ENGINE statement on the MySQL website.SHOW ENGINE INNODB STATUS;The SHOW [GLOBAL | SESSION] STATUS provides information about the server status. 
For more information, see SHOW STATUS statement on the MySQL website.SHOW GLOBAL STATUS;Note: These queries were tested on Aurora 2.x (MySQL 5.7); Aurora 1. x (MySQL 5.6); MariaDB 10.x. Additionally, the INFORMATION_SCHEMA.INNODB_LOCKS table is no longer supported as of MySQL 5.7.14 and removed in MySQL 8.0. The performance_schema.data_locks table replaces the INFORMATION_SCHEMA.INNODB_LOCKS table. For more information, see The data_locks table on the MySQL website.Analyzing logs and turning on monitoringWhen you analyze logs or want to activate monitoring in Amazon RDS for MySQL, consider the following approaches:Analyze the MySQL General Query Log to view what the mysqld is doing at a specific time. You can also view the queries that are running on your instance at a specific time, including information about when clients connect or disconnect. For more information, see The General Query Log on the MySQL website.Note: When you activate the General Query Log for long periods, the logs consume storage and can add to performance overhead.Analyze the MySQL Slow Query Logs to find queries that take longer to run than the seconds that you set for long_query_time. You can also review your workload and analyze your queries to improve performance and memory consumption. For more information, see The Slow Query Log on the MySQL website. Tip: When you use Slow Query Log or General Query Log, set the parameter log_output to FILE.Use the MariaDB Audit Plugin to audit database activity. For example, you can track users that are logging on to the database or queries that are run against the database. For more information, see MariaDB Audit Plugin support.If you use Aurora for MySQL, then you can also use Advanced Auditing. Auditing can give you more control over the types of queries you want to log. Doing so reduces the overhead for logging.Use the innodb_print_all_deadlocks parameter to check for deadlocks and resource locking. You can use this parameter to record information about deadlocks in InnoDB user transactions in the MySQL error log. For more information, see innodb_print_all_deadlocks on the MySQL website.Analyzing and optimizing the high CPU workloadAfter you identify the query that's increasing CPU usage, optimize your workload to reduce the CPU consumption.If you see a query that's not required for your workload, you can terminate the connection using the following command:CALL mysql.rds_kill(processID);To find the processID of a query, run the SHOW FULL PROCESSLIST command.If you don't want to end the query, then optimize the query using EXPLAIN. The EXPLAIN command shows the individual steps involved in running a query. For more information, see Optimizing Queries with EXPLAIN on the MySQL website.To review profile details, activate PROFILING. The PROFILING command can indicate resource usage for statements that are running during the current session. For more information, see SHOW PROFILE statement on the MySQL website.To update any table statistics, use ANALYZE TABLE. The ANALYZE TABLE command can help the optimizer choose an appropriate plan to run the query. For more information, see ANALYZE TABLE statement on the MySQL website.Related informationAmazon RDS for MySQLAmazon RDS for MariaDBHow do I activate and monitor logs for an Amazon RDS MySQL DB instance?Tuning Amazon RDS for MySQL with Performance Insights Follow"
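As a sketch of turning on the monitoring features discussed above from the AWS CLI, you can run a command similar to the following. The instance identifier and monitoring role ARN are hypothetical, and the change assumes that the role already trusts the monitoring.rds.amazonaws.com service:

aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --monitoring-interval 60 \
  --monitoring-role-arn arn:aws:iam::111122223333:role/rds-monitoring-role \
  --enable-performance-insights \
  --apply-immediately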
https://repost.aws/knowledge-center/rds-instance-high-cpu
How can I check who modified a Lambda function and what changes were made?
I want to find out who modified an AWS Lambda function and what changes were made.
"I want to find out who modified an AWS Lambda function and what changes were made.ResolutionYou can use AWS CloudTrail to track which users are modifying Lambda functions and what changes were made. CloudTrail is turned on by default for your AWS account.For an ongoing record of events in your AWS account, create a trail. Using a trail, CloudTrail creates logs of API calls made on your account. These logs are delivered to an Amazon Simple Storage Service (Amazon S3) bucket that you specify. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in Event history.Event history1.    Open the CloudTrail console.2.    In the navigation pane, choose Event history.3.    Follow the instructions for viewing, displaying, and filtering CloudTrail events for your use case.You can also download recorded event history as a file in CSV or JSON format.For an example CloudTrail log entry for the GetFunction and DeleteFunction API actions, see Understanding Lambda log file entries in the Lambda Developer Guide.Important: The eventName might include date and version information, such as "GetFunction20150331", but it's still referring to the same public API.For a list of all supported Lambda APIs, see Actions in the Lambda Developer Guide.CloudTrail logs1.    Open the CloudTrail console.2.    In the navigation pane, choose Trails.3.    Select the S3 bucket value for the trail that you want to view. The Amazon S3 console opens and shows that bucket, at the top level for the log files.4.    Choose the folder for the AWS Region where you want to review log files.5.    Navigate the bucket folder structure to the year, the month, and the day where you want to review logs of activity in that Region.6.    Select the file name, and then choose Download.7.    Unzip the file, and then use your favorite JSON file viewer to see the log.The log contains information about requests for resources in your account. For example, who made the request, the services used, and the actions performed. For more information, see Understanding Lambda log file entries.Related informationUsing AWS Lambda with AWS CloudTrailLogging Lambda API calls with CloudTrailHow do I know which user made a particular change to my AWS infrastructure?Follow"
https://repost.aws/knowledge-center/lambda-function-modified
How do I troubleshoot issues related to tagging in ECS tasks?
I have an issue with Amazon Elastic Container Service (Amazon ECS) task tags. How do I troubleshoot this?
"I have an issue with Amazon Elastic Container Service (Amazon ECS) task tags. How do I troubleshoot this?ResolutionWhen setting tags with Amazon ECS, you might have the following issues:Your tags aren't propagated from service or task definition to tasks.Your tags have an outdated Amazon Resource Name (ARN) and resource ID format.You're unable to add tags to your ECS resources due to missing AWS Identity and Access Management (IAM) permissions or tag restrictions.You're unable to see ECS tags in the AWS Billing Dashboard.To troubleshoot these issues, do the following:Verify that PropagateTags parameter is used to propagate from service or task definitions to tasksThe PropagateTags parameter can be used to copy tags from the task definition or service to the task. This can be done when you're running a task or creating a service. This parameter is not turned on by default.You can check if the PropagateTags is being used in a specific service by running the following command in AWS CLI and replacing servicename, clustername, and region with the appropriate values:aws ecs describe-services --services <servicename> --cluster <clustername> --region <region> --query 'services[*].propagateTags' --output textTo configure tags to propagate from the service or task definition using CLI, see RunTask and CreateService API.To activate tag propagation using the console:Open the Amazon ECS console.Select the AWS Region for your ECS resource.In the navigation pane, select Task Definitions.Select the task definition from the resource list, and choose Actions. Then, choose Create Service or Run Task.In the Task tagging configuration, next to Propagate tags from, choose Service or Task definitions.Note: The default option is Do not propagate.To use tags in ECS using AWS CloudFormation, you need to declare the entity AWS::ECS::Service using the properties EnableECSManagedTags and PropagateTags with the value: SERVICE or TASK_DEFINITION.Note:Using ECS service tags related properties after stack creation in CloudFormation will require a stack update and resource replacement. That means the service will be deleted and recreated through CloudFormation.Using the PropagateTags parameter can only be done when you're running a task or creating a service. For more information, see RunTask and CreateService API.You have access to the same configurations for Scheduled tasks as you do for tasks launched directly using the Amazon ECS RunTask API.Verify that you are using the new ARN formatTo be able to tag Amazon ECS resources, you must use the new Amazon Resource Name (ARNs) and IDs formats.Example of the two formats:Old format: arn:aws:ecs:region:aws_account_id:service/service-nameNew format: arn:aws:ecs:region:aws_account_id:service/cluster-name/service-nameTo migrate your ECS deployment to the new ARN and resource ID format, see Migrating your Amazon ECS deployment to the new ARN and resource ID format.Note: Your existing resources will not receive the new ARN format while tagging until they are recreated.Review that the IAM entity has the required permissions and check tags restrictionsIf you are unable to add tags to your ECS service, do the following:Check CloudTrail events in CloudTrail console for TagResource events.If you see one of the following errors: AccessDenied or The tags cannot be updated at this time. 
Wait a few minutes and try again, then the IAM entity doesn't have the ecs:TagResource permissions.To solve this, add ecs:TagResource permissions to the IAM entities.Once the permissions have been added, retry adding the tags to ECS cluster.Confirm that your ECS tags are within the tags restrictions. To review tags restrictions, see Tag restrictions.Check if it is an AWS Billing and Cost Management issueTo verify that the required tags are present on ECS tasks level, run the following command in AWS CLI and replace value with the ARN:aws ecs list-tags-for-resource --resource-arn <value>To verify that the required tags are present on ECS tasks level using the console:Open the Amazon ECS console.Select the AWS Region for your ECS resource.In the navigation pane, select a resource type (for example, Clusters).Select the resource from the resource list and choose **Tags.**If tags exist, they will be listed.If you are looking for managed tags, then ECS-managed must be turned on. Verify the ECS-managed status by running the following command and replacing servicename, clustername, and region with the appropriate values:aws ecs describe-services --services <servicename> --cluster <clustername> --region <region> --query 'services[*].enableECSManagedTags' --output textThe command output will contain enableECSManagedTags value.You can activate ECS-managed tags while creating service or running task using CLI, for more information, see RunTask and CreateService API.To activate ECS-managed tags using the console:Open the Amazon ECS console.Select the AWS Region for your ECS resource.In the navigation pane, select Task Definitions.Select the task definition from the resource list, and choose Actions. Then, choose Create Service or Run Task.In the Task tagging configuration, choose Enable ECS managed tags.If tags used for billing are listed but can't be seen in AWS Cost Explorer, be sure that tags are activated from the Billing and Cost Management console. To activate Cost allocation tags, see Activating user-defined cost allocation tags.Note: Every tag that has to be viewed as a filter in the Cost Explorer needs to be activated. It can take up to 24 hours for tags to activate.Related informationAmazon ECS troubleshootingTagging your Amazon ECS resourcesFollow"
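As a sketch of the CLI equivalent of the console steps above, the following command creates a service with tag propagation and ECS managed tags turned on, and adds a user-defined tag. The cluster, service, and task definition names are hypothetical placeholders:

aws ecs create-service \
  --cluster mycluster \
  --service-name myservice \
  --task-definition mytaskdef:1 \
  --desired-count 2 \
  --propagate-tags TASK_DEFINITION \
  --enable-ecs-managed-tags \
  --tags key=team,value=analytics \
  --region us-east-1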
https://repost.aws/knowledge-center/ecs-troubleshoot-tagging-tasks
How can I use system policies to control access to my EFS file system?
I want to access my Amazon Elastic File System (Amazon EFS) file system across accounts so that I can share files. How can I do this using AWS Identity and Access Management (IAM) authorization for NFS clients and EFS access points?
"I want to access my Amazon Elastic File System (Amazon EFS) file system across accounts so that I can share files. How can I do this using AWS Identity and Access Management (IAM) authorization for NFS clients and EFS access points?Short descriptionYou can mount your Amazon EFS file system by using IAM authorization for NFS clients and access points with the Amazon EFS mount helper. By default, the mount helper uses DNS to resolve the IP address of your mount target. So if you're mounting from another account or Amazon Virtual Private Cloud (Amazon VPC), you must resolve the Amazon EFS mount target IP manually.PrerequisitesThe VPCs of your NFS client and your EFS file system are connected using either a VPC peering connection or a VPC Transit Gateway. This allows Amazon Elastic Compute Cloud (Amazon EC2) instances from the same or different accounts, to access EFS file systems in a different VPC.Your IAM role (instance role or any other role) has console or read access on both the Amazon EFS and NFS client resources.The Amazon EFS client and the botocore package are installed in the NFS client.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS Command Line Interface (AWS CLI).In this example, the EFS file system is present in account A and the NFS client is present in account B.    1.    To access and mount the cross account EFS file system, add a policy statement in the an IAM policy similar to this:{ "Sid": "EfsPermissions", "Effect": "Allow", "Action": [ "elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess" ], "Resource": "arn:aws:elasticfilesystem:region:account-id:file-system/file-system-id" }This statement allows the IAM role to have mount, write and root access on the EFS file system. If your NFS client is an EC2 instance, attach the IAM role to the instance.2.    Or, you can assume the role using the AWS CLI. Note that the AWS CLI can't resolve the DNS of an EFS file system present in another VPC. So, first determine the right mount target IP for your client. Then, configure the client to mount the EFS file system using that IP.To be sure of high availability, always use the mount target IP address in the same Availability Zone (AZ) as your NFS client. AZ name mappings might differ between accounts. Because you're mounting an EFS file system in another account, the NFS client and the mount target must be in the same AZ ID.To determine the AZ of your EC2 instance, call the DescribeAvailabilityZone API using one of these methods:Log in to the Amazon EC2 console, and choose Instances. Choose, EC2-Instance-ID, and then choose Networking. Under Networking details, you can find the Availability zone.-or-Run a command similar to this from the IAM entity that has sufficient read permissions for Amazon EC2 and get a similar output :$ aws ec2 describe-availability-zones --zone-name `curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`{ "AvailabilityZones": [ { "State": "available", "ZoneName": "us-east-2b", "Messages": [], "ZoneId": "use2-az2", "RegionName": "us-east-2" } ]}3.    To determine the mount target IP for the local AZ, all the DescribeMountTargets API using one of these methods:Log in to the Amazon EFS console, and choose File Systems. 
Choose, EFS-File-System-ID, and then under Network, note the IP address for your Availability zone.-or-Run a command similar to this from the IAM entity that has sufficient read permissions for Amazon EC2 and get a similar output :$ aws efs describe-mount-targets --file-system-id fs-cee4feb7{ "MountTargets": [ { "MountTargetId": "fsmt-a9c3a1d0", "AvailabilityZoneId": "use2-az2", "NetworkInterfaceId": "eni-048c09a306023eeec", "AvailabilityZoneName": "us-east-2b", "FileSystemId": "fs-cee4feb7", "LifeCycleState": "available", "SubnetId": "subnet-06eb0da37ee82a64f", "OwnerId": "958322738406", "IpAddress": "10.0.2.153" }, ... { "MountTargetId": "fsmt-b7c3a1ce", "AvailabilityZoneId": "use2-az3", "NetworkInterfaceId": "eni-0edb579d21ed39261", "AvailabilityZoneName": "us-east-2c", "FileSystemId": "fs-cee4feb7", "LifeCycleState": "available", "SubnetId": "subnet-0ee85556822c441af", "OwnerId": "958322738406", "IpAddress": "10.0.3.107" } ]}4.    From the output you get, note the IP address that corresponds to the mount target in the AZ of the EC2 instance.5.    Use the IP address you obtained and add the hosts entry in the /etc/hosts file in the NFS client. The format of the DNS name is mount-target-IP-Address file-system-ID.efs.region.amazonaws.com.See this example command:$ echo "10.0.2.153 fs-cee4feb7.efs.us-east-2.amazonaws.com" | sudo tee -a /etc/hosts6.    Mount the EFS file system using the mount helper.Note: In a cross-account scenario, you can't use the usual NFS command, so botocore and the Amazon EFS client is necessary.After following these steps, you are able to mount the EFS file system and start using it. If you experience any errors, see the troubleshooting guide.Related informationCreating file system policiesFollow"
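For step 6, the mount helper command looks similar to the following sketch. It assumes that the amazon-efs-utils package is installed, that the IAM role from step 1 is attached to the instance, and that the file system ID matches the /etc/hosts entry you added:

sudo mkdir -p /mnt/efs
# tls turns on encryption in transit; iam signs the mount request with the instance role.
sudo mount -t efs -o tls,iam fs-cee4feb7:/ /mnt/efs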
https://repost.aws/knowledge-center/access-efs-across-accounts
How do I create an AWS access key?
"I need an AWS access key to allow a program, script, or developer to have programmatic access to the resources on my AWS account."
"I need an AWS access key to allow a program, script, or developer to have programmatic access to the resources on my AWS account.ResolutionAn access key grants programmatic access to your resources. This means that you must guard the access key as carefully as the AWS account root user sign-in credentials.It's a best practice to do the following:Create an IAM user, and then define that user's permissions as narrowly as possible.Create the access key under that IAM user.For more information, see What are some best practices for securing my AWS account and its resources?Related informationBest practices for managing AWS accountsAccess management for AWS resourcesFollow"
https://repost.aws/knowledge-center/create-access-key
How do I troubleshoot throttling errors in Kinesis Data Streams?
"My Amazon Kinesis data stream is throttling. However, the stream didn't exceed the data limits. How do I detect "Rate Exceeded" or "WriteProvisionedThroughputExceeded" errors?"
"My Amazon Kinesis data stream is throttling. However, the stream didn't exceed the data limits. How do I detect "Rate Exceeded" or "WriteProvisionedThroughputExceeded" errors?Short DescriptionYou can detect and troubleshoot throttling errors in your Kinesis data stream by doing the following:Enable enhanced monitoring and compare IncomingBytes values.Log full records to perform stream count and size checks.Use random partition keys.Check for obscure metrics or micro spikes in Amazon CloudWatch metrics.ResolutionTo prevent "Rate Exceeded" or "WriteProvisionedThroughputExceeded" errors in your Kinesis data stream, try the following:Enable enhanced monitoring and compare IncomingBytes valuesTo verify whether you have hot shards, enable enhanced monitoring on your Kinesis data stream. When you enable shard level monitoring in a Kinesis data stream, then you can investigate the shards individually. You can examine the stream on a per shard basis to identify which shards are receiving more traffic or breaching any service limits.Note: Hot shards are often excluded from the Kinesis data stream metrics when the enhanced monitoring setting is disabled. For more information about hot shards, see Strategies for Resharding.You can also compare the IncomingBytes average and maximum values to verify whether there are hot shards in your stream. If enhanced monitoring is enabled, you can also see which specific shards deviate from the average.Log full records to perform stream count and size checksTo identify micro spikes or obscure metrics that breach stream limits, log full records or custom code to perform stream count and size checks. Then, evaluate the number and size of records that are sent to the Kinesis data stream. This can help you identify any spikes that breach data limits.You can also take the value from a one minute data point and divide it by 60. This gives you an average value per second to help determine if throttling is present within the time period specified. If the successful count doesn't breach the limits, then add the IncomingRecords metric to the WriteProvisionedThroughputExceeded metric, and retry the calculation. The IncomingRecords metric signals successful or accepted records, whereas the WriteProvisionedThroughputExceeded metric indicates how many records were throttled.Note: Check the sizes and number of records that are sent from the producer. If the combined total of the incoming and throttled records are greater than the stream limits, then consider changing the number of records.The PutRecord.Success metric is also a good indicator for operations that are failing. When there is a dip in the success metric, then investigate the data producer logs to find the root causes of the failures. If throttling occurs, establish logging on the data producer side to determine the total amount and size of submitted records. If the total number of records in the PutRecord.Success metric breaches the stream limits, then your Kinesis data stream throttles. For more information about Kinesis stream limits, see Kinesis Data Streams Quotas.Use random partition keysIf there are hot shards in your Kinesis data stream, use a random partition key to ingest your records. If the operations already use a random partition key, then adjust the key to correct the distribution. Then, monitor the key for changes in metrics such as IncomingBytes and IncomingRecords. 
If the maximum and average patterns are close together, then there are no hot shards.Check for obscure metrics or micro spikes in Amazon CloudWatch metricsIf there are CloudWatch metrics that don't clearly indicate breaches or micro spikes in the data, try the following solutions:Increase the number of shards, and then split the size of log records.Scale your Kinesis data stream to match the producer output.Use an exponential backoff and retry mechanism in the producer logic.Change the configuration settings for your producer so that your write rate is decreased.Limit the request rate of the producer and the number of records that are sent (per second) to match the capacity of the stream.For more information about stream quotas, see Kinesis Data Stream Quotas.Related InformationDeveloping and Using Custom Consumers with Dedicated Throughput (Enhanced Fan-Out)Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatchFollow"
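To turn on the shard-level metrics mentioned above from the AWS CLI, run a command similar to the following sketch. The stream name is a hypothetical placeholder:

aws kinesis enable-enhanced-monitoring \
  --stream-name my-data-stream \
  --shard-level-metrics IncomingBytes IncomingRecords WriteProvisionedThroughputExceeded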
https://repost.aws/knowledge-center/kinesis-data-stream-throttling-errors
How do I install an SSL/TLS certificate on my EC2 Windows instance running IIS server?
I want my web application or website running on an Amazon Elastic Compute Cloud (Amazon EC2) instance to use HTTPS. How do I install my own SSL certificate on an EC2 Windows instance running Internet Information Services (IIS) server to allow this?
"I want my web application or website running on an Amazon Elastic Compute Cloud (Amazon EC2) instance to use HTTPS. How do I install my own SSL certificate on an EC2 Windows instance running Internet Information Services (IIS) server to allow this?Short descriptionNote: If you're using Elastic Load Balancing (ELB), you can use an Amazon-provided certificate from AWS Certificate Manager (ASM). For more information, see How can I associate an ACM SSL/TLS certificate with a Classic, Application, or Network Load Balancer?There are three steps to install an SSL/TLS certificate on your EC2 Windows instance:Create a Certificate Signing Request (CSR) and request your SSL certificate.Install your SSL certificate.Assign the SSL certificate to your IIS deployment.You can also modify an existing SSL certificate assigned to a site.ResolutionStep 1: Create a CSR and request your SSL certificate1.    Open the IIS Manager by selecting Start, Control Panel, Administrative Tools, Internet Information Services (IIS) Manager.2.    Select Connections, and then select the name of the server where you're installing the certificate.3.    In the IIS section of the home page, select Server Certificates.4.    On the Server Certificates console, select Actions, and then select Create Certificate Request. The Request Certificate wizard opens.5.    Enter the following values in the Request Certificate wizard:Common name: Enter the fully qualified domain name (FQDN) of the domain (for example, www.example.com).Organization: Enter your company's name.Organizational unit: Optionally, enter name of the department within your organization.City/locality: Enter the city where the company is legally located.State/province: Enter the state or province where the company is legally located.Country: Enter the country where the company is legally located.6.    Cryptographic Service Provider Properties, enter the information following:Cryptographic service provider: Select Microsoft RSA Channel Cryptographic Provider. You can select other options, if needed.Bit length: Use 2048, which is the current best practice, unless a higher value is required.7.    Select Browse next to the Specify a file name for the certificate request field to browse to the location where you're saving the CSR.Note: If you don't select a location, the file saves to C:\windows\system32.8.    Select Next.9.    Select Finish.10.    Use a text editor to copy the text from the created file. The following is an example of the text:-----BEGIN NEW CERTIFICATE REQUEST-----<examplekey>-----END NEW CERTIFICATE REQUEST-----11.    Send this value, including the first and last lines, to your chosen certificate provider so that they can issue the certificate.When the certificate is available, move to Step 2: Install your SSL certificate.Step 2: Install your SSL certificate1.    Save the certificate file issued by the chosen provider to the server where you created the Certificate Signing Request (CSR).2.    Open the IIS Manager by selecting Start, Control Panel, Administrative Tools, Internet Information Services (IIS) Manager.3.    Select Connections, and then select the name of the server where you're installing the certificate.4.    In the IIS section, select Server Certificates.5.    Select Actions, Complete Certificate Request. A wizard launches.6.    
For Specify Certificate Authority Response, enter the following information:File name containing the certificate authority's response: Select the certificate (.cer) file.Friendly name: Enter a name for you to identify the certificate. For easier identification, consider adding the expiration date and use case.Select a certificate store for the new certificate: Select Web Hosting.Your SSL certificate is installed on the server and ready for use. Now you must assign it to your site.Step 3: Assign the SSL certificate to your IIS deployment1.    Open the IIS Manager by selecting Start, Control Panel, Administrative Tools, Internet Information Services (IIS) Manager.2.    Under Connections, expand the section of the server where you installed the certificate.3.    Expand the Sites section, and then select the site where you want to assign the certificate.4.    On the site's home page, select Bindings.5.    In the Site Bindings wizard, select Add.6.    On the Add Site Binding enter the following information:Type: Select HTTPS.IP Address: Select the IP Address of the site or select All Unassigned.Port: Enter 443. Port 443 is the port used by HTTPS for SSL secured traffic.SSL Certificate: Select the SSL certificate for this site (for example, example.com).Now the SSL certificate is assigned to this specific site for use with HTTPS.Modify an existing SSL certificate assigned to a siteTo modify a certificate assigned to a site, do the following:1.    Follow the steps in Step 1: Create a CSR and request your SSL certificate.2.    Follow the steps in Step 2: Install your SSL certificate.3.    Follow steps 1 through 4 in the Step 3: Assign the SSL certificate to your IIS deployment.4.    In the Site Bindings wizard, find the HTTPS binding, select it, and then choose Edit.5.    Select the new certificate from the SSL certificate dropdown list, and then select Ok.Follow"
https://repost.aws/knowledge-center/ec2-windows-install-ssl-certificate
How can I customize my log files in Elastic Beanstalk?
"I want to customize my log files in AWS Elastic Beanstalk, and be sure that my custom application logs are included and streamed to Amazon CloudWatch."
"I want to customize my log files in AWS Elastic Beanstalk, and be sure that my custom application logs are included and streamed to Amazon CloudWatch.Short DescriptionIf the default log files that Elastic Beanstalk collects and streams don't meet the needs of your application or use case, then consider the following options to customize the collection and streaming of your log files:Include your custom logs in the log bundleRotate your logs(Optional) Stream your logs to CloudWatchNote: If you have a custom log file or if one of your logs is missing from the default logs, then you can further customize your log configuration.ResolutionInclude your custom logs in the log bundleWhen you request logs from Elastic Beanstalk, Elastic Beanstalk returns default log files from the Amazon Elastic Compute Cloud (Amazon EC2) instances in your environment. However, you might not receive these default log files if your application has a unique log location.To get Elastic Beanstalk to return your log files from a unique log location, extend the default log task configuration.Rotate your logsTo prevent your application log files from taking up too much disk space or even exhausting disk space, rotate your old log files with log rotation.Rotating your logs makes sure that old logs are deleted automatically from your environment's EC2 instances. If you want your old logs to persist, you can enable rotated logs to be uploaded to Amazon Simple Storage Service (Amazon S3) before the logs are deleted from an instance.(Optional) Stream your logs to CloudWatchIn production applications, it's a best practice to stream your logs to a remote storage solution, such as CloudWatch. To learn how to enable log streaming on Elastic Beanstalk, see Streaming Log Files to Amazon CloudWatch Logs or Using Elastic Beanstalk with Amazon CloudWatch Logs.If you want to stream custom log locations, see Instance Log Streaming Using Configuration Files.Streaming your logs to CloudWatch can help safeguard your data. For example, if your Elastic Beanstalk environment has a problem with an EC2 instance that terminates, then you can still recover your logs from CloudWatch. You can also use log rotation to protect against data loss.Related InformationWhat Is Amazon CloudWatch Logs?Troubleshooting CloudWatch Logs IntegrationStreaming Elastic Beanstalk Environment Health Information to Amazon CloudWatch LogsFollow"
https://repost.aws/knowledge-center/elastic-beanstalk-customized-log-files
Why did my CloudWatch alarm trigger when its metric doesn't have any breaching data points?
"My Amazon CloudWatch alarm changed to the ALARM state. When I check the metric that's being monitored, the CloudWatch graph doesn't show any breaching datapoints. However, the Alarm History contains an entry with a breaching data point. Why did my CloudWatch alarm trigger?"
"My Amazon CloudWatch alarm changed to the ALARM state. When I check the metric that's being monitored, the CloudWatch graph doesn't show any breaching datapoints. However, the Alarm History contains an entry with a breaching data point. Why did my CloudWatch alarm trigger?Short descriptionCloudWatch alarms evaluate metrics based on the datapoints that are available at a given moment. Alarm History captures a record of the datapoints that the alarm evaluated at that timestamp. However, it's possible for new samples to be published after the alarm evaluation occurred. These new samples might impact the value that's calculated when CloudWatch aggregates the metric data.ResolutionFind the breaching datapointsIf your CloudWatch graph doesn't show any breaching datapoints, then those datapoints occurred outside of the alarm evaluation time. To understand how this happens, refer to the following example.In this example, X number of samples are available when an alarm evaluation occurs, resulting in an aggregated value of A. Later, new samples are posted, resulting in Y number of samples that are retrieved for the same timestamp. This results in a different aggregated value of B.In this situation, an alarm is configured with the following parameters:Namespace: Web_AppMetric: ResponseTimeDimension: host,h_04254448d4e964956Statistic: AverageThreshold: 0.005ComparisonOperator: GreaterThanThresholdPeriod: 60 seconds (1 minute)Evaluation Period: 1When the alarm evaluates the period from 12:00:00 - 12:01:00 UTC, the following values are retrieved by the metric:Sample-1: 12:00:00 UTC, numeric value: 0.00675Sample-2: 12:00:00 UTC, numeric value: 0.00789Sample-3: 12:00:00 UTC, numeric value: 0.00421The average of these values is 0.006283333, which breaches the threshold of 0.005 seconds. Therefore, the alarm changes to the ALARM state. The alarm's history captures the aggregated values that exceed the threshold.The host might temporarily experience a performance issue, which impacts the client application that's responsible for publishing metrics. As a result, the host might not post datapoints at equally spaced intervals. In this situation, samples for 12:00 were published after the alarm evaluation occurred. Below are all the samples for the 12:00 timestamp:Sample-1: 12:00:00 UTC, numeric value: 0.00675Sample-2: 12:00:00 UTC, numeric value: 0.00789Sample-3: 12:00:00 UTC, numeric value: 0.00421Sample-4: 12:00:00 UTC, numeric value: 0.00002Sample-5: 12:00:00 UTC, numeric value: 0.00007After receiving an alert from this alarm, the user renders a CloudWatch graph to review the metric behavior. CloudWatch retrieves the five samples from 12:00:00 - 12:01:00 UTC and aggregates them as an average of 0.003788. This is different from the previously calculated value and is below the threshold. Therefore, the breaching datapoints are not visible in the time range because additional samples were posted after the alarm evaluation occurred.Increase the Alarm Evaluation IntervalAn alarm's evaluation interval is the number of data points multiplied by the period. Configuring Datapoints to Alarm can result in a longer evaluation interval. When an alarm generates false alerts due to delayed metrics, increasing the evaluation interval allows delayed datapoints to be considered in the alarm evaluation. This reduces the number of false alerts.The evaluation interval can be increased by one of two ways:1.    
Increase the period.In the following example, the period is increased to five minutes:Namespace: Web_AppMetric: ResponseTimeDimension: host,h_04254448d4e964956Statistic: AverageThreshold: 0.005ComparisonOperator: GreaterThanThresholdPeriod: 300 seconds (5 minutes)Evaluation Period: 1-or-2.    Configure "M out of N" Datapoints to Alarm.In the following example, M out of N datapoints are configured to two out of three.Namespace: Web_AppMetric: ResponseTimeDimension: host,h_04254448d4e964956Statistic: AverageThreshold: 0.005ComparisonOperator: GreaterThanThresholdPeriod: 60 seconds (1 minute)Evaluation Period (N): 3Datapoints To Alarm (M): 2When you configure Evaluation Periods and Datapoints to Alarm as different values, you set an "M out of N" alarm. Datapoints to Alarm is M and Evaluation Period is N. For example, if you configure four out of five data points with a period of one minute, then the evaluation interval is five minutes. Similarly, if you configure three out of three data points with a period of ten minutes, the evaluation interval is thirty minutes.With Datapoints to Alarm configured in this way, CloudWatch Alarms evaluate more data points. They also change the alarm state only when a minimum number of data points (M) breach a given set of data points (N). This parameter can adjust the alarm to trigger on a single datapoint or require multiple datapoints to transition to the ALARM state.For more information, see Create a CloudWatch alarm based on a static threshold and Configuring how CloudWatch alarms treat missing data.Related informationWhy didn't I receive an Amazon Simple Notification Service (Amazon SNS) notification for my CloudWatch alarm trigger?Why is my CloudWatch alarm in INSUFFICIENT_DATA state?Why did my CloudWatch alarm send me a notification after a single breached data point?Follow"
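As a sketch, the second option ("two out of three" datapoints) can be configured from the AWS CLI with a command similar to the following. The alarm name and SNS topic ARN are hypothetical placeholders; the metric, dimension, and threshold values match the example above:

aws cloudwatch put-metric-alarm \
  --alarm-name WebApp-ResponseTime-High \
  --namespace Web_App \
  --metric-name ResponseTime \
  --dimensions Name=host,Value=h_04254448d4e964956 \
  --statistic Average \
  --comparison-operator GreaterThanThreshold \
  --threshold 0.005 \
  --period 60 \
  --evaluation-periods 3 \
  --datapoints-to-alarm 2 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-alerts-topic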
https://repost.aws/knowledge-center/cloudwatch-trigger-metric
How do I download AWS Site-to-Site VPN example configuration files?
I want to access AWS Site-to-Site VPN example configuration files. How can I do this?
"I want to access AWS Site-to-Site VPN example configuration files. How can I do this?Short descriptionTo download Site-to-Site VPN example configuration files, use the Download Configuration utility.There are two ways to access the Download Configuration utility:Amazon Virtual Private Cloud (Amazon VPC) consoleAWS Command Line Interface (AWS CLI)For a list of available example configuration files, see Example configuration files. Important: To use the Download Configuration utility, the following AWS Identity and Access Management (IAM) permissions are required:ec2:GetVpnConnectionDeviceTypesec2:GetVpnConnectionDeviceSampleConfigurationIf your IAM policy has an EC2 wildcard (*), you don't need to manually add these permissions. ResolutionTo access the Download Configuration utility from the Amazon VPC console1.    Open the Amazon VPC console.2.    In the left navigation pane, under VIRTUAL PRIVATE NETWORK (VPN), choose Site-to-Site VPN Connections.3.    Choose the name of your VPN connection.4.    Choose Download Configuration.5.    For Vendor, select your Customer Gateway device vendor.   -or-If your vendor isn't listed, select Generic. 6.    For Platform and Software, select the values that apply to your use case. 7.    For IKE Version, select the protocol version that applies to your use case.8.    Choose Download.The example configuration file downloads to your computer.To access the Download Configuration utility from the AWS CLINote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.1.    List all available device configuration example files and get the VpnConnectionDeviceTypeId for your device by running the following get-vpn-connection-device-types command: Important: Replace <aws-region> with your AWS Region. aws ec2 get-vpn-connection-device-types --region <aws-region>Example command output:........ }, { "VpnConnectionDeviceTypeId": "7125681a", "Vendor": "Fortinet", "Platform": "Fortigate 40+ Series", "Software": "FortiOS 6.4.4+ (GUI)" }, { "VpnConnectionDeviceTypeId": "9005b6c1", "Vendor": "Generic", "Platform": "Generic", "Software": "Vendor Agnostic" }, { "VpnConnectionDeviceTypeId": "670add1b", "Vendor": "H3C", "Platform": "MSR800", "Software": "Version 5.20" }, {.......2.    Return the example configuration files you want by running the following get-vpn-connection-device-sample-configuration command:aws ec2 get-vpn-connection-device-sample-configuration --vpn-connection-id <vpn-id> --vpn-connection-device-type-id <device-type-id> --internet-key-exchange-version <ike-version> --region <aws-region> --output textImportant: Replace --vpn-connection-id with your VPN connection ID.Replace --internet-key-exchange-version with your internet key exchange version.Replace --vpn-connection-device-type-idwith the the Vendor:Platform:Software version from the previous command output.Related informationWhat do I do if I can't find the device specific VPN configuration file for my vendor?Follow"
https://repost.aws/knowledge-center/vpn-download-example-configuration-files
How can I enable Elastic IP addresses on my AWS Transfer Family SFTP-enabled server endpoint?
I want to make my AWS Transfer Family SFTP-enabled server accessible using Elastic IP addresses.
"I want to make my AWS Transfer Family SFTP-enabled server accessible using Elastic IP addresses.ResolutionTo make your AWS Transfer Family SFTP-enabled server accessible using Elastic IP addresses, create an internet-facing endpoint for your server.However, if you must change the listener port to a port other than port 22 (for migration), then see How can I enable Elastic IP addresses on my AWS Transfer Family SFTP-enabled server endpoint with a custom listener port?Related informationLift and shift migration of SFTP servers to AWSFollow"
https://repost.aws/knowledge-center/sftp-enable-elastic-ip-addresses
How can I use Amazon EMR to process data?
I want to use Amazon EMR to process data.
"I want to use Amazon EMR to process data.ResolutionAmazon EMR processes data using Amazon Elastic Compute Cloud (Amazon EC2) instances and open-source applications such as Apache Spark, HBase, Presto, and Flink.To launch your first EMR cluster, follow the video tutorial in the article, or see Tutorial: Getting started with Amazon EMR.Related informationOverview of Amazon EMROverview of Amazon EMR architectureFollow"
https://repost.aws/knowledge-center/emr-process-data
How can I get my Amazon SQS subscription to successfully receive a notification from my Amazon SNS topic?
My Amazon Simple Queue Service (Amazon SQS) subscription won't receive a notification from my Amazon Simple Notification Service (Amazon SNS) topic.
"My Amazon Simple Queue Service (Amazon SQS) subscription won't receive a notification from my Amazon Simple Notification Service (Amazon SNS) topic.Short descriptionBefore you get started, configure Amazon CloudWatch delivery status logging for your SNS topic. For more information, see Monitoring Amazon SNS topics using CloudWatch.Then, try the following troubleshooting steps.ResolutionConfigure your SQS queue's access policy to allow Amazon SNS to send messagesTo view the access policy of your SQS queue, configure your access policy.If your SQS queue's access policy doesn't include the "sqs:SendMessage" action for your SNS topic, then update your policy with the correct permissions. The permissions must allow Amazon SNS to send messages to the SQS queue.Configure your AWS KMS key policy to work with server-side encryption on your SQS queueIf server-side encryption is enabled on your SQS queue, you must do the following:1.    Enable AWS KMS key status.2.    Verify that your SQS queue is using a customer managed key. The KMS key must have an AWS Key Management Service (AWS KMS) key policy that grants Amazon SNS the correct permissions.To allow the SNS event source to perform kms:GenerateDataKey and kms:Decrypt API actions, add the following statement to the KMS key policy:{ "Sid": "Allow Amazon SNS to use this key", "Effect": "Allow", "Principal": { "Service": "sns.amazonaws.com" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey*" ], "Resource": "*"}If the KMS key policy isn't configured, then the Amazon SNS message delivery status logs show the following KMS.AccessDeniedException error:{ "notification": { "messageMD5Sum": "1234567890abcdefghijklmnopqrstu0", "messageId": "abcdef01-gh23-4i5j-678k-90l23m45nopq", "topicArn": "arn:aws:sns:us-east-1:111111111111:sns", "timestamp": "2021-06-17 17:08:10.299" }, "delivery": { "deliveryId": "12a3b4c5-6789-0de1-fgh2-ij34k56lmn78", "destination": "arn:aws:sqs:us-east-1:111111111111:sns-sqs", "providerResponse": "{\"ErrorCode\":\"KMS.AccessDeniedException\",\"ErrorMessage\":\"null (Service: AWSKMS; Status Code: 400; Error Code: AccessDeniedException; Request ID: 12a345b6-7c89-0d1e-2f34-5gh67i8kl901; Proxy: null)\",\"sqsRequestId\":\"Unrecoverable\"}", "dwellTimeMs": 60, "attempts": 1, "statusCode": 400 }, "status": "FAILURE"} Note: For more information, see Why aren't messages that I publish to my Amazon SNS topic getting delivered to my subscribed Amazon SQS queue that has server-side encryption activated?Confirm that your subscribed SQS queue's filter policy matches the message sent from the SNS topicReview the NumberOfNotificationsFilteredOut metric in your CloudWatch metrics for Amazon SNS.The Publish requests made by the AWS Identity and Access Management (IAM) entity that's invoking your function can appear in the NumberOfNotificationsFilteredOut metric. In this scenario, check the SNS topic subscription filter policy of your SQS queue:1.    Open the Amazon SNS console.2.    On the navigation pane, choose Subscriptions.3.    Select your subscription, and then choose Edit.4.    Expand the Subscription filter policy section.5.    In the subscription filter policy, confirm that the Publish request message attributes match the attributes required by the filter policy. If the attributes don't match, then update your Publish request message attributes to match the attributes required by the filter policy.Note: For more information, see Amazon SNS subscription filter policies.6.    
Choose Save changes.Troubleshoot raw message delivery issuesIf you enabled raw message delivery for your SQS queue subscription, then verify that you're sending no more than 10 message attributes in the published notification.Amazon SNS maps the message attributes for raw delivery enabled messages to SQS message attributes. If you use more than 10 message attributes, then the notification delivery fails and your delivery status logs show the following error log:{ "notification": { "messageMD5Sum": "5c10d6c5d7f246fc3fb85334b4ed55ca", "messageId": "50f51b06-ee71-56fc-b657-424391902ee7", "topicArn": "arn:aws:sns:us-east-1:111111111111:sns", "timestamp": "2021-06-17 16:51:45.468" }, "delivery": { "deliveryId": "36b3ee88-bc85-5587-b2af-b7cdc3644e07", "destination": "arn:aws:sqs:us-east-1:111111111111:sns-sqs", "providerResponse": "{\"ErrorCode\":\"InvalidParameterValue\",\"ErrorMessage\":\"Number of message attributes [SENT DURING PUBLISH] exceeds the allowed maximum [10].\",\"sqsRequestId\":\"Unrecoverable\"}", "dwellTimeMs": 44, "attempts": 1, "statusCode": 400 }, "status": "FAILURE"}Troubleshoot message deduplication with notification delivery for SNS FIFO topicsSNS FIFO topics order and deduplicate messages. If a notification for a deduplication ID is successfully sent to an SNS FIFO topic, then any message published with the same deduplication ID, within the five-minute deduplication interval, is accepted but not delivered.You can configure the deduplication ID in the Publish API operation. Or, the deduplication ID is computed by the FIFO topic based on the message body if content-based deduplication is enabled for the SNS FIFO topic.The SNS FIFO topic continues to track the message deduplication ID, even after the message is delivered to subscribed endpoints.For more information, see Message deduplication for FIFO topics.Follow"
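For reference, the following is a minimal sketch of an SQS queue access policy that grants Amazon SNS permission to send messages, applied with the AWS CLI. The queue URL, queue ARN, and topic ARN are hypothetical placeholders, and the sketch assumes that jq is installed to wrap the policy document as a string attribute.
# Hypothetical names: replace the queue URL, queue ARN, and topic ARN with your own.
cat > sqs-access-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow-SNS-SendMessage",
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111111111111:sns-sqs",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:sns:us-east-1:111111111111:sns" }
      }
    }
  ]
}
EOF

# The Policy attribute must be passed as a JSON string, so wrap the document with jq.
jq -n --arg policy "$(cat sqs-access-policy.json)" '{Policy: $policy}' > queue-attributes.json

aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/111111111111/sns-sqs \
  --attributes file://queue-attributes.json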
https://repost.aws/knowledge-center/sns-sqs-subscriptions-notifications
How can I resolve access denied issues caused by permissions boundaries?
I received an access denied or unauthorized error when trying to access my AWS service. How can I troubleshoot access denied errors on my AWS account?
"I received an access denied or unauthorized error when trying to access my AWS service. How can I troubleshoot access denied errors on my AWS account?Short descriptionYou might receive an access denied or unauthorized error because your AWS Identity and Access Management (IAM) policy does not meet specific conditions requirements. First, review any service control policies (SCPs) on your account, and then check that there are no denies present in your resource-based policies. If this doesn't resolve the error, then the issue might be caused by the presence of a permissions boundary.A permissions boundary is a feature that allows you to use a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity (user or role). When you set a permissions boundary for an entity, that entity can only perform actions that are allowed by both its identity-based policies and its permissions boundary.Note: The permissions boundary sets the maximum permissions for an entity, but does not grant those permissions.To troubleshoot authorization errors, follow these steps:Check if an action is allowed in your IAM policy but not in the permissions boundaryInclude all required actions in the permissions boundary using the IAM consoleUse the "iam:PermissionsBoundary" condition key in your IAM policyResolutionCheck if an action is allowed in your IAM policy, but not in the permissions boundaryThe following example shows an action that is allowed in an IAM policy, but not in the permissions boundary. In this example, an IAM user has the policy USER_IAM_POLICY attached to it:IAM policy:(USER_IAM_POLICY) “Effect”: “Allow”, “Action”: [ “ec2:*”, “s3:*” ],This policy gives the user full access to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) services. The user also has a permissions boundary named USER_PB_POLICY set.Permissions Boundary:(USER_PB_POLICY) “Effect”: “Allow”, “Action”: [ “cloudwatch:*”, “s3:*” ],The permissions boundary sets the maximum permissions that the user can perform. In this example, this permission boundary allows full access to Amazon CloudWatch and Amazon S3 services. But, because Amazon S3 is the only service that is allowed in both the IAM policy and the permissions boundary, the user only has access to S3. If the user tries to access Amazon EC2, they receive an access denied error.To resolve this error, edit the permissions boundary and allow access to Amazon EC2:“Effect”: “Allow”, “Action”: [ “cloudwatch:*”, “s3:*”, “ec2:*” ],Include all required actions in the permissions boundary using the IAM consoleFollow these steps to edit the permissions boundary to include all actions that a user requires:Open the IAM console.In the navigation pane, choose Roles/Users.Choose the IAM entity you want to edit.In the Permissions boundary section, check your settings. If a permissions boundary is set, this means that there is a permissions boundary in place. The name of the managed policy that is used as a permissions boundary on your IAM entity is listed in this section.Expand the JSON policy, and check if the action you require is whitelisted in the permissions boundary. If your action is not whitelisted, edit the JSON policy to allow all actions that your IAM entity requires.For more information on editing policies, Editing IAM policies.Use the iam:PermissionsBoundary condition key in your IAM policiesAdd the iam:PermissionsBoundary condition key to your IAM policies. 
This condition key checks that a specific policy is attached as a permissions boundary on an IAM entity.The following example shows an IAM policy named RestrictedRegionPermissionsBoundary:{ "Version": "2012-10-17", "Statement": [ { "Sid": "EC2RestrictRegion", "Effect": "Allow", "Action": "ec2:*", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestedRegion": [ "us-east-1" ] } } } ] }Create a policy and attach it to a delegated admin who has the responsibility to create users. When you attach the following example policy to the admin, they can only create an IAM user when they attach the RestrictedRegionPermissionsBoundary policy to that user. If the admin tries to create an IAM user without attaching the policy, they receive an access denied error.{ "Sid": "CreateUser", "Effect": "Allow", "Action": [ "iam:CreateUser" ], "Resource": "arn:aws:iam::111222333444:user/test1*", "Condition": { "StringEquals": { "iam:PermissionsBoundary": "arn:aws:iam::111222333444:policy/RestrictedRegionPermissionsBoundary" } } }To set the IAM policy RestrictedRegionPermissionsBoundary as a permissions boundary when creating a new user, follow these steps:Open the IAM console.In the navigation pane, choose Users, and then choose Add Users.Enter the user name, choose the AWS access type, and then choose Next.Expand the Set permissions boundary section, and choose Use a permissions boundary to control the maximum role permissions.In the search field, enter RestrictedRegionPermissionsBoundary, and then choose the radio button for your policy.Choose Next:Tags.Review your settings and create a user.Related informationPermissions boundaries for IAM entitiesEvaluating effective permissions with boundariesFollow"
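If you prefer the AWS CLI over the console, the following sketch shows roughly equivalent calls. The account ID, user name, and policy name reuse the hypothetical values from the example above.
# Set the example policy as the permissions boundary when creating a user
BOUNDARY_ARN=arn:aws:iam::111222333444:policy/RestrictedRegionPermissionsBoundary

aws iam create-user \
  --user-name test1-example \
  --permissions-boundary "$BOUNDARY_ARN"

# Or attach (or replace) the permissions boundary on an existing user
aws iam put-user-permissions-boundary \
  --user-name test1-example \
  --permissions-boundary "$BOUNDARY_ARN"

# Verify which policy is set as the boundary
aws iam get-user --user-name test1-example --query "User.PermissionsBoundary"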
https://repost.aws/knowledge-center/iam-access-denied-permissions-boundary
Why am I getting the "CROSSSLOT Keys in request don't hash to the same slot" error while doing multi-key operations on a Redis (cluster mode enabled) ElastiCache cluster?
Why am I getting "CROSSSLOT Keys in request don't hash to the same slot" error while doing multi-key operations on an Amazon ElastiCache for Redis (cluster mode enabled) cluster even though the keys are stored on the same node?
"Why am I getting "CROSSSLOT Keys in request don't hash to the same slot" error while doing multi-key operations on an Amazon ElastiCache for Redis (cluster mode enabled) cluster even though the keys are stored on the same node?Short descriptionThis error occurs because keys must be in the same hash slot and not just the same node. To implement multi-key operations in a sharded Redis (cluster mode enabled) ElastiCache cluster, the keys must be hashed to the same hash slot. You can force keys into the same hash slot by using hashtags.In this example, the following sets, "myset2" and "myset," are in the same node:172.31.62.135:6379> scan 01) "0"2) 1) "myset" 2) "myset2"But a multi-key operation isn't supported:172.31.62.135:6379> SUNION myset myset2(error) CROSSSLOT Keys in request don't hash to the same slotThis is because the keys aren't in the same hash slot. In this example, the two sets are in two different slots, 560 and 7967:172.31.62.135:6379> CLUSTER KEYSLOT myset(integer) 560172.31.62.135:6379> CLUSTER KEYSLOT myset2(integer) 7967ResolutionMethod 1You can use a Redis client library that provides support for Redis (cluster mode enabled) clusters. For more information about Redis clusters, see theredis-py-cluster website.For example, using redis-cli returns the CROSSSLOT error when used to get keys from slots located in different shards:redis-cli -c -h RedisclusterCfgEndpointRedisclusterCfgEndpoint:6379> mget key1 key2(error) CROSSSLOT Keys in request don't hash to the same slotUsing the redis-py-cluster to get keys from slots located in different shards returns the correct output:>>> from rediscluster import RedisCluster>>> startup_nodes = [{"host": "RedisclusterCfgEndpoint", "port": "6379"}]>>> rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True,skip_full_coverage_check=True)>>> print(rc.mget("key1","key2"))Method 2When creating keys that are used by multi-key operations on a cluster mode enabled cluster, use hashtags to force the keys into the same hash slot. When the key contains a "{...}" pattern, only the substring between the braces, "{" and "}," is hashed to obtain the hash slot.For example, the keys {user1}:myset and {user1}:myset2 are hashed to the same hash slot, because only the string inside the braces "{" and "}", that is, "user1", is used to compute the hash slot.172.31.62.135:6379> CLUSTER KEYSLOT {user1}:myset(integer) 8106172.31.62.135:6379> CLUSTER KEYSLOT {user1}:myset2(integer) 8106172.31.62.135:6379> SUNION {user1}:myset {user1}:myset21) "some data for myset"2) "some data for myset2"Now that both sets are hashed to the same hash slot, you can perform a multi-key operation.Follow"
https://repost.aws/knowledge-center/elasticache-crossslot-keys-error-redis
How can I register a domain in Route 53?
I want to register my domain with Amazon Route 53.
"I want to register my domain with Amazon Route 53.Short descriptionBefore registering a domain in Route 53, verify that the TLD is supported by Route 53 and review the associated cost implications. For more information, see Registering a new domain.Note: Be aware when registering your domain name that you can't change the name after you register it. If you register an incorrect domain name, you must register another domain name and specify the correct name. Refunds aren't given for a domain name that you registered incorrectly.ResolutionFor a list of TLDs that you can use to register domains with Amazon Route 53, see Domains that you can register with Amazon Route 53.For a step-by-step guide on registering domain names, see How to register a domain name with Amazon Route 53.For information on transferring a domain to Amazon Route 53, see Transferring registration for a domain to Amazon Route 53.Related informationRegistering domainsUpdating domain settingsRenewing registration for a domainRestoring an expired or deleted domainReplacing the hosted zone for a domain that is registered with Route 53Follow"
https://repost.aws/knowledge-center/route-53-register-domain
How can I use an Application Load Balancer to route requests based on the source IP address?
I want to use an Application Load Balancer to perform specific actions on requests based on the source IP address of the request.
"I want to use an Application Load Balancer to perform specific actions on requests based on the source IP address of the request.ResolutionThere are several use cases for performing specific actions based on the source IP address of a request. For example, you have two versions of an application. One version is a public version that's for global users. The other is an internal version that includes some extended (beta) features. You want the internal version to be available only to employees who are accessing the application from corporate network CIDRs. To accomplish this, and other similar tasks, configure listener rules based on source IP addresses.A rule that's based on source IP address checks the source IP address in the IP header (layer-3). If there's a proxy or firewall that changes the source IP address, then specify the proxy or firewall's IP address in the listener rule.Note: Don't use listener rules to block requests from clients. It's a best practice to use security groups or network access control lists instead. To block a large number of clients, you can use AWS WAF.1.    Create an Application Load Balancer. Or, use an Application Load Balancer that you already created.2.    Open the Amazon Elastic Compute Cloud (Amazon EC2) console.3.    On the navigation pane, under Load Balancing, choose Load Balancers.4.    Select your load balancer.5.    Choose the Listeners tab.6.    Select your listener, and then choose Actions. Then, select Manage rules.7.    Choose the Add rules icon (the plus sign), and then choose Insert rule.8.    Choose Add condition, and then choose Source IP.9.    Specify the IP addresses that you plan to configure a different action for.Note: You can specify either a single IP address or network CIDRs with prefixes. For example, specify 1.1.1.1/32 or 10.8.0.0/21.10.   Choose Add action, and then select the required action. See the following examples of actions:Forward: This forwards the request to a different target group, such as a target group that runs an internal version of an application.Return fixed response: This blocks specific users or provides custom responses to specific users.12.   To save the condition, choose the checkmark icon.13.   To save the rule, choose Save.Related informationListener rules for your Application Load BalancerFollow"
https://repost.aws/knowledge-center/elb-route-requests-with-source-ip-alb
How can I identify Availability Zones using AWS Resource Access Manager?
I want to identify the Availability Zone (AZ) ID using AWS Resource Access Manager (AWS RAM).
"I want to identify the Availability Zone (AZ) ID using AWS Resource Access Manager (AWS RAM).Short descriptionAWS maps Availability Zones to names for each account to be sure that resources are distributed across the Availability Zones of a Region. For example, the Availability Zone us-east-1a for your AWS account might not have the same location as us-east-1a for another AWS account.ResolutionTo coordinate Availability Zones across accounts for VPC sharing, use the AZ ID. For example, use1-az1 is one of the Availability Zones in the us-east-1 Region.To find the AZ IDs for the Availability Zones in an account:Open the AWS Resource Access Manager console.In the navigation pane, choose Resource Access Manager.Under Your AZ ID, view the AZ IDs for the selected Region.Using the AZ IDs, determine the location of resources in one account relative to the resources in another account. For example, if you share a subnet in an Availability Zone with an AZ ID of use-az2 with another account, the subnet is available to that account in the Availability Zone whose AZ ID is also use-az2. You can find the AZ ID for each subnet in the Amazon Virtual Private Cloud (Amazon VPC) console.For more information, see Availability Zone IDs for your AWS resources.Related informationShare VPCs with AWS RAMVPC sharing: A new approach to multiple accounts and VPC managementHow can I map Availability Zones across my accounts?Follow"
https://repost.aws/knowledge-center/aws-ram-az-id
How do I transfer ownership of a domain or an Amazon Route 53 hosted zone to a different AWS account?
I want to transfer ownership of a domain or Amazon Route 53 hosted zone from one AWS account to another. How can I do this?
"I want to transfer ownership of a domain or Amazon Route 53 hosted zone from one AWS account to another. How can I do this?Short descriptionYou can transfer a domain from one AWS account to another using the TransferDomainToAnotherAwsAccount command.Although it's a best practice to transfer a domain using an API call, you can also transfer a domain by contacting AWS Support.If you don't own both the source and destination accounts that you're transferring the domain to and from, you must do one of the following:Migrate the existing hosted zone to the AWS account that you're transferring the domain to,-or-Create a new hosted zone in an AWS account that you own.If you don't own the account that created the hosted zone that routes traffic for the domain, then you can't control how traffic is routed.ResolutionMigrate a hosted zoneFollow the steps in Migrating a hosted zone to a different AWS account.Transfer a domainNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.1.    Run the following command in the AWS CLI. Replace example.com with your domain name. Replace 111122223333 with your AWS account ID.aws route53domains transfer-domain-to-another-aws-account --domain-name example.com --account-id 111122223333 --region us-east-12.    In the output, note the Password value.3.    To accept the transfer, log in to the AWS account that is the destination account. Then, run this command. Replace example.com with your domain name. Replace YourPassword with the password that you noted in step 2.aws route53domains accept-domain-transfer-from-another-aws-account --domain-name example.com --password YourPassword --region us-east-1If you receive errors accepting the transfer, see the Troubleshoot accepting a domain section of this article.Note: The accept-transfer command must be completed within three days of the transfer domain call. After three days, the transfer is canceled.4.    After accepting the transfer, view the domain by accessing the Route 53 console and then choosing the Registered Domains tab.Troubleshoot accepting a domainIf you encounter errors while accepting a transfer, then the generated password might contain special characters. Use one of two options to solve this:(Option 1) Use a text file to accept the domain transfer1.    Create a .txt file that contains the password that was generated in Step 2 of the Transfer a domain section. Use a simple format.2.    Run the accept-transfer command:aws route53domains accept-domain-transfer-from-another-aws-account --domain-name <domain name> --password file:///tmp/password.txt --region us-east-1Note: Replace with your domain, and replace password in password.txt with the name of your file. Also, make sure that the Region is correct.(Option 2) Use quotation marks around the password stringsUse single or double quotation marks around your password to prevent the AWS CLI from misinterpreting special characters.The following example uses double quotation marks:aws route53domains accept-domain-transfer-from-another-aws-account --domain-name example.com --password "YourPassword" --region us-east-1The following example uses single quotation marks:aws route53domains accept-domain-transfer-from-another-aws-account --domain-name example.com --password 'YourPassword' --region us-east-1Related informationTransferring a domain to a different AWS accountFollow"
https://repost.aws/knowledge-center/account-transfer-route-53
How do I assign a static hostname to an Amazon EC2 instance running Ubuntu Linux?
"I changed the hostname of my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, when I reboot or stop and then restart the instance, the hostname changes back. How do I make the new hostname to persist?"
"I changed the hostname of my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, when I reboot or stop and then restart the instance, the hostname changes back. How do I make the new hostname to persist?Short descriptionTo be sure that the hostname persists when rebooting or stopping and starting your instance, add the hostname to the appropriate configuration files on your instance.Note: The following steps apply to Ubuntu Linux. For instructions that apply to other distributions, see one of the following:Changing the system hostnameHow do I assign a static hostname to an Amazon EC2 instance running RHEL 5 or 6, CentOS 5 or 6, or Amazon Linux?How do I assign a static hostname to an Amazon EC2 instance running SLES?How do I assign a static hostname to an Amazon EC2 instance running RHEL 7 or CentOS 7?Resolution1.     Use vim to open the /etc/hosts file.sudo vim /etc/hosts2.     Update the /etc/hosts file to include your persistent hostname for localhost, similar to the following:127.0.0.1 localhost persistent-hostnameNote: You might have to create an entry for localhost if the /etc/hosts file on your EC2 instance doesn't have an entry for it.For more information about the hosts file on Ubuntu, see the Ubuntu 18.04 hosts file manpage.3.     If your EC2 instance uses IPv6, add the following configuration data.::1 ip6-localhost ip6-loopback  fe00::0 ip6-localnet  ff00::0 ip6-mcastprefix  ff02::1 ip6-allnodes  ff02::2 ip6-allrouters  ff02::3 ip6-allhosts4.    Save and exit the vim editor.Note: After making this change, press SHIFT+:[colon] to open a new command entry box in the vim editor. Type wq, and then press Enter to save changes and exit vim. Or use SHIFT + ZZ to save and close the file.5.    Run the hostnamectl command and specify the new hostname. Replace the persistent-hostname with the new hostname.sudo hostnamectl set-hostname persistent-hostname6.     After you start or reboot the EC2 instance, run the Linux hostname command without any parameters to verify that the hostname change persisted.hostnameThe command returns the new hostname.Note: If you install any system updates that affect the /etc/hosts file, the hostname file, or the hostname utility, you must run these steps again.Related informationChanging the hostname of your Linux instanceFollow"
https://repost.aws/knowledge-center/linux-static-hostname
Why did I receive a bill after I closed my AWS account?
"I closed my AWS account, but I received another bill. Why did I get another bill from AWS?"
"I closed my AWS account, but I received another bill. Why did I get another bill from AWS?ResolutionWhen you close your AWS account, you must terminate all your resources or you might continue to incur charges. The on-demand billing for your resources stops when you close your account.However, you might receive a bill after you close your account due to one of the following reasons:You incurred charges in the month before you closed your account: You receive a final bill for the usage incurred between the beginning of the month and the date that you closed your account. For example, if you closed your account on January 15th, then you receive a bill for usage incurred from January 1st through January 15th at the beginning of February.You have active capacity reservations on your account: You might have provisioned Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances, Amazon Relational Database Service (Amazon RDS) Reserved Instances, Amazon Redshift Reserved Instances, or Amazon ElastiCache Reserved Cache Nodes. You continue to receive a bill for these resources until the reservation period expires. For more information, see Reserved Instances.You signed up for Savings Plans: You continue to receive a bill for your compute usage covered under Savings Plans until the plan term is completed.Important: Within 90 days of closing your account, you can sign in to your account, view past billing, and pay for AWS bills.To pay your unpaid AWS bills, do the following:Open the Billing and Cost Management console.Choose Payments from the navigation pane. You can view your overdue bills in the Payments Due tab.Choose Verify and pay next to your unpaid bills to retry your payments.If the payment doesn’t have Verify and pay next to it, then contact AWS Support and ask them to retry the payment for you.Related informationHow do I close my AWS account?Considerations before you close your accountAvoiding unexpected chargesFollow"
https://repost.aws/knowledge-center/closed-account-bill
Why am I getting the error "A conflicting conditional operation is currently in progress against this resource" from Amazon S3 when I try to re-create a bucket?
"I deleted my Amazon Simple Storage Service (Amazon S3) bucket. Now I'm trying to create a new bucket with the same name. However, I'm getting the error: "A conflicting conditional operation is currently in progress against this resource. Try again." How can I resolve this?"
"I deleted my Amazon Simple Storage Service (Amazon S3) bucket. Now I'm trying to create a new bucket with the same name. However, I'm getting the error: "A conflicting conditional operation is currently in progress against this resource. Try again." How can I resolve this?ResolutionAfter you send a request to delete a bucket, Amazon S3 queues the bucket name for deletion. A bucket name must be globally unique because the namespace is shared by all AWS accounts. Because Amazon S3 is a large distributed system, changes such as deleting a bucket take time to become eventually consistent across all AWS Regions.Until the bucket is completely deleted by Amazon S3, you can't use the same bucket name. However, when the bucket is deleted and the name is available, other accounts can use the bucket name. If another account uses the bucket name, you can't use the same name.Note: If you must keep a bucket name, you can empty the bucket instead of deleting it.If your application automatically creates buckets, be sure to choose a bucket-naming logic that's unlikely to cause naming conflicts. Additionally, verify that your application's logic chooses a different bucket name when a bucket name is already taken.Related informationCreating, configuring, and working with Amazon S3 bucketsFollow"
https://repost.aws/knowledge-center/s3-conflicting-conditional-operation
Why does my AWS Glue job fail with the error "Temporary directory not specified" when I insert or extract data from Amazon Redshift?
My AWS Glue job fails with the error "Temporary directory not specified" when I insert or extract data from Amazon Redshift.
"My AWS Glue job fails with the error "Temporary directory not specified" when I insert or extract data from Amazon Redshift.Short descriptionHere are a few things to remember when your AWS Glue job writes or reads data from Amazon Redshift:Your AWS Glue job writes data into an Amazon Redshift cluster: The job initially writes the data into an Amazon Simple Storage Service (Amazon S3) bucket in CSV format. Then, the job issues a COPY command to Amazon Redshift.Your AWS Glue job reads data from an Amazon Redshift cluster: The job first unloads the data into an Amazon S3 bucket in CSV format using the UNLOAD command. Then, the job loads the data into the DynamicFrame from these temporary bucket files.You might get this error either when either of the following conditions are true:You are unloading the data from Amazon Redshift into the temporary S3 bucket.You are loading the data from the S3 bucket to Amazon Redshift using the COPY or UNLOAD command.ResolutionThe following are some of the common causes and solution options for this error.Define a temporary directoryThe most common reason for this error is the missing temporary S3 bucket that's used by the AWS Glue job as a staging directory. Therefore, be sure to define an S3 bucket as the temporary directory for your job. For more information on how to define a temporary bucket, see Special parameters used by AWS Glue.Verify the IAM role permissionsVerify the IAM role permissions to be sure that you have the right permissions to access the temporary S3 bucket. Also, be sure that you didn't block required permissions for the bucket in the following policies for the AWS Glue IAM role:Bucket policyS3 VPC endpoint policyAWS Organizations policyService control policySome examples of required permissions are ListObjects, GetObject, and PutObject.Verify the name of the temporary directoryBe sure that the name of the S3 bucket that's used as the temporary directory doesn't have a period in it to avoid getting the following exception:Caused by: java.sql.SQLException: [Amazon](500310) Invalid operation: UNLOAD destination is not supported.Verify AWS Key Management Service (AWS KMS) permissionsIf you use customer managed keys from AWS Key Management Service (AWS KMS) to encrypt your data, then be sure to do the include extraunloadoptions in additional_options for your ETL statement in the AWS Glue script. For example:datasource0 = glueContext.create_dynamic_frame.from_catalog( database = "database-name", table_name = "table-name", redshift_tmp_dir = args["TempDir"], additional_options = {"extraunloadoptions":"ENCRYPTED KMS_KEY_ID 'CMK key ID'"}, transformation_ctx = "datasource0" )If you are using AWS KMS to encrypt S3 data and facing permission issues related to AWS KMS, then be sure of the following:You have the AWS KMS action permissions similar to the following in the AWS Glue IAM role.You added the AWS Glue IAM role into the AWS KMS key."Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ]Verify the IAM role permissions for AWS Glue Python Shell jobIf you are trying to run the COPY or UNLOAD command from an AWS Glue Python Shell job and can't load the credentials, then be sure of the following:You added the AWS Glue IAM role in Amazon Redshift.The AWS Glue job role includes Amazon Redshift and AWS Glue in its trust relationship policy. 
For example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "glue.amazonaws.com", "redshift.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ]}Check for packet dropThe failure of your queries to reach the Amazon Redshift cluster even after the connection is successful might be due to the maximum transmission unit (MTU) size mismatch between Amazon Redshift and AWS Glue network path. Try configuring Amazon Redshift security groups to allow ICMP "destination unreachable". For more information, see Queries appear to hang and sometimes fail to reach the cluster.Define the temporary directory in the AWS CloudFormation templateIf you created your AWS Glue job using CloudFormation, then be sure that you provided the temporary directory location in the DefaultArguments parameter in your CloudFormation template. For example:"DefaultArguments": { "--TempDir": "s3://doc-example-bucket/doc-example-folder"}Define temporary directory in the DynamicFrameIf you get the error even after defining the temporary directory in your AWS Glue job, then check if you are reading or writing into Amazon Redshift. You can do so using the AWS Glue's DynamicFrame method. Then, confirm the following:You passed your temporary directory into the redshift_tmp_dir property of your DynamicFrame:To create a DynamicFrame using a Data Catalog database and table, see create_dynamic_frame_from_catalog.To create a DynamicFrame with a specified connection and format, see create_dynamic_frame_from_options.To write into a DynamicFrame using information from a Data Catalog database and table, see write_dynamic_frame_from_catalog.To write into a DynamicFrame with a specified connection and format, see write_dynamic_frame_from_options.To write into a DynamicFrame or DynamicFrameCollection with a specified connection and format see write_from_options.To write into a DynamicFrame or DynamicFrameCollection using information from a specified JDBC connection, see write_from_jdbc_conf.You have TempDir specified in the getResolvedOptions function for your ETL job:Use the following commands to retrieve the job name and TempDir parameters:import sysfrom awsglue.utils import getResolvedOptionsargs = getResolvedOptions(sys.argv,['JOB_NAME','TempDir'])TempDir = args['TempDir']For more information, see Accessing parameters using getResolvedOptions.Related informationMoving data to and from Amazon RedshiftFollow"
https://repost.aws/knowledge-center/glue-error-redshift-temporary-directory
Why is my Amazon MSK cluster going into the HEALING state?
I want to troubleshoot my Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster that's in HEALING state.
"I want to troubleshoot my Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster that's in HEALING state.ResolutionYour Amazon MSK cluster goes into the HEALING state when the service is running an internal operation to address an issue (Example: brokers are unresponsive). However, you can use the cluster to produce and consume data. You can't perform Amazon MSK API or AWS Command Line Interface (AWS CLI) update operations on the cluster until it returns to the ACTIVE state.Use the Amazon CloudWatch metrics for Amazon MSK to see why the cluster is in HEALING state:Open the CloudWatch console.In the navigation pane, choose Metrics, and then choose All metrics.In the Browse tab, then choose AWS/Kafka.Under Metrics, Choose Cluster Name.Select the cluster that you want to monitor.If you see spikes in the ActiveControllerCount or OfflinePartitionsCount metric, they indicate that one or more brokers are unhealthy. This might have caused your cluster to go into the HEALING state.For broker-level metrics, choose Broker ID, Cluster Name under Metrics.From the list, select the entries with the cluster name and the metrics CpuUser and CpuSystem. Check if the sum of these two values for all the entries reaches an average of higher than 60% for the cluster. If so, high CPU utilization might have caused the broker to go into the HEALING state. For more information on monitoring CPU usage, see Best practices - Monitor CPU usage.The following are the common reasons for an Amazon MSK cluster to go into the HEALING state:A node or an Amazon Elastic Block Store (Amazon EBS) volume must be replaced because of a hardware failure.A node doesn't meet the Amazon MSK performance SLA for the broker, and the node must be replaced for optimal performance.Note that Amazon MSK is a fully managed service. Therefore, brokers have self-managed workflows that perform corrective actions on themselves, such as replacing nodes during failure situations. When an Amazon EBS volume in a broker becomes unhealthy, Amazon MSK observes the state of the volume for a certain period of time. If the volume becomes healthy during this time, no action is performed. If the volume continues to be unhealthy after this period, then Amazon MSK automatically replaces this volume. The cluster goes into the HEALING state when Amazon MSK performs these actions. However, this doesn't affect the availability of the Amazon MSK cluster as long as you follow the best practices. Even when the broker is in HEALING state, the cluster can handle requests from producers and consumers.Rarely, your cluster might enter into a perpetual HEALING state. This might be caused due to the following reasons:Workload on the cluster is high, and the brokers are being continuously replaced. To avoid this issue, it's a best practice not to use t3.small instances for hosting production clusters. If you're using m5 instances, make sure that you chose right size for your cluster. You can determine the size for your cluster based on your workload and by monitoring your CPU usage. Also, make sure that the number of partitions per broker doesn't exceed the recommended value.The Auto Scaling Group is unable to bring up a new instance. This might happen due to an internal issue or a missing dependency. 
For example, the AWS Key Management Service (AWS KMS) key that was specified during cluster creation might no longer be accessible.A rare internal event impacted the availability of the underlying Amazon Elastic Compute Cloud (Amazon EC2) instances or caused Amazon EBS latency in an Availability Zone or AWS Region.If your cluster stays in perpetual HEALING state that's not load induced, then contact AWS Support.Related informationCluster statesFollow"
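To pull the same metrics from the AWS CLI instead of the console, a query similar to the following sketch can be used. The cluster name is a placeholder, and the date commands assume GNU date (as on Amazon Linux).
# Sum of ActiveControllerCount over the last 3 hours, in 5-minute periods
aws cloudwatch get-metric-statistics \
  --namespace AWS/Kafka \
  --metric-name ActiveControllerCount \
  --dimensions '[{"Name":"Cluster Name","Value":"example-msk-cluster"}]' \
  --statistics Sum \
  --period 300 \
  --start-time "$(date -u -d '3 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"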
https://repost.aws/knowledge-center/msk-cluster-healing-state
How do I troubleshoot a HTTP 500 or 503 error from Amazon S3?
"When I make a request to Amazon Simple Storage Service (Amazon S3), Amazon S3 returns a 5xx status error. How do I troubleshoot these errors?"
"When I make a request to Amazon Simple Storage Service (Amazon S3), Amazon S3 returns a 5xx status error. How do I troubleshoot these errors?Short descriptionAmazon S3 can return one of the following 5xx status errors:AmazonS3Exception: Internal Error (Service: Amazon S3; Status Code: 500; Error Code: 500 Internal Error; Request ID: A4DBBEXAMPLE2C4D)AmazonS3Exception: Slow Down (Service: Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID: A4DBBEXAMPLE2C4D)The error code 500 Internal Error indicates that Amazon S3 can't handle the request at that time. The error code 503 Slow Down typically indicates that the number of requests to your S3 bucket is very high. For example, you can send 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in an S3 bucket. However, in some cases, Amazon S3 can return a 503 Slow Down response if your requests exceed the amount of bandwidth available for cross-Region copying.Because Amazon S3 is a distributed service, a very small percentage of 5xx errors is expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can be retried. This means that it's a best practice to have a fault-tolerance mechanism or to implement retry logic for any applications making requests to Amazon S3. By doing so, S3 can recover from these errors.To resolve or avoid 5xx status errors, consider the following approaches:Use a retry mechanism in the application making requests.Configure your application to increase request rates gradually.Distribute objects across multiple prefixes.Monitor the number of 5xx error responses.Note: Amazon S3 doesn't assign additional resources for each new prefix. It automatically scales based on call patterns. As the request rate increases, Amazon S3 optimizes dynamically for the new request rate.ResolutionUse a retry mechanism in the application making requestsBecause of the distributed nature of Amazon S3, requests that return 500 or 503 errors can be retried. It's a best practice to build retry logic into applications that make requests to Amazon S3.All AWS SDKs have a built-in retry mechanism with an algorithm that uses exponential backoff. This algorithm implements increasingly longer wait times between retries for consecutive error responses. Most exponential backoff algorithms use jitter (randomized delay) to prevent successive collisions. For more information, see Error retries and exponential backoff in AWS.Configure your application to gradually increase request ratesTo avoid the 503 Slow Down error, configure your application to start with a lower request rate (transactions per second). Then, increase the application's request rate exponentially. Amazon S3 automatically scales to handle a higher request rate.Distribute objects across multiple prefixesThe request rates described in performance guidelines and design patterns apply per prefix in an S3 bucket. To set up your bucket to handle overall higher request rates and to avoid 503 Slow Down errors, you can distribute objects across multiple prefixes. For example, if you're using your S3 bucket to store images and videos, you can distribute the files into two prefixes similar to the following:mybucket/imagesmybucket/videosIf the request rate on the prefixes increases gradually, Amazon S3 scales up to handle requests for each of the two prefixes. S3 will scale up to handle 3,500 PUT/POST/DELETE or 5,500 GET requests per second. 
As a result, the overall request rate handled by the bucket doubles.Monitor the number of 5xx status error responsesTo monitor the number of 5xx status error responses that you're getting, you can use one of these options:Turn on Amazon CloudWatch metrics. Amazon S3 CloudWatch request metrics include a metric for 5xx status responses.Turn on Amazon S3 server access logging. Because server access logging captures all requests, you can filter and review all requests that received a 500 Internal Error response. You can also parse logs using Amazon Athena.Additional troubleshootingIf you continue to see a high rate of 5xx status errors, contact AWS Support. Include the Amazon S3 request ID pairs for the requests that failed with a 5xx status error code.Related informationTroubleshooting Amazon S3Follow"
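If you call Amazon S3 from scripts rather than an SDK, a simple backoff loop similar to the following sketch can stand in for the SDK retry behavior. The bucket and key names are placeholders.
MAX_ATTEMPTS=5
for attempt in $(seq 1 $MAX_ATTEMPTS); do
  if aws s3 cp ./examplefile.jpg s3://DOC-EXAMPLE-BUCKET/images/examplefile.jpg; then
    echo "Upload succeeded on attempt $attempt"
    break
  fi
  if [ "$attempt" -eq "$MAX_ATTEMPTS" ]; then
    echo "Upload failed after $MAX_ATTEMPTS attempts" >&2
    exit 1
  fi
  # Exponential backoff (2^attempt seconds) plus up to 1 second of random jitter
  sleep $(( (2 ** attempt) + (RANDOM % 2) ))
done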
https://repost.aws/knowledge-center/http-5xx-errors-s3
How do I troubleshoot low bandwidth issues on my VPN connection?
I'm experiencing low bandwidth on my VPN connection. What tests can I run to verify that the issue is not occurring inside my Amazon Virtual Private Cloud (Amazon VPC)?
"I'm experiencing low bandwidth on my VPN connection. What tests can I run to verify that the issue is not occurring inside my Amazon Virtual Private Cloud (Amazon VPC)?ResolutionLaunch two EC2 instances running Linux for testingBefore beginning performance tests, launch Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in at least two different Availability Zones in the same VPC. You'll use these instances for network performance testing. Verify that the instances support enhanced networking on Linux.Note: When performing network testing between instances that aren't co-located in the same placement group or that don't support jumbo frames, check and set the MTU on your Linux instance.Then, make sure that you can connect to the instances through SSH. Finally, configure the security groups used by your instances to allow communication over the port used by iperf3. The default port for testing TCP performance is 5201.Note: You can use -p to configure iperf3 to use your desired port.Install the iperf3 network benchmark tool on both instancesConnect to your Linux instances using a terminal session, and then install iperf3:To install iperf3 on RHEL-based Linux hosts:$ sudo yum install iperf3To install iperf3 on Debian/Ubuntu hosts:$ sudo apt-get update$ sudo apt-get upgrade$ sudo apt-get install git gcc make$ git clone https://github.com/esnet/iperf3$ cd iperf3$ ./configure$ sudo make$ sudo make install# optionally run "make clean" to free up disk space# by removing artifacts in the build tree.$ sudo make clean$ sudo ldconfigNext, run the following command to configure one instance as a server to listen on the default port:$ sudo iperf3 -s -VRun network tests using iperf3Configure your on-premises host as a client, and then run one or more of the following tests against your instance:The output of the following commands displays the results of 20 parallel streams with increasing window size per TCP connection:sudo iperf3 -c <Private/public IP of instance> -P 20 -w 128K -Vsudo iperf3 -c <Private/public IP of instance> -P 20 -w 512K -Vsudo iperf3 -c <Private/public IP of instance> -P 20 -w 1024K -VThe output of the following commands displays the results of increasing bandwidth capacity and a time frame of 30 seconds per UDP connection:iperf3 -c <Private/public IP of EC2 instance> -u -b 200M -t 30iperf3 -c <Private/public IP of EC2 instance> -u -b 500M -t 30iperf3 -c <Private/public IP of EC2 instance> -u -b 1G -t 30Run the iperf3 tests between the private IP addresses of your EC2 instances and on-premises hosts bi-directionally to benchmark the network throughput on your VPN connection. Then, run these tests between the two public IP addresses of your instances to benchmark throughput over the internet.**Note:**The -w option denotes the window size.This size must be lower than kernel parameter net.core.rmem_max and net.core.wmem_max on both sides.Depending on the system build, rmem_max or wmem_max may be lower than 512KB by default.If lower than 512KB by default, increase rmem_max and wmem_max on both sides before iperf test.Example:Verify current rmrm_max and wmem_max value:$ sudo sysctl net.core.rmem_max net.core.rmem_max = 212992$ sudo sysctl net.core.wmem_max net.core.wmem_max = 212992Increase window size to 2048KB:$ sudo sysctl -w net.core.rmem_max=2097152$ sudo sysctl -w net.core.wmem_max=2097152Related informationHow do I benchmark network throughput between Amazon EC2 Linux instances in the same VPC?Follow"
https://repost.aws/knowledge-center/low-bandwidth-vpn
Can I use EBS Multi-Attach volumes to enable multiple EC2 instances to simultaneously access a standard file system?
I want to access my Amazon Elastic Block Store (Amazon EBS) volume from more than one Amazon Elastic Compute Cloud (Amazon EC2) instance. Can I use Amazon EBS Multi-Attach to enable multiple EC2 instances to simultaneously access a standard file system?
"I want to access my Amazon Elastic Block Store (Amazon EBS) volume from more than one Amazon Elastic Compute Cloud (Amazon EC2) instance. Can I use Amazon EBS Multi-Attach to enable multiple EC2 instances to simultaneously access a standard file system?ResolutionStandard file systems aren't supported with EBS Multi-Attach. File systems such as XFS, EXT3, EXT4, and NTFS aren't designed to be simultaneously accessed by multiple servers or EC2 instances. Therefore, these file systems don't have built-in mechanisms to manage the coordination and control of writes, reads, locks, caches, mounts, fencing, and so on.Enabling multiple servers to simultaneously access a standard file system can result in data corruption or loss. The operation of standard file systems on EBS Multi-Attach volumes isn't a supported configuration.EBS Multi-Attach allows the attachment of a single io1 Provisioned IOPS volume to up to 16 Nitro-based instances in the same Availability Zone. EBS Multi-Attach volumes can be used as a block-level subcomponent of an overall shared storage solution. Configuration and operation of shared storage systems should be attempted only with a deep understanding of the potential pitfalls and configuration requirements. For more information on using EBS Multi-Attach, refer to Attaching a volume to multiple instances with Amazon EBS Multi-Attach - Considerations and limitations.Related informationWorking with Multi-AttachFollow"
https://repost.aws/knowledge-center/ebs-access-volumes-using-multi-attach
How can I get notifications for AWS Backup jobs that failed?
I want to be notified if my AWS Backup job fails. How can I set up email notifications for an unsuccessful backup job?
"I want to be notified if my AWS Backup job fails. How can I set up email notifications for an unsuccessful backup job?Short descriptionUse Amazon Simple Notification Service (Amazon SNS) to send email notifications about failed backup jobs. Follow these steps to configure Amazon SNS and your backup vault for notifications:1.    Create an SNS topic to send AWS Backup notifications to.2.    Configure your backup vault to send notifications to the SNS topic.3.    Create an SNS subscription that filters notifications to backup jobs that are unsuccessful.4.    Monitor emails for notifications.To receive notifications for other events, such as restore jobs and recovery points, see Using Amazon SNS to track AWS Backup events.ResolutionCreate an SNS topic to send AWS Backup notifications1.    Open the Amazon SNS console.2.    From the navigation pane, choose Topics.3.    Choose Create topic.4.    For Name, enter a name for the topic.5.    Choose Create topic.6.    Under the Details of the topic that you just created, copy the value for ARN (Amazon Resource Name). You need this value for later steps.7.    Above the Details pane, choose Edit.8.    Expand Access policy.9.    In the JSON editor, append the following permissions into the policy:Important: Replace the value for Resource with the ARN that you copied in step 6.{ "Sid": "My-statement-id", "Effect": "Allow", "Principal": { "Service": "backup.amazonaws.com" }, "Action": "SNS:Publish", "Resource": "arn:aws:sns:eu-west-1:111111111111:exampletopic"}10.    Choose Save changes.Configure your backup vault to send notifications to the SNS topic1.    Install and configure the AWS Command Line Interface (AWS CLI).2.    Using the AWS CLI, run the put-backup-vault-notifications command with --backup-vault-events set to BACKUP_JOB_COMPLETED. Replace the following values in the example command:--endpoint-url: the endpoint for the AWS Region where you have the backup vaulteu-west-1: the AWS Region where you have the backup vault--backup-vault-name: the name of your backup vault--sns-topic-arn: the ARN of the SNS topic that you createdaws backup put-backup-vault-notifications --endpoint-url https://backup.eu-west-1.amazonaws.com --backup-vault-name examplevault --sns-topic-arn arn:aws:sns:eu-west-1:111111111111:exampletopic --backup-vault-events BACKUP_JOB_COMPLETEDNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.3.    Run the get-backup-vault-notifications command to confirm that notifications are configured:aws backup get-backup-vault-notifications --backup-vault-name examplevaultThe command returns output similar to the following:{ "BackupVaultName": "examplevault", "BackupVaultArn": "arn:aws:backup:eu-west-1:111111111111:backup-vault:examplevault", "SNSTopicArn": "arn:aws:sns:eu-west-1:111111111111:exampletopic", "BackupVaultEvents": [ "BACKUP_JOB_COMPLETED" ]}Create an SNS subscription that filters notifications to backup jobs that are unsuccessful1.    Open the Amazon SNS console.2.    From the navigation pane, choose Subscriptions.3.    Choose Create subscription.4.    For Topic ARN, select the SNS topic that you created.5.    For Protocol, select Email-JSON.6.    For Endpoint, enter the email address where you want to get email notifications about failed backup jobs.7.    Expand Subscription filter policy.8.    In the JSON editor, enter the following:{ "State": [ { "anything-but": "COMPLETED" } ]}9.    Choose Create subscription.10.    
The email address that you entered in step 6 receives a subscription confirmation email. Be sure to confirm the SNS subscription.Monitor emails for notificationsWhen your vault has an unsuccessful backup job, you get an email notification similar to the following:"An AWS Backup job was stopped. Resource ARN : arn:aws:ec2:eu-west-1:111111111111:volume/vol-example56d7w92d4b. BackupJob ID : example4-3dd5-5678-b52d-90bd749355a5"You can test notifications by creating two on-demand backups and then stopping one of the backups. You get an email notification for the stopped backup only.Related informationTroubleshooting AWS BackupFollow"
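You can also create the filtered subscription from the AWS CLI instead of the console, as in the following sketch. The topic ARN and email address are hypothetical placeholders.
aws sns subscribe \
  --topic-arn arn:aws:sns:eu-west-1:111111111111:exampletopic \
  --protocol email-json \
  --notification-endpoint backup-alerts@example.com \
  --attributes '{"FilterPolicy":"{\"State\":[{\"anything-but\":\"COMPLETED\"}]}"}' \
  --return-subscription-arn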
https://repost.aws/knowledge-center/aws-backup-failed-job-notification
How can I prevent a Hadoop or Spark job's user cache from consuming too much disk space in Amazon EMR?
The user cache for my Apache Hadoop or Apache Spark job is taking up all the disk space on the partition. The Amazon EMR job is failing or the HDFS NameNode service is in safe mode.
"The user cache for my Apache Hadoop or Apache Spark job is taking up all the disk space on the partition. The Amazon EMR job is failing or the HDFS NameNode service is in safe mode.Short descriptionOn an Amazon EMR cluster, YARN is configured to allow jobs to write cache data to /mnt/yarn/usercache. When you process a large amount of data or run multiple concurrent jobs, the /mnt file system can fill up. This causes node manager failures on some nodes, which then causes the job to freeze or fail.Use one of the following methods to resolve this problem:Adjust the user cache retention settings for YARN NodeManager. Choose this option if you don't have long-running jobs or streaming jobs.Scale up the Amazon Elastic Block Store (Amazon EBS) volumes. Choose this option if you have long-running jobs or streaming jobs.ResolutionOption 1: Adjust the user cache retention settings for NodeManagerThe following attributes define the cache cleanup settings:yarn.nodemanager.localizer.cache.cleanup.interval-ms: This is the cache cleanup interval. The default value is 600,000 milliseconds. After this interval—and if the cache size exceeds the value set in yarn.nodemanager.localizer.cache.target-size-mb—files that aren't in use by running containers are deleted.yarn.nodemanager.localizer.cache.target-size-mb: This is the maximum disk space allowed for the cache. The default value is 10,240 MB. When the cache disk size exceeds this value, files that aren't in use by running containers are deleted on the interval set in yarn.nodemanager.localizer.cache.cleanup.interval-ms.To set the cleanup interval and maximum disk space size on a running cluster:1.    Open /etc/hadoop/conf/yarn-site.xml on each core and task node, and then reduce the values for yarn.nodemanager.localizer.cache.cleanup.interval and yarn.nodemanager.localizer.cache.target-size-mb. For example:sudo vim /etc/hadoop/conf/yarn-site.xmlyarn.nodemanager.localizer.cache.cleanup.interval-ms 400000yarn.nodemanager.localizer.cache.target-size-mb 51202.    To restart the NodeManager service, run the following commands on each core and task node:sudo stop hadoop-yarn-nodemanagersudo start hadoop-yarn-nodemanagerNote: In Amazon EMR release versions 5.21.0 and later, you can also use a configuration object, similar to the following, to override the cluster configuration or specify additional configuration classifications for a running cluster. For more information, see Reconfigure an instance group in a running cluster.To set the cleanup interval and maximum disk space size on a new cluster, add a configuration object similar to the following when you launch the cluster:[ { "Classification": "yarn-site", "Properties": { "yarn.nodemanager.localizer.cache.cleanup.interval-ms": "400000", "yarn.nodemanager.localizer.cache.target-size-mb": "5120" } }]Remember that the deletion service doesn't complete on running containers. This means that even after you adjust the user cache retention settings, data might still be spilling to the following path and filling up the file system:{'yarn.nodemanager.local-dirs'}/usercache/user/appcache/application_id ,Option 2: Scale up the EBS volumes on the EMR cluster nodesTo scale up storage on a running EMR cluster, see Dynamically scale up storage on Amazon EMR clusters.To scale up storage on a new EMR cluster, specify a larger volume size when you create the EMR cluster. 
You can also do this when you add nodes to an existing cluster:Amazon EMR release version 5.22.0 and later: The default amount of EBS storage increases based on the size of the Amazon Elastic Compute Cloud (Amazon EC2) instance. For more information about the amount of storage and number of volumes allocated by default for each instance type, see Default Amazon EBS storage for instances.Amazon EMR release versions 5.21 and earlier: The default EBS volume size is 32 GB. Of this amount, 27 GB is reserved for the /mnt partition. HDFS, YARN, the user cache, and all applications use the /mnt partition. Increase the size of your EBS volume as needed (for example, 100-500 GB or more). You can also specify multiple EBS volumes. Multiple EBS volumes will be mounted as /mnt1, /mnt2, and so on.For Spark streaming jobs, you can also perform an unpersist ( RDD.unpersist()) when processing is done and the data is no longer needed. Or, explicitly call System.gc() in Scala ( sc._jvm.System.gc() in Python) to start JVM garbage collection and remove the intermediate shuffle files.Follow"
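To confirm that the user cache is what's filling the partition, you can check disk usage on a core or task node with commands similar to the following sketch:
# Overall usage of the /mnt partition
df -h /mnt

# Largest application cache directories under the YARN user cache
sudo du -sh /mnt/yarn/usercache/*/appcache/* 2>/dev/null | sort -h | tail -5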
https://repost.aws/knowledge-center/user-cache-disk-space-emr
How can I resolve high CPU utilization on my T2 or T3 EC2 Windows instance if my CPU is being throttled?
My T2 or T3 Amazon Elastic Compute Cloud (Amazon EC2) Windows instance is experiencing high CPU utilization because my CPU is being throttled. How can I fix this?
"My T2 or T3 Amazon Elastic Compute Cloud (Amazon EC2) Windows instance is experiencing high CPU utilization because my CPU is being throttled. How can I fix this?ResolutionTo resolve CPU throttling, you can either enable T2/T3 Unlimited, or change the instance type.Enable T2/T3 UnlimitedThis solution has no downtime, but might cost more, especially if the instance has to burst often.Open the Amazon EC2 console, and then choose Instances from the navigation pane.Select the instance that is being throttled.For Actions, choose Instance Settings, Change T2/T3 Unlimited.Choose Enable.Note: If you still see high CPU utilization after you enable T2/T3 Unlimited, troubleshoot the process that is causing the issue. For more information, see How do I diagnose high CPU utilization on my EC2 Windows instance when my CPU is not being throttled?Change the instance typeThis solution allows you to select an instance that is better suited to your current needs if the instance has to burst often.Note: Stop your instance before you change your instance type.You can change to one of the following instance types:A T2 or T3 instance type with a higher CPU credit limit.An instance type that doesn't use a CPU credit bucket model.For more information, see How do I get more CPU and memory for my EC2 instance?Related InformationHow can I find out if the CPU on my T2 or T3 EC2 Windows instance is being throttled?Follow"
https://repost.aws/knowledge-center/ec2-cpu-utilization-throttled
How do I remove sensitive data from my CloudFront logs?
"By default, Amazon CloudFront standard logs capture sensitive data for some of its fields. Due to privacy concerns, I want to remove this part of the logs."
"By default, Amazon CloudFront standard logs capture sensitive data for some of its fields. Due to privacy concerns, I want to remove this part of the logs.Short descriptionNote: This article uses the example of Client-IP (c-ip) field.CloudFront logs capture c-ip as one of its fields by default. There are three ways to remove c-ip from your logs.Trigger an AWS Lambda function that removes the field on the log delivery into Amazon Simple Storage Service (Amazon S3).Have an Amazon Elastic Compute Cloud (Amazon EC2) process that runs at certain intervals to remove the field.Use CloudFront real-time logs, and remove the sensitive field before you send the log data to Amazon S3.ResolutionTrigger a Lambda functionOne way to remove the c-ip field is to use Amazon S3 notification events. When CloudFront delivers the log file into the Amazon S3 bucket, configure your bucket to trigger a Lambda function.Create a Lambda function1.    Open the AWS Lambda console.2.    Under Functions, create a new Lambda function that has the following configurations:Uses the object name from the Amazon S3 event.Gets the object from the S3 bucket.3.    Remove the c-ip column, or replace the values with anonymized data.Note: Replace the values to keep the same format in case you have other applications process the logs further.4.    Save and upload the log back to Amazon S3.Create a new event1.    In the logs target bucket, go to Properties.2.    Under Event notifications, create a new event.3.    Select the event type Put, and the destination Lambda function.4.    Select the Lambda function created in step 1, and then choose Save.Important: To avoid a recursive invocation (infinite loop) with your Lambda function, perform the following actions:Have your CloudFront logs delivered to an initial staging prefix. For example, "original".Have the Amazon S3 event triggered on that prefix only.Have the Lambda function deliver the logs into a different prefix. For example, "processed".If you deliver the logs into the same prefix, the Lambda function triggers again and creates a recursive invocation. For more information, see Avoiding recursive invocation with Amazon S3 and AWS Lambda.Note: To keep Amazon S3 costs low, set up an Amazon S3 Lifecycle policy to expire the original logs after a certain time period.Have an Amazon EC2 processUse Amazon EventBridge to create a scheduled rule (cron) that launches an EC2 instance and processes the log files at a scheduled recurrence. For example, one time per day. When the process is done, stop the EC2 instance until the next recurrence to save on costs.1.    Configure EventBridge and Lambda to start an EC2 instance at a given time. For more information, see How do I stop and start Amazon EC2 instances at regular intervals using Lambda?2.    On the EC2 instance, deploy a code that'll download the logs for a certain time period. For example, a full day. Remove the c-ip column to process the logs, or replace the column values with anonymized data. Upload the processed logs back to the S3 bucket.Optional: Merge all the processed logs into a single file to save on Amazon S3 Lifecycle transitions costs. This process is helpful if you intend to store the logs for long time periods.Use Kinesis Data FirehoseUse CloudFront real-time logs to select the fields that you want to save. Later, have Amazon Kinesis Data Firehose send the log data to Amazon S3.When you configure CloudFront real-time logs, a list of fields that are included in each real-time log record is available. 
Each log record contains up to 40 fields. You can choose to receive all the available fields or only the fields that you need to monitor and analyze performance. Deactivate the c-ip field to exclude it from your logs.Note: Due to the use of Amazon Kinesis Data Streams, this option can get expensive. Consider the other two options (Trigger a Lambda function or Have an Amazon EC2 process) for a more cost-effective solution.Follow"
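As a sketch of the "Create a new event" step above, the following AWS CLI commands configure the notification so that only objects written under an "original/" prefix invoke the processing function. The bucket name (DOC-EXAMPLE-LOG-BUCKET) and function name/ARN (strip-c-ip) are assumptions used for illustration.
# Allow Amazon S3 to invoke the processing function
aws lambda add-permission --function-name strip-c-ip --statement-id s3-invoke \
  --action lambda:InvokeFunction --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::DOC-EXAMPLE-LOG-BUCKET
# Trigger the function only for objects written under the original/ prefix
cat > notification.json <<'EOF'
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "process-cloudfront-logs",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:strip-c-ip",
      "Events": ["s3:ObjectCreated:Put"],
      "Filter": { "Key": { "FilterRules": [ { "Name": "prefix", "Value": "original/" } ] } }
    }
  ]
}
EOF
aws s3api put-bucket-notification-configuration --bucket DOC-EXAMPLE-LOG-BUCKET --notification-configuration file://notification.json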
https://repost.aws/knowledge-center/cloudfront-remove-sensitive-data
How do I customize my Elastic Beanstalk environment using .ebextensions?
"How do I customize my AWS Elastic Beanstalk environment using .ebextensions to create files, install packages, and run commands on my Amazon Elastic Compute Cloud (Amazon EC2) instances?"
"How do I customize my AWS Elastic Beanstalk environment using .ebextensions to create files, install packages, and run commands on my Amazon Elastic Compute Cloud (Amazon EC2) instances?Short descriptionConfigure your Amazon EC2 instances in an Elastic Beanstalk environment by using Elastic Beanstalk configuration files (.ebextensions).Configuration changes made to your Elastic Beanstalk environment won't persist if you use the following configuration methods:Configuring an Elastic Beanstalk resource directly from the console of a specific AWS service.Installing a package, creating a file, or running a command directly from your Amazon EC2 instance.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionSet up your .ebextensions directory1.    In the root of your application bundle, create a hidden directory named .ebextensions.2.    Store your configuration file in the .ebextensions directory.Your application source bundle should look similar to the following example:~/workspace/my-application/|-- .ebextensions| |-- 01-server-configuration.config| `-- 02-asg-healthcheck.config|-- index.php`-- styles.cssCustomize your Elastic Beanstalk environmentTo customize your environment, consider the following:Use the option_settings key to modify the environment configuration. You can choose from general options for all environments and platform-specific options.Note: Recommended values are applied when you create or update an environment on the Elastic Beanstalk API by a client. For example, the client could be the AWS Management Console, Elastic Beanstalk Command Line Interface (EB CLI), AWS CLI, or SDKs. Recommended values are directly set at the API level and have the highest precedence. The configuration setting applied at the API level can't be changed using option_settings, as the API has the highest precedence.Precedence rules can result in your option_settings modifications not being applied to the environment configuration. To remove the configurations directly applied during environment creation or make the update on the Elastic Beanstalk API, use the update-environment command with the --options-to-remove flag.If there are no option settings for your desired resource configuration, use the Resources key to customize the resources in your Elastic Beanstalk environment.Note: Resources defined in configuration files are added to the AWS CloudFormation template that's used to launch your environment. All AWS CloudFormation resource types are supported. For more information on logical resource names, see Modifying the resources that Elastic Beanstalk creates for your environment.Use keys to customize software on Linux or Windows servers.For configuration file samples, see the AWS GitHub repository.Apply your custom settings to your application1.    Create an application source bundle that includes your configuration files.Note: Folders starting with a period, such as .ebextensions, can be hidden by file browsers. To keep these folders visible, include the .ebextensions folder in the root of your application bundle when you create your application source bundle.2.    Deploy your updated Elastic Beanstalk application.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-configuration-files
How do I configure the hosted web UI for Amazon Cognito?
"I want to configure the hosted web UI for my Amazon Cognito user pool, but I'm not sure what settings to turn on. How do I set it up?"
"I want to configure the hosted web UI for my Amazon Cognito user pool, but I'm not sure what settings to turn on. How do I set it up?Short descriptionWhen you create a user pool in Amazon Cognito and then configure a domain for it, Amazon Cognito automatically provisions a hosted web UI to let you add sign-up and sign-in pages to your app.If you're not sure how to set this up or what settings to use—such as the types of OAuth 2.0 flows and scopes to turn on—then follow the steps in this article.ResolutionIf you haven't already done so, create a user pool, and create an app client in the user pool. Then, follow these instructions:Note: These directions use the new Amazon Cognito console.Add a domain name to your user poolIn the Amazon Cognito console, choose User pools, and then choose your user pool.Under App integration, choose Domain name, and then choose Actions.Choose Create Cognito domain to add your own domain prefix to the Amazon Cognito hosted domain. Or, choose Create custom domain to add your own custom domain.Change app client settingsIn the Amazon Cognito console, choose User pools, and then choose your user pool.Under App integration, choose your app client from the App clients and analytics section.Choose Edit from the Hosted UI section.Do the following:For Allowed callback URLs, enter the URL of your web application that will receive the authorization code. Your users are redirected here when they sign in.For Allowed sign-out URLs - optional, enter the URL where you want to redirect your users when they sign out.For Identity providers, choose Cognito user pool from the dropdown list.For OAuth 2.0 grant types, select either Authorization Code grant or Implicit grant OAuth 2.0 authentication flow. Authorization code grant type is used by confidential and public clients to exchange an auth code for an access token. Implicit grant type is only used when there's a specific reason that authorization code grant can't be used. For more information, see Understanding Amazon Cognito user pool OAuth 2.0 grants.For OpenID Connect scopes, select openid and any other OAuth scopes that you want Amazon Cognito to add in the tokens for when your users authenticate. For example, phone and email.For Custom scopes, select any custom scopes that you want to authorize for this app.Choose Save changes.For more information, see Configuring a user pool app client.(Optional) Customize the hosted web UIYou can add a custom logo or customize the CSS for the hosted web UI. For more information, see Customizing the built-in sign-in and sign-up webpages.(Optional) Construct the URL for the hosted web UIIf you want to control which parameters are included in the login URL for the hosted web UI, then construct the URL manually.In the Amazon Cognito console, choose User pools, and then choose your user pool.Under App integration, copy the Domain URL under the Domain section. Then, paste the URL into a text editor for reference.Under App clients and analytics, click your client name.Copy the Client ID to your clipboard. Then, paste the ID into a text editor for reference.Copy one of the Allowed callback URLs to your clipboard. 
Then, paste the URL into a text editor for reference.Construct the URL for the hosted web UI by pasting together the information that you just copied into this format:domainUrl/login?response_type=code&client_id=appClientId&redirect_uri=callbackUrlFor example: https://my-user-pool.auth.us-east-1.amazoncognito.com/login?response_type=code&client_id=a1b2c3d4e5f6g7h8i9j0k1l2m3&redirect_uri=https://my-website.com-or-Choose View Hosted UI in the App client section to access the default URL for the login endpoint. Then, replace parts of the URL as detailed earlier.If you turned on Authorization code grant earlier for OAuth 2.0 grant types, then using this URL prompts Amazon Cognito to return an authorization code when your users sign in. If you turned on Implicit grant for OAuth 2.0 grant types earlier and you want Amazon Cognito to return an access token instead when your users sign in, then replace response_type=code with response_type=token in the URL.Launch the hosted web UINote: If you constructed the URL for the hosted web UI manually, enter that URL in your web browser instead.In the Amazon Cognito console, choose User pools, and then choose your user pool.Under App integration, click your Client name from the App clients and analytics section.Under Hosted UI, choose View Hosted UI. The sign-in page of the hosted web UI opens in a new browser tab or window.Related informationGetting started with user poolsFollow"
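As a sketch of the URL construction described above, the following shell snippet composes the login URL from the values that you copied; the domain, client ID, and callback URL are the placeholders from the example.
DOMAIN="https://my-user-pool.auth.us-east-1.amazoncognito.com"
CLIENT_ID="a1b2c3d4e5f6g7h8i9j0k1l2m3"
CALLBACK_URL="https://my-website.com"
# Authorization code grant
echo "${DOMAIN}/login?response_type=code&client_id=${CLIENT_ID}&redirect_uri=${CALLBACK_URL}"
# Implicit grant (tokens are returned directly in the redirect)
echo "${DOMAIN}/login?response_type=token&client_id=${CLIENT_ID}&redirect_uri=${CALLBACK_URL}"
# Optionally confirm the app client's callback URLs and OAuth settings
aws cognito-idp describe-user-pool-client --user-pool-id us-east-1_EXAMPLE --client-id "${CLIENT_ID}"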
https://repost.aws/knowledge-center/cognito-hosted-web-ui
Why can't I launch EC2 instances from my copied AMI?
I copied my Amazon Machine Image (AMI) to a different account or Region. I'm not able to launch Amazon Elastic Compute Cloud (Amazon EC2) instances from the copied AMI. How do I fix this?
"I copied my Amazon Machine Image (AMI) to a different account or Region. I'm not able to launch Amazon Elastic Compute Cloud (Amazon EC2) instances from the copied AMI. How do I fix this?Short descriptionYou might not be able to launch instances from a copied AMI with an encrypted Amazon Elastic Block Store (Amazon EBS) for the following reasons:The AWS Key Management Service (KMS) customer managed key's (KMS key) key policy is missing the proper principals to allow the requesting account's access.The AWS Identity and Access Management (IAM) entity in the requesting account doesn't have the necessary KMS permissions for the volume's cross-account KMS key.ResolutionEnable cross-account access to existing KMS custom keys on the copied AMIFor detailed instructions, see How to enable cross-account access to existing custom keys in Share custom encryption keys more securely between accounts by using AWS Key Management Service.Set permissions for EC2 instances to access the KMS key1.    Open the AWS KMS console.Note: Make sure you're in the correct Region.2.    Choose Customer managed keys, and then select the appropriate key.3.    Under Key policy, scroll down to Key users. Verify that the Key users section lists all internal and external accounts and users that need access to the key.4.    If any accounts or users are missing from the Key users section, select Policy view.Note: If you've ever edited the AWS KMS key policy manually, the key policy is only available in policy (JSON) view.5.    Verify that the Allow use of the key statement in the key policy is correct. The statement must include the ARN of all accounts and users who need access to the key.The following is an example of the Allow use of the key statement in the default key policy. The Allow use of the key statement in the following example includes the following ARNs:The external AWS account containing the copied AMI.The parent account of the AMI.A user within the external account.For an overview and example of the entire default key policy, see Default key policy.{ "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root", "arn:aws:iam::111122223333:user/UserA" ] }, "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Sid": "Allow attachment of persistent resources", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::111122223333:root", "arn:aws:iam::444455556666:root", "arn:aws:iam::111122223333:user/UserA" ] }, "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": "true" } } } ]}6.    If you haven't already created the IAM policy, proceed to the next section to create and assign the policy.Create the IAM policy and attach it to your IAM user or group1.    Sign in to the IAM console with your user that has administrator permissions.2.    Choose Policies.3.    Choose Create policy.4.    Choose the JSON tab. Copy the following sample JSON policy, and then paste it into the JSON text box. 
Replace arn:aws:kms:REGION:MAINACCOUNTNUMBER:key/1a345678-1234-1234-1234-EXAMPLE with the ARN of your KMS key.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowUseOfTheKey", "Effect": "Allow", "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": [ "arn:aws:kms:REGION:MAINACCOUNTNUMBER:key/1a345678-1234-1234-1234-EXAMPLE" ] }, { "Sid": "AllowAttachmentOfPersistentResources", "Effect": "Allow", "Action": [ "kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant" ], "Resource": [ "arn:aws:kms:REGION:MAINACCOUNTNUMBER:key/1a345678-1234-1234-1234-EXAMPLE" ], "Condition": { "Bool": { "kms:GrantIsForAWSResource": true } } } ]}5.    Choose Review policy. The Policy Validator reports any syntax errors.6.    On the Review page, enter KmsKeyUsagePolicy for the policy name. Review the policy Summary to see the permissions granted by your policy, and then choose Create policy to save the policy. The new policy appears in the list of managed policies and is ready to attach to your IAM user or group.7.    In the navigation pane of the IAM console, choose Policies.8.    At the top of the policy list, in the search box, start typing KmsKeyUsagePolicy until you see your policy. Then check the box next to KmsKeyUsagePolicy in the list.9.    Choose Policy actions, and then choose Attach.10.    For Filter, choose Users.11.    In the search box, start typing username until your user is visible on the list. Then check the box next to that user in the list.12.    Choose Attach Policy.Related informationCopying an AMIEditing keysTutorial: Create and attach your first customer managed policyValidating IAM policiesFollow"
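If you prefer the AWS CLI to the console steps above, the following sketch creates the same managed policy and attaches it to a user; the local file name (kms-key-usage-policy.json), account ID, and user name are placeholders consistent with the example.
# Create the managed policy from the JSON document above, saved locally as kms-key-usage-policy.json
aws iam create-policy --policy-name KmsKeyUsagePolicy --policy-document file://kms-key-usage-policy.json
# Attach the policy to the IAM user that launches instances from the copied AMI
aws iam attach-user-policy --user-name UserA --policy-arn arn:aws:iam::111122223333:policy/KmsKeyUsagePolicy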
https://repost.aws/knowledge-center/ec2-instance-launch-copied-ami
How do I identify errors related to Data API in Amazon Redshift?
How do I identify the reason why a Data API query in Amazon Redshift failed?
"How do I identify the reason why a Data API query in Amazon Redshift failed?ResolutionAmazon Redshift Data API is asynchronous, meaning you can run long-running queries without having to wait for it to complete. When a Data API query fails, the status of the query isn't displayed immediately. To determine the reasons for failure, use the DescribeStatement action for single or multiple queries. To run the DescribeStatement, you must have the statement ID.Single queryTo run a single query against the cluster, use the ExecuteStatement action to return a statement ID:Note: The following example command uses the AWS Secrets Manager authentication method. The command runs an SQL statement against a cluster and returns an identifier to fetch the results.aws redshift-data execute-statement --region us-east-1 --secret arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn --cluster-identifier redshift-cluster-1 --sql "select * from test_table;" --database devNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.The output looks similar to the following:{ "ClusterIdentifier": "redshift-cluster-1", "CreatedAt": "2022-09-16T12:22:31.894000+05:30", "Database": "dev", "Id": "458c568d-717b-4f36-90bd-e642bfb06cbf", "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn"}The preceding SQL statement returns an ExecuteStatementOutput, which includes the statement Id. You can check the status of the query using DescribeStatement and entering the statement ID:aws redshift-data describe-statement --id 458c568d-717b-4f36-90bd-e642bfb06cbfThe output for DescribeStatement provides the following additional details:RedshiftPidQuery durationNumber of rows inSize of the result setRedshiftQueryIDThe output looks similar to the following:{ "ClusterIdentifier": "redshift-cluster-1", "CreatedAt": "2022-09-16T12:22:31.894000+05:30", "Duration": -1, "Error": "ERROR: relation \"test_table\" does not exist", "HasResultSet": false, "Id": "458c568d-717b-4f36-90bd-e642bfb06cbf", "QueryString": "select * from test_table;", "RedshiftPid": 1074727629, "RedshiftQueryId": -1, "ResultRows": -1,< "ResultSize": -1, "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn", "Status": "FAILED", "UpdatedAt": "2022-09-16T12:22:32.365000+05:30"}The "Error": section in the preceding response displays the exact error. 
In the preceding example, the error is "ERROR: relation "test_table" does not exist".Multiple queriesTo run multiple queries against the cluster, use the BatchExecuteStatement action to return a statement ID:aws redshift-data batch-execute-statement --region us-east-1 --secret-arn arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn --cluster-identifier redshift-cluster-1 --database dev --sqls "select * from test_table;" "select * from another_table;"The output looks similar to the following:{ "ClusterIdentifier": "redshift-cluster-1", "CreatedAt": "2022-09-16T12:37:16.707000+05:30", "Database": "dev", "Id": "08b4b917-9faf-498a-964f-e82a5959d1cb", "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn"}To get the status of the queries, use the DescribeStatement action with the statement ID from the preceding response:aws redshift-data describe-statement --id 08b4b917-9faf-498a-964f-e82a5959d1cbThe output looks similar to the following:{ "ClusterIdentifier": "redshift-cluster-1", "CreatedAt": "2022-09-16T12:37:16.707000+05:30", "Duration": 0, "Error": "Query #1 failed with ERROR: relation \"test_table\" does not exist", "HasResultSet": false, "Id": "08b4b917-9faf-498a-964f-e82a5959d1cb", "RedshiftPid": 1074705048, "RedshiftQueryId": 0, "ResultRows": -1, "ResultSize": -1, "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:myuser-secret-hKgPWn", "Status": "FAILED", "SubStatements": [ { "CreatedAt": "2022-09-16T12:37:16.905000+05:30", "Duration": -1, "Error": "ERROR: relation \"test_table\" does not exist", "HasResultSet": false, "Id": "08b4b917-9faf-498a-964f-e82a5959d1cb:1", "QueryString": "select * from test_table;", "RedshiftQueryId": -1, "ResultRows": -1, "ResultSize": -1, "Status": "FAILED", "UpdatedAt": "2022-09-16T12:37:17.263000+05:30" }, { "CreatedAt": "2022-09-16T12:37:16.905000+05:30", "Duration": -1, "Error": "Connection or a prior query failed.", "HasResultSet": false, "Id": "08b4b917-9faf-498a-964f-e82a5959d1cb:2", "QueryString": "select * from another_table;", "RedshiftQueryId": 0, "ResultRows": -1, "ResultSize": -1, "Status": "ABORTED", "UpdatedAt": "2022-09-16T12:37:17.263000+05:30" } ], "UpdatedAt": "2022-09-16T12:37:17.288000+05:30"}The preceding output displays the status of all sub-statements for a multi-statement query. The "Error": section in the preceding response displays the exact error for each sub-statement.To troubleshoot problems with the Data API, see Troubleshooting issues for Amazon Redshift Data API.Monitoring Data API eventsYou can monitor Data API events using Amazon EventBridge. This information can be sent to an AWS Lambda function that's integrated with Amazon Simple Notification Service (Amazon SNS) to send notifications. For more information, see Building an event-driven application with AWS Lambda and the Amazon Redshift Data API.Follow"
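Because the Data API is asynchronous, a script typically polls DescribeStatement until the statement reaches a terminal state. The following is a minimal shell sketch; the statement ID is the placeholder from the example above.
STATEMENT_ID="458c568d-717b-4f36-90bd-e642bfb06cbf"
while true; do
  STATUS=$(aws redshift-data describe-statement --id "$STATEMENT_ID" --query Status --output text)
  echo "Status: $STATUS"
  case "$STATUS" in
    FINISHED|FAILED|ABORTED) break ;;
  esac
  sleep 5
done
# Print the error text, if any, after the statement reaches a terminal state
aws redshift-data describe-statement --id "$STATEMENT_ID" --query Error --output text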
https://repost.aws/knowledge-center/redshift-identify-data-api-errors
How can I grant my Amazon EC2 instance access to an Amazon S3 bucket?
I'm unable to access an Amazon Simple Storage Service (Amazon S3) bucket from my Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I activate read/write access to S3 buckets from an EC2 instance?
"I'm unable to access an Amazon Simple Storage Service (Amazon S3) bucket from my Amazon Elastic Compute Cloud (Amazon EC2) instance. How can I activate read/write access to S3 buckets from an EC2 instance?Short descriptionTo connect to your S3 buckets from your EC2 instances, you must do the following:1.    Create an AWS Identity and Access Management (IAM) profile role that grants access to Amazon S3.2.    Attach the IAM instance profile to the instance.3.    Validate permissions on your S3 bucket.4.    Validate network connectivity from the EC2 instance to Amazon S3.5.    Validate access to S3 buckets.ResolutionCreate an IAM instance profile that grants access to Amazon S31.    Open the IAM console.2.    Choose Roles, and then choose Create role.3.    Select AWS Service, and then choose EC2 under Use Case.Note: Creating an IAM role from the console with EC2 selected as the trusted entity automatically creates an IAM instance profile with the same name as the role name. However, if the role is created using the AWS Command Line Interface (AWS CLI) or from the API, then an instance profile isn't automatically created. For more information, refer to I created an IAM role, but the role doesn't appear in the dropdown list when I launch an instance. What do I do?4.    Select Next: Permissions.5.    Create a custom policy that provides the minimum required permissions to access your S3 bucket. For instructions on creating custom policies, see Writing IAM policies: how to grant access to an Amazon S3 bucket and Identity and access management in Amazon S3.Note: Creating a policy with the minimum required permissions is a security best practice. However, to allow EC2 access to all your Amazon S3 buckets, use the AmazonS3ReadOnlyAccess or AmazonS3FullAccess managed IAM policy.6.    Select Next: Tags, and then select Next: Review.7.    Enter a Role name, and then select Create role.Attach the IAM instance profile to the EC2 instance1.    Open the Amazon EC2 console.2.    Choose Instances.3.    Select the instance that you want to attach the IAM role to.4.    Choose the Actions tab, choose Security, and then choose Modify IAM role.5.    Select the IAM role that you just created, and then choose Save. The IAM role is assigned to your EC2 instance.Validate permissions on your S3 bucket1.    Open the Amazon S3 console.2.    Select the S3 bucket that you want to verify the policy for.3.    Choose Permissions.4.    Choose Bucket Policy.5.    Search for statements with Effect: Deny.6.    In your bucket policy, edit or remove any Effect: Deny statements that are denying the IAM instance profile access to your bucket. For instructions on editing policies, see Editing IAM policies.Validate network connectivity from the EC2 instance to Amazon S3For your EC2 instance to connect to S3 endpoints, the instance must be one of the following:EC2 instance with a public IP address and a route table entry with the default route pointing to an Internet GatewayPrivate EC2 instance with a default route through a NAT gatewayPrivate EC2 instance with connectivity to Amazon S3 using a gateway VPC endpointTo troubleshoot connectivity between a private EC2 instance and an S3 bucket, see Why can’t I connect to an S3 bucket using a gateway VPC endpoint?Validate access to S3 buckets1.    Install the AWS CLI on your EC2 instance.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.2.    Verify access to your S3 buckets by running the following command. 
Replace DOC-EXAMPLE-BUCKET with the name of your S3 bucket.aws s3 ls s3://DOC-EXAMPLE-BUCKETNote: S3 objects encrypted with an AWS Key Management Service (AWS KMS) key must have kms: Decrypt permissions granted in the following:The IAM role attached to the instance.The KMS key policy.If these permissions aren't granted, then you can't copy or download the S3 objects. For more information, see Do I need to specify the AWS KMS key when I download a KMS-encrypted object from Amazon S3?Related informationWhy can’t I connect to an S3 bucket using a gateway VPC endpoint?Follow"
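The console steps above can also be scripted. The following is a minimal sketch using the AmazonS3ReadOnlyAccess managed policy mentioned earlier; the role name (S3AccessRole) and instance ID are placeholders, and roles created outside the console need the instance profile created explicitly.
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
aws iam create-role --role-name S3AccessRole --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name S3AccessRole --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Create the instance profile, add the role to it, and attach it to the instance
aws iam create-instance-profile --instance-profile-name S3AccessRole
aws iam add-role-to-instance-profile --instance-profile-name S3AccessRole --role-name S3AccessRole
aws ec2 associate-iam-instance-profile --instance-id i-1234567890abcdef0 --iam-instance-profile Name=S3AccessRole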
https://repost.aws/knowledge-center/ec2-instance-access-s3-bucket
Why is the core node in my Amazon EMR cluster running out of disk space?
"I'm running Apache Spark jobs on an Amazon EMR cluster, and the core node is almost out of disk space."
"I'm running Apache Spark jobs on an Amazon EMR cluster, and the core node is almost out of disk space.ResolutionDetermine which core nodes are unhealthyNodes that have at least one Amazon Elastic Block Store (Amazon EBS) volume attached are considered unhealthy if they hit more than 90% disk utilization. To determine which nodes might have reached 90% disk utilization, do the following:1.    Check the Amazon CloudWatch metric MRUnhealthyNodes. This metric indicates the number of unhealthy nodes of an EMR cluster.Note: You can create a CloudWatch Alarm to monitor the MRUnhealthyNodes metric.2.    Connect to the primary node and access the instance controller log at /emr/instance-controller/log/instance-controller.log. In the instance controller log, search for InstanceJointStatusMap to identify which nodes are unhealthy.For more information, see High disk utilization in How do I resolve ExecutorLostFailure "Slave lost" errors in Spark on Amazon EMR?3.    Log in to the core nodes and then run the following command to determine if a mount has high utilization:df -hRemove unnecessary local and temporary Spark application filesWhen you run Spark jobs, Spark applications create local files that consume the rest of the disk space on the core node. If the df -h command shows that /mnt, for example, is using more than 90% disk space, check which directories or files have high utilization.Run the following command on the core node to see the top 10 directories that are using the most disk space:cd /mntsudo du -hsx * | sort -rh | head -10If the /mnt/hdfs directory has high utilization, check the HDFS usage and remove any unnecessary files, such as log files. Reducing the retention period helps in cleaning the log files from HDFS automatically.hdfs dfsadmin -reporthadoop fs -du -s -h /path/to/dirReduce the retention period for Spark event and YARN container logsA common cause of HDFS usage is the /var/log directory. The /var/log directory is where log files such as Spark event logs and YARN container logs are stored. You can change the retention period for these files to save space.The following example command displays the /var/log/spark usage.Note: /var/log/spark is the default Spark event log directory.hadoop fs -du -s -h /var/log/sparkReduce the default retention period for Spark job history filesSpark job history files are located in /var/log/spark/apps by default. When the file system history cleaner runs, Spark deletes job history files older than seven days. To reduce the default retention period, do the following:On a running cluster:1.    Connect to the primary node using SSH.2.    Add or update the following values in /etc/spark/conf/spark-defaults.conf. The following configuration runs the cleaner every 12 hrs. The configuration clear files that are more than 1 day old. You can customize this time period for your individual use case in the spark.history.fs.cleaner.internval and spark.history.fs.cleaner.maxAge parameters.------spark.history.fs.cleaner.enabled truespark.history.fs.cleaner.interval 12hspark.history.fs.cleaner.maxAge 1d------3.    Restart the Spark History Server.During cluster launch:Use the following configuration. 
You can customize the time period for your individual use case in the spark.history.fs.cleaner.internval and spark.history.fs.cleaner.maxAge parameters.{"Classification": "spark-defaults","Properties": {"spark.history.fs.cleaner.enabled":"true","spark.history.fs.cleaner.interval":"12h","spark.history.fs.cleaner.maxAge":"1d" }}For more information on these parameters, see Monitoring and instrumentation in the Spark documentation.Reduce the default retention period of YARN container logsSpark application logs, which are the YARN container logs for your Spark jobs, are located in /var/log/hadoop-yarn/apps on the core node. Spark moves these logs to HDFS when the application is finished running. By default, YARN keeps application logs on HDFS for 48 hours. To reduce the retention period:1.    Connect to the primary, core, or task nodes using SSH.2.    Open the /etc/hadoop/conf/yarn-site.xml file on each node in your Amazon EMR cluster (primary, core, and task nodes).3.    Reduce the value of the yarn.log-aggregation.retain-seconds property on all nodes.4.    Restart the ResourceManager daemon. For more information, see Viewing and restarting Amazon EMR and application processes.You can also reduce the retention period by reconfiguring the cluster. For more information, see Reconfigure an instance group in a running cluster.Reduce /mnt/yarn usageIf the /mnt/yarn directory is highly utilized, adjust the user cache retention or scale the EBS volumes on the node. For more information, see How can I prevent a Hadoop or Spark job's user cache from consuming too much disk space in Amazon EMR?Resize the cluster or scale Amazon EMRAdd more core nodes to mitigate HDFS space issues. And, add any of the core or task nodes if directories other than HDFS directories are getting full. For more information, see Scaling cluster resources.You can also extend the EBS volumes in existing nodes or use a dynamic scaling script. For more information, see the following:How do I resolve "no space left on device" stage failures in Spark on Amazon EMR?Dynamically scale up storage on Amazon EMR clustersRelated informationConfigure cluster hardware and networkingHDFS configurationWork with storage and file systemsFollow"
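To check the MRUnhealthyNodes metric mentioned in the first step from the AWS CLI, you can use a sketch similar to the following; the cluster ID and time window are placeholders.
aws cloudwatch get-metric-statistics \
  --namespace AWS/ElasticMapReduce \
  --metric-name MRUnhealthyNodes \
  --dimensions Name=JobFlowId,Value=j-EXAMPLECLUSTER \
  --start-time 2022-09-16T00:00:00Z --end-time 2022-09-16T06:00:00Z \
  --period 300 --statistics Maximum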
https://repost.aws/knowledge-center/core-node-emr-cluster-disk-space
Why is my Amazon EFS file system read-only?
"My Amazon Elastic File System (Amazon EFS) file system is available and mounted correctly on my Amazon Elastic Compute Cloud (Amazon EC2) instance, but I can't write to it. How do I fix this?"
"My Amazon Elastic File System (Amazon EFS) file system is available and mounted correctly on my Amazon Elastic Compute Cloud (Amazon EC2) instance, but I can't write to it. How do I fix this?Short descriptionThe following are two common issues that prevent you from writing to your file system:The mount option in the /etc/fstab file is set to read-only access.The associated AWS Identity and Access Management (IAM) policy indicates read-only access, or root access disabled.ResolutionNote: This resolution uses the Amazon EFS mount helper. The Amazon EFS mount helper is preinstalled on Amazon Linux. If you're using another distribution, see Installing the amazon-efs-utils package on other Linux distributions.Verify that mount options are correct in the /etc/fstab file1.    Run the following command to check the current mount options for the file system:$ mount -t nfs4In the following example output, the variable ro indicates that the file system currently allows read-only access.file-system-id.efs.region.amazonaws.com:/ on /efs type nfs4 (ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.0.2.0,local_lock=none,addr=192.0.0.0)2.    Change the mount parameter to rw (read/write access) in the /etc/fstab file using an editing tool such as vi:Note: Replace file-system-id with the ID of your file system.file-system-id:/ efs-mount-point efs rw,_netdev 0 03.    Run the following command to unmount and remount the file system.$ sudo mount -o remount,rw /efs -t efs && mount -t nfs4file-system-id.efs.region.amazonaws.com:/ on /efs type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=190.0.2.0,local_lock=none,addr=190.0.2.0)4.    Create or edit a file in the file system to confirm that you can write to the file system.Verify that permissions are set correctly1.    Open the Amazon EFS console.Note: Make sure that you're in the same Region as your Amazon EFS file system.2.    Select the file system you want to check, and then choose View details.3.    On the File system policy tab, select Edit.4.    Uncheck the following options, if selected:Prevent root access by defaultEnforce read-only access by default5.    Select Set policy.6.    Select Save policy.7.    Run the umount command to unmount the file system.$ sudo umount /efs8.    Run the mount command to mount the file system again to apply the changes.Note: Replace file-system-id with the ID of your file system.$ sudo mount -t efs -o iam fs-file-system-id /efs9.    Add the following line to the /etc/fstab to make the new mount persistent after reboot using an editing tool such as vi:Mount with IAM authorization to an instance that has an instance profile:file-system-id:/ efs-mount-point efs _netdev,iam 0 0Mount with IAM authorization to a Linux instance using a credentials file:file-system-id:/ efs-mount-point efs _netdev,iam,awsprofile=namedprofile 0 0Mount using an EFS access point:file-system-id efs-mount-point efs _netdev,accesspoint=access-point-id 0 0For more information, see Using /etc/fstab to mount automatically.10.    Create or edit a file in the file system to confirm that you can write to the file system.Related informationNew for Amazon EFS - IAM authorization and access pointsMounting EFS file systems from another account or VPCFollow"
https://repost.aws/knowledge-center/efs-enable-read-write-access
"I configured Amazon CloudWatch to export log data to Amazon S3, but the log data is either missing or invalid. How do I resolve this issue?"
"I configured Amazon CloudWatch to export log data to Amazon Simple Storage Service (Amazon S3) as described at Exporting log data to Amazon S3 using the AWS Command Line Interface (AWS CLI). But despite completing these steps, I can't locate any useful log file data at the specified Amazon S3 destination. What do I need to do?"
"I configured Amazon CloudWatch to export log data to Amazon Simple Storage Service (Amazon S3) as described at Exporting log data to Amazon S3 using the AWS Command Line Interface (AWS CLI). But despite completing these steps, I can't locate any useful log file data at the specified Amazon S3 destination. What do I need to do?Short descriptionThis issue occurs because you must specify the time interval for the log data using timestamps expressed as the number of milliseconds that have elapsed since Jan 1, 1970 00:00:00 UTC.ResolutionExport CloudWatch log data to Amazon S3 by specifying the time interval for the log data using starting and ending timestamps that are expressed in milliseconds.For example, to export CloudWatch log data to an Amazon S3 bucket or folder for the previous two-hour period, use the following syntax:aws logs create-export-task --task-name "example-task" --log-group-name "/var/logs/example-logs" --from $(($(date -d "-2 hours" +%s%N)/1000000)) --to $(($(date +%s%N)/1000000)) --destination " log_bucket" --destination-prefix "example-logs"Related informationExporting log data to Amazon S3Linux command to get time in millisecondsExporting log data to Amazon S3 using the AWS CLIExporting log data to Amazon S3 using the CloudWatch consoleFollow"
https://repost.aws/knowledge-center/missing-invalid-cloudwatch-log-data
How can I migrate databases from EC2 instances or on-premises VMs to RDS for SQL Server?
I want to migrate databases from an Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises Microsoft SQL Server instance to my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. What are the available options for migrating the data?
"I want to migrate databases from an Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises Microsoft SQL Server instance to my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. What are the available options for migrating the data?ResolutionMethods for migrating dataNative SQL Server backup and restoreYou can migrate the SQL Server database from an on-premises or EC2 instance to Amazon RDS for SQL Server instance using native backup and restore.1.    Create an Amazon Simple Storage Service (Amazon S3) bucket to store the backup from the source instance. The S3 bucket must be in the same Region as the RDS instance.2.    Create the AWS Identity and Access Management (IAM) role to access the bucket.3.    Add the SQLSERVER_BACKUP_RESTORE option to the option group associated with the RDS for SQL Server instance.4.    Create a backup from the source instance (on-premises or EC2), and then copy it to the S3 bucket you created in step 1.5.    Run the following script to restore the backup to the RDS for SQL Server instance:exec msdb.dbo.rds_restore_database@restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name file_name_and_extension';6.    Run the following script to back up the RDS instance database to S3:exec msdb.dbo.rds_backup_database @source_db_name='database_name',@s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name_and_extension', @overwrite_S3_backup_file=1;Note: You can also backup and restore differential backups.AWS Database Migration Service (AWS DMS)1.    Verify the pre-requisites and limitations of using SQL Server as a source or target for AWS DMS:Limitations on using SQL Server as a source for AWS DMSLimitations on using SQL Server as a target for AWS Database Migration Service2.    Create a DMS replication instance.3.    Create source and target endpoints using DMS.4.    Create a migration task.Transactional replicationYou can set up transactional replication from on-premises or EC2 SQL Server instances to RDS for SQL Server instance. RDS for SQL Server instance can only be made as a subscriber with push subscription from the On-premises or EC2 SQL Server instance as Publisher-Distributor.For step-by-step instructions for setting up transaction replication from an on-premises or EC2 SQL Server instance, see the following:Migrating to Amazon RDS for SQL Server using transactional replication: Part 1Migrating to Amazon RDS for SQL Server using transactional replication: Part 2Backup Package (.bacpac) fileThe .bacpac file consists of copied metadata and the data compressed to a file. This approach is the best choice for databases that are around 200 GB.You can create a .bacpac file using Export/Import or using the SQLPackage.exe (command line) utility.For more information on the .bacpac file, see Migrate SQL Server database from an Azure SQL database to Amazon RDS for SQL Server using .bacpac method.Methods for importing dataGenerate and Publish Script WizardIf your database is smaller than 1GB, you can use the Generate and Publish Script Wizard. For larger databases, you can script the schema of the database using the Import and Export Wizard or Bulk copy methods.For more information on the Generate and Publish Script Wizard, see How to: Generate a script (SQL Server Management Studio) in the Microsoft SQL Server documentation.Note: Make sure that you select Save scripts to specific location, Advanced on the Set Scripting Option page. 
The Advanced setting provides additional options for including or excluding object in the table during import and export.Import and Export WizardThe Import and Export Wizard creates an integration package. The integration package is used to copy data from your on-premises or EC2 SQL Server database to the RDS for SQL Server instance. You can filter the specific tables you want to copy to the RDS instance.For more details on the Import and Export Wizard, see How to: Run the SQL Server Import and Export Wizard in the Microsoft SQL Server documentation.Note: When running the Import and Export Wizard, make sure you choose the following options for the Destination RDS for SQL Server instance:For Server Name, enter the name of the endpoint for the RDS DB instance.For Authentication mode, select SQL Server Authentication.For the User name and Password, enter the master user that you created in the RDS instance.Bulk Copy Program utilityThe Bulk Copy Program (bcp) is a command line utility that's used to bulk copy data between SQL Server instances. You can use the bcp utility to import large sets of data to a SQL Server instance or export to a file.The following are examples of the IN and OUT commands:OUT: Use this command to export or dump the records from a table into a file:bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U username -P password -b 10000The preceding code includes the following options:-n: specifies that the bulk copy uses the native data types of the data to be copied.-S: specifies the SQL Server instance that the bcp utility connects to.-U: specifies the user name of the account to log in to the SQL Server instance.-P: specifies the password for the user specified by -U.-b: specifies the number of rows per batch of imported data.IN: Use this command to import all records from the dump file to the existing table. The table must be created before running the bcp command.bcp dbname.schema_name.table_name in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000For more information, see bcp utility in the Microsoft SQL Server documentation.Follow"
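Steps 2 and 3 of the native backup and restore method can also be scripted. This is a sketch only; the option group name, engine name and version, IAM role ARN, and DB instance identifier are assumptions that you would replace with your own values, and you can track the restore afterward with the rds_task_status stored procedure.
aws rds create-option-group \
  --option-group-name sqlserver-backup-restore \
  --engine-name sqlserver-se --major-engine-version 15.00 \
  --option-group-description "Native backup and restore"
aws rds add-option-to-option-group \
  --option-group-name sqlserver-backup-restore \
  --options "OptionName=SQLSERVER_BACKUP_RESTORE,OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::123456789012:role/rds-s3-backup-role}]" \
  --apply-immediately
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --option-group-name sqlserver-backup-restore --apply-immediately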
https://repost.aws/knowledge-center/rds-sql-server-migrate-databases-to
"When I activate default encryption on my Amazon S3 bucket, do I need to update my bucket policy so that objects in the bucket are encrypted?"
I activated default encryption on my Amazon Simple Storage Service (Amazon S3) bucket. Do I need to change my bucket policy to make sure that objects stored in my bucket are encrypted?
"I activated default encryption on my Amazon Simple Storage Service (Amazon S3) bucket. Do I need to change my bucket policy to make sure that objects stored in my bucket are encrypted?ResolutionNo, you don't need to update your bucket policy to make sure that objects stored in my bucket are encrypted. If you activate default encryption, and a user uploads an object without encryption information, then Amazon S3 uses the default encryption method that you specify. If a user specifies encryption information in the PUT request, then Amazon S3 uses the encryption specified in the request.This behavior applies to encryption with keys that are:Managed by Amazon S3.Labeled as SSE-S3 keys.Managed by AWS Key Management Service (AWS KMS).Labeled as SSE-KMS keys.For more information on encryption behavior after you activate default encryption, see Setting default server-side encryption behavior for Amazon S3 buckets.Important: After you activate default encryption using a custom AWS KMS key, you must grant users additional permissions to be able to access objects. Grant those users permissions to use the key on the key policy or on their AWS Identity and Access Management (IAM) policy. For instructions on how to grant these permissions, see My Amazon S3 bucket has default encryption using a custom AWS KMS key. How can I allow users to download from and upload to the bucket? For cross-account operations, see Using SSE-KMS encryption for cross-account operations.Related informationKey policies in AWS KMSAWS KMS conceptsFollow"
https://repost.aws/knowledge-center/bucket-policy-encryption-s3
Why are my list queries returning the wrong number of items when I use DynamoDB with AWS AppSync?
My list queries are returning the wrong number of items when I use Amazon DynamoDB with AWS AppSync. How do I resolve this?
"My list queries are returning the wrong number of items when I use Amazon DynamoDB with AWS AppSync. How do I resolve this?Short descriptionWhen you use a list query with AWS AppSync and Amazon DynamoDB, DynamoDB performs a Scan operation on the table and returns a certain number of evaluated items. If you provide a FilterExpression operation on the request, then DynamoDB applies the FilterExpression to only these evaluated items.The number of items that the Scan checks depends on the limit variable that you applied to the query. If no limit is applied, then DynamoDB uses the default limit that's configured in the request mapping template. The Scan returns only the items that are evaluated and match the filter expression. To return the expected number of items, you must adjust the limit variable.You can increase the default limit on your mapping template, or add a limit variable to the list GraphQL query. However, the maximum number of items that you can evaluate is 1 MB. If your Scan operation exceeds 1 MB of data, then you can use pagination to get the rest of the data. To help you query a database that has thousands of items, you can index your data to filter results by a common field.ResolutionIncrease the default limitIn the following example mapping template for an AWS Amplify generated API, the default limit is 100:#set( $limit = $util.defaultIfNull($context.args.limit, 100) )#set( $ListRequest = { "version": "2018-05-29", "limit": $limit} )To evaluate more items, change the default limit in the mapping template. Or, add a limit variable to the list query. For example, in the following list query, the limit is set to 1000:query MyQuery { listEmployees(limit: 1000) { items { id name company } }}Paginate with AWS AppSyncTo paginate with AWS AppSync, add a nextToken argument to your GraphQL query.Example query:query MyQuery { listEmployees{ items { id name company } nextToken }}Note: The nextToken value that the query returns is either null or a long string that starts with ey and ends with =. This value represents the last evaluated key. If the value is null, then there are no more items in the table to evaluate. If it's a long string, then there are more items to evaluate.For example:"nextToken": "eyJ2ZXJzaW9uIjoyLCJ0b2tlbiI6IkFRSUNBSGg5OUIvN3BjWU41eE96NDZJMW5GeGM4WUNGeG1acmFOMUpqajZLWkFDQ25BRkQxUjREaVVxMkd1aDZNZ2ZTMmhPMUFBQUNIVENDQWhrR0NTcUdTSWIzRFFFSEJxQ0NBZ293Z2dJR0FnRUFNSUlCL3dZSktvWklodmNOQVFjQk1CNEdDV0NHU0FGbEF3UUJMakFSQkF4c0RFY1ZZUGpaRDFxbDcxNENBUkNBZ2dIUWkwaGtJYytoU04vRFMzL3VQT2ZDMnpIV1dBVkg4LzU3TWFwMGZWSHVackt1VmV4emRvTVQrNml6VC9RdDRsSVNGN1pmc3NWTHVvcnJpRE1RZVE5ckNyd3J4dmNOY3ZZUzhTc21PRFFkaTUreVhQcDJ1OENaK29Sd2wrV3VoOGJ0YlBpQXQydjRNdmw2L09jRzRHV2RxRmFjeTFDRjRyK2FPd29velRTV3NqMTd4OUpwVi93cHVYc2tTN2R5TmYxa3JJS3hSL3BMWlY5M3JPSlVocEpDV2xEL3Y1UU5JdGJoOWpyaTI3N09LbUZsVlh3bDRna3hDa1kzeGZMNjZrM2dZY0hpbHlUOE1GUnBLU0VCTFE3RGEzSlJtUG8xUkJvQ051K3dBTkwwd09Vckp0N1BFb0QvTVRrenZDV1lCTXhVaUhaRDdrM3Y5a2FJS2NScHY0ajhuTDJ2Nk9EZ3plTjgwN1RtRFViM21rRUsrakRRcjUvd3ZQVW56cGFaN1RnR21yT21FaTlGQklOUnl6dk9rdDRWazZEaVU3RCtNYUJSdm5iNnR0VklPa2lDdFlhODRqenhlOFlFRUZGOElyTksrQm9yL28vdktxMVczSUxsU1VWWFd0N0hPWjV4TDBudHVTeGlBdW9ZK1Y0NEkzMXlPQkJ1T1AwMVpUek1TdGUvZCtIT1RRUEt2SGVGanF5Y0tpNGNTQUdZN3BobGs5eWJJem9hOTM0YldJOUFyRmF0WDY4UnkzTkF4cWNCbzh4ZklxZGZNN3Rlam02NldMT0Z6T3F6MDRrK1B0K0lXdWhOeS9CWEN2YXh2dk09In0="To get the rest of the items in the table, run another query with the nextToken as a query variable. 
Continue to include the nextToken variable until all the items are evaluated.Example query:query MyQuery { listEmployees(nextToken:"eyJ......="){ items { id name company } nextToken }}Index your dataIn the following example query, the filter expression is set to return only the employees whose company equals AnyCompany:query MyQuery { listEmployees(filter: {company: {eq: "AnyCompany"}}) { items { id name company } nextToken }}To list all employees based on their company, create a global secondary index for the field to query against. For example, in Amplify, you can use the @index directive to define secondary indexes:type Employee @model{ id:ID name : String email : AWSEmail company: String @index(name:"employeeByCompany", queryField: "listEmployeesByCompany")}In the following example query, a global secondary index (GSI) named employeeByCompany is set for the company field, and the query is defined as listEmployeesByCompany:query MyQuery { listEmployeesByCompany(company: "AnyCompany") { items { id name company } nextToken }}You can specify a limit variable to control the number of items that are evaluated. For example, in the following query, the limit variable is set to 5:query MyQuery { listEmployeesByCompany(company: "AnyCompany",limit:5) { items { id name company } nextToken }}Note: The limit variable caps the number of evaluated items even when you apply additional filter expressions. For example, in the following query, the limit variable is set to 5 and the filter matches Mary Major:query MyQuery { listEmployeesByCompany(company: "AnyCompany", limit: 5, filter: {name: {eq: "Mary Major"}}) { items { id name company } nextToken }}The preceding query evaluates at most 5 items and returns only the ones that equal Mary Major. However, there might be more than 5 Mary Majors in the table.Note: Indexed queries also return a maximum of 1 MB of data at a time. For larger indexed queries, you can use pagination.Follow"
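If you call the GraphQL API outside the AppSync console, the limit and nextToken arguments can be passed as variables. The following curl sketch assumes API key authorization; the endpoint URL and API key are placeholders.
APPSYNC_URL="https://example1234567890.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY="da2-exampleapikey"
QUERY='query MyQuery($limit: Int, $nextToken: String) { listEmployees(limit: $limit, nextToken: $nextToken) { items { id name company } nextToken } }'
# First page: nextToken is null; pass the returned nextToken on later calls
curl -s "$APPSYNC_URL" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $API_KEY" \
  -d "{\"query\": \"$QUERY\", \"variables\": {\"limit\": 1000, \"nextToken\": null}}"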
https://repost.aws/knowledge-center/appsync-wrong-query-item-number-dynamodb
How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?
I want to terminate HTTPS traffic on Amazon Elastic Kubernetes Service (Amazon EKS) workloads with AWS Certificate Manager (ACM).
"I want to terminate HTTPS traffic on Amazon Elastic Kubernetes Service (Amazon EKS) workloads with AWS Certificate Manager (ACM).Short descriptionTo terminate HTTPS traffic at the Elastic Load Balancing level for a Kubernetes Service object, you must:Request a public ACM certificate for your custom domain.Publish your Kubernetes service with the type field set to LoadBalancer.Specify the Amazon Resource Name (ARN) of your ACM certificate on your Kubernetes service using the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation from the Kubernetes website. The annotation allows the Kubernetes API server to associate that certificate with the Classic Load Balancer when it's created.Associate your custom domain with the load balancer.The following resolution assumes that:You have an active Amazon EKS cluster with associated worker nodes.You are working with a Classic Load Balancer.Note: To use an Application Load Balancer, you must deploy application load balancing on Amazon EKS.Note: Terminating TLS connections on a Network Load Balancer is supported only in Kubernetes 1.15 or greater. For more information, see Support TLS termination with AWS NLB on the Kubernetes website.Resolution1.    Request a public ACM certificate for your custom domain.2.    Identify the ARN of the certificate that you want to use with the load balancer's HTTPS listener.3.    To identify the nodes registered to your Amazon EKS cluster, run the following command in the environment where kubectl is configured:$ kubectl get nodes4.    In your text editor, create a deployment.yaml manifest file based on the following:apiVersion: apps/v1kind: Deploymentmetadata: name: echo-deploymentspec: replicas: 3 selector: matchLabels: app: echo-pod template: metadata: labels: app: echo-pod spec: containers: - name: echoheaders image: k8s.gcr.io/echoserver:1.10 imagePullPolicy: IfNotPresent ports: - containerPort: 80805.    To create a Kubernetes Deployment object, run the following command:$ kubectl create -f deployment.yaml6.    To verify that Kubernetes pods are deployed on your Amazon EKS cluster, run the following command:$ kubectl get podsNote: The pods are labeled app=echo-pod. You can use this label as a selector for the Service object to identify a set of pods.7.    In your text editor, create a service.yaml manifest file based on the following example. Then, edit the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation to provide the ACM ARN from step 2.apiVersion: v1kind: Servicemetadata: name: echo-service annotations: # Note that the backend talks over HTTP. service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http # TODO: Fill in with the ARN of your certificate. service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id} # Only run SSL on the port named "https" below. service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"spec: selector: app: echo-pod ports: - name: http port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8080 type: LoadBalancer8.    To create a Service object, run the following command:$ kubectl create -f service.yaml9.    To return the DNS URL of the service of type LoadBalancer, run the following command:$ kubectl get serviceNote: If you have numerous active services running in your cluster, then get the URL of the correct service of type LoadBalancer from the command output.10.    Open the Amazon Elastic Compute Cloud (Amazon EC2) console, and then choose Load Balancers.11.    
Select your load balancer, and then choose Listeners.12.    For Listener ID, confirm that your load balancer port is set to 443.13.    For SSL Certificate, confirm that the SSL certificate that you defined in the YAML file is attached to your load balancer.14.    Associate your custom domain name with your load balancer name.15.    In a web browser, test your custom domain with the following HTTPS protocol:https://yourdomain.comA successful response returns a webpage with details about the client. This response includes the hostname, pod information, server values, request information, and request headers.Important: You can't install certificates with 4096-bit RSA keys or EC keys on your load balancer through integration with ACM. To use the keys with your load balancer, you must upload certificates with 4096-bit RSA or EC keys to AWS Identity and Access Management (IAM). Then, use the corresponding ARN with the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation.Follow"
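Steps 1 and 2 can also be done from the AWS CLI, as in the following sketch; the domain name is a placeholder, and the returned ARN is what you paste into the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation.
# Request a public certificate for the custom domain (DNS validation)
aws acm request-certificate --domain-name www.example.com --validation-method DNS
# After validation, look up the issued certificate's ARN
aws acm list-certificates --certificate-statuses ISSUED \
  --query "CertificateSummaryList[?DomainName=='www.example.com'].CertificateArn" --output text
# Deploy the manifests described above
kubectl apply -f deployment.yaml -f service.yaml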
https://repost.aws/knowledge-center/terminate-https-traffic-eks-acm
How do I delete my Network Load Balancer that's associated with VPC endpoint services (PrivateLink)?
I have a Network Load Balancer that's associated with Amazon Virtual Private Cloud (Amazon VPC) endpoint services (PrivateLink). How do I delete the Network Load Balancer?
"I have a Network Load Balancer that's associated with Amazon Virtual Private Cloud (Amazon VPC) endpoint services (PrivateLink). How do I delete the Network Load Balancer?Short descriptionIf you try to delete a Network Load Balancer that's associated with PrivateLink, you receive the error Network Load Balancer is currently associated with another service. Before you can delete a Network Load Balancer, you must first disassociate it from any associated VPC endpoint services.ResolutionFirst verify that the Network Load Balancer that you're trying to delete isn't associated with VPC endpoint services. If the load balancer is associated with VPC endpoint services, make sure to follow the below steps:Reject the endpoint connections on the endpoint service.Disassociate the Network Load Balancer from the endpoint service.Delete the Network Load Balancer.You can use the Amazon VPC console or the AWS Command Line Interface (AWS CLI) to perform these steps.Using the Amazon VPC console1.    Open the Amazon VPC console.2.    Choose Endpoint services.3.    Enter the Network Load Balancer's ARN in the Filter field to search for endpoint services.4.    Select the Endpoint connections tab to determine which endpoint connections are attached to your endpoint service.5.    For all the connections that aren't in the Rejected state, choose Actions, Reject endpoint connection request.6.    Select the Load Balancers tab.7.    Choose Associate or Disassociate Load Balancers to disassociate your Network Load Balancer from the endpoint service.8.    Uncheck the Network Load Balancer's name under Available Load Balancers and then select Save changes.Note: If there are no other load balancers associated with this endpoint service, then you receive the error message Must select at least one Load Balancer. If you receive this error, delete the VPC endpoint service to remove the association.9.    To delete the Network Load Balancer, see Delete a Network Load Balancer.Using the AWS CLINote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    Run the describe-vpc-endpoint-service-configurations command to describe the VPC endpoint service configurations:Note: In the following example command, replace us-east-1 with the Region where your Network Load Balancer is located.aws ec2 describe-vpc-endpoint-service-configurations --region us-east-1 | grep -B 1 -A 3 /net/The preceding command filters the Network Load Balancer ARN and the associated endpoint service name in the Region. In the command output, search for the Network Load Balancer's ARN (or use a more specific filter in grep). If you find a match, then the Network Load Balancer is associated with VPC endpoint services. Note the service ID of the VPC endpoint service.Example output$ aws ec2 describe-vpc-endpoint-service-configurations --region us-east-1 | grep -B 1 -A 3 /net/ "NetworkLoadBalancerArns": [ "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/NLB-Test/ca76ff83bdfc24c6" ], "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-1234abc1234abc123", "Tags": [2.    
Reject the endpoint connections on the service using the reject-vpc-endpoint-connections command, as shown in the following example:aws ec2 reject-vpc-endpoint-connections --service-id vpce-svc-1234abc1234abc123 --vpc-endpoint-ids vpce-1234abc1234abc1233 Run the modify-vpc-endpoint-service-configuration command to disassociate the Network Load Balancer from the VPC endpoint service, as shown in the following example:aws ec2 modify-vpc-endpoint-service-configuration --service-id vpce-svc-1234abc1234abc123 --remove-network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/NLB-Test/ca76ff83bdfc24cRun the delete-load-balancer to delete the Network Load Balancer.Follow"
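Note: The final deletion can also be performed from the AWS CLI with the delete-load-balancer command. The following sketch reuses the example Network Load Balancer ARN from the output above; replace it with your own ARN and Region:
aws elbv2 delete-load-balancer --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/NLB-Test/ca76ff83bdfc24c6 --region us-east-1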
https://repost.aws/knowledge-center/delete-nlb-associated-endpoint-services
How do I troubleshoot Route 53 geolocation routing issues?
"My DNS queries return an IP address of a web server in a different AWS Region. For example, a user in the United States is being routed to an IP address of a web server located in Europe."
"My DNS queries return an IP address of a web server in a different AWS Region. For example, a user in the United States is being routed to an IP address of a web server located in Europe.ResolutionRoute 53 geolocation routing issues are caused by the following issues:There's a missing default location in your geolocation routing setup.The DNS resolver doesn't support the edns0-client-subnet extension of EDNS0. This leads to inaccurate determination of your location.The DNS resolvers are geographically diverse.There are DNS changes for resource records that haven't propagated globally.To resolve these issues, do the following:1.    Confirm that the resource records for your Route 53 hosted zone are properly configured for your use case. Also, confirm that there's a default resource record set. From the Route 53 console, check the default location specified in your Route 53 hosted zone configuration.Example: Consider the following sample output:>> dig images.example.com ; <<>> DiG9.8.2rc1-RedHat-9.8.2-0.37.rc1.45.amzn1 <<>> images.example.com;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR,id: 51385;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1,ADDITIONAL: 0;; QUESTION SECTION;images.example.com.    IN A;; AUTHORITY SECTION:images.example.com.    60 IN SOA ns-1875.awsdns-42.co.uk.awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400;; Query time: 65 msec;; SERVER: 172.31.0.2#53(172.31.0.2);; WHEN: Tue Feb 7 22:02:30 2017;; MSG SIZE rcvd: 124In the preceding example, there isn't a default location specified in the geolocation routing configuration. So, for a non-matching geolocation, the DNS response returns NOERROR for the rcode field, and there's no result in the ANSWER section. To correct this issue, add a default location in your geolocation routing configuration.2.    To check the IP address range of the DNS resolver, run the following commands, and then note the output.On Linux or macOS, use dig:for i in {1..10}; do dig +short resolver-identity.cloudfront.net; sleep 11; done;On Windows, use nslookup:for /l %i in (1,1,10) do (nslookup resolver-identity.cloudfront.net && timeout /t 11 /nobreak)3.    Check if the DNS resolver supports edns0-Client-Subnet using one of the following commands and note the output.On Linux or macOS, use dig:dig +nocl TXT o-o.myaddr.l.google.comOn Windows, use nslookup:nslookup -type=txt o-o.myaddr.l.google.comReview the first TXT record returned in the Answer section of the output. The first TXT record value is the IP address of the DNS resolver. If there isn't a second TXT record, then the DNS resolver doesn't support edns0-client-subnet. If there's a second TXT record, then the DNS resolver supports edns0-client-subnet. The resolver provides a truncated client subnet IP address (/24 or /32) to the Route 53 authoritative name server. For more information, see How can I determine if my public DNS resolver supports the EDNS Client Subnet (ECS) extension?4.    Use the Route 53 test record set from the checking tool to determine the resource records that are returned for a specific request. For more information, see Using the checking tool to see how Amazon Route 53 responds to DNS queries.If the DNS resolver doesn't support edns0-client-subnet, then specify the DNS Resolver IP address as your value in the tool.If the DNS resolver supports edns0-client-subnet, then specify the EDNS0 client subnet IP address as your value in the tool. Choose More Options, and then specify the Subnet mask. 
Don't specify a Resolver IP address.5.    (Optional) If you don't have access to the checking tool, then use dig to query the Route 53 authoritative name servers for your hosted zone with EDNS0-Client-Subnet. Use the output to determine the authoritative geolocation record response for your source IP address:dig geo.example.com +subnet=<Client IP>/24 @ns-xx.awsdns-xxx.com +short6.    Route 53 name servers support the edns0-client-subnet extension of EDNS0.The resolver or local DNS server appends edns0-client-subnet to the DNS query to make a DNS lookup based on the client's source IP subnet. If this data isn't passed with the request, then Route 53 uses the source IP address of the DNS resolver to approximate the location of the client. Then, Route 53 responds to geolocation queries with the DNS record for the resolver's location. The EDNS0 data must be passed to Route 53 and the client must use a geographically closer recursive name server. If not, the result is a suboptimal location serving the incorrect resource record to the DNS query.To fix this configuration, change the recursive DNS server that supports edns0-client-subnet. Perform the DNS resolution, and then share the output. If the recursive DNS server doesn't support the edns0-client-subnet, then try using one that does. Options that support edns0-client-subnet include Google DNS and OpenDNS resolvers.7.    Check the geographic location of the client subnet IP address using MaxMind's GeoIP database on the MaxMind website, or your preferred GeoIP database. Verify that the DNS resolver is geographically close to the client's public IP address. If the answer or country on the MaxMind website doesn't match the answer that Route 53 gave, then Route 53's production geo data might be stale. If there's stale routing, then contact AWS Support.8.    Check for issues with DNS propagation using a tool such as CacheCheck on the OpenDNS site.9.    (Optional) Determine whether the geography-based routing records are associated with a Route 53 health check. And, determine whether Evaluate target health (ETH) is turned on for alias records. If either are true, then Route 53 returns the healthy endpoint that best matches the source location.Check the status of your Route 53 health check in the Route 53 console. If ETH is turned on, then check the health status of the record endpoint. Route 53 considers an endpoint for a Classic Load Balancer with ETH turned on as healthy if at least one backend instance is healthy. For Application Load Balancers and Network Load Balancers, every target group with targets must contain at least one healthy target to be considered healthy. A target group with no registered targets is considered unhealthy. If any target group contains only unhealthy targets, then the load balancer is considered unhealthy.Example: You have records for Texas in the US, for the US, North America, and all locations (location is Default). And, you have queries that originate from Texas with an unhealthy endpoint. Route 53 checks the US, North America, and then all locations, in that order, until a record with a healthy endpoint is found. If the US record is healthy, then Route 53 returns this endpoint. Otherwise, Route 53 returns a default record. 
If all applicable records are unhealthy, then Route 53 responds to the DNS query using the value of the record for the smallest geographic Region.Note: Changes to aliased geolocation resource records might take up to 60 seconds to propagate.Related informationHow can I troubleshoot unhealthy Route 53 health checks?Why is my alias record pointing to an Application Load Balancer marked as unhealthy when I'm using "Evaluate Target Health"?Checking DNS responses from Amazon Route 53Follow"
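Note: If the issue is a missing default location (step 1), you can add a default geolocation record with a command similar to the following. The hosted zone ID, record name, and IP address are placeholders for your own values; a CountryCode value of "*" marks the record as the default location:
aws route53 change-resource-record-sets --hosted-zone-id Z1D633PJN98FT9 --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"images.example.com","Type":"A","SetIdentifier":"default-geo","GeoLocation":{"CountryCode":"*"},"TTL":60,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'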
https://repost.aws/knowledge-center/troubleshoot-route53-geolocation
How do I resolve HTTP 503 (Service unavailable) errors when I access a Kubernetes Service in an Amazon EKS cluster?
I get HTTP 503 (Service unavailable) errors when I connect to a Kubernetes Service that runs in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I get HTTP 503 (Service unavailable) errors when I connect to a Kubernetes Service that runs in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionHTTP 503 errors are server-side errors. They occur when you connect to a Kubernetes Service pod located in an Amazon EKS cluster that's configured for a load balancer.To troubleshoot HTTP 504 errors, see How do I resolve HTTP 504 errors in Amazon EKS?To troubleshoot HTTP 503 errors, complete the following troubleshooting steps.ResolutionCheck if the pod label matches the value that's specified in Kubernetes Service selector1.    Run the following command to get the value of the selector:$ kubectl describe service service_name -n your_namespaceNote: Replace service_name with your service name and your_namespace with your service namespace.Example output:Name: service-nameNamespace: pod-nameLabels: noneAnnotations: noneSelector: app.kubernetes.io/name=namespaceType: NodePortIP Families: noneIP: 10.100.17.189IPs: 10.100.17.189Port: unset 80/TCPTargetPort: 80/TCPNodePort: unset 31560/TCPEndpoints: noneSession Affinity: noneExternal Traffic Policy: ClusterEvents: noneIn the preceding output, the example selector value is app.kubernetes.io/name=namespace.2.    Check if there are pods with the label app.kubernetes.io/name=namespace:$ kubectl get pods -n your_namespace -l "app.kubernetes.io/name=namespace"Example output:No resources found in your_namespace namespace.If no resources are found with the value you searched for, then you get an HTTP 503 error.Verify that the pods defined for the Kubernetes Service are runningUse the label in the Kubernetes Service selector to verify that the pods exist and are in Running state:$ kubectl -n your_namespace get pods -l "app.kubernetes.io/name=your_namespace"Output:NAME READY STATUS RESTARTS AGEPOD_NAME 0/1 ImagePullBackOff 0 3m54sCheck if the pods can pass the readiness probe for your Kubernetes deployment1.    Verify that the application pods can pass the readiness probe. For more information, see Configure liveness, readiness, and startup probes (from the Kubernetes website).2.    Check the readiness probe for the pod:$ kubectl describe pod pod_name -n your_namespace | grep -i readinessNote: replace pod_name with your pad name and your_namespace with your namespace.Example output:Readiness: tcp-socket :8080 delay=5s timeout=1s period=2s #success=1 #failure=3Warning Unhealthy 2m13s (x298 over 12m) kubelet Readiness probe failed:In the preceding output, you can see Readiness probe failed.Note: This step provides helpful output only if the application is listening on the right path and port. Check the curl output with the curl -Ivk command, and make sure the path defined at the service level is getting a valid response. For example, 200 ms is a good response.Check the capacity for your Classic Load BalancerIf you get an intermittent HTTP 503 error, then your Classic Load Balancer doesn't have enough capacity to handle the request. To resolve this issue, make sure that your Classic Load Balancer has enough capacity and your worker nodes can handle the request rate.Verify that your instances are registeredYou also get an HTTP 503 error if there are no registered instances. To resolve this issue, try the following solutions:Verify that the security groups for the worker node have an inbound rule that allows port access on the node port to worker nodes. 
Also, verify that no NAT rules are blocking network traffic on the node port ranges.Verify that the custom security group that's specified for the Classic Load Balancer is allowed inbound access on the worker nodes.Make sure that there are worker nodes in every Availability Zone that's specified by the subnets.Related informationWhy did I receive an HTTP 5xx error when connecting to web servers running on EC2 instances configured to use Classic Load Balancing?HTTP 503: Service unavailableMonitor your Classic Load BalancerMonitor your Application Load BalancersTroubleshoot a Classic Load Balancer: HTTP errorsFollow"
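Note: A quick way to confirm the selector and label mismatch described above is to check whether the Service has any endpoints. Replace service_name and your_namespace with your own values:
$ kubectl get endpoints service_name -n your_namespace
If the ENDPOINTS column shows <none>, then no running pods match the Service selector, and requests through the load balancer return HTTP 503.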
https://repost.aws/knowledge-center/eks-resolve-http-503-errors-kubernetes
Why is my domain suspended with the status ClientHold in Route 53?
My domain in Amazon Route 53 is suspended with the status ClientHold.
"My domain in Amazon Route 53 is suspended with the status ClientHold.ResolutionConfirm the registrant email addressICANN requires email verification for several domain registration operations. To revoke the domain suspension, choose the link in the verification email to confirm the registrant email address. If you don't have access to the registered email for the account, then proceed to the following section: Change the registered email address.Amazon Route 53 sends the verification email to the registered email address from one of the following email addresses:noreply@registrar.amazon.com - for TLDs registered by the Amazon registrarnoreply@domainnameverification.net - for TLDs registered by the GandiNote: If you don't see the confirmation email in your primary mailbox, then check your spam or junk folder.Resend the confirmation emailIf you can't find or didn't receive the confirmation mail, then resend the email using the following steps:Open the Route 53 console.In the navigation pane, choose Registered domains.Select the name of the domain that you want to resend the email for.In the warning box with the heading Your domain might be suspended, choose Send email again.For more information, see Resending emails.Change the registered email addressIf you don't have access to the registered email address, then you can change the email address. See, How do I change the email address for my registered domain in Route 53 if I lost access to my previous email address?Follow"
https://repost.aws/knowledge-center/route-53-fix-client-hold-status
The available memory in my ElastiCache Redis node is less than the value listed on the Amazon ElastiCache pricing page. Why is this?
The available memory in my Amazon ElastiCache Redis node is always less than the value listed in Amazon ElastiCache pricing. Why is this?
"The available memory in my Amazon ElastiCache Redis node is always less than the value listed in Amazon ElastiCache pricing. Why is this?ResolutionThe ElastiCache pricing page shows the available memory in GiB for each supported node type. However, in the default parameter group, a percentage of the memory is reserved for backups and failover operations. For Redis versions before 2.8.22, it's a best practice to reserve 50% of the total memory. For Redis versions 2.8.22 and later it's a best practice to reserve 25% of the total memory. The parameter that regulates this is reserved-memory (for customers who started with ElastiCache before March 16, 2017) or reserved-memory-percent (for customers who started with ElastiCache on or after March 16, 2017).For example, a cluster using cache.t3.micro node type that is in the default parameter group has 0.5 GiB of total memory. Due to the reserved-memory-percent parameter, 25% of this memory is reserved. Therefore, the available memory in this node is 0.375 GiB.To see the available memory in an ElastiCache Redis node:1.    Connect to the cluster using the redis-cli tool or another tool of your choice. For information on using the redis-cli tool, see Connect to a Redis cluster or replication group (Linux).2.    Run the info memory command and check the maxmemory value. The following example output was generated using redis-cli connected to a Redis server with IP address 172.31.35.93.172.31.35.93:6379> info memory# Memorymaxmemory:402653184Note: The maxmemory value is in bytes. 402653184 bytes is equivalent to 0.375 GiB. The Redis engine uses bytes or MB to represent memory. The AWS documentation uses GiB to represent memory. Although the difference between these two units is marginal and makes almost no difference with small numbers, the difference grows exponentially based on the size of the node memory. You can use an online calculator of your choice to convert between these units.Related informationManaging reserved memoryFollow"
https://repost.aws/knowledge-center/available-memory-elasticache-redis-node
How can I identify if my Amazon EBS volume is micro-bursting and then prevent this from happening?
I have an Amazon Elastic Block Store (Amazon EBS) volume that isn't breaching its throughput or IOPS limit in Amazon CloudWatch. But the volume appears throttled and is experiencing high latency and queue length.
"I have an Amazon Elastic Block Store (Amazon EBS) volume that isn't breaching its throughput or IOPS limit in Amazon CloudWatch. But the volume appears throttled and is experiencing high latency and queue length.Short descriptionCloudWatch monitors the IOPS (op/s) and throughput (byte/s) for all Amazon EBS volume types by collecting samples every one minute.Micro-bursting occurs when an EBS volume bursts high IOPS or throughput for significantly shorter periods than the collection period. Because the volume bursts high IOPS or throughput for a shorter time than the collection period, CloudWatch doesn't reflect the bursting.Example: An IO1 volume (one-minute collection period) with 950 provisioned IOPS has an application that pushes 1,000 IOPS for five seconds. Amazon EBS throttles the application when it reaches the volume's IOPS limit. At this point, the volume can't handle the workload, causing increased queue length and higher latency.CloudWatch doesn't show that the volume breached the IOPS limit because the collection period is 60 seconds. 1,000 IOPS occurred for only 5 seconds. For the remaining 55 seconds of the one-minute collection period, the volume remains idle. This means that the number of VolumeReadOps+VolumeWriteOps over the whole minute is 5000 operations (1000*5 seconds). This equates to an average of 83.33 IOPS over one minute (5000/60 seconds). This average usually isn't a concern.In this case, the VolumeIdleTime at the same sample time is 55 seconds because the volume is idle for the remainder of the collection period. This means that the 5,000 operations (VolumeReadOps+VolumeWriteOps) at that sample time occurs over only five seconds. If you divide 5,000 by 5 to calculate the average IOPS, then you get 1,000 IOPS. 1,000 IOPS is the volume limit.To determine if micro-bursting is occurring on your volume, do the following:Use CloudWatch metrics to identify possible micro-bursting.Use CloudWatch to get the micro-bursting event.Confirm micro-bursting using an OS-level tool.Prevent micro-bursting by changing your volume size or type to accommodate your applications.ResolutionUse CloudWatch metrics to identify possible micro-bursting1.    Check the VolumeIdleTime metric. This metric indicates the total number of seconds in a specified period of time when no read or write operations are submitted. If VolumeIdleTime is high, then the volume remained idle for most of the collection period. Sufficiently high IOPS or throughput at the same sample time indicates that micro-bursting potentially occurred.With the VolumeIdleTime metric for throughput there are VolumeReadBytes and VolumeWriteBytes metrics.2.    Use the following formula to calculate the average throughput that's reached when the volume is active:Actual Average Throughput in Bytes/s = (Sum(VolumeReadBytes) + Sum(VolumeWriteBytes) ) / (Period - Sum(VolumeIdleTime) ).With the VolumeIdleTime metric for IOPS there are VolumeReadOps and VolumeWriteOps metrics.3.    
Use the following formula to calculate the average IOPS that's reached when the volume is active:Actual average IOPS in Ops/s = (Sum(VolumeReadOps) + Sum(VolumeWriteOps) ) / ( Period - Sum(VolumeIdleTime) )Use CloudWatch to get the micro-bursting eventOpen the CloudWatch console.Choose All Metrics.Use the volume ID to search for the volume that's affected.To view throughput metrics, choose Browse, and then add VolumeReadBytes, VolumeWriteBytes, and VolumeIdleTime.Choose Graphed metrics.For Statistics, choose Sum, and for Period, choose 1 minute.For Add Math, choose Start with empty expression.In the Details of Expression, enter the graph IDs for the Actual Average Throughput in Bytes/s formula. For example, (m1+m2)/(60-m3).If the formula calculates a value that's greater than the maximum throughput for the volume, then micro-bursting occurred. To check the IOPS metrics, follow the preceding steps, and add VolumeReadOps, VolumeWriteOps, and VolumeIdleTime for step 4.Confirm micro-bursting using an OS-level toolThe preceding formulas don't always identify micro-bursting in real time. This is because the volume might be micro-bursting even if the VolumeIdleTime is low.Example: Your volume spikes to a level that breaches the volume's limits. The volume then reduces to a very low level of activity without being completely idle for the remainder of the collection period. The VolumeIdleTime metric doesn't reflect the low activity, even though micro-bursting occurred.To confirm micro-bursting, use an OS-level tool that has a finer granularity than CloudWatch.LinuxUse the iostat command. For more information, see iostat(1) on the Linux man page.1.    To report I/O statistics for all your mounted volumes with one-second granularity, run the following command:iostat -xdmzt 1Note: The iostat tool is part of the sysstat package. If you can't find the iostat command, then run the following command to install sysstat on Amazon Linux AMIs:$ sudo yum install sysstat -y2.    To determine whether you're reaching the throughput limit, review the rMB/s and wMB/s in the output. If rMB/s + wMB/s is greater than the volume's maximum throughput, then micro-bursting is occurring.To determine whether you're reaching the IOPS limit, review the r/s and w/s in the output. If r/s + w/s is greater than the volume's maximum IOPS, then micro-bursting is occurring.WindowsRun the perfmon command in Windows Performance Monitor. For more information see, Determine your IOPS and throughput requirements.Prevent micro-bursting by changing your volume size or type to accommodate your applicationsChange the volume to a type and size that accommodates your required IOPS and throughput. For more information on volume types and their respective IOPS and throughput limits, see Amazon EBS volume types. There are limits on the IOPS/throughput the instance can push to all attached EBS volumes.It's a best practice to benchmark your volumes against your workload to verify which volume types can safely accommodate your workload in a test environment. For more information, see Benchmark EBS volumes.Follow"
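Note: You can also retrieve the same metric math from the AWS CLI with the get-metric-data command. The following sketch assumes a placeholder volume ID and a one-hour window; adjust both, and compare the e1 result against your volume's maximum throughput:
aws cloudwatch get-metric-data --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z --metric-data-queries '[{"Id":"m1","MetricStat":{"Metric":{"Namespace":"AWS/EBS","MetricName":"VolumeReadBytes","Dimensions":[{"Name":"VolumeId","Value":"vol-0123456789abcdef0"}]},"Period":60,"Stat":"Sum"}},{"Id":"m2","MetricStat":{"Metric":{"Namespace":"AWS/EBS","MetricName":"VolumeWriteBytes","Dimensions":[{"Name":"VolumeId","Value":"vol-0123456789abcdef0"}]},"Period":60,"Stat":"Sum"}},{"Id":"m3","MetricStat":{"Metric":{"Namespace":"AWS/EBS","MetricName":"VolumeIdleTime","Dimensions":[{"Name":"VolumeId","Value":"vol-0123456789abcdef0"}]},"Period":60,"Stat":"Sum"}},{"Id":"e1","Expression":"(m1+m2)/(60-m3)","Label":"Actual average throughput in Bytes/s"}]'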
https://repost.aws/knowledge-center/ebs-identify-micro-bursting
How do I authorize access to API Gateway APIs using custom scopes in Amazon Cognito?
I want to authorize access to my Amazon API Gateway API resources using custom scopes in an Amazon Cognito user pool.
"I want to authorize access to my Amazon API Gateway API resources using custom scopes in an Amazon Cognito user pool.Short descriptionDefine a resource server with custom scopes in your Amazon Cognito user pool. Then, create and configure an Amazon Cognito authorizer for your API Gateway API to authenticate requests to your API resources.If you have different app clients that need varying levels of access to your API resources, then you can define differentiated scopes of access. Consider what levels of granular access that the different app clients might need, and then design accordingly.ResolutionCreate the following prerequisites:An Amazon Cognito user pool with a user, an app client, and a domain nameAn API Gateway REST API with a resource and a methodAdd a resource server with custom scopes in your user pool1.    Open the Amazon Cognito console.2.    Define the resource server and custom scopes.3.    After you create the resource server, choose the App Integration tab.4.    From the App clients and analytics section, select your app client.5.    From the Hosted UI section, choose Edit. Then, complete the following steps:For the OAuth 2.0 grant types dropdown list, choose Implicit grant.For the Custom scopes dropdown list, choose the custom scope that you defined.Note: The format for a custom scope is resourceServerIdentifier/scopeName. When a client requests a custom scope in an OAuth 2.0 flow, the request must include the full identifier for the scope in this format.6.    Choose Save changes.If your mobile applications have a server side component, then use the Authorization code grant flow and Proof Key for Code Exchange (PKCE). With the Authorization code grant flow, the tokens are more secure and never exposed directly to an end user.If your setup doesn't contain any server-side logic, then you can use the implicit grant flow. The implicit grant doesn't generate refresh tokens. This prevents refresh tokens from exposure to the client. Refresh tokens have a longer validity and retrieve newer ID and access tokens.Important: Don't store the refresh tokens in a client-side environment.For more information, see App client settings terminology. For more information on Amazon Cognito user pool OAuth 2.0 grants, see Understanding Amazon Cognito user pool OAuth 2.0 grants.Create an authorizer and integrate it with your APITo complete the following steps, follow the instructions to integrate a REST API with an Amazon Cognito user pool.1.    To create the authorizer, follow the instructions under To create a COGNITO_USER_POOLS authorizer by using the API Gateway console.Note: After creation, an option appears in the console to Test your authorizer. This requires an identity token. To use an access token to test your setup outside the console, see the Get a user pool access token for testing section in this article.2.    To integrate the authorizer with your API, follow the instructions under To configure a COGNITO_USER_POOLS authorizer on methods.Note: For OAuth Scopes, enter the full identifier for a custom scope in the format resourceServerIdentifier/scopeName.3.    Deploy your API.Get a user pool access token for testingUse the hosted web UI for your user pool to sign in and retrieve an access token from the Amazon Cognito authorization server. 
Or, use the OAuth 2.0 endpoint implementations that are available in the mobile and web AWS SDKs to retrieve an access token.Note: When an app client requests authentication through the hosted web UI, the request can include any combination of system-reserved scopes, or custom scopes. If the client doesn't request any scopes, then the authentication server returns an access token that contains all scopes that are associated with the client. When you design your app client, be sure that the client includes the intended scopes in the request to avoid granting unnecessary permissions.1.    Enter the following URL in your web browser:https://yourDomainPrefix.auth.region.amazoncognito.com/login?response_type=token&client_id=yourClientId&redirect_uri=redirectUrlNote: Replace yourDomainPrefix and region with the values for your user pool. Find them in the Amazon Cognito console on the Domain name tab for your user pool.Replace yourClientId with your app client's ID, and replace redirectUrl with your app client's callback URL. Find them in the console on the App client settings tab for your user pool. For more information, see Login endpoint.2.    Sign in to your user pool as the user that you created.3.    Copy the access token from the URL in the address bar. The token is a long string of characters following access_token=.Call your API as a testAs a test, use the access token as the value of the authorization header to call your API using the access token. You can use the Postman app (on the Postman website) or curl command from a command line interface. For more information about curl, see the curl project website.To use curl, run the following command:curl https://restApiId.execute-api.region.amazonaws.com/stageName/resourceName -H "Authorization: accessToken"Note: Replace restApiId with the API ID. Replace region with the AWS Region of your API. Replace stageName with the name of the stage where your API is deployed. Replace resourceName with the name of the API resource. Replace accessToken with the token that you copied. For more information, see Invoking a REST API in Amazon API Gateway.When you correctly configure everything, you get a 200 OK response code.Related informationConfiguring a user pool app clientAccess token scope (The OAuth 2.0 Authorization Framework)Follow"
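Note: The resource server and custom scope from the first section can also be created with the AWS CLI. The user pool ID, identifier, and scope name below are placeholders for your own values:
aws cognito-idp create-resource-server --user-pool-id us-east-1_EXAMPLE --identifier myapi --name "Example resource server" --scopes ScopeName=read,ScopeDescription="Read access"
With these example values, the full custom scope that you request and configure on the API Gateway method is myapi/read.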
https://repost.aws/knowledge-center/cognito-custom-scopes-api-gateway
How can I get customized email notifications when my EC2 instance changes states?
I want to receive email notifications when my Amazon Elastic Compute Cloud (Amazon EC2) instance changes states. How can I do this?
"I want to receive email notifications when my Amazon Elastic Compute Cloud (Amazon EC2) instance changes states. How can I do this?Short descriptionTo receive email notifications when your EC2 instance changes states:1.    Create an Amazon Simple Notification Service (Amazon SNS) topic. The SNS topic sends messages to subscribing endpoints or clients.2.    Create an Amazon EventBridge using the EC2 Instance State-change Notification event type.ResolutionCreate an SNS topic1.    Open the Amazon SNS console, and then choose Topics from the navigation pane.2.    Select Create topic.3.    For Type, choose Standard.4.    For Name, enter a name for your topic.5.    For Display name, enter a display name for your topic.6.    Select Create topic.7.    On the Subscriptions tab, choose Create subscription.8.    For Protocol, choose Email.9.    For Endpoint, enter the email address where you want to receive the notifications.10.  Select Create subscription.A subscription confirmation email is sent to the address that you entered. Choose Confirm subscription in the email. Note the SNS topic that you created. You use this topic when creating the EventBridge rule.Create an EventBridge event1.    Open the EventBridge console.2.    Select Create rule from the homepage. Or, choose Rules under Events in the sidebar, and then select Create rule.3.    Enter a Name for your rule. You can optionally enter a Description.4.    Keep the default Event bus and Rule type settings, and then select Next.5.    In Event pattern, keep the Event source as AWS services. For the AWS service, choose EC2.6.    For Event type, choose EC2 Instance State-change Notification.7.    Keep Any state and Any instance as the default settings, and then select Next.8.    For Select a target, choose SNS topic.9.    For Topic, choose the topic name that you created earlier, and then select Next.10.  Expand the Additional settings section. For Configure target input, choose Input transformer.11.  Select Configure input transformer, and then enter the following text:        For Input path, enter the following:{"instance-id":"$.detail.instance-id", "state":"$.detail.state", "time":"$.time", "region":"$.region", "account":"$.account"}        For Template, enter the following:"At <time>, the status of your EC2 instance <instance-id> on account <account> in the AWS Region <region> has changed to <state>."        Note: The Input Template also allows custom inputs.12.  Select Next.13.  Leave the optional Tags empty, and select Next. Then, select Create rule.        Note: The rule that you created applies to a single AWS Region.You can test the rule by starting or stopping an instance. This rule generates an email notification every time an instance changes to any state, including stopped.Follow"
https://repost.aws/knowledge-center/ec2-email-instance-state-change
How do I set up an SSL connection between Hive on Amazon EMR and a metastore on Amazon RDS for MySQL?
I want to set up an SSL connection between Apache Hive and a metastore on an Amazon Relational Database Service (Amazon RDS) MySQL DB instance. How can I do that?
"I want to set up an SSL connection between Apache Hive and a metastore on an Amazon Relational Database Service (Amazon RDS) MySQL DB instance. How can I do that?Short descriptionSet up an encrypted connection between Hive and an external metastore using an SSL certificate. You can set up this connection when you launch a new Amazon EMR cluster or after the cluster is running.ResolutionNote: The following steps are tested with Amazon EMR release version 5.36.0 and Amazon RDS for MySQL version 8.0.28Set up the SSL connection on a new Amazon EMR cluster1.    Run a command similar to the following to create an Amazon RDS for MySQL DB instance. Replace $RDS_LEADER_USER_NAME, $RDS_LEADER_PASSWORD, $RDS_VPC_SG, and $DB_SUBNET_GROUP with your user name, password, security group, and DB Subnet Group respectively.For more information, see create-db-instance.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, be sure that you’re using the most recent version of the CLI.aws rds create-db-instance --db-name hive --db-instance-identifier mysql-hive-meta --db-instance-class db.t2.micro --engine mysql --engine-version 8.0.28 --db-subnet-group-name $DB_SUBNET_GROUP --master-username $RDS_LEADER_USER_NAME --master-user-password $RDS_LEADER_PASSWORD --allocated-storage 20 --storage-type gp2 --vpc-security-group-ids $RDS_VPC_SG --publicly-accessible2.    Connect to the Amazon RDS for MySQL DB instance as the primary user. Then, create a user for the Hive metastore, as shown in the following example.Important: Be sure that you restrict access for this user to the DB instance that you created in Step 1.mysql -h mysql-hive-meta.########.us-east-1.rds.amazonaws.com -P 3306 -u $RDS_LEADER_USER_NAME -pEnter password: $RDS_LEADER_PASSWORDCREATE USER 'hive_user'@'%' IDENTIFIED BY 'hive_user_password' REQUIRE SSL;REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive_user'@'%';GRANT ALL PRIVILEGES ON hive.* TO 'hive_user'@'%';FLUSH PRIVILEGES;3.    Create a JSON configuration file similar to the following. Replace hive_user and hive_user_password with the values that you used in the JSON script in Step 2. Replace the endpoint in the JDBC URL with the endpoint for your RDS DB instance.You use this file to launch the Amazon EMR cluster in the next step. The file enables Hive makes an SSL connection to the RDS DB instance. For more information, see Using new SSL/TLS certificates for MySQL DB instances.[ { "Classification": "hive-site", "Properties": { "javax.jdo.option.ConnectionURL": "jdbc:mysql:\/\/mysql-hive-meta.########.us-east-1.rds.amazonaws.com:3306\/hive?createDatabaseIfNotExist=true&useSSL=true&serverSslCert=\/home\/hadoop\/global-bundle.pem, "javax.jdo.option.ConnectionDriverName": "org.mariadb.jdbc.Driver", "javax.jdo.option.ConnectionUserName": "hive_user", "javax.jdo.option.ConnectionPassword": "hive_user_password" } }]4.    In the security group that's associated with the Amazon RDS for MySQL instance, create an inbound rule with the following parameters:For Type, choose MYSQL/Aurora (3306).For Protocol, TCP (6) is selected by default.For Port Range, 3306 is selected by default.For Source, enter the Group ID of the Amazon EMR-managed security group that's associated with the leader node.This rule allows the Amazon EMR cluster's leader node to access the Amazon RDS instance. For more information, see Overview of VPC security groups.5.    Run the create-cluster command to launch an Amazon EMR cluster using the JSON file from Step 3, along with a bootstrap action. 
The bootstrap action downloads the SSL certificate to /home/hadoop/ on the leader node.For example:aws emr create-cluster --applications Name=Hadoop Name=Hive --tags Name="EMR Hive Metastore SSL" --ec2-attributes KeyName=$EC2_KEY_PAIR,InstanceProfile=EMR_EC2_DefaultRole,SubnetId=$EMR_SUBNET,EmrManagedSlaveSecurityGroup=$EMR_CORE_AND_TASK_VPC_SG,EmrManagedMasterSecurityGroup=$EMR_MASTER_VPC_SG --service-role EMR_DefaultRole --release-label emr-5.36.0 --log-uri $LOG_URI --name "Hive External Metastore RDS MySQL w/ SSL" --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.xlarge,Name="Master- 1" --configurations file:///<Full-Path-To>/hive-ext-meta-mysql-ssl.json --bootstrap-actions Path=s3://elasticmapreduce/bootstrap-actions/run-if,Args=["instance.isMaster=true","cd /home/hadoop && wget -S -T 10 -t 5 https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem"]6.    Connect to the leader node using SSH.7.    Open a hive session on the leader node. Then, create any table (to be used for testing purposes).For example:hive> create table tb_test (col1 STRING, col2 BIGINT);OKTime taken: 2.371 secondshive> describe tb_test;OKcol1 stringcol2 bigintTime taken: 0.254 seconds, Fetched: 2 row(s)8.    Connect to the Amazon RDS for MySQL metastore using the mysql client on the leader node. Then, verify the table metadata in the metastore. If the metadata corresponds to the table that you created in the previous step exists, then the SSL connection is working.For example:mysql -h mysql-hive-meta.########.us-east-1.rds.amazonaws.com -P 3306 -u $RDS_LEADER_USER_NAME -pEnter password: $RDS_LEADER_PASSWORDmysql> use hive;Database changedmysql> select t1.OWNER, t1.TBL_NAME, t1.TBL_TYPE, s1.INPUT_FORMAT, s1.OUTPUT_FORMAT, s1.LOCATION from TBLS t1 inner join SDS s1 on s1.SD_ID = t1.SD_ID where t1.TBL_NAME = 'tb_test'\G*************************** 1. row *************************** OWNER: hadoop TBL_NAME: tb_test TBL_TYPE: MANAGED_TABLE INPUT_FORMAT: org.apache.hadoop.mapred.TextInputFormatOUTPUT_FORMAT: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat LOCATION: hdfs://ip-xxx-xx-xx-xxx.ec2.internal:8020/user/hive/warehouse/tb_test1 row in set (0.23 sec)mysql> select t1.OWNER, t1.TBL_NAME, c1.COLUMN_NAME, c1.TYPE_NAME from TBLS t1 inner join SDS s1 on s1.SD_ID = t1.SD_ID inner join COLUMNS_V2 c1 on c1.CD_ID = s1.CD_ID where t1.TBL_NAME = 'tb_test';+--------+----------+-------------+-----------+| OWNER | TBL_NAME | COLUMN_NAME | TYPE_NAME |+--------+----------+-------------+-----------+| hadoop | tb_test | col1 | string || hadoop | tb_test | col2 | bigint |+--------+----------+-------------+-----------+2 rows in set (0.22 sec)Set up the SSL connection on a running Amazon EMR clusterNote: The following steps assume that you have an Amazon RDS for MySQL DB instance.1.    Connect to the leader node using SSH.2.    Run the following command to download the SSL certificate to /home/hadoop/ on the leader node:cd /home/hadoop && wget -S -T 10 -t 5 https://s3.amazonaws.com/rds-downloads/global-bundle.pem3.    
In the /etc/hive/conf.dist directory, add or edit the following lines in the hive-site.xml file:<property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://mysql-hive-meta.########.us-east-1.rds.amazonaws.com:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=true&amp;serverSslCert=/home/hadoop/global-bundle.pem</value> <description>JDBC URL for the metastore database</description></property><property> <name>javax.jdo.option.ConnectionUserName</name> <value>hive_user</value> <description>User name for the metastore database</description></property><property> <name>javax.jdo.option.ConnectionPassword</name> <value>HIVE_USER_PASSWORD</value> <description>Password for metastore database</description></property>This syntax allows an SSL connection to the RDS DB instance. Make sure to replace the endpoint in the JDBC URL with the endpoint for your RDS DB instance.Important: The ampersand (&) is a special character in XML. To use an ampersand in hive-site.xml, such as in the JDBC string, you must use "&amp;" instead of "&". Otherwise, you get an error when you restart hive-hcatalog-server.4.    Run a command similar to the following to test the SSL connection:mysql -h mysql-hive-meta.########.us-east-1.rds.amazonaws.com -P 3306 -u hive_user -p --ssl-ca /home/hadoop/global-bundle.pem5.    Restart hive-hcatalog-server on the leader node. For more information, see Stopping and restarting processes.6.    Verify that the services restarted successfully:sudo systemctl status hive-hcatalog-server.service7.    Open a hive session on the leader node. Then, create any table (to be used for testing purposes).For example:hive> create table tb_test (col1 STRING, col2 BIGINT);OKTime taken: 2.371 secondshive> describe tb_test;OKcol1 stringcol2 bigintTime taken: 0.254 seconds, Fetched: 2 row(s)8.    Connect to the Amazon RDS for MySQL metastore using the mysql client on the leader node. Then, verify the table metadata in the metastore. If the metadata corresponds to the table that you created in the previous step, the SSL connection is working.For example:$ mysql -h mysql-hive-meta.########.us-east-1.rds.amazonaws.com -P 3306 -u $RDS_LEADER_USER_NAME -pEnter password: $RDS_LEADER_PASSWORDmysql> use hive;Database changedmysql> select t1.OWNER, t1.TBL_NAME, t1.TBL_TYPE, s1.INPUT_FORMAT, s1.OUTPUT_FORMAT, s1.LOCATION from TBLS t1 inner join SDS s1 on s1.SD_ID = t1.SD_ID where t1.TBL_NAME = 'tb_test'\G*************************** 1. 
row *************************** OWNER: hadoop TBL_NAME: tb_test TBL_TYPE: MANAGED_TABLE INPUT_FORMAT: org.apache.hadoop.mapred.TextInputFormatOUTPUT_FORMAT: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat LOCATION: hdfs://ip-xxx-xx-xx-xxx.ec2.internal:8020/user/hive/warehouse/tb_test1 row in set (0.23 sec)mysql> select t1.OWNER, t1.TBL_NAME, c1.COLUMN_NAME, c1.TYPE_NAME from TBLS t1 inner join SDS s1 on s1.SD_ID = t1.SD_ID inner join COLUMNS_V2 c1 on c1.CD_ID = s1.CD_ID where t1.TBL_NAME = 'tb_test';+--------+----------+-------------+-----------+| OWNER | TBL_NAME | COLUMN_NAME | TYPE_NAME |+--------+----------+-------------+-----------+| hadoop | tb_test | col1 | string || hadoop | tb_test | col2 | bigint |+--------+----------+-------------+-----------+2 rows in set (0.22 sec)Troubleshoot hive-hcatalog-server restart errorsYou might get an error message similar to the following when you try to restart hive-hcatalog-server:2020-08-20T14:18:50,750 WARN [main] org.apache.hadoop.hive.metastore.HiveMetaStore - Retrying creating default database after error: Unable to open a test connection to the given database. JDBC url = jdbc:mysql://mysql-hive-meta.########.us-east-1.rds.amazonaws.com:3306/hive?createDatabaseIfNotExist=true&useSSL=true&serverSSlCert=/home/hadoop/global-bundle.pem, username = masteruser. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------java.sql.SQLException: Host '172.31.41.187' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'This error message typically occurs when the Amazon RDS for MySQL DB instance blocks the Amazon EMR cluster's leader node as a security precaution.To resolve this error, connect to a different local machine or Amazon Elastic Compute Cloud (Amazon EC2) instance that has the mysqladmin tool installed. Run the following command to flush the leader node from the DB instance.mysqladmin -h mysql-hive-meta.########.us-east-1.rds.amazonaws.com -P 3306 -u $RDS_LEADER_USER_NAME -p flush-hostsEnter password: $RDS_LEADER_PASSWORDThen, restart hive-hcatalog-server.Related informationUsing an external MySQL database or Amazon AuroraFollow"
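Note: To confirm that a given session is actually encrypted, you can check the negotiated cipher from within the MySQL session:
mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';
A non-empty Value (for example, TLS_AES_256_GCM_SHA384) indicates that the connection uses SSL/TLS; an empty value means that the session isn't encrypted. Because hive_user was created with REQUIRE SSL, unencrypted connections for that user are rejected.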
https://repost.aws/knowledge-center/ssl-hive-emr-metastore-rds-mysql
How do I resolve ExecutorLostFailure "Slave lost" errors in Spark on Amazon EMR?
"I submitted an Apache Spark application to an Amazon EMR cluster. The application fails with this error:"Most recent failure: Lost task 1209.0 in stage 4.0 (TID 31219, ip-xxx-xxx-xx-xxx.compute.internal, executor 115): ExecutorLostFailure (executor 115 exited caused by one of the running tasks) Reason: Slave lost""
"I submitted an Apache Spark application to an Amazon EMR cluster. The application fails with this error:"Most recent failure: Lost task 1209.0 in stage 4.0 (TID 31219, ip-xxx-xxx-xx-xxx.compute.internal, executor 115): ExecutorLostFailure (executor 115 exited caused by one of the running tasks) Reason: Slave lost"Short descriptionThis error indicates that a Spark task failed because a node terminated or became unavailable. There are many possible causes of this error. The following resolution covers these common root causes:High disk utilizationUsing Spot Instances for cluster nodesAggressive Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling policiesResolutionHigh disk utilizationIn Hadoop, NodeManager periodically checks the Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the cluster's nodes. If disk utilization on a node that has one volume attached is greater than the YARN property yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (default value 90%), then the node is considered unhealthy. When a node is unhealthy, ResourceManager kills all containers running on that node. ResourceManager doesn't schedule new containers on unhealthy nodes. For more information, see NodeManager in the Hadoop documentation.If ResourceManager kills multiple executors because of unhealthy nodes, then the application fails with a "slave lost" error. To confirm that a node is unhealthy, review the NodeManager logs or the instance controller logs:The location of the NodeManager logs is defined in the YARN_LOG_DIR variable in yarn-env.sh.The instance controller logs are stored at /emr/instance-controller/log/instance-controller.log on the master node. The instance controller logs provide an aggregated view of all the nodes of the cluster.If a node is unhealthy, the logs show an entry that looks like this:2019-10-04 11:09:37,163 INFO Poller: InstanceJointStatusMap contains 40 entries (R:40): i-006baxxxxxx 1829s R 1817s ig-3B ip-xxx-xx-xx-xxx I: 7s Y:U 11s c: 0 am: 0 H:R 0.0%Yarn unhealthy Reason : 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers i-00979xxxxxx 1828s R 1817s ig-3B ip-xxx-xx-xx-xxx I: 7s Y:R 9s c: 3 am: 2048 H:R 0.0% i-016c4xxxxxx 1834s R 1817s ig-3B ip-xxx-xx-xx-xxx I: 13s Y:R 14s c: 3 am: 2048 H:R 0.0% i-01be3xxxxxx 1832s R 1817s ig-3B ip-xxx-xx-xx-xxx I: 10s Y:U 12s c: 0 am: 0 H:R 0.0%Yarn unhealthy Reason : 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containersTo resolve this problem, increase the size of the EBS volumes that are attached to the core and task nodes. Or, delete unused data from HDFS.Spot InstancesIf you're using Amazon EC2 Spot Instances for EMR cluster nodes—and one of those instances terminates—you might get a "slave lost" error. Spot Instances might terminate for the following reasons:The Spot Instant price is greater than your maximum price.There aren't enough unused EC2 instances to meet the demand for Spot Instances.For more information, see Reasons for interruption.To resolve this problem:Consider switching to On-Demand Instances.If you're using Amazon EMR release version 5.11.0 or earlier, consider upgrading to the latest version.Amazon EC2 Auto Scaling policiesWhen a scaling policy performs many scale-in and scale-out events in sequence, a new node might get the same IP address that a previous node used. 
If a Spark application is running during a scale-in event, Spark adds the decommissioned node to the deny list to prevent an executor from launching on that node. If another scale-out event occurs and the new node gets the same IP address as the previously decommissioned node, YARN considers the new node valid and attempts to schedule executors on it. However, because the node is still on the Spark deny list, attempts to launch executors on that node fail. When you reach the maximum number of failures, the Spark application fails with a "slave lost" error.To resolve this problem:Use less aggressive rules in your scaling policies. For more information, see Understanding automatic scaling rules.Increase the number of available IP addresses in the subnet. For more information, see VPC and subnet sizing.To remove a node from the Spark deny list, decrease the Spark and YARN timeout properties, as shown in the following examples:Add the following property in /etc/spark/conf/spark-defaults.conf. This reduces the amount of time that a node in the decommissioning state remains on the deny list. The default is one hour. For more information, see Configuring node decommissioning behavior.spark.blacklist.decommissioning.timeout 600sModify the following YARN property in /etc/hadoop/conf/yarn-site.xml. This property specifies the amount of time to wait for running containers and applications to complete before a decommissioning node transitions to the decommissioned state. The default is 3600 seconds.yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs 600For more information see, Spark enhancements for elasticity and resiliency on Amazon EMR.Related informationCluster configuration guidelines and best practicesHow can I troubleshoot stage failures in Spark jobs on Amazon EMR?Follow"
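Note: If you'd rather not edit spark-defaults.conf on the cluster, the same Spark property can be passed per application at submit time. The application file name below is a placeholder:
spark-submit --conf spark.blacklist.decommissioning.timeout=600s your_spark_application.py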
https://repost.aws/knowledge-center/executorlostfailure-slave-lost-emr
Can I migrate my Amazon Connect instance from a test environment to a production environment?
"I have a test Amazon Connect contact center instance with resources that I want to migrate to my production instance. Can I migrate my test Amazon Connect instance to a production environment? If so, what resources can I migrate?"
"I have a test Amazon Connect contact center instance with resources that I want to migrate to my production instance. Can I migrate my test Amazon Connect instance to a production environment? If so, what resources can I migrate?ResolutionYou can migrate the following resources between Amazon Connect contact center instances:Contact flowsIf you want to migrate a small number of contact flows, see Import/export contact flows. To migrate hundreds of contact flows, see Migrate flows to a different instance.Phone numbersTo migrate phone numbers from an Amazon Connect instance to another instance in same AWS Region, see How do I migrate phone numbers from one Amazon Connect instance to another?To migrate phone numbers from an Amazon Connect instance to another instance in a different AWS Region, create an AWS Support case and ask for assistance with Phone Number migration in different region.UsersFor more information, see CreateUser, DeleteUser, and ListUsers in the Amazon Connect API Reference.Note: If you added users in bulk using the CSV template, you can upload the same file to add those users to another instance.QueuesFor more information, see CreateQueue, DescribeQueue, and ListQueues in the Amazon Connect API Reference.Quick connectsFor more information, see CreateQuickConnect, DeleteQuickConnect, and AssociateQueueQuickConnects in the Amazon Connect API Reference.Routing profilesFor more information, see CreateRoutingProfile, AssociateRoutingProfileQueues, and DescribeRoutingProfile in the Amazon Connect API Reference.Hours of operationFor more information, see CreateHoursOfOperation in the Amazon Connect API Reference.Agent statusFor more information, see CreateAgentStatus in the Amazon Connect API Reference.Security profilesFor more information, see CreateSecurityProfile in the Amazon Connect API Reference.Related informationWhat information should I include in my AWS Support case?Follow"
https://repost.aws/knowledge-center/connect-migrate-instance-resources
Why is my Amazon EBS volume stuck in the "attaching" state?
"I attached my Amazon Elastic Block Store (Amazon EBS) to my Amazon Elastic Compute Cloud (Amazon EC2) instance, but it’s still in the "attaching" state after 10-15 minutes."
"I attached my Amazon Elastic Block Store (Amazon EBS) to my Amazon Elastic Compute Cloud (Amazon EC2) instance, but it’s still in the "attaching" state after 10-15 minutes.ResolutionCheck that the device name you specified when you attempted to attach the EBS volume isn't already in use. If the specified device name is already being used by the block device driver of the EC2 instance, the operation fails.When attaching an EBS volume to an Amazon EC2 instance, you can specify a device name for the volume (by default, one is filled in for you). The block device driver of the EC2 instance mounts the volume and assigns a name. The volume name can be different from the name that you assign.For more details on device naming, see Device naming on Linux instances or Device naming on Windows instances.If you specify a device name that's not in use by Amazon EC2, but is used by the block device driver within the EC2 instance, the attachment of the Amazon EBS volume fails. Instead, the EBS volume is stuck in the attaching state. This is usually due to one of the following reasons:The block device driver is remapping the specified device nameOn an HVM EC2 instance, /dev/sda1 remaps to /dev/xvda. If you attempt to attach a secondary Amazon EBS volume to /dev/xvda, the secondary EBS volume can't successfully attach to the instance. This can cause the EBS volume to be stuck in the attaching state.The block device driver didn't release the device nameIf a user has initiated a forced detach of an Amazon EBS volume, the block device driver of the Amazon EC2 instance might not immediately release the device name for reuse. Attempting to use that device name when attaching a volume causes the volume to be stuck in the attaching state. You must either choose a different device name or reboot the instance.You can resolve most issues with volumes stuck in the attaching state by following these steps:Important: Before you begin, back up your data. For more information, see Best practices for Amazon EC2.In the Volumes pane of the Amazon EC2 console, select the volume.Open the Actions menu and then choose Force Detach Volumes.Attempt to attach the volume to the instance, again, but use a different device name. For example, use /dev/sdg instead of /dev/sdf.Note: The instance must be in the running state.If these steps don’t resolve the issue, or if you must use the device name that isn't working, try the following procedures:Reboot the instance.Stop and start the instance to migrate it to new underlying hardware. Keep in mind that instance store data is lost when you stop and start an instance. If your instance is instance store-backed or has instance store volumes containing data, the data is lost when you stop the instance. For more information, see Determining the root device type of your instance.Related informationAttaching an Amazon EBS volume to an instanceMapping disks to volumes on your Windows instanceFollow"
https://repost.aws/knowledge-center/ebs-stuck-attaching
How do I resolve the error "The address with allocation id cannot be released because it is locked to your account" when trying to release an Elastic IP address from my Amazon EC2 instance?
"I want to release an Elastic IP address from my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, I receive the error "Error [IP address]: The address with allocation id [allocation id] cannot be released because it is locked to your account"."
"I want to release an Elastic IP address from my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, I receive the error "Error [IP address]: The address with allocation id [allocation id] cannot be released because it is locked to your account".Short descriptionYou receive this error message when Amazon EC2 creates a reverse Domain Name System (rDNS) record for your Elastic IP address. The Elastic IP address locks to your account for as long as the rDNS record exists.ResolutionConfirm whether rDNS is set for your Elastic IP address1.    Connect to your instance using SSH.2.    Run the host command. Replace the 203.0.113.0 sample IP address with your IP address.$ host 203.0.113.0If your Elastic IP address has an rDNS set, then this command returns an output that's similar to the following example:$ 203.0.113.0.in-addr.arpa. domain-name-pointer mail.domain.comRemove the rDNS entryUsing the Amazon EC2 consoleNote: It's a best practice to remove the rDNS entry using the Amazon EC2 console.1.    Open the Amazon EC2 console.2.    Under Network & Security, select Elastic IPs.3.    Choose the Elastic IP address, and then select Actions, Update reverse DNS.4.    For Reverse DNS domain name, clear the domain name.5.    Enter update to confirm.6.    Select Update.Using the AWS Command Line Interface (AWS CLI)To remove a reverse DNS record using the AWS CLI, use the reset-address-attribute command as shown in the following example:aws ec2 reset-address-attribute --allocation-id <value> --attribute <value>See the following example command for Linux:aws ec2 reset-address-attribute --allocation-id eipalloc-abcdef01234567890 --attribute domain-nameSee the following example command for Windows:aws ec2 reset-address-attribute --allocation-id eipalloc-abcdef01234567890 --attribute domain-nameNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Using AWS SupportIf you can't remove the request using the Amazon EC2 console or the AWS CLI, then request AWS assistance using these steps:1.    Open the Request to remove email sending limitations form.2.    Complete the form using the following information: Email Address: Your email address. Use Case Description: Your specific use case for requesting rDNS removal. Elastic IP address: A list of your Elastic IP addresses**.Reverse DNS record:** This field is optional. Reverse DNS Record for EIP 1: Enter please remove rDNS. Reverse DNS Record for EIP 2: Leave blank.3.    Choose Submit.Note: Removing the rDNS might take a few days to propagate through the system.Release the Elastic IP address1.    After you receive confirmation of the rDNS removal, run the host command for your IP address to verify removal completion:$ host 203.0.113.0This command returns output that's similar to the following example:$ 203.0.113.0.in-addr.arpa. domain-name-pointer ec2-54-244-68-210.us-west-2.compute.amazonaws.com.2.    Open the Amazon EC2 console, and then choose Elastic IPs from the navigation pane.3.    Select the Elastic IP address, and then choose Actions, Release addresses.4.    Choose Release.If you still encounter the error when releasing your Elastic IP address after removing the rDNS, then contact AWS Support to unlock your Elastic IP address.Related informationConfigurable reverse DNS for Amazon EC2's Elastic IP addressesFollow"
https://repost.aws/knowledge-center/ec2-address-errors
How do I troubleshoot the error "Status Code: 400; Error Code: xxx" when using CloudFormation for ElastiCache?
"When invoking my AWS CloudFormation stack or using the AWS API call for Amazon ElastiCache, the request fails and I receive the following error:"Status Code: 400; Error Code: xxx"How do I troubleshoot this error?"
"When invoking my AWS CloudFormation stack or using the AWS API call for Amazon ElastiCache, the request fails and I receive the following error:"Status Code: 400; Error Code: xxx"How do I troubleshoot this error?Short descriptionWhen you start an AWS API request directly or using a CloudFormation stack, AWS performs initial syntax checks. These checks verify that the request is complete and has all mandatory parameters. The following are common reasons the 400 error occurs when you send an API request for Amazon ElastiCache:Your request was denied because of API request throttling.AWS doesn't have enough available capacity to complete your request.The cache node isn't support in the Region or Availability Zone specified in your request.You used an invalid parameter combination.You used an invalid or out-of-range value for the input parameter.The API is missing a required parameter or action.You're trying to remove a resource currently used by another ElastiCache resource or AWS service.ResolutionIdentify the specific ElastiCache Invoke API error that you received. Then, follow the troubleshooting steps listed for that error.Note: For a list of possible errors and their descriptions, see Common errors in the ElastiCache Invoke API Reference.Error Code: ThrottlingError: "Rate exceeded (Service: AmazonElastiCache; Status Code: 400; Error Code: Throttling; Request ID: xxx)"This error means that your request was denied due to API request throttling. These account-level API call limits aren't specific to any service.Note: You can't increase or modify limits for a particular call. AWS makes sure that API calls don't exceed the maximum allowed API request rate. This includes API calls that come from an application, are a call to a command line interface or to the AWS Management Console.Avoid this error using the following methods:Retry your call with exponential backoff and jitter.Distribute your API calls evenly over time rather than making several API calls in a short time span.Error Code: InsufficientCacheClusterCapacityError: "cache.xxx (VPC) is not currently supported in the availability zone xxx. Retry the launch with no availability zone or target: xxx. (Service: AmazonElastiCache; Status Code: 400; Error Code: InsufficientCacheClusterCapacity; Request ID: xxx)".This error indicates that AWS doesn't currently have enough available On-Demand capacity to complete your request. For more information, see Error Messages: InsufficidentCacheClusterCapacity.If you receive this error, do the following:Wait a few minutes and then submit your request again. Capacity shifts frequently.Use another cache node type and then submit your request again.Use another subnet and Availability Zone and then submit your request again.Error Code: SubnetInUseError: "The subnet ID subnet-xxx is in use (Service: AmazonElastiCache; Status Code: 400; Error Code: SubnetInUse; Request ID: xxx)".This error occurs if you try to remove a subnet from an Elasticache subnet group that currently has instances associated with it. You must remove all related resources from the subnet and then submit your request again. For more information, see DeleteCacheSubnetGroup.Error Code: InvalidParameterValueThis error indicates that a parameter value isn't valid, is unsupported, or can't be used in your request. Check each parameter for your request call. For example, If you've used an unsupported parameter value, you might see one of the following error messages:"Invalid AuthToken provided. 
(Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterValue; Request ID: xxx)".This error indicates that the auth-token setting doesn't meet constraints when using AUTH with ElastiCache for Redis. For more information, see Authenticating users with the Redis AUTH command."The snapshot window and maintenance window must not overlap. (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterValue; Request ID: xxx)".Snapshot windows and maintenance windows can't be set up for the same time period. Adjust one of the windows to another period to avoid this error."The number of replicas per node group must be within 0 and 5. (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterValue; Request ID: xxx)".ElastiCache for Redis supports one primary and 0 to 5 replicas per shard. If you add more than 5 replica nodes, you receive this error. For more information, see Understanding Redis replication.Error Code: InvalidParameterCombinationThis error indicates that your request call contains an incorrect combination of parameters or a missing parameter. If this occurs, you might see one of the following error messages:"Cannot find version 5.0.0 for redis (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: xxx)".This error indicates that the version of Redis indicated in your request call isn't supported. For more information, see Supported ElastiCache for Redis versions and Supported ElastiCache for Memcached versions."Cannot restore redis from 6.0.5 to 5.0.6. (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: xxx)".ElastiCache for Redis doesn't support downgraded Redis engine versions when using a backup to create a new Redis cluster. ElastiCache for Redis also doesn't support downgrading the Redis engine on a running Redis cluster. When creating a new Redis cluster using a backup, the Redis engine version must be greater than or equal to the current engine version."When using automatic failover, there must be at least 2 cache clusters in the replication group. (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: xxx)".You can turn on the automatic failover option in a Redis cluster that has at least one available read replica on it. Verify that your Redis replication group has at least one replica node and then submit your request again. For more information, see Minimizing downtime in ElastiCache for Redis with Multi-AZ.Related informationQuotas for ElastiCacheAmazon ElastiCache error messagesTroubleshooting - Amazon ElastiCache for RedisTroubleshooting AWS CLI errorsFollow"
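For the throttling case, the retry with exponential backoff and jitter can be scripted. The following is a minimal bash sketch that wraps a sample ElastiCache call; the cluster ID is a placeholder, and your own workload would wrap whichever API call was throttled.
for attempt in 1 2 3 4 5; do
  aws elasticache describe-cache-clusters --cache-cluster-id example-cluster && break   # stop retrying on success
  sleep $(( (2 ** attempt) + (RANDOM % 3) ))   # exponential backoff plus random jitter
done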
https://repost.aws/knowledge-center/elasticache-fix-status-code-400
How do I estimate the cost of my planned AWS resource configurations?
I want to provision AWS resources. How can I estimate how much my AWS resources will cost?
"I want to provision AWS resources. How can I estimate how much my AWS resources will cost?ResolutionUnderstand the principles of AWS cloud pricingMost AWS services offer pay-as-you-go pricing, so you pay for only what you use each month. For more information, see AWS pricing.Each AWS service has its own pricing model. For more information, see Cloud Services pricing.Estimate your AWS billingIf you plan to migrate significant infrastructure to AWS, use the AWS Sales & Business Development contact form, and then choose I need to speak to someone in sales.To estimate a bill, use the AWS Pricing Calculator. Choose Create estimate, and then choose your planned resources by service. The AWS Pricing Calculator provides an estimated cost per month. For more information, see What is AWS Pricing Calculator?To forecast your costs, use the AWS Cost Explorer. Use cost allocation tags to divide your resources into groups, and then estimate the costs for each group.Note: AWS Support can't estimate your costs for migrating infrastructure to AWS.Follow"
https://repost.aws/knowledge-center/estimating-aws-resource-costs
Why am I receiving a "no Spot capacity available" error when trying to launch an Amazon EC2 Spot Instance?
"I get an error message when I try to launch an Amazon Elastic Compute Cloud (Amazon EC2) Spot Instance. The error reads, "There is no Spot capacity available that matches your request.""
"I get an error message when I try to launch an Amazon Elastic Compute Cloud (Amazon EC2) Spot Instance. The error reads, "There is no Spot capacity available that matches your request."ResolutionThe "no Spot capacity available" error occurs when Amazon EC2 doesn't have enough Spot capacity to fulfill a Spot Instance or Spot Fleet request. Spot capacity is the amount of spare, unused EC2 compute capacity that's available to customers at a lower price than On-Demand Instances.To troubleshoot this error, do one of the following:Keep the request as it is. The Spot request continues to automatically make the launch request until capacity becomes available. When capacity becomes available, Amazon EC2 fulfills the Spot request. If you encounter the "no Spot capacity available" error frequently, then consider using the next workaround.Be flexible about the instance types that you request and the Availability Zones that you deploy when setting up your workload. For example, instead of requesting an m5.large in us-east-1a, request an m4.large, c5.large, r5.large, or t3.xlarge in multiple Availability Zones. This type of request increases the chances of Amazon Web Services (AWS) finding and allocating your required amount of compute capacity.Use the price and capacity optimized allocation strategy (best practice). This allocation strategy looks at both price and capacity to select the Spot Instance pools. The Spot Instance pools that are selected are the least likely to be interrupted and have the lowest possible price. The price and capacity optimized strategy maintains an interruption rate that's comparable to the capacity-optimized allocation strategy. Also, with this strategy, the total price of your Spot Instances is typically lower than the capacity-optimized strategy. For more information, see Allocation strategies for Spot Instances.Use the capacity-optimized allocation strategy. This allocation strategy analyses real-time capacity data to launch your Spot Instances into pools with the most available capacity. The capacity-optimized allocation strategy reduces your chances of receiving "no Spot capacity available" errors.You might implement the preceding solutions when provisioning a Spot Instance through Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet. For a complete list of best practices when using Spot Instances, see Spot Instance best practices.Related informationSpot Instance interruptionsSpot request statusFollow"
https://repost.aws/knowledge-center/ec2-spot-instance-insufficient-capacity
Why isn't my Reserved Instance applying to my AWS billing?
"I purchased a Reserved Instance (RI), but I'm not getting a discount."
"I purchased a Reserved Instance (RI), but I'm not getting a discount.Short descriptionYou purchased an RI, but aren't seeing the expected billing benefits due to one of the following reasons:The payment for your RI was unsuccessful.Your RI isn't active.Your RI doesn't match the specifications of running instances.Your RI is size-flexible, or has the Regional benefit, and the benefit is applying to a different On-Demand Instance.ResolutionRIs aren't physical instances. Instead, they are a billing discount applied to On-Demand Instances in your account. For an RI's discount to apply, the following conditions must be true:The upfront cost for your RI must process successfully. Check the status of your payments on the Payment History page of the Billing and Cost Management console. To retry a failed RI payment, contact AWS Support. Failed RI purchases from previous billing periods can’t be retried.Your RI must still be active. When you purchase an RI, you choose a one-year or three-year term. After the term expires, your instance is billed at the On-Demand Instance price. To continue receiving the discount, purchase another RI with the same specifications. To check if your RIs are active, you can sign in to the Amazon EC2 console, and then choose Reserved Instances from the navigation pane.Tip: To avoid gaps in RI discounts, use reservation expiration alerts.If you have an Amazon Elastic Compute Cloud (Amazon EC2) RI, the RI must exactly match a running EC2 instance’s characteristics. To get the maximum benefit from your RI, a running On-Demand Instance must exactly match the instance type, Availability Zone, platform, and tenancy of your RI. To review the characteristics of your running EC2 instance, sign in to the Amazon EC2 console, choose Running instances, and choose the running On-Demand Instance. Then, choose Reserved Instances from the navigation pane, and check if the RI was launched with similar attributes. To check if your EC2 RIs are being fully used, see How do I find out if my Amazon EC2 Reserved Instances are being fully used?If you have an Amazon Relational Database (Amazon RDS) RI, the RI must exactly match the specifications of a running DB instance. Otherwise, the DB instance is billed at the On-Demand rate. For more information, see Amazon RDS Reserved Instances. The charges for a reserved DB instance cover only the instance costs. These charges don't include regular costs associated with storage, backups, and I/O. For more information, see Reserved DB instance billing example. Note that the Region, DB engine, DB instance class, Offering type and Term chosen during the purchase of RI can't be changed later.If you have Reserved Instances in Amazon OpenSearch Service, the RI must match the Region, instance class, and instance type of the standard On-Demand Instance. Otherwise, the instance is billed at the On-Demand rate.If you have an Amazon ElastiCache Reserved Node, the specifications of the Reserved Node must match those of the On-Demand node. Otherwise, the node is billed at the On-Demand rate. Each hour, if the number of running cache nodes is less than or equal to the number of applicable Reserved Cache Nodes you have, all running cache nodes are charged at the Reserved Cache Node hourly rate. If the number of running cache nodes exceeds the number of applicable Reserved Cache Nodes, you are charged the On-Demand rate.You must have one RI for each instance that you want to receive a discount. Each RI provides the discount to only one running EC2 instance at a time. 
All additional running instances are billed at the On-Demand Instance price. RI billing benefits apply only to one instance-hour per clock-hour.Your RI might be applying to a different instance. Check if your RIs are size-flexible or have the Regional benefit. An RI with the Regional benefit applies to any On-Demand Instance in the same Region with matching specifications. An RI that's size-flexible applies either all or part of its pricing benefit to any On-Demand Instance in the same instance family, irrespective of the Availability Zone or instance size. For more information, see How can I find out if my Amazon EC2 Reserved Instance provides regional benefit or size flexibility?If you purchased an On-Demand Capacity Reservation, it remains unused until an EC2 instance with matching attributes is running. If an EC2 instance with matching attributes isn't running, then the capacity reservation will appear as an unused reservation on your EC2 bill. To check if your EC2 RIs are being fully used, see How do I find out if my Amazon EC2 Reserved Instances are being fully used?If your RI is active and matches the specifications of a running On-Demand Instance, use Cost Explorer to analyze your spending and usage. You can use AWS Cost Explorer to generate the RI Utilization and RI coverage reports. For more information, see How do I view my Reserved Instance utilization and coverage?Related informationHow you are billedHow Reserved Instances are appliedHow do Amazon EC2 Reserved Instances that are size-flexible apply to my AWS bill?Follow"
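To compare your active RIs against your running instances from the AWS CLI, a sketch similar to the following can help; the query fields shown are examples and can be adjusted.
aws ec2 describe-reserved-instances --filters Name=state,Values=active --query 'ReservedInstances[].{Type:InstanceType,AZ:AvailabilityZone,Scope:Scope,End:End}'   # list active RIs and their attributes
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query 'Reservations[].Instances[].{Type:InstanceType,AZ:Placement.AvailabilityZone,Platform:PlatformDetails,Tenancy:Placement.Tenancy}'   # list running instances to match against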
https://repost.aws/knowledge-center/ec2-ri-mismatch
How can I increase my IAM default quota?
I want to increase my AWS Identity and Access Management (IAM) default quota.
"I want to increase my AWS Identity and Access Management (IAM) default quota.ResolutionYou can request an increase to default quotas for adjustable IAM quotas up to the maximum limit. Quota requests within the maximum limit are automatically approved.To view the default and maximum IAM quota limits, see IAM object quotas.Note:IAM quota increase requests are available only in the US East (N. Virginia) Region.You can't request an increase to default quotas that aren't adjustable.Follow these steps to request an increase for adjustable IAM quotas within the maximum limit:1.    Open the Service Quotas console in the us-east-1 Region.2.    In the navigation pane, choose AWS services.3.    In the AWS services search bar, enter IAM.4.    In Service, choose IAM.5.    In Quota name, choose the quota that you want to increase.Note: Make sure that the Adjustable value equals Yes.6.    Choose Request quota increase.7.    In Change quota value, enter the quota amount, and then choose Request.8.    In Recent quota increase requests, the Status is Pending.9.    After five minutes, refresh the page.If the quota request is below the maximum quota limit, the request is automatically approved and the Status is now Quota request approved.(Optional) To request an increase for adjustable IAM quotas above the maximum limit:1.    Follow the previous steps 1-9.2.    If the quota request is above the maximum quota limit, the Status is now Quota requested. Choose Quota requested to view the Support Center case number opened on your behalf.You can view the status of your quota increase request by choosing the Support case number.Note: Quota requests above the maximum default limit are not automatically approved.Related informationRequesting a quota increaseFollow"
https://repost.aws/knowledge-center/iam-quota-increase
Do I need to specify the AWS KMS key when I download a KMS-encrypted object from Amazon S3?
I want to download stored objects from Amazon Simple Storage Service (Amazon S3) that use server-side encryption with AWS Key Management Service-managed keys (SSE-KMS).
"I want to download stored objects from Amazon Simple Storage Service (Amazon S3) that use server-side encryption with AWS Key Management Service-managed keys (SSE-KMS).ResolutionYou don't need to specify the AWS Key Management Service (AWS KMS) key ID when you download an SSE-KMS-encrypted object from an S3 bucket. Instead, you need the permission to decrypt the AWS KMS key.When a user sends a GET request, Amazon S3 must check for the appropriate authorization. Amazon S3 checks if the AWS Identity and Access Management (IAM) user or role that sent the request is authorized to decrypt the object's key. If the IAM user or role and key belong to the same AWS account, then decrypt permissions must be granted on the key policy.Note: When the IAM user or role and KMS key are in the same account, you can use IAM policies to control access to the key. However, you must modify the key policy to explicitly turn on IAM policies to allow access to the key. For more information, see Using IAM policies with AWS KMS.If the IAM user or role and key belong to different accounts, then you have to grant decrypt permissions on the IAM user's policy and the key's policy.The following is an example IAM policy that allows the user to both decrypt the AWS KMS key and also download from the S3 bucket:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "kms:Decrypt", "s3:GetObject" ], "Resource": [ "arn:aws:kms:example-region-1:123456789012:key/example-key-id", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ] } ]}The following is an example key policy statement that allows the user to decrypt the key:{ "Sid": "Allow decryption of the key", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::123456789012:user/Bob" ] }, "Action": [ "kms:Decrypt" ], "Resource": "*"}Note: For IAM users or roles that belong to a different account than the bucket, the bucket policy must also grant the user access to objects. For example, if the user needs to download from the bucket, then the user must have permission to the s3:GetObject action on the bucket policy.After you have the permission to decrypt the key, you can download S3 objects encrypted with the key using the AWS Command Line Interface (AWS CLI). Run a command similar to the following:aws s3api get-object --bucket DOC-EXAMPLE-BUCKET --key dir/example-object-name example-object-nameNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Related informationGetObjectget-objectProtecting data using server-side encryption with CMKs stored in AWS Key Management Service (SSE-KMS)Follow"
https://repost.aws/knowledge-center/decrypt-kms-encrypted-objects-s3
Why is my Amazon EBS volume stuck in the "deleting" state?
"My Amazon Elastic Block Store (Amazon EBS) volume is stuck in the "deleting" state, and I don't know why."
"My Amazon Elastic Block Store (Amazon EBS) volume is stuck in the "deleting" state, and I don't know why.ResolutionGenerally, Amazon EBS volumes are immediately deleted when you use the Amazon Elastic Compute Cloud (Amazon EC2) console or AWS Command Line Interface (AWS CLI). However, the volume can get stuck in the "deleting" state when there's an active snapshot that's associated with your volume.If you make a CreateSnapshot API call immediately before making a DeleteVolume request, then volume deletion is postponed until snapshot creation completes. Before deleting a volume, check for any snapshot creations that are still in progress. For instructions on how to access snapshot information, see View Amazon EBS snapshot information.Note: Even after a snapshot creation is complete, volume deletion might still be delayed because volume workflows are asynchronous.Follow"
https://repost.aws/knowledge-center/ebs-resolve-volume-stuck-deleting-state
How do I subscribe a Lambda function to an Amazon SNS topic in the same account?
I want to subscribe my AWS Lambda function to an Amazon Simple Notification Service (Amazon SNS) topic in my AWS account. How do I do that?
"I want to subscribe my AWS Lambda function to an Amazon Simple Notification Service (Amazon SNS) topic in my AWS account. How do I do that?ResolutionNote: The instructions in this article follow those in Tutorial: Using AWS Lambda with Amazon Simple Notification Service. However, this article provides same-account setup instructions. For prerequisites and cross-account set-up instructions, see the tutorial.1.    Run the following command to create an Amazon SNS topic:Note: Replace lambda-same-account with the name that you want for your topic.$ aws sns create-topic --name lambda-same-accountNote the topic's Amazon Resource Name (ARN) that's returned in the command output. You'll need it later.2.    Create an execution role for Lambda to access AWS resources. Note the role's ARN. You'll need it later.3.    Create a deployment package. (Follow steps 1 and 2 in the tutorial.)4.    Run the following command to create a Lambda function:Note: Replace sns-same-account with the name that you want for your function. Replace arn:aws:iam::123456789012:role/service-role/lambda-sns-role with your execution role's ARN.$ aws lambda create-function --function-name sns-same-account \--zip-file fileb://function.zip --handler index.handler --runtime nodejs14.x \--role arn:aws:iam::123456789012:role/service-role/lambda-sns-role \--timeout 60Note the function's ARN that's returned in the command output. You'll need it in the next step.5.    Run the following command to add Lambda permissions for your Amazon SNS topic:Note: Replace sns-same-account with the name you gave your function. Replace arn:aws:sns:us-east-1:123456789012:lambda-same-account with your topic's ARN.$ aws lambda add-permission --function-name sns-same-account \--source-arn arn:aws:sns:us-east-1:123456789012:lambda-same-account \--statement-id sns-same-account --action "lambda:InvokeFunction" \--principal sns.amazonaws.com6.    Run the following command to subscribe your Lambda function to the Amazon SNS topic:Note: Replace arn:aws:sns:us-east-1:123456789012:lambda-same-account with your topic's ARN. Replace arn:aws:lambda:us-east-1:123456789012:function:sns-same-account with your function's ARN.$ aws sns subscribe --protocol lambda \--topic-arn arn:aws:sns:us-east-1:123456789012:lambda-same-account \--notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:sns-same-account7.    Run the following command to test the subscription by publishing a sample message:Note: Replace arn:aws:sns:us-east-1:123456789012:lambda-same-account with your topic's ARN.$ aws sns publish --message "Hello World" --subject Test \--topic-arn arn:aws:sns:us-east-1:123456789012:lambda-same-accountThe command output returns a message ID, confirming that the message is published to your topic.8.    (Optional) Run the following commands to confirm in your Amazon CloudWatch Logs that the Lambda function was invoked:Note: Replace sns-same-account with the name of your function.$ aws logs describe-log-streams --log-group-name /aws/lambda/sns-same-accountNote the logStreamName returned. Then, use the following command to retrieve the logs:Note: Replace sns-same-account with the name of your function and logStreamName with the logStreamName returned by describe-log-streams.$ aws logs get-log-events --log-group-name /aws/lambda/sns-same-account \--log-stream-name 'logStreamName'Related informationInvoking AWS Lambda functions via Amazon SNSFollow"
https://repost.aws/knowledge-center/lambda-subscribe-sns-topic-same-account
Why can't I access or view my Performance Insights data in Amazon RDS for MySQL?
I'm trying to enable Performance Insights in Amazon Relational Database Service (Amazon RDS) for MySQL. Why can't I access the data?
"I'm trying to enable Performance Insights in Amazon Relational Database Service (Amazon RDS) for MySQL. Why can't I access the data?Short descriptionYou might not be able to access or view your data in Performance Insights in Amazon RDS for MySQL for the following reasons:You've tried to manually set the Performance Schema values in a parameter group.Your DB instance doesn't have enough resources to access the data from Performance Insights.There's a transient networking issue, or system maintenance is underway on your DB instance.You've performed an upgrade of your DB instance from an unsupported Performance Insights version to a supported version.The data load on your MySQL DB instance is below the database load threshold.ResolutionYou've tried to manually set the Performance Schema values in a parameter groupIf you tried to manually update the Performance Schema parameter values in a parameter group, then Performance Insights won't work properly. A list of detailed wait events won't appear.The following parameters can't be automatically updated by Performance Insights:performance-schema-consumer-events-waits-current: ONperformance-schema-instrument: wait/%=ONperformance-schema-consumer-global-instrumentation: ONperformance-schema-consumer-thread-instrumentation: ONNote: You can reset the Performance Schema parameters back to the default values. After resetting the values, make sure to reboot your DB instance to enable the Performance Schema.Your DB instance doesn't have enough resources to access the data from Performance InsightsIf your DB instance is experiencing a heavy load, then your resources are dedicated to the database process. As a result, system processes such Performance Insights are deprioritized. To check whether your DB instance is under a heavy load, review the CPU utilization, disk queue depth, and read write latency values in Amazon CloudWatch.If your MySQL DB instance is experiencing a heavy load, then consider vertically scaling your DB instance class. When you configure a DB instance class, there will be some downtime. To troubleshoot CPU usage issues, see How can I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL instances?There's a transient networking issue, or system maintenance is underway on your DB instanceWhen your DB instance experiences a transient networking issue or system maintenance, Performance Insights might not properly report data. If these factors affect your resources, then review the Personal Health Dashboard. The Personal Health Dashboard will provide guidance on how to proceed.You've performed an upgrade of your DB instance from an unsupported Performance Insights version to a supported oneIf you enable Performance Insights while performing a DB engine version upgrade, then your DB instance might not properly apply these changes. Also, make sure that your Amazon RDS Performance Insights version is supported, or your data might not properly sync.If your MySQL DB engine version is supported, then you can enable or disable Performance Insights when creating an instance or when modifying an instance. Make sure to choose Apply Immediately to apply the changes right away.Performance Insights is available only for MySQL DB engine versions 8.0.17 and higher, version 5.7.22 and higher, and version 5.6.41 and higher. Additionally, Performance Insights isn't supported on the following DB instance classes: db.t2.micro, db.t2.small, db.t3.micro, and db.t3.small. 
Therefore, check to make sure that your MySQL DB engine version is compatible. For more information about supported DB engine versions for Performance Insights, see Amazon RDS DB engine support for Performance Insights.The data load on your MySQL DB instance is below the database load thresholdIf you enabled Performance Insights and you can't view your data, then check the Db load chart and Counter metrics in your Performance Insights dashboard. If you see data under Counter metrics, but not Db load chart, then your DB load might be below the database load threshold for MySQL. To test and confirm, run a long-running transaction on your MySQL DB instance, and then check the Performance Insights dashboard again. If the data populates, then your original data load is likely below the data load threshold.Follow"
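To confirm the current Performance Insights setting, engine version, and instance class from the AWS CLI, and then turn the feature on, a sketch such as the following can help; the DB instance identifier is a placeholder.
aws rds describe-db-instances --db-instance-identifier example-mysql-db --query 'DBInstances[0].{PI:PerformanceInsightsEnabled,Engine:Engine,Version:EngineVersion,Class:DBInstanceClass}'   # check whether Performance Insights is enabled and whether the engine version and class are supported
aws rds modify-db-instance --db-instance-identifier example-mysql-db --enable-performance-insights --apply-immediately   # enable Performance Insights right away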
https://repost.aws/knowledge-center/rds-mysql-performance-insights
How do I send AWS WAF logs to an Amazon S3 bucket in a centralized logging account?
I want to send AWS WAF logs to an Amazon Simple Storage Service (Amazon S3) bucket that's in a different account or AWS Region.
"I want to send AWS WAF logs to an Amazon Simple Storage Service (Amazon S3) bucket that's in a different account or AWS Region.ResolutionTo send AWS WAF logs to an Amazon S3 bucket that's in a centralized logging account, complete the steps in the following sections.Create an S3 bucket in the centralized logging account in your selected Region1.    Create an S3 bucket in the centralized logging account for your selected AWS Region.2.    Enter a bucket name that starts with the prefix aws-waf-logs-. For example, name your bucket similar to aws-waf-logs-example-bucket.Create and add a bucket policy to the S3 bucketAdd the following S3 bucket policy to your S3 bucket:Important:Replace the account IDs in aws:SourceAccount with the list of source account IDs that you want to send logs to this bucket.Replace the ARNs in aws:SourceArn with the list of ARNs of source resources that you want to publish logs to this bucket. Use the format of arn:aws:logs:*:source-account-id:*.Replace the S3 bucket name aws-waf-logs-example-bucket in Resource with the name of your S3 bucket.{ "Version": "2012-10-17", "Id": "AWSLogDeliveryWrite20150319", "Statement": [ { "Sid": "AWSLogDeliveryWrite", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::aws-waf-logs-example-bucket/AWSLogs/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control", "aws:SourceAccount": [ "111111111111", "222222222222" ] }, "ArnLike": { "aws:SourceArn": [ "arn:aws:logs:*:111111111111:*", "arn:aws:logs:*:222222222222:*" ] } } }, { "Sid": "AWSLogDeliveryAclCheck", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::aws-waf-logs-example-bucket", "Condition": { "StringEquals": { "aws:SourceAccount": [ "111111111111", "222222222222" ] }, "ArnLike": { "aws:SourceArn": [ "arn:aws:logs:*:111111111111:*", "arn:aws:logs:*:222222222222:*" ] } } } ]}Configure your web ACLs to send the logs to the desired S3 bucketNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.You must configure your web ACLs to send the AWS WAF logs to the centralized logging account's S3 bucket. To configure a web ACL, run the put-logging-configuration AWS CLI command from the account that owns the web ACL.Important:Replace the ResourceArn value with your web ACLs ARN.Replace LogDestinationConfigs value with the ARN of the S3 bucket that's in your centralized logging account.Replace region with the AWS Region where your web ACL is located.aws wafv2 put-logging-configuration --logging-configuration ResourceArn=arn:aws:wafv2:eu-west-1: 111111111111:regional/webacl/testing/b4a768c9-4895-4f35-9354-3049ab8acc29,LogDestinationConfigs=arn:aws:s3:::aws-waf-logs-example-bucket --region eu-west-1Note: For web ACLs in the CloudFront(Global) Region, use us-east-1 as the Region in preceding command.Repeat the preceding put-logging-configuration command for each of your web ACLs.Related informationPermissions to publish logs to Amazon S3Follow"
https://repost.aws/knowledge-center/waf-send-logs-centralized-account
How do I resolve the "Unable to import module" error that I receive when I run Lambda code in Python?
I receive an "Unable to import module" error when I try to run my AWS Lambda code in Python.
"I receive an "Unable to import module" error when I try to run my AWS Lambda code in Python.Short descriptionYou typically receive this error when your Lambda environment can't find the specified library in the Python code. This is because Lambda isn't prepackaged with all Python libraries.To resolve this, create a deployment package or Lambda layer that includes the libraries that you want to use in your Python code for Lambda.Important: Make sure that you put the library that you import for Python inside the /python folder.ResolutionNote: The following steps show you how to create a Lambda layer rather than a deployment package. This is because you can reuse the Lambda layer across multiple Lambda functions. Each Lambda runtime adds specific folders inside the /opt directory referenced by the PATH variable. If the layer uses the same folder structure, then your Lambda function's code can access the layer content without specifying the path.It's a best practice to create a Lambda layer on the same operating system that your Lambda runtime is based on. For example, Python 3.8 is based on an Amazon Linux 2 Amazon Machine Image (AMI). However, Python 3.7 and Python 3.6 are based on the Amazon Linux AMI.To create a Lambda layer for a Python 3.8 library, do the following:Note: Steps 1-3 are optional.1.    In the AWS Cloud9 console, create an Amazon Elastic Compute Cloud (Amazon EC2) instance with Amazon Linux 2 AMI. For instructions, see Creating an EC2 environment in the AWS Cloud9 User Guide.2.    Create an AWS Identity and Access Management (IAM) policy that grants permissions to call the PublishLayerVersion API operation.Example IAM policy statement that grants permissions to call the PublishLayerVersion API operation{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "lambda:PublishLayerVersion", "Resource": "*" } ]}3.    Create an IAM role and attach the IAM policy to the role. Then, attach the IAM role to the Amazon EC2 instance.Note: Your EC2 instance now has permissions to upload Lambda layers for the PublishLayerVersion API call.4.    Open your AWS Cloud9 Amazon EC2 environment. Then, install Python 3.8 and pip3 by running the following commands:$ sudo amazon-linux-extras install python3.8$ curl -O https://bootstrap.pypa.io/get-pip.py$ python3.8 get-pip.py --user5.    Create a python folder by running the following command:$ mkdir python6.    Install the Pandas library files into the python folder by running the following command:Important: Replace Pandas with the name of the Python library that you want to import.$ python3.8 -m pip install pandas -t python/7.    Zip the contents of the python folder into a layer.zip file by running the following command:$ zip -r layer.zip python8.    Publish the Lambda layer by running the following command:Important: Replace us-east-1 with the AWS Region that your Lambda function is in.$ aws lambda publish-layer-version --layer-name pandas-layer --zip-file fileb://layer.zip --compatible-runtimes python3.8 --region us-east-19.    Add the layer to your Lambda function.Related informationHow do I troubleshoot "permission denied" or "unable to import module" errors when uploading a Lambda deployment package?Follow"
https://repost.aws/knowledge-center/lambda-import-module-error-python
Why is my instance terminating immediately after launching or starting?
"My Amazon Elastic Compute Cloud (Amazon EC2) instance state changes from pending to terminated immediately after launching a new instance, or after starting a stopped instance. How can I troubleshoot this issue?"
"My Amazon Elastic Compute Cloud (Amazon EC2) instance state changes from pending to terminated immediately after launching a new instance, or after starting a stopped instance. How can I troubleshoot this issue?ResolutionFirst, determine the termination reason using the Amazon EC2 console or AWS Command Line Interface (AWS CLI). Then, follow the troubleshooting steps for that reason. For more information, see Instance terminates immediately.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Related informationInstance volume limitsdescribe-instancesUsing key policies in AWS KMSWhy am I unable to start or launch my EC2 instance?Follow"
https://repost.aws/knowledge-center/ec2-instance-terminating
How can I create additional listeners for AWS Elastic Beanstalk environments that use a shared load balancer?
I want to create additional listeners for AWS Elastic Beanstalk environments that use a shared load balancer.
"I want to create additional listeners for AWS Elastic Beanstalk environments that use a shared load balancer.Short descriptionIf you're using a shared load balancer with Elastic Beanstalk, then you can't create additional listeners using the aws:elbv2:listener:listener_port option setting or the Elastic Beanstalk console. This is because the load balancer isn't managed by Elastic Beanstalk.You can use .ebextension custom resources to create additional listeners for an Elastic Beanstalk environment with a shared load balancer.Tip: It's a best practice to associate additional listeners with the lifecycle of the environment, and to remove the listeners if you terminate the environment.Resolution1.    Create an Application Load Balancer that includes a default listener and target group.2.    Create a configuration file called additional-listener.config file that includes the following:Resources: AdditionalHttpListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: LoadBalancerArn: "Fn::GetOptionSetting": Namespace: "aws:elbv2:loadbalancer" OptionName: "SharedLoadBalancer" DefaultActions: - Type: forward TargetGroupArn: Ref: AWSEBV2LoadBalancerTargetGroup Port: 8080 Protocol: HTTPNote: The YAML file in step 2 follows the AWS CloudFormation specification for the AWS::ElasticLoadBalancingV2::Listener resource.3.    Place the file from step 2 into the .ebextensions folder that's part of your application source bundle.4.    Create a ZIP file of your updated application source bundle.5.    Use the ZIP file from step 4 to create a new Elastic Beanstalk environment, or update an existing environment that's configured with the shared load balancer from step 1.The configuration file from step 2 creates an HTTP listener on port 8080 for the shared load balancer associated with your Elastic Beanstalk environment. Then, the listener forwards all traffic to the default process. You can further extend this configuration file to add additional rules to the listener using the AWS::ElasticLoadBalancingV2::ListenerRule resource definition of CloudFormation.Important: Because this listener is created as an additional resource as part of the Elastic Beanstalk environment, the listener is removed if the environment is terminated.Note: To learn more about shared load balancers and default listener rules, see Configuring a shared Application Load Balancer.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-listeners
Why are my scheduled backup plans in AWS Backup not running?
"I have configured backup plans and rules in AWS Backup, but my backup doesn't run as scheduled. How do I troubleshoot this issue?"
"I have configured backup plans and rules in AWS Backup, but my backup doesn't run as scheduled. How do I troubleshoot this issue?Short descriptionTo troubleshoot a scheduled backup plan that doesn't get initiated automatically, check if:the resource is opted-in for backupthe backup window for the backup rule is configured according to your needsthe AWS Identity and Access Management (IAM) role used to assign resources to the backup plan has sufficient permissions for resource assignmentsthe tags on the resources match the tag keys and values configured in the resource assignmentsthe backup policy is configured correctly for the cross-account management backup (if you are using cross-account management)ResolutionResource type turned on for backupBe sure that the resource type is turned on for protection by the backup plans in your account. The service opt-in feature allows you to choose the resource types that are protected by your backup plans.To activate a resource type for backup protection, do the following:Open the AWS Backup console.In the navigation pane, expand My account.Choose Settings.In the Service opt-in section, choose Configure resources.Turn on the services that you want to activate.Note: Services, such as Amazon Aurora and Amazon FSx, aren't activated by default.Choose Confirm.Note: Service opt-in settings are Region-specific. Be sure to check this setting in all AWS Regions where you've configured backups.For more information, see Service Opt-in.Configuration of the backup windowWhen you configure a backup rule, you can customize your backup window. Backup windows consist of the time that the backup window begins (that is, the Backup window start time) and the duration of the window (that is, Start within) in hours. By default, the Backup window start time and Start within fields are set to UTC 05:00 AM and 8 hours, respectively. Backup jobs are started within this window. Your backup jobs might be initiated during this backup window. Your backup jobs might not be initiated depending on when you check the status of these jobs.You can customize the backup window by modifying the default values for Backup window start time and Start within fields to your preferred values. To modify the Backup window start time and Start within fields, do the following:Open the AWS Backup console.In the navigation pane, choose Backup plans.Choose the backup plan that you want to update.Select the Backup rule that you want to update, and then choose Edit.In the Backup rule configuration section, select Customize backup window.For Backup window start time, select the start time of your preference.For Start within, select the duration of your preference.Choose Save.Configuration of the IAM role for resource assignmentsWhen you assign resources to a backup plan, you must choose an IAM role. If you are assigning resources through a deployment service, such as AWS CloudFormation, be sure of the following:The IAM role that's associated with the AWS::Backup::BackupSelection resource exists in the AWS account where the CloudFormation template is deployed. For more information, see Using AWS CloudFormation to provision AWS Backup resources.The IAM role has sufficient permissions to initiate the backup job on resources that are assigned to the backup plan.For more information, see Assign resources to a backup plan.Tags on assigned resourcesYou can assign resources to backup plans using tags. 
During these assignments, be sure that the tags on the resources match the tag keys and values configured in the resource assignments in terms of the following:Case-sensitivity: The tag keys and values are both case sensitive. Therefore, a tag value of true isn't equal to TRUE or True. For example, if the resource to be backed up is tagged with the key-value pair of backup:true, it's backed up only if the tag-based policy is configured with a key-value pair that completely matches the letters and the case.No white space: When you create tags for some AWS resources, the trailing white space might be accepted as allowed characters in tag names and values. For example, the tag name AWSBackup with a trailing space ("AWSBackup ") isn't the same as AWSBackup. The trailing space on tags might not be easy to view from the console. You can run a command similar to the following using the AWS Command Line Interface (AWS CLI):aws backup get-backup-selection --backup-plan-id abcd-efgh-ijkl-mnop --selection-id 11111111-2222-3333-4444-55555example            Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.            Replace abcd-efgh-ijkl-mnop and 11111111-2222-3333-4444-55555example with the backup-plan-id and selection-id of your backup plan.            The output of the AWS CLI command is similar to the following:{...... "ListOfTags": [ { "ConditionType": "STRINGEQUALS", "ConditionKey": "examplekey ", "ConditionValue": "examplevalue " } ] },......}            You can view the trailing spaces after both the tag name and the tag value in the output. For more information, see get-backup-selection.Backup policy for cross-account backupAs part of a scheduled backup plan, you can back up to multiple AWS accounts on demand. If you are configuring the backup policy for a cross-account management, check all the previous troubleshooting steps. Then, be sure of the following:The backup vault configured in the backup policy exists in the member accounts where the backup policy is attached.The backup policy is attached in the correct member account.The backup vault name configured in the backup policy matches the name of an existing backup vault in the target account. Note: Backup vault names are case-sensitive.For more information, see Managing AWS Backup resources across multiple AWS accounts.Amazon RDS backup failureWhen your Amazon Relational Database Service (Amazon RDS) instance misses a backup cycle, you get one of the following error messages:Can't start a backup now. RDS DB instance is closer to enter RDS automated maintenance window.Backup job could not start because it is either inside or too close to the automated backup window configured in RDS instance.This can happen when the RDS maintenance window or the RDS automated backup window is approaching. In AWS Backup, RDS backups aren't allowed within an hour before the RDS maintenance window or the RDS automated backup window. Therefore, be sure that your backup plans for RDS databases are scheduled more than an hour apart from the RDS maintenance window and the RDS automated backup window. This time frame is extended to 4 hours for Amazon FSx.Related informationTroubleshooting AWS BackupAccess controlTagging AWS resourcesFollow"
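Two AWS CLI checks that can speed up this troubleshooting are confirming the Region's service opt-in settings and listing recently failed jobs; a sketch follows, where the date is an example value.
aws backup describe-region-settings --region us-east-1   # shows which resource types are opted in for backup in this Region
aws backup list-backup-jobs --by-state FAILED --by-created-after 2024-07-01T00:00:00Z   # list backup jobs that failed since the given date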
https://repost.aws/knowledge-center/aws-backup-troubleshoot-scheduled-backup-plans
How can I troubleshoot 5xx errors for API Gateway?
"When I call my Amazon API Gateway API, I get an 5xx error."
"When I call my Amazon API Gateway API, I get an 5xx error.Short descriptionHTTP 5xx response codes indicate server errors. API Gateway 5xx errors include the following cases:500 internal server502 bad gateway503 service unavailable504 endpoint request timed outResolutionBefore you begin, follow the steps to turn on Amazon CloudWatch Logs for troubleshooting API Gateway errors.Use the CloudWatch logs to find 5xx errors from API Gateway. The API Gateway metric 5XXError counts the number of server-side errors that are captured in a given period.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.500 error: internal server errorThis error can occur because of any of the following scenarios:Errors in the AWS Lambda function codeMissing permissions for using a stage variableIncorrect or missing HTTP status code mappingThrottling issuesUndefined HTTP method of POSTLambda permissionsLambda function JSON format issueBackend payload size exceeding 10 MBPrivate endpoint integrationInternal service failuresErrors in the Lambda function codeAPI endpoint 500 errors that integrate with Lambda might indicate that the Lambda function has an error in the code. For more information and troubleshooting, see Error handling patterns in Amazon API Gateway and AWS Lambda.Missing permissions for using a stage variableIf you use a stage variable to set up an API Gateway to invoke a Lambda function, then you might receive an Internal server error. To resolve this error, see I defined my Lambda integration in API Gateway using a stage variable. Why do I get an "Internal server error" and a 500 status code when I invoke the API method?Incorrect or missing HTTP status code mappingIncorrect or missing HTTP status code mapping can also result in 500 errors. To resolve this issue, set up mock integrations in API Gateway.Throttling issuesIf a high number of requests is throttling the backend service, then the API Gateway API might return an Internal server error. To resolve this issue, activate an exponential backoff and retry mechanism, and then try the request again. If the issue persists, then check your API Gateway quota limit. If you exceed the service quota limit, then you can request a quota increase.Undefined HTTP method of POSTFor Lambda integration, you must use the HTTP method of POST for the integration request.Run the AWS CLI command put-integration to update the method integration request:aws apigateway put-integration \ --rest-api-id id \ --resource-id id \ --http-method ANY \ --type AWS_PROXY \ --integration-http-method POST \ --uri arn:aws:apigateway:us-east-2:lambda:path//2015-03-31/functions/arn:aws:lambda:us-east-2:account_id:function:helloworld/invocationsThen, use the AWS CLI command create-deployment to deploy the REST API:aws apigateway create-deployment \ --rest-api-id id \ --stage-name <value>Lambda permissionsMake sure that the integrated Lambda function's or Lambda authorizer's resource-based policy includes permissions for your API to invoke the function. Follow the instructions to update your Lambda function's resource-based policy.Lambda function JSON format issueThe integrated Lambda function isn't returning output according to the predefined JSON format for REST APIs and HTTP APIs. Update your Lambda function or Lambda authorizer in JSON format:REST API:{ "isBase64Encoded": true|false, "statusCode": httpStatusCode, "headers": { "headerName": "headerValue", ... 
}, "multiValueHeaders": { "headerName": ["headerValue", "headerValue2", ...], ... }, "body": "..."}HTTP API:{ "isBase64Encoded": true|false, "statusCode": httpStatusCode, "headers": { "headername": "headervalue", ... }, "multiValueHeaders": { "headername": ["headervalue", "headervalue2", ...], ... }, "body": "..."}Backend payload size exceeding 10 MBThe maximum backend payload size is 10 MB. You can't increase the size. Make sure that the backend payload size doesn't exceed the 10 MB default quota.Private endpoint integrationIf you're using a private API endpoint, you must also configure API Gateway private integration. Follow the instructions to set up API Gateway private integrations.Internal service failuresIf AWS experiences internal service problems, then you might receive a 500 error. Wait for the issue to resolve within AWS or the API Gateway service, and then retry the request with exponential backoff.502 error: bad gatewayA 502 error code is related to the AWS service that your API Gateway integrates with, such as an AWS Lambda function. API Gateway can't process the response as a gateway or proxy.To troubleshoot this issue, see How do I resolve HTTP 502 errors from API Gateway REST APIs with Lambda proxy integration?Note: When API Gateway interprets the response from the backend service, it uses mapping templates to map the format in the integration response section. For more information, see Set up an integration response in API Gateway.503 error: service unavailableA 503 error code is related to the backend integration and if the API Gateway API can't receive a response.This error might occur in the following scenarios:The backend server is overloaded beyond capacity and can't process new client requests.The backend server is under temporary maintenance.To resolve this error, consider provisioning more resources to the backend server and activating an exponential backoff and retry mechanism on the client. Then, try the request again.504 error: endpoint request timed outIf an integration request takes longer than your API Gateway REST API maximum integration timeout parameter, API Gateway returns an HTTP 504 status code.To resolve this error, see How can I troubleshoot API HTTP 504 timeout errors with API Gateway?Related informationSecurity best practices in Amazon API GatewayMonitoring REST API execution with Amazon CloudWatch metricsFollow"
https://repost.aws/knowledge-center/api-gateway-5xx-error
How can I update the WorkSpace image in a custom bundle?
I want to update the application installed in the Amazon WorkSpaces image associated with a custom WorkSpaces bundle. How can I do this?
"I want to update the application installed in the Amazon WorkSpaces image associated with a custom WorkSpaces bundle. How can I do this?ResolutionYou can update an existing custom WorkSpaces bundle by:Modifying the WorkSpace that is based on the bundle.Creating an image from the WorkSpace.Updating the bundle with the new image.You can then launch new WorkSpaces using the updated bundle.For a detailed tutorial, see Update a custom WorkSpaces bundle.Follow"
https://repost.aws/knowledge-center/workspace-image-update-custom-bundle
Why isn’t my AWS Config rule working?
My AWS Config rule isn't working. How can I troubleshoot this issue?
"My AWS Config rule isn't working. How can I troubleshoot this issue?ResolutionVarious issues can cause managed AWS Config rules to not work, including permissions, resource scope, or configuration change items. To resolve AWS Config rules that don't work, try the following troubleshooting steps.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.General AWS Config rule troubleshootingVerify that your configuration recorder is recording all of the resource types that your rule requires (for example, AWS::EC2::Instance).Open the AWS Config console , and then choose Rules from the navigation pane. If the Compliance field indicates No results reported or No resources in scope, see step 8 of Setting up and activating an AWS managed rule.If an evaluation time isn't reported and indicates Evaluations failed, review the PutEvaluations API call in AWS CloudTrail Logs for reported errors.Open the AWS CloudTrail console , and then choose Event history from the navigation pane. To filter the logs, choose Event source from the dropdown, and enter config.amazonaws.com in the search field. Review the filtered log results for Access Denied errors.For periodic trigger AWS Config rules, access the CloudTrail console Event history dashboard to verify the relevant service APIs on the resource.Review specific resource configuration and compliance timelines. Confirm that a configuration item generated to reflect the change to the AWS Config rules with a configuration change-based trigger.Confirm that the recorder role permissions requirements are met. These credentials are used to describe the resource configuration and publishing compliance using the PutEvaluations API.Run the following AWS CLI command. Replace ConfigRuleName with your AWS Config rule name, and replace RegionID with your AWS Region. From the output, review the LastErrorMessage value.aws configservice describe-config-rule-evaluation-status --config-rule-names ConfigRuleName --region RegionIDCustom AWS Config rule troubleshootingFor custom AWS Config rules, in addition to the preceding general troubleshooting steps, verify the following:An "Unable to execute lambda function" error message indicates that the AWS Config service doesn't have permission to invoke the AWS Lambda function. To resolve this error, run the following command to grant the required permissions. Replace function_name with your Lambda function name, RegionID with your AWS Region, and AWS-accountID with your AWS account ID:aws lambda add-permission --function-name function_name --region RegionID --statement-id allow_config --action lambda:InvokeFunction --principal config.amazonaws.com --source-account AWS-accountIDThe following is an example resource policy of the Lambda function:{ "Version": "2012-10-17", "Id": "default", "Statement": [ { "Sid": "allow_config", "Effect": "Allow", "Principal": { "Service": "config.amazonaws.com" }, "Action": "lambda:InvokeFunction", "Resource": "lambda-function-arn", "Condition": { "StringEquals": { "AWS:SourceAccount": "AWS-accountID" } } } ]}Identify the PutEvaluations event that has a User name value matching the Lambda function name. Review the errorMessage for details.If the role that the Lambda function uses to run the code isn't authorized to perform config:PutEvaluations, then add the permissions to the specified role.If the permissions are correct, review the Lambda function code for any raised exceptions. 
For more details, review the logs in the Amazon CloudWatch log group (/aws/lambda/FunctionName) associated with the Lambda function. Add a print statement in the code to generate more debugging logs.Related informationWhy can't I create or delete organization config rules?Follow"
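As a starting point for the general troubleshooting steps above, you can confirm the recorder state from the AWS CLI; RegionID below is a placeholder for your AWS Region:
# Check whether the configuration recorder is turned on and review its last status
aws configservice describe-configuration-recorder-status --region RegionID
# Review which resource types the recorder records
aws configservice describe-configuration-recorders --region RegionID
If a required resource type isn't being recorded, update the recorder before re-evaluating the rule.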
https://repost.aws/knowledge-center/config-rule-not-working
How do I find out what stopped my AWS OpsWorks Stacks instance?
One of my Amazon Elastic Compute Cloud (Amazon EC2) instances that's managed by AWS OpsWorks Stacks stopped running. How do I verify what stopped the instance?
"One of my Amazon Elastic Compute Cloud (Amazon EC2) instances that's managed by AWS OpsWorks Stacks stopped running. How do I verify what stopped the instance?Short descriptionThere are two ways to stop an OpsWorks Stacks instance:Manually stopping an instance in the OpsWorks console or by using the OpsWorks Stacks API.The OpsWorks Stacks autohealing feature.Important: OpsWorks Stacks doesn't recognize start, stop, or restart operations performed in the Amazon EC2 console. For more information, see Manually starting, stopping, and rebooting 24/7 instances.To verify what stopped your OpsWorks Stacks instance, you can do either of the following:Review your AWS CloudTrail for simultaneous Amazon EC2 StopInstances API calls and OpsWorks Stacks StopInstance API callsIf the two API calls are logged over the same time period, then the instance was stopped manually on the OpsWorks Stacks side. If there's an Amazon EC2 StopInstances API call logged only, then autohealing was applied to the instance.Review your instance's Agent logs to see if the OpsWorks agent was still sending its keepalive signal when the instance stoppedIf successful keepalive signals are logged when the instance stopped, then the instance was stopped manually on the OpsWorks Stacks side. If the keepalive logs are missing or there are failed signal attempts logged when the instance stopped, then autohealing was applied.If autohealing was applied to your instance, see How do I stop AWS OpsWorks Stacks from unexpectedly restarting healthy instances? If your instance was stopped manually, review the AWS Identity and Access Management (IAM) role that made the StopInstance API call. Then, determine who has access to that role and find out why they stopped the instance.ResolutionReview your instance's CloudTrail logs for Amazon EC2 StopInstances API calls1.    Open the CloudTrail console.Important: Make sure that the AWS Region selected is the same Region that your instance is in.2.    In the left navigation pane, choose Event history.3.    In the upper left of the Event history page, select the filter dropdown list. Then, choose Resource name.4.    In the search text box to the right of the filter dropdown list, enter your Amazon EC2 instance ID. Results for all of the events associated with the instance appear.5.    In the Event name column, look for StopInstances.6.    In the Event time column of the StopInstances event row, note the API call's timestamp. You will reference the timestamp when reviewing your instance's CloudTrail logs for OpsWorks Stacks StopInstance API calls.7.    Open the event record by choosing the name of the event (StopInstances) in the Event name column.8.    In the Event record pane, look for the "invokedBy" value. If the instance was stopped on the OpsWorks Stacks side—either manually or through autohealing—then the Amazon EC2 StopInstances API response shows the following output:"invokedBy": "opsworks.amazonaws.com"Note: There is no indicator in the Event record if autohealing was applied to the instance or not.Review your instance's CloudTrail logs for OpsWorks Stacks StopInstance API calls1.    Open the CloudTrail console.Important: Make sure that the AWS Region selected is the same Region that your OpsWorks Stacks API endpoint is in.2.    In the left navigation pane, choose Event history.3.    In the upper left of the Event history page, select the filter dropdown list. Then, choose Resource name.4.    
In the search text box to the right of the filter dropdown list, enter your OpsWorks Stacks instance ID. Results for all of the events associated with the instance appear.5.    In the Event name column, look for StopInstance.6.    In the Event time column of the StopInstance event row, verify if the event's timestamp is the same as the Amazon EC2 StopInstances event's timestamp or not.If the StopInstance API call is logged at the same time as the StopInstances API call, then the instance was stopped manually on the OpsWorks Stacks side.If no StopInstance API call is logged at the same time as the StopInstances API call, then autohealing was applied to the instance.(Optional) Review your instance's Agent logs to see if the OpsWorks agent was still sending its keepalive signal when the instance stoppedConnect to your Linux instance by using SSH (Secure Shell), or connect to your Windows instance by using the Windows remote desktop protocol (RDP). Then, check for the log file opsworks-agent.keep_alive.log in the instance's OpsWorks Agent log.If successful keepalive signals are logged when the instance stopped, then the instance was stopped manually on the OpsWorks Stacks side. If the keepalive logs are missing or there are failed signal attempts logged when the instance stopped, then autohealing was applied.Related informationHow to set up AWS OpsWorks Stacks autohealing notifications in Amazon CloudWatch EventsHow do I stop AWS OpsWorks Stacks from unexpectedly restarting healthy instances?How do I troubleshoot "Internal Error" messages when stopping an AWS OpsWorks Stacks instance that's in the "stop_failed" state?Follow"
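The CloudTrail Event history review described above can also be done from the AWS CLI. The following is a minimal sketch; the instance ID and Region are placeholders. Run it once with your Amazon EC2 instance ID in the instance's Region, and again with your OpsWorks Stacks instance ID in the Region of your OpsWorks Stacks API endpoint:
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0abcd1234example \
  --region us-east-1 \
  --query 'Events[?EventName==`StopInstances` || EventName==`StopInstance`].[EventTime,EventName,Username]' \
  --output table
Compare the timestamps of the StopInstances and StopInstance events as described in the resolution.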
https://repost.aws/knowledge-center/opsworks-determine-what-stopped-instance
How can I back up a DynamoDB table to Amazon S3?
I want to back up my Amazon DynamoDB table using Amazon Simple Storage Service (Amazon S3).
"I want to back up my Amazon DynamoDB table using Amazon Simple Storage Service (Amazon S3).Short descriptionDynamoDB offers two built-in backup methods:On-demand: Create backups when you choose.Point-in-time recovery: Turn on automatic and continuous backups.Both of these methods are suitable for backing up your tables for disaster recovery purposes. However, with these methods, you can't use the data for use cases involving data analysis or extract, transform, and load (ETL) jobs. The DynamoDB Export to S3 feature is the easiest way to create backups that you can download locally or use with another AWS service. To customize the process of creating backups, you can use Amazon EMR, AWS Glue, or AWS Data Pipeline.ResolutionDynamoDB Export to S3 featureUsing this feature, you can export data from an Amazon DynamoDB table anytime within your point-in-time recovery window to an Amazon S3 bucket. For more information, see DynamoDB data export to Amazon S3.For an example of how to use the Export to S3 feature, see Export Amazon DynamoDB table data to your data lake in Amazon S3, no code writing required.Using the Export to S3 Feature allows you to use your data in other ways including the following:Perform ETL against the exported data on S3 and import the data back to DynamoDBRetain historical snapshots for auditingIntegrate the data with other services/applicationsBuild an S3 data lake from the DynamoDB data and analyze the data from various services, such as Amazon Athena, Amazon Redshift, and Amazon SageMaker.Run as-needed queries on your data from Athena or Amazon EMR without affecting your DynamoDB capacityKeep in mind the following pros and cons when using this feature:Pros: This feature allows you to export data across AWS Regions and accounts without building custom applications or writing code. The exports don't affect the read capacity or the availability of your production tables.Cons: This feature exports the table data in DynamoDB JSON or Amazon Ion format only. The AWS Data Pipeline Import DynamoDB backup data from S3 feature can't be used to import data directly to DynamoDB, because this feature doesn't meet the data format requirements. You can't use the Data Pipeline templates to import the data back to a DynamoDB table. To re-import the data natively with an S3 bucket, see DynamoDB data import from Amazon S3. You can also create a new template or use AWS Glue, Amazon EMR, or the AWS SDK to re-import the data.Amazon EMRUse Amazon EMR to export your data to an S3 bucket. You can do so using either of the following methods:Run Hive/Spark queries against DynamoDB tables using DynamoDBStorageHandler. For more information, see Exporting data from DynamoDB.Use the open-source emr-dynamodb-tool on GitHub to export/import DynamoDB tables.Keep in the mind the following pros and cons when using these methods:Pros: If you're an active Amazon EMR user and are comfortable with Hive or Spark, then these methods offer more control than the native Export to S3 function. You can also use existing clusters for this purpose.Cons: These methods require you to create and maintain an EMR Cluster. If you use DynamoDBStorageHandler, then you must be familiar with Hive or Spark.AWS GlueUse AWS Glue to copy your table to Amazon S3. For more information, see Using AWS Glue and Amazon DynamoDB export.Pros: Because AWS Glue is a serverless service, you don't need to create and maintain resources. You can directly write back to DynamoDB. 
You can add custom ETL logic for use cases, such as filtering and converting, when exporting data. You can also choose your preferred format from CSV, JSON, Parquet, or ORC. For more information, see Data format options for inputs and outputs in AWS Glue.Cons: If you choose this option, you must have knowledge about using Spark. You also must maintain a source code for your AWS Glue ETL job. For more information, see "connectionType": "dynamodb".Data PipelineUse AWS Data Pipeline to export your table to an S3 bucket in the same account or a different account. For more information, see Import and export DynamoDB data using AWS Data Pipeline.Pros: Data Pipeline uses Amazon EMR to create the backup and the scripting is done for you. You don't have to learn Apache Hive or Apache Spark to accomplish this task. The cluster is created and maintained for you.Cons: If you use the templates provided, then creating the backups is not as customizable as AWS Glue or Amazon EMR. To create customizable backups to Amazon S3, choose one of the other methods, or create your own template for Data Pipeline.If none of these options offer the flexibility that you need, then you can use the DynamoDB API to create your own solution.Related informationRequesting a table export in DynamoDBHow to export an Amazon DynamoDB table to Amazon S3 using AWS Step Functions and AWS GlueFollow"
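If you use the Export to S3 feature, point-in-time recovery must be turned on for the table before you export. The following is a minimal AWS CLI sketch; the table name, table ARN, Region, account ID, and bucket name are placeholders:
# Turn on point-in-time recovery (required for exports)
aws dynamodb update-continuous-backups \
  --table-name MyTable \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
# Export the table to the S3 bucket in DynamoDB JSON format
aws dynamodb export-table-to-point-in-time \
  --table-arn arn:aws:dynamodb:us-east-1:111122223333:table/MyTable \
  --s3-bucket DOC-EXAMPLE-BUCKET \
  --export-format DYNAMODB_JSON
You can check the export status with aws dynamodb describe-export or aws dynamodb list-exports.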
https://repost.aws/knowledge-center/back-up-dynamodb-s3
How can I resolve the error "Unknown dataset URI pattern: dataset" when exporting Amazon RDS data to Amazon S3 in Parquet format using Sqoop?
"I'm trying to use an Amazon EMR cluster to export Amazon Relational Database Service (Amazon RDS) data to Amazon Simple Storage Service (Amazon S3) in Apache Parquet format using Apache Sqoop. I'm using the --as-parquetfile parameter, but I keep getting this error:"Check that JARs for s3a datasets are on the class path org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI pattern: dataset.""
"I'm trying to use an Amazon EMR cluster to export Amazon Relational Database Service (Amazon RDS) data to Amazon Simple Storage Service (Amazon S3) in Apache Parquet format using Apache Sqoop. I'm using the --as-parquetfile parameter, but I keep getting this error:"Check that JARs for s3a datasets are on the class path org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI pattern: dataset."Short descriptionThis error affects Sqoop version 1.4.7. To resolve the error, download and install the kite-data-s3-1.1.0.jar.ResolutionNote: The following solution was tested on Amazon EMR release version 5.34.0 and Sqoop version 1.4.7.1.    Connect to the master node using SSH.2.    Use wget to download the kite-data-s3-1.1.0.jar:[hadoop@ip-xxx-xx-xx-x]$ wget https://repo1.maven.org/maven2/org/kitesdk/kite-data-s3/1.1.0/kite-data-s3-1.1.0.jar3.    Confirm that the downloaded file is the correct size (1.7 MB):[hadoop@ip-xxx-xx-xx-x]$ du -h1.7M /usr/lib/sqoop/lib/kite-data-s3-1.1.0.jar4.    Move the JAR to the Sqoop library directory ( /usr/lib/sqoop/lib/):sudo cp kite-data-s3-1.1.0.jar /usr/lib/sqoop/lib/5.    Grant permission on the JAR:sudo chmod 755 kite-data-s3-1.1.0.jar6.    Use the s3n connector to import the jar. If you use the s3 connector, you get the Unknown dataset URI pattern: dataset error.sqoop import --connect jdbc:mysql://mysql.cdfqbesrukqe.eu-west-1.rds.amazonaws.com:8193/dev --username admin -P --table hist_root --target-dir "s3n://awsexamplebucket/sqoop_parquet/demo" --as-parquetfile -m 2 --split-by identifiers -- --schema onwatchFor more information about the Kite SDK dataset URI, see Dataset, view, and repository URIs.Follow"
https://repost.aws/knowledge-center/unknown-dataset-uri-pattern-sqoop-emr