Columns: Question (string, 0–222 chars) · Description (string, 0–790 chars) · Answer (string, 0–28.2k chars) · Link (string, 35–92 chars)
How do I exclude specific URIs from XSS or SQLi inspection for HTTP requests when using AWS WAF?
I'm receiving false positives for SQL injection (SQLi) or cross-site scripting (XSS) on certain HTTP requests.  How do I exclude specific URIs from XSS or SQLi inspection for HTTP requests when using AWS WAF?
"I'm receiving false positives for SQL injection (SQLi) or cross-site scripting (XSS) on certain HTTP requests.  How do I exclude specific URIs from XSS or SQLi inspection for HTTP requests when using AWS WAF?Short descriptionFalse positives sometimes occur during XSS and SQLi rule inspection for AWS Managed Rules and custom rules. You can exclude specific URI paths from XSS and SQLi inspection to avoid false positives. To do this, use nested statements to write the allow rule so that the request is evaluated against all the other rules.ResolutionExample HTTP or HTTPS requesthttp://www.amazon.com/path/string1?abc=123&xyz=567In the preceding request, the URI path is '/path/string1'. Any string following '?' is called the query string, such as ' abc=123&xyz=567' in the preceding example. In the query string, 'abc' and 'xyz' are the query parameters with values '123' and '567', respectively.Example rules for allowing specific URIs from XSS or SQLi inspectionNote: The following example rule configurations are for reference only. You must customize these rules for PositionalConstraint, SearchString, TextTransformations, and so on. You can use similar logic to allow specific headers, query parameters, and so on.Case 1: Using AWS Managed RulesThe AWS managed rule group AWSManagedRulesCommonRuleSet contains the following rules:CrossSiteScripting_COOKIECrossSiteScripting_QUERYARGUMENTSCrossSiteScripting_BODYCrossSiteScripting_URIPATHThe AWSManagedRulesCommonRuleSet rule group has BLOCK action that inspects for an XSS attack string in the corresponding part of the request. For more information, see Core rule set (CRS) managed rule group.Similarly, the rule group AWSManagedRulesSQLiRuleSet has rules to inspect query parameters, the body, and a cookie for an SQLi Injection attack pattern. For more information, see Use-case specific rule groups.When a request matches the preceding rules, AWS WAF generates the corresponding labels. The labels are used in the rule defined later in the Web ACL to selectively exclude specific requests (based on URI, in this example).To allow specific URIs, do the following:1.    Keep the following rules from the AWSManagedRulesCommonRuleSet rule group in Count mode:CrossSiteScripting_COOKIECrossSiteScripting_QUERYARGUMENTSCrossSiteScripting_BODYCrossSiteScripting_URIPATH2.    
Create an allow rule configured with lower priority than that of AWSManagedRulesCommonRuleSet.The logic of the rule is as follows:(XSS_URIPATH or XSS_Cookie or XSS_Body or XSS_QueryArguments) AND (NOT whitelisted URIString) = BLOCKUse the following configuration:{ "Name": "whitelist-xss", "Priority": 10, "Statement": { "AndStatement": { "Statements": [ { "OrStatement": { "Statements": [ { "LabelMatchStatement": { "Scope": "LABEL", "Key": "awswaf:managed:aws:core-rule-set:CrossSiteScripting_URIPath" } }, { "LabelMatchStatement": { "Scope": "LABEL", "Key": "awswaf:managed:aws:core-rule-set:CrossSiteScripting_Cookie" } }, { "LabelMatchStatement": { "Scope": "LABEL", "Key": "awswaf:managed:aws:core-rule-set:CrossSiteScripting_Body" } }, { "LabelMatchStatement": { "Scope": "LABEL", "Key": "awswaf:managed:aws:core-rule-set:CrossSiteScripting_QueryArguments" } } ] } }, { "NotStatement": { "Statement": { "ByteMatchStatement": { "SearchString": "URI_SearchString", "FieldToMatch": { "UriPath": {} }, "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ], "PositionalConstraint": "CONTAINS" } } } } ] } }, "Action": { "Block": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "whitelist-xss" }}Follow the preceding steps for AWSManagedRulesSQLiRuleSet by replacing the labels with AWSManagedRulesSQLiRuleSet generated labels.Case 2: Using custom XSS and SQLi rulesThe logic of the rule is as follows:(XSS_URIPATH or XSS_Cookie or XSS_Body or XSS_QueryArguments) AND (NOT whitelisted URIString) = BLOCKUse the following configuration for the rule to inspect XSS attack strings for the request while selectively excluding a specific URI_PATH:{ "Name": "xss-URI", "Priority": 10, "Action": { "Block": {} }, "VisibilityConfig": { "SampledRequestsEnabled": true, "CloudWatchMetricsEnabled": true, "MetricName": "xss-URI" }, "Statement": { "AndStatement": { "Statements": [ { "OrStatement": { "Statements": [ { "XssMatchStatement": { "FieldToMatch": { "UriPath": {} }, "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ] } }, { "XssMatchStatement": { "FieldToMatch": { "Cookies": { "MatchPattern": { "All": {} }, "MatchScope": "ALL", "OversizeHandling": "CONTINUE" } }, "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ] } }, { "XssMatchStatement": { "FieldToMatch": { "Body": { "OversizeHandling": "CONTINUE" } }, "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ] } }, { "XssMatchStatement": { "FieldToMatch": { "AllQueryArguments": {} }, "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ] } } ] } }, { "NotStatement": { "Statement": { "ByteMatchStatement": { "FieldToMatch": { "UriPath": {} }, "PositionalConstraint": "CONTAINS", "SearchString": "URI_SearchString", "TextTransformations": [ { "Type": "NONE", "Priority": 0 } ] } } } } ] } }}Follow the preceding procedure when using SQLi statements.Note: It's not a best practice to have a rule with higher priority that allows just the URI. Doing this doesn't allow the request with the allowed URI_PATH to be evaluated against all the other rules defined in the Web ACL.Related informationAWS WAF rule statementsCross-site scripting attack rule statementSQL injection attack rule statementFollow"
https://repost.aws/knowledge-center/waf-exclude-specific-requests-inspection
How do I migrate a Dedicated Host or Dedicated Instance to another Dedicated Host?
"I want to migrate an instance running on dedicated hardware (a Dedicated Host or Dedicated Instance), or I received a maintenance notification and have a scheduled event for an instance running on dedicated hardware. What should I do?"
"I want to migrate an instance running on dedicated hardware (a Dedicated Host or Dedicated Instance), or I received a maintenance notification and have a scheduled event for an instance running on dedicated hardware. What should I do?Short descriptionTo migrate an instance away from a Dedicated Host, do one of the following:Allocate a new Dedicated Host to your account, and then move your Dedicated Instance to the new Dedicated Host.Edit the placement of the instance so that the instance runs on another one of your existing Dedicated Hosts.Migrate your instance from a Dedicated Host to a Dedicated Instance, and vice versa, from the Amazon Elastic Compute Cloud (Amazon EC2) console by modifying the tenancy of the instance.ResolutionTo allocate a new Dedicated Host to your accountOpen the Amazon EC2 console, and then choose Dedicated hosts from the navigation pane.Choose Allocate Dedicated Host.Choose an instance type and Availability Zone that matches the instance you want to migrate. For all other options, choose the appropriate settings.Choose Allocate Host.To move an instance from one Dedicated Host to anotherChoose the instance in the Amazon EC2 console.From the Actions menu, choose Manage instance state.Choose Stop, Change state, Stop.From the Actions menu, choose Instance Settings, and then choose Modify instance placement.From the Target host menu, choose the new Dedicated Host that you want the instance to run on.Choose Save.From the Actions menu, choose Manage instance state.Choose Start, Change state, Start.To migrate an instance from a Dedicated Host to a Dedicated InstanceOpen the Amazon EC2 console, and then choose Dedicated Hosts from the navigation pane.Select the Running instances tab, and then select the instance.From the Actions menu, choose Manage instance state.Choose Stop, Change state, Stop.From the Actions menu, choose Instance Settings, and then choose Modify instance placement.Change Tenancy to Dedicated instance.Choose Save.From the Actions menu, choose Manage instance state.Choose Start, Change state, Start.For more information on the two offerings, see Differences between Dedicated Hosts and Dedicated Instances.Related informationDedicated HostsWhat do I need to know when my Amazon EC2 instance is scheduled for retirement?What happens during an EC2 scheduled maintenance event and what preventative actions can I take?Allocate Dedicated HostsModify instance tenancy and affinityFollow"
https://repost.aws/knowledge-center/migrate-dedicated-different-host
How do I prevent duplicate Lambda function invocations?
My AWS Lambda function keeps receiving more than one invocation request for a single event. How do I prevent my Lambda function from invoking multiple times from the same event?
"My AWS Lambda function keeps receiving more than one invocation request for a single event. How do I prevent my Lambda function from invoking multiple times from the same event?Short descriptionTo help prevent duplicate Lambda function invocations, do the following based on the invocation type that you're using.Note: For synchronous invocations, the clients and AWS services that invoke a Lambda function are responsible for performing retries. For asynchronous invocations, Lambda automatically retries on error, with delays between retries.ResolutionFor asynchronous invocationsReview your Lambda function's Amazon CloudWatch Logs to verify the following:If the duplicate invocations have the same request ID or notIf the duplicate invocations returned errors or timed outThen, do one of the following based on your use case:For duplicate invocations that returned errors or timed out and that have the same request IDNote: Duplicate invocations that return errors or timeout and that have the same request ID indicate that the Lambda service retried the function.Configure error handling for asynchronous invocations to reduce the number of times that your Lambda function retries failed asynchronous invocation requests.For more information, see Error handling and automatic retries in AWS Lambda.For duplicate invocations that didn't return errors or timeoutNote: Duplicate invocations that don't return errors or timeout indicate client-side retries.Make sure that your Lambda function's code is idempotent and capable of handling messages multiple times.Make sure that your Lambda function has its concurrency limit set high enough to handle the number of invocation requests it receives.Identify and resolve any errors that your Lambda function returns.Note: To troubleshoot function invocation failures, see How do I troubleshoot Lambda function failures?For synchronous invocationsNote: Synchronous invocation retry behavior varies between AWS services, based on each service's event source mapping. For more information, see Event-driven invocation.Make sure that your Lambda function's code is idempotent and capable of handling messages multiple times.Identify and resolve any errors that your Lambda function returns.Note: To troubleshoot function invocation failures, see How do I troubleshoot Lambda function failures?Related informationViewing events with CloudTrail Event historyBest practices for working with AWS Lambda functionsLambda function scalingWhy is my Lambda function retrying valid Amazon SQS messages and placing them in my dead-letter queue?Follow"
https://repost.aws/knowledge-center/lambda-function-duplicate-invocations
How do I share my ACM Private Certificate Authority with another AWS account?
I created an AWS Certificate Manager (ACM) Private Certificate Authority (ACM PCA) in one AWS account. Can I share it with a different AWS account to issue certificates?
"I created an AWS Certificate Manager (ACM) Private Certificate Authority (ACM PCA) in one AWS account. Can I share it with a different AWS account to issue certificates?Short descriptionYou can create a resource share using AWS Resource Access Manager (AWS RAM) to share an ACM PCA with another AWS account. In addition, you can share an ACM PCA with:Other principals, such as AWS Identify and Access Management (IAM) users and IAM roles.Organizational units (OUs).The entire AWS organization that your account is a member of.Sharing your ACM PCA allows users and roles in other accounts to issue private x509 certificates signed by the shared PCA.ResolutionCreate an AWS RAM share in the account where your ACM PCA resides.ExampleYou have an existing ACM PCA in Account A. You want to share it with Account B.In Account A, create a resource share in AWS RAM. For detailed instructions, see the Console instructions in Creating a resource share.Note: In Step 2: Associate a permission with each resource type, choose the permission for the type of certificates that you want to issue. For example:To issue end-entity certificates with the default certificate template arn:aws:acm-pca:::template/EndEntityCertificate/V1: choose the default permission AWSRAMDefaultPermissionCertificateAuthority.To issue a subordinate certificate (PathLen0) using the certificate template arn:aws:acm-pca:::template/SubordinateCACertificate_PathLen0/V1: choose AWSRAMSubordinateCACertificatePathLen0IssuanceCertificateAuthority.Accept the shared resource in your shared account (Account B, in this example). If you share with AWS Organizations (with resource sharing within AWS Organization turned on), you can skip to step 6.In the shared account (Account B, in this example), open the AWS RAM console in the same Region as step 1.Under Shared with me, select Resource shares. You see the pending share invitation.Select the name of the shared resource, and then choose Accept resource share. After accepting the share, the share is listed as Active.In the shared account (Account B, in this example), open the ACM PCA console in the Region where the PCA is located. You see the shared PCA in your account. You can begin to issue private x509 certificates using the shared PCA.Related informationHow to use AWS RAM to share your ACM Private CA cross-accountCreating a resource share in AWS RAMFollow"
https://repost.aws/knowledge-center/acm-share-pca-with-another-account
What do I do if the tax I'm charged for AWS services is incorrect?
I'm charged the incorrect amount or type of tax.
"I'm charged the incorrect amount or type of tax.Short descriptionTo determine your account's location for tax purposes, AWS uses the tax registration number (TRN) and the business legal address associated with the account. If you don't provide a TRN or business legal address, then AWS uses the address of the account's default payment method, the account's billing address, and the account's contact address to determine the account's tax jurisdiction.A TRN might also be known as a value-added tax (VAT) number, VAT ID, VAT registration number, or Business Registration Number.If you're charged the incorrect amount or type of tax, update your account's information to make sure that the information is accurate.ResolutionTo update your account informationTo update your TRN or business legal address, see How do I add or update my tax registration number or business legal address for my AWS account?To update the contact address associated with your account, see Managing an AWS Account.To update the billing address associated with your payment method, see Managing your AWS payment preferences.To add or update your US sales tax exemption, see Manage your tax exemptions.If you receive special status or relief from paying VAT, contact AWS Support and provide any applicable certificates issued by the tax authorities in your country or specific sections of tax legislation that are applicable to your organization.If you're the payer account and have a TRN, the TRN must be applied to each linked account for the tax charges to be calculated correctly.For more information on how AWS calculates tax, see Amazon Web Services Tax Help.Note: AWS Support can't provide tax advice. If you have questions about your tax obligation, consult your tax advisor.Related informationTax registration numbersFollow"
https://repost.aws/knowledge-center/incorrect-aws-tax
How do I troubleshoot Amazon Cognito authentication issues with OpenSearch Dashboards?
"I'm trying to access OpenSearch Dashboards using Amazon Cognito authentication on my Amazon OpenSearch Service cluster. But, I receive an error or encounter a login issue."
"I'm trying to access OpenSearch Dashboards using Amazon Cognito authentication on my Amazon OpenSearch Service cluster. But, I receive an error or encounter a login issue.ResolutionLogin page doesn't appear when you enter the OpenSearch Dasboards URLYou might be redirected from the OpenSearch Dashboards URL to the Dashboards dashboard for several reasons:You used an IP-based domain access policy that allows your local machine’s public IP address to access Dashboards. Make sure to add the Amazon Cognito authenticated role in the domain access policy. If you don't add the authenticated role, then your access policy behaves like a normal policy.Requests are signed by a permitted AWS Identity Access Management (IAM) user or role. When you access the Dashboards URL, avoid using any Dashboards proxy methods to sign your requests.Your OpenSearch Service domain is in a virtual private cloud (VPC), and your domain has an open access policy. In this scenario, all VPC users can access Dashboards and the domain without Amazon Cognito authentication.Note: Amazon Cognito authentication isn't required. To require Amazon Cognito authentication, change your domain access policy. For more information, see Configuring access policies.If you're redirected to the OpenSearch Dashboards login page but can't log in, then Amazon Cognito is incorrectly configured. To resolve this issue, consider these approaches:Verify that the identity provider is correctly configured.Verify that your account status is set to "CONFIRMED". You can view your account status on the User and groups page of the Amazon Cognito console. For more information, see Signing up and confirming user accounts.Verify that you're using the correct user name and password."Missing Role" errorIf you turned on fine-grained access control (FGAC) on OpenSearch Dashboards in your OpenSearch Service domain, then you might receive this error:"Missing RoleNo roles available for this user, please contact your system administrator."The preceding error occurs when there's a mismatch between your IAM primary or lead user and the Amazon Cognito role that's assumed. The role that's assumed from your Amazon Cognito identity pool must match the IAM role that you specified for the primary or lead user.To make the primary or lead user's IAM role match the assumed Amazon Cognito role, complete the following steps:Open the OpenSearch Service console.From the navigation pane, under Managed clusters, choose Domains.Choose Actions.Choose Edit security configuration.Under Fine-grained access control, choose Set IAM role as the primary or lead user. Make sure to specify the Amazon Cognito Authentication role's ARN.(Optional) If you forgot the primary or lead user's ARN (or other configuration details of the role), then modify the primary or lead user. When you reconfigure your primary or lead user, you can specify a new IAM ARN.Choose Submit.Invalid identity pool configuration errorAfter you successfully authenticate your login using Amazon Cognito, you might still receive this error:com.amazonaws.services.cognitoidentity.model.InvalidIdentityPoolConfigurationException:Invalid identity pool configuration. Check assigned IAM roles for this pool.(Service: AmazonCognitoIdentity; Status Code: 400; Error Code:InvalidIdentityPoolConfigurationException; Request ID:xxxxx-xxxx-xxxx-xxxx-xxxxx)The preceding error message occurs when Amazon Cognito doesn't have the proper permissions to assume an IAM role on behalf of the authenticated user. 
Modify the trust relationship for the IAM role:Open the Amazon IAM console.Choose Roles.Select your IAM role.Choose the Trust relationships tab.Choose Edit trust relationship. Make sure that your Amazon Cognito identity pool can assume the IAM role.For example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "cognito-identity.amazonaws.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "cognito-identity.amazonaws.com:aud": "identity-pool-id" }, "ForAnyValue:StringLike": { "cognito-identity.amazonaws.com:amr": "authenticated" } } } ]}Choose Update Trust Policy.For more information about updating your IAM role policy where fine-grained access control (FGAC) is turned on, see Tutorial: Configure a domain with an IAM master user and Amazon Cognito authentication.Redirect mismatch errorYou might receive the following error when you try to access OpenSearch Dashboards in OpenSearch Service using a Dashboards URL or custom endpoint URL:"An error was encountered with the requested page"The preceding error occurs when you're missing the callback URL configuration in Amazon Cognito's app client settings.To check that your App client settings are correctly configured, perform these steps:Open the Amazon Cognito console.Choose Manage User Pools.Select the user pool that you want to edit.On the left side of the console, under App integration, choose the OpenSearch App Client from the App client.Verify that the callback URL(s) and sign out URL(s) are correctly configured. For example:<dashboards-endpoint>/_dashboards/app/homeFor a domain where a custom endpoint is turned on, your callback URL and sign out URL looks similar to the following one:<domain-custom-endpoint>/_dashboards/app/homeAmazon Cognito identity pool authorization role errorIf you can't log in but you can't see OpenSearch Dashboards, then you might receive this error:User: arn:aws:sts:: 123456789012:assumed-role/Cognito_identitypoolAuth_Role/CognitoIdentityCredentials is not authorized to perform: es:ESHttpGetBy default, the authenticated IAM role for identity pools doesn't include the privileges required to access Dashboards. Complete the following steps to find the name of the authenticated role and add it to the OpenSearch Service access policy:Open the Amazon Cognito console.Choose Manage Identity Pools.In the top-right corner of the console, choose Edit identity pool.Add your authenticated role to your OpenSearch Service domain access policy.Note: It's a best practice to use a resource-based policy for authenticated users. The authenticated role specifically controls the Amazon Cognito authentication for Dashboards. Therefore, don't remove other resources from the domain access policy.Related informationCommon configuration issuesFollow"
https://repost.aws/knowledge-center/opensearch-dashboards-authentication
How can I troubleshoot the error "FAILED: SemanticException table is not partitioned but partition spec exists" in Athena?
"When I run ALTER TABLE ADD PARTITION in Amazon Athena, I get this error: "FAILED: SemanticException table is not partitioned but partition spec exists"."
"When I run ALTER TABLE ADD PARTITION in Amazon Athena, I get this error: "FAILED: SemanticException table is not partitioned but partition spec exists".ResolutionThis error happens if you didn't define any partitions in the CREATE TABLE statement. To resolve this error, take one of the following actions:Re-create the table and use PARTIONED BY to define the partition key.Edit the table schema.Re-create the tableCreate the table again and use PARTITIONED BY to define the partition key. For an example, see Create the table. After you define the partition, you can use ALTER TABLE ADD PARTITION to add more partitions.For example, if you're using the following data definition language (DDL) to create the table with three partitions for year, month, and day:CREATE EXTERNAL TABLE test (requestBeginTime string,adId string,...)PARTITIONED BY (year string,month string,day string)ROW FORMAT serde 'org.apache.hive.hcatalog.data.JsonSerDe'LOCATION 's3://.../' ;Then, add the partitions similar to the following ones:ALTER TABLE impressions ADDPARTITION (year = '2016', month = '05', day='04') LOCATION 's3://mystorage/path/to/data\_14\_May\_2016/';Edit the table schemaTo edit the table schema in AWS Glue, complete the following steps:Open the AWS Glue console.Choose the table name in the list, and then choose Edit schema.Choose Add column.Enter the column name, type, and number. Then, check the Partition key box.Choose Add.For more information, see Viewing and editing table details.Follow"
https://repost.aws/knowledge-center/athena-failed-semanticexception-table
How can I use an SSH tunnel and MySQL Workbench to connect to a private Amazon RDS MySQL DB instance that uses a public EC2 instance?
"I have a private Amazon Relational Database Service (Amazon RDS) MySQL DB instance and a public Amazon Elastic Compute Cloud (Amazon EC2) instance, and I want to connect to them by using an SSH tunnel and MySQL Workbench. How can I do this?"
"I have a private Amazon Relational Database Service (Amazon RDS) MySQL DB instance and a public Amazon Elastic Compute Cloud (Amazon EC2) instance, and I want to connect to them by using an SSH tunnel and MySQL Workbench. How can I do this?Short descriptionBefore you connect over an SSH tunnel using MySQL Workbench, confirm that the security group inbound rules, network access control lists (network ACLs), and route tables are configured to allow a connection between your EC2 instance and your RDS DB instance. Also confirm that the EC2 instance can be connected to over the internet using its public IP address from your local machine. For more information, see Scenarios for accessing a DB Instance in a VPC.ResolutionOpen MySQL Workbench.Select MySQL New Connection and enter a connection name.Choose the Connection Method, and select Standard TCP/IP over SSH.For SSH Hostname, enter the public IP address of your EC2 instance.For SSH Username, enter the default SSH user name to connect to your EC2 instance.Choose SSH Key File, and select the .pem file used to connect from your file system.For MySQL Hostname, enter the database endpoint name.For MySQL Server Port, enter the port number that you use to connect to your database.For Username, enter the user name that you use to connect to your database.For Password, enter the MySQL user password.Choose Test Connection. After the test is successful, choose OK to save the connection.After the connection is configured, you can connect to your private RDS DB instance using an SSH tunnel.Related informationHow do I resolve problems when connecting to my Amazon RDS DB instance?Connect to your Linux instance using an SSH clientFollow"
https://repost.aws/knowledge-center/rds-mysql-ssh-workbench-connect-ec2
Why am I unable to run sudo commands on my EC2 Linux instance?
I'm receiving the error "sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set" or "sudo: /etc/sudoers is world writable" when trying to run sudo commands on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.
"I'm receiving the error "sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set" or "sudo: /etc/sudoers is world writable" when trying to run sudo commands on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.Short descriptionThe error sudo: "/usr/bin/sudo must be owned by uid 0 and have the setuid bit set" occurs when the /usr/bin/sudo file is owned by a non-root user. The /usr/bin/sudo file must have root:root as the owner.The error "sudo: /etc/sudoers is world writable" occurs when the /etc/sudoers file has the incorrect permissions. The sudoers file must not be world-writable. If a file is world-writable, then everyone can write to the file. By default, the file mode for the sudoers file is 0440. This allows the owner and group to read the file, and forbids anyone from writing to the file.You can correct these errors using the EC2 Serial Console or a user data script.ResolutionMethod 1: Use the EC2 Serial ConsoleIf you activated EC2 Serial Console for Linux, you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to the console at the account level. Then, create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, then follow the instructions in the section, Method 2: Use a user data script. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Method 2: Use a user data scriptUse a user data script to correct these errors on the following:Red Hat-based distributions such as SUSE, CentOS, Amazon Linux 1, Amazon Linux 2, and RHEL.Debian-based distributions (such as Ubuntu).1.    Open the Amazon EC2 console, and then select your instance.2.    Choose Actions, Instance State, Stop.Note: If Stop is not activated, either the instance is already stopped, or its root device is an instance store volume.3.    Choose Actions, Instance Settings, Edit User Data.4.    Copy and paste the following script into the Edit User Data field, and then choose Save. Be sure to copy the entire script. 
Don't insert additional spaces when pasting the script.Red Hat-based distributionsFor Red Hat-based distributions, use the following user data script:Content-Type: multipart/mixed; boundary="//"MIME-Version: 1.0--//Content-Type: text/cloud-config; charset="us-ascii"MIME-Version: 1.0Content-Transfer-Encoding: 7bitContent-Disposition: attachment; filename="cloud-config.txt"#cloud-configcloud_final_modules:- [scripts-user, always]--//Content-Type: text/x-shellscript; charset="us-ascii"MIME-Version: 1.0Content-Transfer-Encoding: 7bitContent-Disposition: attachment; filename="userdata.txt"#!/bin/bashPATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:rpm --setugids sudo && rpm --setperms sudofind /etc/sudoers.d/ -type f -exec /bin/chmod 0440 {} \;find /etc/sudoers.d/ -type f -exec /bin/chown root:root {} \;--//Note: The final two command lines recover the permissions, owner, and group for the custom sudo security policy plugins in the directory "/etc/sudoers.d/".Debian-based distributionsFor Debian-based distributions, use the following user data script:Content-Type: multipart/mixed; boundary="//"MIME-Version: 1.0--//Content-Type: text/cloud-config; charset="us-ascii"MIME-Version: 1.0Content-Transfer-Encoding: 7bitContent-Disposition: attachment; filename="cloud-config.txt"#cloud-configcloud_final_modules:- [scripts-user, always]--//Content-Type: text/x-shellscript; charset="us-ascii"MIME-Version: 1.0Content-Transfer-Encoding: 7bitContent-Disposition: attachment; filename="userdata.txt"#!/bin/bash/bin/chown root:root /usr/bin/sudo/bin/chmod 4111 /usr/bin/sudo/bin/chmod 644 /usr/lib/sudo/sudoers.so/bin/chmod 0440 /etc/sudoers/bin/chmod 750 /etc/sudoers.dfind /etc/sudoers.d/ -type f -exec /bin/chmod 0440 {} \;find /etc/sudoers.d/ -type f -exec /bin/chown root:root {} \;--//Note: The final two command lines recover the permissions, owner, and group for the custom sudo security policy plugins in the directory "/etc/sudoers.d/".5.    Start the instance and then connect to the instance using SSH.Note: If you receive syntax errors when trying to connect to the instance using SSH after editing the sudoers file, see I edited the sudoers file on my EC2 instance and now I'm receiving syntax errors when trying to run sudo commands. How do I fix this?Follow"
https://repost.aws/knowledge-center/ec2-sudo-commands
How do I troubleshoot intermittent timeout errors in Amazon DynamoDB?
"When I use AWS SDK to interact with Amazon DynamoDB, I see intermittent connection timeout or request timeout errors, such as the following:Unable to execute HTTP request: Connect to dynamodb.xx-xxxx-x.amazonaws.com:443 [dynamodb.us-east-1.amazonaws.com/x.xxx.xxx.x] failed: connect timed outcom.amazonaws.SdkClientException: Unable to execute HTTP request. Request did not complete before the request timeout configuration"
"When I use AWS SDK to interact with Amazon DynamoDB, I see intermittent connection timeout or request timeout errors, such as the following:Unable to execute HTTP request: Connect to dynamodb.xx-xxxx-x.amazonaws.com:443 [dynamodb.us-east-1.amazonaws.com/x.xxx.xxx.x] failed: connect timed outcom.amazonaws.SdkClientException: Unable to execute HTTP request. Request did not complete before the request timeout configurationResolutionWhen you make an API call to DynamoDB, the following happens:Your application resolves the DynamoDB endpoint using your local DNS server.After getting the IP address of the DynamoDB endpoint, the application connects to the endpoint and makes the API call.The endpoint routes this call to one of the backend nodes.During this process, the API call might intermittently result in connection timeout or request timeout errors. In most cases, the timeout error results from a client-side error that occurs before the API call reaches DynamoDB due to network issues or incorrect SDK configurations on the client side.To troubleshoot these errors, do the following:Tune the SDK HTTP client parameters according to your use case and application SLA. In addition to tuning connectionTimeout, requestTimeout, and maxRetries, you can also tune clientExecutionTimeout and socketTimeout. The ClientExecutionTimeout parameter indicates the maximum allowed total time spent to perform an end-to-end operation and receive the desired response, including any retries that might occur. Be sure that you set this value to be greater than the individual requestTimeout value. The socketTimeout parameter indicates the maximum amount of time that the HTTP client waits to receive a response from an already established TCP connection. For more information, see Tuning AWS Java SDK HTTP request settings for latency-aware Amazon DynamoDB applications.Be sure to send constant traffic or reuse connections. When you're not making requests, consider having the client send dummy traffic to a DynamoDB table. Or, you can reuse client connections or use connection pooling. These techniques keep internal caches warm, which helps to reduce the latency and avoid timeout errors on client side. For example, see Reusing connections with Keep-Alive in Node.js.View your Amazon Virtual Private Cloud (Amazon VPC) flow logs to check if there was incoming traffic to DynamoDB during the timeframe when you got the error. You can also use AWS X-Ray to monitor the latency of your application.Follow"
https://repost.aws/knowledge-center/dynamodb-intermittent-timeout-errors
How do I create a new FSx for ONTAP file system in a shared VPC?
I want to create an Amazon FSx for NetApp ONTAP file system in a shared Amazon Virtual Private Cloud (Amazon VPC) so that it can be accessed by another account. How do I do this?
"I want to create an Amazon FSx for NetApp ONTAP file system in a shared Amazon Virtual Private Cloud (Amazon VPC) so that it can be accessed by another account. How do I do this?ResolutionPrerequisitesTo create and share your FSx for ONTAP file system, the following prerequisites must be met:You must have two AWS accounts that are within the same organization. For more information, see Inviting an AWS account to join your organization.Sharing must be turned on in AWS Organizations for the main account of the organization. For more information, see Enable resource sharing within AWS Organizations.You must have a shared VPC with a subnet shared to the account. This subnet can be in the main account or in the account that joined your organization.Example:Account A is the main account of the organization, and Account B has joined the organization.Account A turned on sharing in the AWS Resource Access Manager (AWS RAM).Account A creates one VPC with two subnets, and shared these two subnets to Account B.Account A creates a FSx for ONTAP file system.Account B creates an Amazon Elastic Compute Cloud (Amazon EC2) instance using the shared VPC.Account B mounts the FSx for ONTAP file system of Account A in the shared VPC.Create shared subnetsOpen the AWS RAM console.Select Create a resource share.In Step 1: Specify resource share details, enter a resource share name. For example, Shared_VPC_TEST.For Select resource type, select Subnet, and then choose the subnets to be shared from the list or resources.Select Next.In Step 2: Associate a permission with each resource type, select Next. ( AWSRAMDefaultPermissionSubnet is the only permission available.)In Step 3: Choose principals to grant access, enter the 12-digit account ID that you’re sharing the resource with, and then choose Add.Select Next.In Step 4: Review and create, make sure that all details are correct, and then select Create resource share.For detailed steps on creating a resource share, see Create a resource share.Note: You can't share subnets that are in a default VPC.Create your FSx for ONTAP file system in a shared VPC and access it from the shared accountCreate an FSx for ONTAP file system in the shared VPC that contains the shared subnet. For detailed instructions on creating a file system, see Step 1: Create an Amazon FSx for NetApp ONTAP file system.From the account that the resource is shared, launch an EC2 instance in the shared VPC, and then mount the FSx for ONTAP file system on the instance. For detailed instructions on how to mount the file system, see Mounting volumes.Related informationAWS Resource Access ManagerAWS account managementFollow"
https://repost.aws/knowledge-center/fsx-ontap-create-file-system-shared-vpc
Why is my worker node status "Unhealthy" when I use the NGINX Ingress Controller with Amazon EKS?
"I use the NGINX Ingress Controller to expose the ingress resource. However, my Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes fail to use the Network Load Balancer."
"I use the NGINX Ingress Controller to expose the ingress resource. However, my Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes fail to use the Network Load Balancer.Short descriptionTo preserve the client IP, the NGINX Ingress Controller sets the spec.externalTrafficPolicy option to Local. Also, it routes requests only to healthy worker nodes.To troubleshoot the status of your worker nodes and update your traffic policy, see the following steps.Note: There's no requirement to maintain the cluster IP address or preserve the client IP address.ResolutionCheck the health status of your worker nodesNote: The following examples use the NGINX Ingress Controller v1.5.1 running on EKS Cluster v1.23.1.    Create the mandatory resources for the NGINX Ingress Controller (from the Kubernetes website) in your cluster:$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/aws/deploy.yamlBy default, the NGINX Ingress Controller creates the Kubernetes Service ingress-nginx-controller with the .spec.externalTrafficPolicy option set to Local (from the GitHub website).2.    Check if the external traffic policy (from the Kubernetes website) is set to Local:$ kubectl -n ingress-nginx describe svc ingress-nginx-controllerYou receive an output that's similar to the following:Name: ingress-nginx-controllerNamespace: ingress-nginxLabels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/version=1.0.2 helm.sh/chart=ingress-nginx-4.0.3Annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true service.beta.kubernetes.io/aws-load-balancer-type: nlbSelector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginxType: LoadBalancerIP Families: <none>IP: 10.100.115.226IPs: 10.100.115.226LoadBalancer Ingress: a02245e77404f4707a725d0b977425aa-5b97f717658e49b9.elb.eu-west-1.amazonaws.comPort: http 80/TCPTargetPort: http/TCPNodePort: http 31748/TCPEndpoints: 192.168.43.203:80Port: https 443/TCPTargetPort: https/TCPNodePort: https 30045/TCPEndpoints: 192.168.43.203:443Session Affinity: NoneExternal Traffic Policy: LocalHealthCheck NodePort: 30424Events: <none>Note: The Local setting drops packets that are sent to Kubernetes nodes and doesn't need to run instances of the NGINX Ingress Controller. Assign NGINX pods (from the Kubernetes website) to the nodes that you want to schedule the NGINX Ingress Controller for.3.    Check the iptables command that set up the DROP rules on the nodes that aren't running instances of the NGINX Ingress Controller:$ sudo iptables-save | grep -i "no local endpoints"-A KUBE-XLB-CG5I4G2RS3ZVWGLK -m comment --comment "ingress-nginx/ingress-nginx-controller:http has no local endpoints " -j KUBE-MARK-DROP-A KUBE-XLB-EDNDUDH2C75GIR6O -m comment --comment "ingress-nginx/ingress-nginx-controller:https has no local endpoints " -j KUBE-MARK-DROPSet the policy optionUpdate the spec.externalTrafficPolicy option to Cluster:$ kubectl -n ingress-nginx patch service ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'service/ingress-nginx-controller patchedBy default, NodePort services perform source IP address translation (from the Kubernetes website). 
For NGINX, this means that the source IP address of an HTTP request is always the IP address of the Kubernetes node that received the request. If you set externalTrafficPolicy (.spec.externalTrafficPolicy) to Cluster in the ingress-nginx service specification, then the incoming traffic doesn't preserve the source IP address. For more information, see Preserving the client source IP address (on the Kubernetes website).Follow"
https://repost.aws/knowledge-center/eks-unhealthy-worker-node-nginx
How can I connect to a database from an Amazon ECS task on Fargate?
I want to connect to a database from an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate.
"I want to connect to a database from an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate.ResolutionBefore completing the following steps, confirm that you have an Amazon ECS cluster running on Fargate and an Amazon Relational Database Service (Amazon RDS) database. If Amazon ECS and Amazon RDS have communication issues see, Troubleshoot connectivity issues between Amazon ECS tasks for Amazon EC2 launch types and Amazon RDS databases.Note: The following resolution uses MySQL as the engine type.Connect your task to your database1.    Create a Python script that connects to your MySQL database.The following example rds.py script outputs the result of the connection to the database to Amazon CloudWatch:import pymysqlimport osDatabase_endpoint = os.environ['ENDPOINT']Username = os.environ['USER']Password = os.environ['PASS']try: print("Connecting to "+Database_endpoint) db = pymysql.connect(host=Database_endpoint, user=Username, password=Password) print ("Connection successful to "+Database_endpoint)except Exception as e: print ("Connection unsuccessful due to "+str(e))Note: Replace ENDPOINT, USER, and PASS with your database values.2.    Create a Dockerfile that includes the required commands to assemble an image. For example:FROM pythonRUN pip install pymysql cryptographyCOPY rds.py /CMD [ "python", "/rds.py" ]Important: Be sure to place your rds.py script and Dockerfile in the same folder.3.    Create an Amazon ECR repository, and then push the Docker image to that repository.4.    Create a task definition, and then add the Docker image from step 2 as the container image. For example:{ "executionRoleArn": "arn:aws:iam::account_ID:role/ecsTaskExecutionRole", "containerDefinitions": [{ "name": "sample-app", "image": "YOUR-ECR-Repository-URL", "essential": true }], "requiresCompatibilities": [ "FARGATE" ], "networkMode": "awsvpc", "cpu": "256", "memory": "512", "family": "sample-app"}Note: In your task definition, set the values for the ENDPOINT, USER, and PASS environment variables. You can pass these values directly as environment variables or retrieve them from secrets in AWS Secrets Manager. For more information, see How can I pass secrets or sensitive information securely to containers in an Amazon ECS task?5.    Open the Amazon ECS console, and choose Task Definitions from the navigation pane.6.    Select your task definition, choose Actions, and then choose Run Task.7.    For Launch type, choose FARGATE.8.    For Cluster, choose the cluster for your task definition.9.    For Number of tasks, enter the number of tasks that you want copied.10.    In the VPC and security groups section, for Cluster VPC, choose your Amazon Virtual Private Cloud (Amazon VPC).11.    For Subnets, choose your subnets.12.    For Security groups, select at least one security group.13.    Choose Run Task.The rds.py script stops the task and returns the following message:Essential container in task exited.Confirm that your task is connected to your database1.    Open the Amazon ECS console.2.    From the navigation menu, choose Clusters, and then choose your cluster.3.    Choose the Tasks tab.4.    For Desired task status, choose Stopped to see a list of stopped tasks.5.    Choose your stopped task.6.    On the Details tab of your stopped task, in the Containers section, choose the expander icon.7.    
Choose View logs in CloudWatch.You should see the following message in the Amazon CloudWatch console:Connection successful to [Your Endpoint]Related informationCreating a MySQL DB instance and connecting to a database on a MySQL DB instanceFollow"
https://repost.aws/knowledge-center/ecs-fargate-task-database-connection
How do I troubleshoot timeout issues with a Lambda function that's in an Amazon VPC?
My AWS Lambda function returns timeout errors when I configure the function to access resources in an Amazon Virtual Private Cloud (Amazon VPC). How do I troubleshoot the issue?
"My AWS Lambda function returns timeout errors when I configure the function to access resources in an Amazon Virtual Private Cloud (Amazon VPC). How do I troubleshoot the issue?ResolutionConfirm that there's a valid network path to the endpoint that your function is trying to reach. To review your network settings, follow the instructions in How do I give internet access to my Lambda function in a VPC?Important: If you're using a custom Dynamic Host Configuration Protocol (DHCP) options set, confirm that your custom DNS server is working as expected.When you review your network settings, make sure that the following settings are configured correctly:Route tablesSubnetsSecurity groupsNetwork access control lists (network ACLs)Domain Name System (DNS) hostnames and DNS resolutionNote: Security groups require outbound rules for Lambda only. Network ACLs require both inbound and outbound rules for Lambda.If you're using an AWS SDK, then see if the SDK throws any relevant errors that can help you determine what's causing the timeouts.Related informationHow do I troubleshoot issues with VPC route tables?How does DNS work, and how do I troubleshoot partial DNS failures?Follow"
https://repost.aws/knowledge-center/lambda-vpc-troubleshoot-timeout
How do I create source or target endpoints using AWS DMS?
How do I specify a source and a target database endpoint for my AWS Database Migration Service (AWS DMS) task?
"How do I specify a source and a target database endpoint for my AWS Database Migration Service (AWS DMS) task?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Before you can run an AWS DMS task, you must create a replication instance, a source endpoint, and a target endpoint. You can create source and target endpoints:Using the AWS DMS console when you create a replication instance.Using the AWS DMS console after you create a replication instance.Using the AWS CLI.Create endpoints when creating a replication instanceOpen the AWS DMS console.From the navigation pane, choose Replication instances.Choose Create replication instance, and enter your replication instance information.On the Connect source and target database endpoints page, specify your source and target endpoints. For more information, see Creating source and target endpoints.Create source and target endpoints after creating a replication instanceOpen the AWS DMS console.From the navigation pane, choose Endpoints, and then choose Create endpoint.Select Source endpoint and enter the source endpoint information.Choose Create endpoint.From the Endpoints pane, choose Create endpoint.Select Target endpoint and enter the target endpoint information.Choose Create endpoint.Note: When you create a password for your PostgreSQL endpoint, you can't use the following special characters: +, %, or ;. For MySQL endpoint passwords, you can't use ;.Create endpoints using the AWS CLIAfter you install and configure the latest version of the AWS CLI, run the create-endpoint command.Related informationSources for data migrationTargets for data migrationWorking with an AWS DMS replication instanceMigration strategy for relational databasesFollow"
https://repost.aws/knowledge-center/create-source-target-endpoints-aws-dms
I have a Direct Connect gateway as my primary connection and a backup VPN connection. Why is traffic prioritizing the backup connection?
"I have an AWS Direct Connect gateway with my primary connection set to on-premises. I also have a backup VPN connection for failover with my AWS Direct Connect connection. The traffic from my on-premises connection to AWS is prioritizing the backup connection (VPN connection) and not the primary connection (Direct Connect connection). Why is this happening, and how can I fix it?"
"I have an AWS Direct Connect gateway with my primary connection set to on-premises. I also have a backup VPN connection for failover with my AWS Direct Connect connection. The traffic from my on-premises connection to AWS is prioritizing the backup connection (VPN connection) and not the primary connection (Direct Connect connection). Why is this happening, and how can I fix it?Short descriptionCustomer gateways prefer the most specific route to Amazon Virtual Private Cloud (Amazon VPC). If the VPN connection has the most specific route, then it's preferred over the Direct Connect connection.ResolutionAWS Site-to-Site VPN supports two types of deployment: static and dynamic. Based on your use case, see the related resolution.Static VPN:Configure your customer gateway with less specific routes for the VPN connection than the Direct Connect connection.Dynamic VPN:Confirm that you're advertising the same routes over the VPN connection and Direct Connect connection.If the customer gateway receives the same routes over the VPN and Direct Connect connections, it always prefers Direct Connect.However, if your customer gateway has a more specific route over the VPN than the Direct Connect connection, then VPN is preferred. For example, Direct Connect has a maximum of 20 allowed prefixes. If you add summarized routes to cover all the prefixes, then the CIDRs advertised over VPN become more specific than the CIDRs advertised over Direct Connect. As a result, the customer gateway prioritizes the VPN over the Direct Connect connection.To resolve this issue, follow these steps:Add the same route associated with Direct Connect to the Site-to-Site VPN routing table. This results in the Site-to-Site VPN advertising the specific routes and the route that you added.In the customer gateway, filter out the specific routes advertised by the Site-to-Site VPN. The customer gateway then has the same routes over both connections and prefers the Direct Connect connection.Traffic from AWS to the customer gatewayIf traffic is coming from an AWS connection to your customer gateway, the more specific route is preferred. If the routes are the same, then AWS prefers a Direct Connect connection over a VPN connection for the same on-premises subnet.To set your AWS connection to prefer VPN over Direct Connect:For a static VPN, add a more specific route in the static VPN route table.For a Border Gateway Protocol (BGP) VPN, advertise a less specific route over the Direct Connect connection. As the most specific route is preferred, the VPN connection is then preferred.Related informationRoute tables and VPN route priorityRoute priorityFollow"
https://repost.aws/knowledge-center/direct-connect-gateway-primary-connection
How do I identify which volumes attached to my Amazon EC2 Linux instance are instance store (ephemeral) volumes?
I have an Amazon Elastic Compute Cloud (Amazon EC2) Linux instance with both Amazon Elastic Block Store (Amazon EBS) volumes and instance store (ephemeral) volumes attached to it. How do I identify which of my attached volumes are instance store volumes?
"I have an Amazon Elastic Compute Cloud (Amazon EC2) Linux instance with both Amazon Elastic Block Store (Amazon EBS) volumes and instance store (ephemeral) volumes attached to it. How do I identify which of my attached volumes are instance store volumes?Short descriptionTo identify instance store volumes on Amazon EC2 Linux instances, first check if the instance type supports instance store volumes. If the instance supports instance store volumes, then check the type of instance store volumes that are supported and review the volume's information from the operating system (OS).Resolution1.    Verify what type of instance store volume (HDD, SSD, or NVMe SSD), if any, that your instance supports. See Instance store volumes to view the quantity, size, type, and performance optimizations of instance store volumes that are available for each supported instance type.2.    Determine which volumes attached to your instance are instance store volumes. The identification method that is used depends on whether you have NVMe SSD or HDD/SSD instance store volumes.NVMe SSD instance store volumes1.    Connect to your instance.2.    Install the NVMe command line package, nvme-cli, using the package management tools for your Linux distribution. For Amazon Linux instances, install the nvme-cli package using the yum install command. For download and installation instructions for other distributions, see the GitHub documentation for nvme-cli or see the documentation specific to your distribution.3.    Run the nvme list command as a privileged user.$ sudo nvme listThe Model column in the following output example lists whether each attached device is Amazon Elastic Block Store or Amazon EC2 NVMe Instance Storage. The example output is from an instance type that supports one NVMe SSD device.$ sudo nvme listNode SN Model Namespace Usage Format FW Rev---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------/dev/nvme0n1 vol0923757ba05df9515 Amazon Elastic Block Store 1 0.00 B / 8.59 GB 512 B + 0 B 1.0/dev/nvme1n1 AWS1A4FC25FB16B79F76 Amazon EC2 NVMe Instance Storage 1 50.00 GB / 50.00 GB 512 B + 0 B 0HDD or SSD instance store volumesFor HDD or SSD instance store volumes, get the list of attached block devices from the OS, and then retrieve the block device mapping section that is contained in the instance metadata.1.    Connect to your instance.2.    Run the lsblk command. If the lsblk command isn't present, then install the util-linux package using the package management tools for your Linux distribution. For Amazon Linux instances, install the util-linux package using the yum install command. For download and installation instructions for other distributions, refer to the documentation specific to your distribution.$ sudo lsblkThe following output example shows the list of block devices that are retrieved from an instance that has many drives and that is running on an instance type that supports SSD instance store volumes.$ sudo lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 8G 0 disk└─xvda1 202:1 0 8G 0 part /xvdb 202:16 0 745.2G 0 diskxvdc 202:32 0 745.2G 0 diskxvdd 202:48 0 745.2G 0 diskxvde 202:64 0 745.2G 0 disk3.    
To identify if xvdb, from the previous example output, is an ephemeral drive, retrieve the block-device-mapping metadata using the base URL for all instance metadata requests: http://169.254.169.254/latest/meta-data/block-device-mapping.$ curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0sdb$ ls -l /dev/sdblrwxrwxrwx 1 root root 4 Aug 27 13:07 /dev/sdb -> xvdbIn the previous example, the block device mapping for ephemeral0 is to sdb, which is a symbolic link to xvdb. So in this example, the xvdb is an ephemeral device.Or, you can automate the check to display the ephemeral devices attached to your instance by using the following set of commands.Identify the OS block devices.OSDEVICE=$(sudo lsblk -o NAME -n | grep -v '[[:digit:]]' | sed "s/^sd/xvd/g")Set the block device mapping URL.BDMURL="http://169.254.169.254/latest/meta-data/block-device-mapping/"Loop through OS devices and find the mapping in the block device mapping.for bd in $(curl -s ${BDMURL}); do MAPDEVICE=$(curl -s ${BDMURL}/${bd}/ | sed "s/^sd/xvd/g"); if grep -wq ${MAPDEVICE} <<< "${OSDEVICE}"; then echo "${bd} is ${MAPDEVICE}"; fi; done | grep ephemeralThe following is an example of the three commands described previously, along with example output.$ OSDEVICE=$(sudo lsblk -o NAME -n | grep -v '[[:digit:]]' | sed "s/^sd/xvd/g")$ BDMURL="http://169.254.169.254/latest/meta-data/block-device-mapping/"$ for bd in $(curl -s ${BDMURL}); do MAPDEVICE=$(curl -s ${BDMURL}/${bd}/ | sed "s/^sd/xvd/g"); if grep -wq ${MAPDEVICE} <<< "${OSDEVICE}"; then echo "${bd} is ${MAPDEVICE}"; fi; done | grep ephemeralephemeral0 is xvdbephemeral1 is xvdcephemeral2 is xvddephemeral3 is xvdeFollow"
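For reference, the shell loop above can also be expressed as a short Python script. The following sketch is illustrative only: it assumes the instance exposes Xen-style device names (sd*/xvd*), that instance metadata is reachable without a token (add an IMDSv2 token header if your instance requires it), and that lsblk is installed.
import subprocess
import urllib.request

BDM_URL = "http://169.254.169.254/latest/meta-data/block-device-mapping/"

def imds(path):
    # Plain IMDSv1 request, matching the curl examples above; add an
    # X-aws-ec2-metadata-token header here if the instance enforces IMDSv2.
    with urllib.request.urlopen(BDM_URL + path, timeout=2) as resp:
        return resp.read().decode().strip()

# Top-level block devices reported by the OS (no partitions, no header)
lsblk = subprocess.run(["lsblk", "-d", "-n", "-o", "NAME"],
                       capture_output=True, text=True, check=True)
os_devices = set(lsblk.stdout.split())

for mapping in imds("").splitlines():                # for example: ami, root, ephemeral0
    if not mapping.startswith("ephemeral"):
        continue
    device = imds(mapping).replace("sd", "xvd", 1)   # sdb -> xvdb on Xen-based instances
    if device in os_devices:
        print(f"{mapping} is {device}")
The output mirrors the shell version, for example "ephemeral0 is xvdb".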
https://repost.aws/knowledge-center/ec2-linux-instance-store-volumes
How do I resolve the error "Cache of region boundaries are out of date" when running an HBase read-replica Amazon EMR cluster with Phoenix?
"When I try to connect to Apache HBase on an Amazon EMR read-replica cluster using Apache Phoenix, I get an error message similar to the following:Error: ERROR 1108 (XCL08): Cache of region boundaries are out of date. (state=XCL08,code=1108) org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108(XCL08): Cache of region boundaries are out of date.at org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:365)at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:189)at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:169)at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:140)"
"When I try to connect to Apache HBase on an Amazon EMR read-replica cluster using Apache Phoenix, I get an error message similar to the following:Error: ERROR 1108 (XCL08): Cache of region boundaries are out of date. (state=XCL08,code=1108) org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108(XCL08): Cache of region boundaries are out of date.at org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:365)at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:189)at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:169)at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:140)Short descriptionPhoenix tries to connect to the hbase:meta table by default. However, because the hbase:meta table belongs to the primary cluster, Phoenix can't to connect the read-replica cluster. To resolve this problem, modify hbase-site.xml to point to the hbase:meta_cluster-id table that belongs to the HBase read-replica cluster.ResolutionBefore you begin, confirm that Phoenix is installed on the primary cluster as well as the read-replica cluster. Phoenix can't connect to HBase from a read-replica cluster if Phoenix isn't installed on the primary cluster.On a running cluster1.    Add the following configurations to the HBase configuration file ( /etc/phoenix/conf/hbase-site.xml) on the master node. Replace cluster-id with the ID of your read-replica cluster.<property> <name>hbase.balancer.tablesOnMaster</name> <value>hbase:meta</value></property><property> <name>hbase.meta.table.suffix</name> <value>cluster-id</value></property>2.    Restart the Phoenix service:For Amazon EMR release versions 5.29 and earlier:sudo stop phoenix-queryserversudo start phoenix-queryserverFor Amazon EMR release versions 5.30 and later:sudo systemctl stop phoenix-queryserver.servicesudo systemctl start phoenix-queryserver.serviceIn Amazon EMR release versions 5.21.0 and later, you can also make these configuration changes by overriding the cluster configuration for the master instance group:1.    Open the Amazon EMR console.2.    In the cluster list, choose the active read-replica cluster that you want to reconfigure.3.    Open the cluster details page for the cluster and go to Configurations tab.4.    In the Filter dropdown list, choose the master instance group.5.    In Reconfigure dropdown list, choose either Edit in table.6.    Choose Add configuration, and then add the following two configurations:Classification: phoenix-hbase-siteProperty: hbase.balancer.tablesOnMasterValue: hbase:metaClassification: phoenix-hbase-siteProperty: hbase.meta.table.suffixValue: ${emr.clusterId}7.    Choose Save changes.For more information about this process, see Reconfigure an instance group in the console.On a new clusterAdd a configuration object similar to the following when you launch a cluster using Amazon EMR release version 4.6.0 or later:[ { "Classification": "phoenix-hbase-site", "Configurations": [ ], "Properties": { "hbase.balancer.tablesOnMaster" : "hbase:meta", "hbase.meta.table.suffix" : "${emr.clusterId}" } }, { "Classification": "hbase-site", "Configurations": [ ], "Properties": { "hbase.meta.table.suffix" : "${emr.clusterId}" } }]Related informationHBase on Amazon S3 (Amazon S3 storage mode)Apache PhoenixFollow"
https://repost.aws/knowledge-center/cache-region-boundaries-date-error-emr
Why aren't my Amazon Cognito user pool analytics appearing on my Amazon Pinpoint dashboard?
My Amazon Cognito user pool analytics aren't publishing to my Amazon Pinpoint project dashboard. Why aren't my user pool analytics appearing in Amazon Pinpoint after I specify Amazon Pinpoint analytics settings in the Amazon Cognito console?
"My Amazon Cognito user pool analytics aren't publishing to my Amazon Pinpoint project dashboard. Why aren't my user pool analytics appearing in Amazon Pinpoint after I specify Amazon Pinpoint analytics settings in the Amazon Cognito console?Short descriptionVerify that your application is passing an AnalyticsMetadata parameter in its requests to the InitiateAuth API operation. Without this parameter, Amazon Cognito can't pass user pool analytics from your application to Amazon Pinpoint.To have your application pass an AnalyticsMetadata parameter in its requests to the InitiateAuth API operation, use AWS SDKs.ResolutionImportant: The AnalyticsMetadata parameter value must be unique for each endpoint. Each unique value corresponds to a single data point in your Amazon Pinpoint dashboard.For instructions for each language-specific AWS SDK, see the See also section of the InitiateAuth page in the Amazon Cognito API Reference.AWS SDK for JavaScript code examplevar cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();var params = { AuthFlow: "USER_PASSWORD_AUTH", ClientId: 'STRING_VALUE', /* the client ID attached to the Pinpoint project */ AuthParameters: { 'USERNAME': 'STRING_VALUE', 'PASSWORD': 'STRING_VALUE' }, AnalyticsMetadata: { AnalyticsEndpointId: 'STRING_VALUE' /* random UUID unique for each Cognito user */ },};cognitoidentityserviceprovider.initiateAuth(params, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); // successful response});Follow"
https://repost.aws/knowledge-center/pinpoint-cognito-user-pool-analytics
How do I activate and monitor logs for an Amazon RDS MySQL DB instance?
"I want to activate and monitor the error log, the slow query log, and the general log for an Amazon Relational Database Service (Amazon RDS) instance running MySQL. How can I do this?"
"I want to activate and monitor the error log, the slow query log, and the general log for an Amazon Relational Database Service (Amazon RDS) instance running MySQL. How can I do this?Short descriptionYou can monitor the MySQL error log, the slow query log, and the general log directly through the Amazon RDS console, Amazon RDS API, Amazon RDS AWS Command Line Interface (AWS CLI), or AWS SDKs. The MySQL error log file is generated by default. You can generate the slow query log and the general log.ResolutionFirst, if you don't have a customer DB parameter group associated with your MySQL instance, create a custom DB parameter group and modify the parameter. Then, associate the parameter group with your MySQL instance.If you already have a custom DB parameter group associated with the RDS instance, then proceed with modifying the required parameters.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Create a DB parameter groupOpen the Amazon RDS console, and then choose Parameter groups from the navigation pane.Choose Create parameter group.From the Parameter group family drop-down list, choose a DB parameter group family.For Type, choose DB Parameter Group.Enter the name in the Group name field.Enter a description in the Description field.Choose Create.Modify the new parameter groupOpen the Amazon RDS console, and then choose Parameter groups from the navigation pane.Choose the parameter group that you want to modify.Choose Parameter group actions, and then choose Edit.Choose Edit parameters, and set the following parameters to these values: General_log = 1 (default value is 0 or no logging) Slow_query_log = 1 (default value is 0 or no logging) Long_query_time = 2 (to log queries that run longer than two seconds) log_output = FILE (writes both the general and the slow query logs to the file system, and allows viewing of logs from the Amazon RDS console) log_output =TABLE (writes both the general and the slow query logs to a table so you can view these logs with a SQL query)Choose Save Changes. Note: You can't modify the parameter settings of a default DB parameter group. You can modify the parameter in a custom DB parameter group if Is Modifiable is set to true.Associate the instance with the DB parameter groupOpen the Amazon RDS console, and then choose Databases from the navigation pane.Choose the instance that you want to associate with the DB parameter group, and then choose Modify.From the Database options section, choose the DB parameter group that you want to associate with the DB instance.Choose Continue.Note: The parameter group name changes and applies immediately, but parameter group isn't applied until you manually reboot the instance. There is a momentary outage when you reboot a DB instance, and the instance status displays as rebooting.View the logIf log_output =TABLE, run the following command to query the log tables:Select * from mysql.slow_logSelect * from mysql.general_logNote: Enabling table logging can affect the database performance for high throughput workload. For more information about table-based MySQL logs, see Managing table-based MySQL logs.If log_output =FILE, view database log files for your DB engine using the AWS Management Console.Note: Error logs are stored as files and are not affected by the log_output parameter.Related informationWorking with DB parameter groupsAmazon RDS database log filesMySQL database log filesFollow"
https://repost.aws/knowledge-center/rds-mysql-logs
Should I use CloudFront to serve my website content?
I'm using Amazon Elastic Compute Cloud (Amazon EC2) and Elastic Load Balancing (ELB) to serve my website. Should I integrate Amazon CloudFront?
"I'm using Amazon Elastic Compute Cloud (Amazon EC2) and Elastic Load Balancing (ELB) to serve my website. Should I integrate Amazon CloudFront?ResolutionCloudFront can speed up the distribution of your website's static and dynamic content. For more information, see What is Amazon CloudFront?CloudFront complements a website that has:A high volume of static content, such as image filesRequests from clients around the worldBecause CloudFront caches static content on edge locations around the world, CloudFront can reduce latency for websites serving high volumes of static content to global clients.Other advantages of CloudFront include:Integration with other AWS services, including AWS Certificate Manager (ACM), Amazon Simple Storage Service (Amazon S3), AWS WAF, AWS Elemental MediaPackage, and AWS Elemental MediaStore.Reduced overhead on the origin because of CloudFront's SSL offloading and persistent connection with the origin.Improved load-handling capacity for content cached on regional edge locations.CloudFront isn't recommended for websites that have an origin server that's geographically close to the website's users, such as a regional website.Related informationWorking with distributionsFollow"
https://repost.aws/knowledge-center/cloudfront-integrate-website
What are the differences between General Purpose and Max I/O performance modes in Amazon EFS?
"I want to know the difference between the two performance modes, General Purpose and Max I/O, in Amazon Elastic File System (Amazon EFS)."
"I want to know the difference between the two performance modes, General Purpose and Max I/O, in Amazon Elastic File System (Amazon EFS).Short descriptionThe performance modes differ in the following aspects:Number of file system operations per secondIn General Purpose performance mode, read and write operations consume a different number of file operations. Read data or metadata consumes one file operation. Write data or update metadata consumes five file operations. A file system can support up to 35,000 file operations per second (IOPS). This can be 35,000 only READ operations, 7,000 only WRITE operations, or a combination of the two. For example, a file system supports 20,000 READ operations and 3,000 WRITE operations. This is because 20,000 READ operations (one file operation per read) + 3,000 WRITE operations (five file operations per write) = 35,000 IOPS. For more information, see Quotas for Amazon EFS file systems.Max I/O performance modeThere's no IOPS limit for the MAX I/O performance mode of a file system. If you require a high number of IOPS from a file system, then it's a best practice to use Max I/O performance mode.Latency per file system operationGeneral Purpose performance mode has the lower latency of the two performance modes. If your workload is sensitive to latency, then this performance mode is likely suitable for your use case.Max I/O performance mode doesn't have an IOPS limit, but it has a slightly higher latency per each file system operation.For more information, including additional use cases, see Performance modes.Note: After you create your file system, you can't change the file system performance mode. As a workaround, you can use AWS DataSync or AWS Backup to migrate and restore your file system with a different performance mode. Because AWS bills each mode the same way, there are no additional costs for either performance mode. For DataSync, note the Considerations with Amazon EFS locations.ResolutionTo determine which performance mode to use for your workflow, use General Purpose mode to create a file system. Then, run your application for 2-3 weeks. During this time, monitor the file system's PercentIOLimit Amazon CloudWatch metric. Note how close your file system is from reaching 100% of the PercentIOLimit metric. If it reaches 90-100% of the metric for most of the time, then Max I/O performance mode is a better option for your workload.Keep in mind that Max I/O performance mode doesn't have IOPS limitations, but it does have higher latency for IOPS. Therefore, before moving all your applications to Max I/O performance mode, test your workflow to validate if the latency is acceptable for your use case.Create a file system with General Purpose performance mode1.    Open the Amazon EFS console.2.    Choose Create file system.3.    Choose Customize.4.    Choose Additional Settings, and then select General Purpose (Recommended) as your Performance mode.Note: General Purpose is the default performance mode.5.    (Optional) You can add tags, configure a Lifecycle policy, select a throughput mode, and activate encryption at rest.6.    Choose Next to view the Network access pane. Configure the Virtual Private Cloud (VPC) and Mount targets for your file system. Then, choose Next.7.    (Optional) You can configure a File system policy to control access to your file system. Then, choose Next.8.    
Review the summary of your configuration settings, and then choose Create.Monitor the maximum file system operations using Amazon CloudWatch metricsThe PercentIOLimit CloudWatch metric monitors how close a file system is to hitting its maximum file system operations per second limit. This metric is available only for file systems that are running with General Purpose performance mode. When you reach 35,000 file system operations per second, General Purpose performance mode reaches the 100% PercentIOLimit.To check the PercentIOLimit, complete the following steps:1.    Open the CloudWatch console in the AWS Region where you created your file system.2.    Choose Metrics from the navigation bar. Then, choose All metrics.3.    In the search bar, enter your file system ID, and then press ENTER. This displays the relevant metrics that are associated with your file system.4.    Choose File System Metrics. This displays all the available CloudWatch metrics of your file system.5.    Choose PercentIOLimit. This metric displays your file system's IOPS usage. 35,000 IOPS equal a PercentIOLimit of 100%.Note: For more information about PercentIOLimit and other available CloudWatch metrics, see Amazon CloudWatch metrics for Amazon EFS.Follow"
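To check the metric programmatically rather than in the console, a Python (boto3) sketch similar to the following can pull the highest PercentIOLimit value over the monitoring window. The file system ID and Region are placeholders.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")   # example Region

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="PercentIOLimit",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],  # placeholder ID
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                     # one datapoint per hour
    Statistics=["Maximum"],
)

worst = max((dp["Maximum"] for dp in response["Datapoints"]), default=0.0)
print(f"Highest PercentIOLimit over the period: {worst:.1f}%")
if worst >= 90:
    print("Consider testing Max I/O performance mode for this workload.")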
https://repost.aws/knowledge-center/linux-efs-performance-modes
How do I use transformations in AWS DMS?
"How do I use transformations in AWS Database Migration Service (AWS DMS) to modify a schema, table, or column?"
"How do I use transformations in AWS Database Migration Service (AWS DMS) to modify a schema, table, or column?Short descriptionYou can use transformations to modify a schema, table, or column. For example, you can rename, add, replace, or remove a prefix or suffix for a table, or change the table name to uppercase or lowercase. You can define your transformation rules by using the AWS Command Line Interface (AWS CLI) or API, or by using the AWS DMS console.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Open the AWS DMS console, and then choose Database migration tasks from the navigation pane.Choose Create task.Enter the details for Task configuration and Task settings.Select Enable CloudWatch logs.From the Table mappings section, choose Guided UI. You can also choose JSON editor to enter the mappings in JSON format.Expand the Selection rules section, and then choose Add a new selection rule.Enter a Schema and Table name.From Action, select Include or Exclude.Note: You can add multiple selection rules by choosing Add new selection rule and then entering the details for your selection rule. You must have at least one selection rule to use transformations.Expand the Transformation rules section, and then choose Add a new transformation rule.Choose a Target by selecting Schema, Table, or Column.Note: If you choose Column, you must enter the Schema, Table, and Column names. If you choose Table, you must enter the Schema and Table names only. If you choose Schema, you must enter the Schema name only.From Action, select one of the following:Rename toRemove column (Unavailable if you choose Schema or Table as the target)Make lowercaseMake uppercaseAdd prefixRemove prefixReplace prefixAdd suffixRemove suffixReplace suffixChoose Add a new transformation rule to save the transformation rule.Choose Create task.To add transformations to a task that already exists, choose Database migration tasks from the navigation pane. Select your task, choose Actions, and then choose Modify. From the Table mappings section, expand Selection rules, and then choose Add new selection rule. To add more transformations, expand Transformation rules, choose Add a new transformation rule, and then choose Save.For more information on how each of these transformation rules work (with examples), see Transformation rules and actions.Follow"
https://repost.aws/knowledge-center/transformations-aws-dms
How do I use Amazon SageMaker Python SDK local mode with SageMaker Studio?
I want to use Amazon SageMaker Python SDK local mode with SageMaker Studio.
"I want to use Amazon SageMaker Python SDK local mode with SageMaker Studio.Short descriptionInstall the SageMaker Studio Docker CLI and (optional) SageMaker Studio Docker UI extensions to add local mode and Docker functionality to SageMaker Studio.ResolutionPrerequisitesBefore you begin, be sure that you complete the following:Your SageMaker Studio domain setup is in VpcOnly mode (note that the PublicInternetOnly mode isn't supported).Your domain is connected to Amazon VPC with DNS hostname and DNS resolution options turned on.Your SageMaker Studio user profile execution role has the following permissions:sagemaker:DescribeDomainsagemaker:DescribeUserProfilesagemaker:ListTagselasticfilesystem:DescribeMountTargetselasticfilesystem:DescribeMountTargetSecurityGroupselasticfilesystem:ModifyMountTargetSecurityGroupsec2:RunInstancesec2:TerminateInstancesec2:DescribeInstancesec2:DescribeInstanceTypesec2:DescribeImagesec2:DescribeSecurityGroupsec2:DescribeNetworkInterfacesec2:DescribeNetworkInterfaceAttributeec2:CreateSecurityGroupec2:AuthorizeSecurityGroupIngressec2:ModifyNetworkInterfaceAttributeec2:CreateTagsYou installed the Docker CLI extension. (Note that Docker CLI is required for using the UI extension.)You installed Docker Compose.You installed PpYAML, 5.4.1.Create SageMaker Studio Lifecycle Configuration scripts1.    Create a Studio Lifecycle Configuration script for JupyterServer App to install the extensions in one of two ways:Install both the CLI and UI extensions#!/bin/bashset -excd ~if cd sagemaker-studio-docker-cli-extensionthen git reset --hard git pullelse git clone https://github.com/aws-samples/sagemaker-studio-docker-cli-extension.git cd sagemaker-studio-docker-cli-extensionfinohup ./setup.sh > docker_setup.out 2>&1 &if cd ~/sagemaker-studio-docker-ui-extensionthen git reset --hard git pull cdelse cd git clone https://github.com/aws-samples/sagemaker-studio-docker-ui-extension.gitfinohup ~/sagemaker-studio-docker-ui-extension/setup.sh > docker_setup.out 2>&1 &Install only the CLI extension#!/bin/bashset -excd ~if cd sagemaker-studio-docker-cli-extensionthen git reset --hard git pullelse git clone https://github.com/aws-samples/sagemaker-studio-docker-cli-extension.git cd sagemaker-studio-docker-cli-extensionfinohup ./setup.sh > docker_setup.out 2>&1 &2.    Create a SageMaker Studio Lifecycle Configuration script for the KernelGateway App:#!/bin/bashset -euxSTATUS=$(python3 -c "import sagemaker_dataprep";echo $?)if [ "$STATUS" -eq 0 ]then echo 'Instance is of Type Data Wrangler'else echo 'Instance is not of Type Data Wrangler' cd ~ if cd sagemaker-studio-docker-cli-extension then git reset --hard git pull else git clone https://github.com/aws-samples/sagemaker-studio-docker-cli-extension.git cd sagemaker-studio-docker-cli-extension fi nohup ./setup.sh > docker_setup.out 2>&1 &fi3.    From a terminal, encode both script contents using base64 encoding:$ LCC_JS_CONTENT=`openssl base64 -A -in <LifeCycle script file for JupyterServer>`$ LCC_KG_CONTENT=`openssl base64 -A -in <LifeCycle script file for KernelGateway>`4.    
Create Studio Lifecycle Configurations from environment variables LCC_JS_CONTENT and LCC_KG_CONTENT using these AWS Command Line Interface (CLI) commands:$ aws sagemaker create-studio-lifecycle-config --studio-lifecycle-config-name sdocker-js --studio-lifecycle-config-content $LCC_JS_CONTENT --studio-lifecycle-config-app-type JupyterServer$ aws sagemaker create-studio-lifecycle-config --studio-lifecycle-config-name sdocker-kg --studio-lifecycle-config-content $LCC_KG_CONTENT --studio-lifecycle-config-app-type KernelGatewayNote: If you get errors running the CLI commands, make sure you are using the most recent version of AWS CLI. See Troubleshooting AWS CLI errors - AWS Command Line Interface.Update the Studio domain (optional)Update the Studio domain to add LCC to default user settings:$ aws sagemaker update-domain --domain-id <domain-id> --default-user-settings '{"JupyterServerAppSettings": {"DefaultResourceSpec": {"InstanceType": "system", "LifecycleConfigArn": "arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-js"}}, "KernelGatewayAppSettings": {"DefaultResourceSpec": {"InstanceType": "<default instance type>", "LifecycleConfigArn": "arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-kg"}}}'Update the Studio user profileUpdate your Studio use profile settings as follows:$ aws sagemaker update-user-profile --domain-id <domain-id> --user-profile-name <user profile> --user-settings '{"JupyterServerAppSettings ": {"DefaultResourceSpec": {"InstanceType": "system", "LifecycleConfigArn": "arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-js"}, "LifecycleConfigArns": ["arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-js"]}, "KernelGatewayAppSettings": {"DefaultResourceSpec": {"InstanceType": "<default instance type>", "LifecycleConfigArn": "arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-kg"}, "LifecycleConfigArns": ["arn:aws:sagemaker:<region>:<AWS account ID>:studio-lifecycle-config/sdocker-kg"]}}'Launch the new JuypterServer AppDelete any running instance of the JupyterServer App to complete the configuration. Then, launch the new JupyterServer App. When done, the new app shows an InService status.If you're using the UI extension, wait for it to install. This takes about 10 minutes after you launch the new JupyterServer App. When done, refresh your browser to see the extension.(Optional) Some Studio kernels come with PyYAML>=6.0 and don't have pgrep or procps Python packages. Local mode requires PyYAML==5.4.1 as higher versions break this functionality. Also, you need pgrep to delete a local endpoint. If required, use the following commands to install these requirements from your Studio notebook. Restart your kernel after the installation is complete.!conda update --force -y conda!conda install -y pyyaml==5.4.1!apt-get install -y procpsCreate a Docker hostNow, create a Docker host using the CLI extension that you installed earlier. 
Use any Amazon Elastic Compute Cloud (EC2) instance type (for example c5.xlarge) as follows:!sdocker create-host --instance-type c5.xlargeThe output must look similar to the following:Successfully launched DockerHost on instance i-xxxxxxxxxxxxxxxxx with private DNS ip-xxx-xxx-xxx-xxx.ec2.internalWaiting on docker host to be readyDocker host is ready!ip-xxx-xxx-xxx-xxx.ec2.internalSuccessfully created context "ip-xxx-xxx-xxx-xxx.ec2.internal "ip-xxx-xxx-xxx-xxx.ec2.internalCurrent context is now "ip-xxx-xxx-xxx-xxx.ec2.internal "If you installed the UI extension, select the instance type from the UI, and then choose the Start Host button. The new host appears in the Docker Hosts list, next to a green circle.Run in local modeUse the SageMaker Python SDK in local mode.Important: To avoid extra charges, close any Docker host that you launched after you're done with local mode and no longer need to use Docker. To close a Docker host using the CLI extension, enter:!sdocker terminate-current-hostOr, in the UI extension, under Docker Hosts, choose the Power icon next to each Docker host. This action shuts down the Docker host and removes it from the Docker Hosts list.Note: For more information on how to use the CLI extension, see SageMaker Docker CLI extension - Docker integration for SageMaker Studio on the GitHub website.Follow"
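Once the Docker host is ready, local mode itself is used through the SageMaker Python SDK by passing instance_type="local". The following sketch is a minimal illustration, not part of the extensions' documentation: train.py, the data path, and the execution role ARN are placeholders, and it assumes the Docker context created by sdocker is active.
from sagemaker.local import LocalSession
from sagemaker.sklearn.estimator import SKLearn

session = LocalSession()
session.config = {"local": {"local_code": True}}   # run entirely on the local Docker host

estimator = SKLearn(
    entry_point="train.py",                                        # placeholder training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder role ARN
    instance_type="local",
    instance_count=1,
    framework_version="1.0-1",
    sagemaker_session=session,
)

# file:// inputs keep the data local instead of staging it in Amazon S3
estimator.fit({"training": "file://./data"})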
https://repost.aws/knowledge-center/sagemaker-studio-local-mode
How do I revoke JWT tokens in Amazon Cognito using the AWS CLI?
I want to revoke JSON Web Tokens (JWTs) that are issued in an Amazon Cognito user pool.
"I want to revoke JSON Web Tokens (JWTs) tokens that are issued in an Amazon Cognito user pool.Short descriptionAmazon Cognito refresh tokens expire 30 days after a user signs in to a user pool. You can set the app client refresh token expiration between 60 minutes and 10 years. For more information, see Using the refresh token.You can also revoke refresh tokens in real time. This makes sure that refresh tokens can't generate additional access tokens. All previously issued access tokens by the refresh token aren't valid.When you revoke refresh tokens, this has no effect on other refresh tokens that are associated with parallel user sessions.ResolutionTo revoke a JWT token, refer to the relevant instructions based on your app client.Note:If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Amazon Cognito user pool app clients can have an optional secret for the app. For more information, see Configuring a user pool app client.Replace us-east-1 with your AWS Region, and user-pool-id, client-id, username, email, tokens, secret, and password with your variables.App client without a secretRun the AWS CLI command admin-initiate-auth to initiate the authentication flow as an administrator to get the ID, access token, and refresh token:$ aws --region us-east-1 cognito-idp admin-initiate-auth --user-pool-id us-east-1_123456789 --client-id your-client-id --auth-parameters USERNAME=user-name,PASSWORD=your-password --auth-flow ADMIN_NO_SRP_AUTHYou receive an output similar to the following:{ "ChallengeParameters": {}, "AuthenticationResult": { "AccessToken": "eyJra....", "ExpiresIn": 3600, "TokenType": "Bearer", "RefreshToken": "ey.._9Dg", "IdToken": "ey..DU-Q" }}Run the AWS CLI command revoke-token to revoke the refresh token similar to the following:$ aws --region us-east-1 cognito-idp revoke-token --client-id your-client-id --token eyJra....Note: You don't receive an output.Test using the same refresh token for getting a fresh access token and ID:$ aws --region us-east-1 cognito-idp admin-initiate-auth --user-pool-id us-east-1_123456789 --client-id your-client-id --auth-parameters REFRESH_TOKEN=eyJra....tw --auth-flow REFRESH_TOKEN_AUTHYou receive an output that the refresh tokens revoked similar to the following:Error: An error occurred (NotAuthorizedException) when calling the AdminInitiateAuth operation: Refresh Token has been revokedApp client with a secretFollow the instructions to create a SecretHash value using a Python script.Run the AWS CLI command admin-initiate-auth to initiate the authentication flow as an administrator. This gives you the ID, access token, and refresh token. This command looks similar to the following:$ aws --region us-east-1 cognito-idp admin-initiate-auth --user-pool-id us-east-1_123456789 --client-id your-client-id --auth-parameters USERNAME=user-name,PASSWORD=your-password,SECRET_HASH=IkVyH...= --auth-flow ADMIN_NO_SRP_AUTHYou receive an output that's similar to the following:{ "ChallengeParameters": {}, "AuthenticationResult": { "AccessToken": "eyJra....", "ExpiresIn": 3600, "TokenType": "Bearer", "RefreshToken": "eyJjd....", "IdToken": "ey..YQSA" }}Run the AWS CLI command revoke-token to revoke the refresh token:$ aws --region us-east-1 cognito-idp revoke-token --client-id your-client-id --token eyJjd... 
--client-secret 1n00....Run a test using the same refresh token to get a fresh access token and ID:$ aws --region us-east-1 cognito-idp admin-initiate-auth --user-pool-id us-east-1_123456789 --client-id your-client-id --auth-parameters REFRESH_TOKEN=eyJjdH.... --auth-flow REFRESH_TOKEN_AUTHYou receive an output that the refresh tokens are revoked:Error: An error occurred (NotAuthorizedException) when calling the AdminInitiateAuth operation: Refresh Token has been revokedNew added claimsTwo new claims, origin_jti and jti, are added in the access and ID token, increasing in the size of the tokens in the app client.The jti claim provides a unique identifier for the JWT. The identifier value must be assigned so that the same value can't be assigned to a different data object. If the app client uses multiple issuers, then use different values to prevent collisions.Note: The jti claim is optional. For more information, see RFC-7519) on the Internet Engineering Task Force website.Related informationVerifying a JSON web tokenRevoking refresh tokensHow can I decode and verify the signature of an Amazon Cognito JSON Web Token?Follow"
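The same flow can be scripted in Python with boto3, including the SECRET_HASH calculation for app clients that have a secret. The user pool ID, client ID, client secret, user name, and password below are placeholders.
import base64
import hashlib
import hmac

import boto3

def secret_hash(username, client_id, client_secret):
    # SECRET_HASH = Base64(HMAC-SHA256(key=client_secret, msg=username + client_id))
    digest = hmac.new(
        client_secret.encode("utf-8"),
        (username + client_id).encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode()

client = boto3.client("cognito-idp", region_name="us-east-1")   # example Region

auth = client.admin_initiate_auth(
    UserPoolId="us-east-1_123456789",            # placeholder pool, client, and user values
    ClientId="your-client-id",
    AuthFlow="ADMIN_NO_SRP_AUTH",
    AuthParameters={
        "USERNAME": "user-name",
        "PASSWORD": "your-password",
        "SECRET_HASH": secret_hash("user-name", "your-client-id", "your-client-secret"),
    },
)
refresh_token = auth["AuthenticationResult"]["RefreshToken"]

# Revoke the refresh token; ClientSecret is only needed for app clients that have a secret
client.revoke_token(
    Token=refresh_token,
    ClientId="your-client-id",
    ClientSecret="your-client-secret",
)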
https://repost.aws/knowledge-center/revoke-cognito-jwt-token
How do I configure an AWS AppSync schema to handle nested JSON data in DynamoDB?
I want my AWS AppSync schema to retrieve the response from an Amazon DynamoDB table that has nested JSON data. How can I do that?
"I want my AWS AppSync schema to retrieve the response from an Amazon DynamoDB table that has nested JSON data. How can I do that?Short descriptionTo get an AWS AppSync schema to handle nested JSON data in DynamoDB, do the following:Add a nested JSON data item to the DynamoDB table.Create an AWS AppSync API and attach the data source.Configure the nested JSON schema in the AWS AppSync API.Attach a resolver to the getItems query.Create a new test query.Important: The AWS AppSync schema passes null values in its response to DynamoDB if the field names aren’t mapped to the nested JSON data.ResolutionAdd a nested JSON data item to the DynamoDB table1.    Open the Amazon DynamoDB console.2.    Choose Create table.3.    In the Table name field, enter a descriptive name.4.    In the Partition key field, enter a field name. For example: id.5.    Choose Create table. The new table appears on the console's Tables page.6.    In the Name column, choose the new table's name. The table's Overview page opens.7.    Select the Actions dropdown list. Then, choose Create item. The Create item page opens.8.    Choose the JSON button.9.    Copy and paste the following nested JSON record into the JSON editor, and then choose Save:Important: Make sure that you overwrite the prefilled content in the JSON editor with the nested JSON record.Example nested JSON record{ "id": "123", "product": { "model": { "property": { "battery": "li-ion", "device": "iOT-Device", "pressure": "1012", "up_time": "02:12:34" } } }, "status": "In-Stock"}For more information, see Create a table.Create an AWS AppSync API and attach the data source1.    Open the AWS AppSync console.2.    Choose Create API.3.    On the Getting Started page, under Customize your API or import from Amazon DynamoDB, choose Build from scratch.4.    Choose Start.5.    In the API name field, enter a name for your API.6.    Choose Create.7.    In the left navigation pane, choose Data Sources.8.    Choose Create data source.9.    On the New Data Source page, under Create new Data Source, choose the following options: For Data source name, enter a descriptive name. For Data source type, choose Amazon DynamoDB table. For Region, choose the Region that contains your DynamoDB table. For Table name, choose the table you just created.Important: Leave all other options as default.10.    Choose Create.For more information, see Attaching a data source.Configure the nested JSON schema in the AWS AppSync API1.    Open the AWS AppSync console.2.    In the left navigation pane, choose Schema.3.    Copy and paste the following nested JSON schema into the JSON editor, and then choose Save Schema:Important: Make sure that you overwrite the prefilled content in the JSON editor with the nested JSON schema.Example nested JSON schematype Query { getItems(id: String!): allData}type allData { id: String! product: toModel status: String}type items { battery: String device: String pressure: String up_time: String}schema { query: Query}type toModel { model: toProperties}type toProperties { property: items}For more information, see Designing your schema.Attach a resolver to the getItems query1.    Open the AWS AppSync console.2.    On the Schema page of your API, under Resolvers, scroll to Query.Note: Or, in the Filter types field, you can enter Query.3.    Next to getItems(...): allData, under Resolver, choose Attach.4.    
On the Create new Resolver page, for Data source name, choose the name of the DynamoDB table that you created.Important: Don't change the default mapping templates for the DynamoDB GetItem operation.5.    Choose Save Resolvers.Example request mapping template{ "version": "2017-02-28", "operation": "GetItem", "key": { "id": $util.dynamodb.toDynamoDBJson($ctx.args.id), }}Example response mapping template$util.toJson($ctx.result)For more information, see Configuring resolvers.Create a new test query1.    Open the AWS AppSync console.2.    In the left navigation pane, choose Queries.3.    On the Queries page of your API, in the Query editor, copy and paste the following query:Example test queryquery getItem { getItems(id:"123") { id product{ model{ property{ pressure device battery up_time } } } status } }4.    To run the test query, choose the play icon -or- press Ctrl/Cmd + Enter.Example test query results{ "data": { "getItems": { "id": "123", "product": { "model": { "property": { "pressure": "1012", "device": "iOT-Device", "battery": "li-ion", "up_time": "02:12:34" } } }, "status": "In-Stock" } }}You can now retrieve any nested JSON data from an Amazon DynamoDB table through AWS AppSync GraphQL operations.Related informationSupported data types and naming rules in Amazon DynamoDBSetting up the getPost resolver (DynamoDB GetItem)Follow"
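If you'd rather seed the table from code than from the DynamoDB console, the nested item from the walkthrough can be written with a Python (boto3) sketch like the following. The table name and Region are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")   # example Region
table = dynamodb.Table("your-table-name")                        # placeholder table name

# Same nested record used in the walkthrough
table.put_item(
    Item={
        "id": "123",
        "product": {
            "model": {
                "property": {
                    "battery": "li-ion",
                    "device": "iOT-Device",
                    "pressure": "1012",
                    "up_time": "02:12:34",
                }
            }
        },
        "status": "In-Stock",
    }
)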
https://repost.aws/knowledge-center/appsync-nested-json-data-in-dynamodb
How do I view my AWS CloudHSM audit logs?
"I need to view or monitor AWS CloudHSM activity for compliance or security reasons. For example, I need to know when a user created or used a key."
"I need to view or monitor AWS CloudHSM activity for compliance or security reasons. For example, I need to know when a user created or used a key.Short descriptionCloudHSM sends audit logs collected by HSM instances to Amazon CloudWatch Logs. For more information, see Monitoring AWS CloudHSM logs.ResolutionFollow these instructions to view CloudHSM audit logs.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Get your HSM cluster IDNote: If you already know what your HSM cluster ID is, you can skip this step.1.    Run this AWS CLI command to get your HSM cluster IP address.cat /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg | grep hostname2.    Run this AWS CLI command.Note: Replace your-region with your AWS Region and your-ip-address with your HSM cluster IP address.aws cloudhsmv2 describe-clusters --region your-region --query 'Clusters[*].Hsms[?EniIp==`your-ip-address`].{ClusterId:ClusterId}'You receive an output similar to the following."ClusterID": "cluster-likphkxygsn"AWS Management Console1.    Open the CloudWatch console for your AWS Region.2.    In the navigation pane, choose Logs.3.    In Filter, enter the Log Group name prefix. For example, /aws/cloudhsm/cluster-likphkxygsn.4.    In Log Streams, choose the log stream for your HSM ID in your cluster. For example, hsm-nwbbiqbj4jk.Note: For more information about log groups, log streams, and using Filter events, see Viewing audit logs in CloudWatch logs.5.    Expand the log streams to display audit events collected from the HSM device.6.    To list successful CRYPTO_USER logins, enter:Opcode CN_LOGIN User Type CN_CRYPTO_USER Response SUCCESS7.    To list failed CRYPTO_USER logins, enter:Opcode CN_LOGIN User Type CN_CRYPTO_USER Response RET_USER_LOGIN_FAILURE8.    To list successful key deletion events, enter:Opcode CN_DESTROY_OBJECT Response SUCCESSThe opcode identifies the management command that ran on the HSM. For more information about HSM management commands in audit log events, see Audit log reference.AWS CLI1.    Use the describe-log-groups command to list the log group names.aws logs describe-log-groups --log-group-name-prefix "/aws/cloudhsm/cluster" --query 'logGroups[*].logGroupName'2.    Use this command to list successful CRYPTO_USER logins.aws logs filter-log-events --log-group-name "/aws/cloudhsm/cluster-exampleabcd" --log-stream-name-prefix <hsm-ID> --filter-pattern "Opcode CN_LOGIN User Type CN_CRYPTO_USERResponse SUCCESS" --output text"3.    Use this command to list failed CRYPTO_USER logins.aws logs filter-log-events --log-group-name "/aws/cloudhsm/cluster-exampleabcd" --log-stream-name-prefix <hsm ID> --filter-pattern "Opcode CN_LOGIN User Type CN_CRYPTO_USER Response RET_USER_LOGIN_FAILURE" --output text4.    Use this command to list successful key deletion.aws logs filter-log-events --log-group-name "/aws/cloudhsm/cluster-exampleabcd" --log-stream-name-prefix <hsm ID> --filter-pattern "Opcode CN_DESTROY_OBJECT Response SUCCESS" --output textFor more information, see Viewing HSM audit logs in CloudWatch logs.Related informationInterpreting HSM audit logsfilter-log-eventsFollow"
https://repost.aws/knowledge-center/cloudhsm-audit-logs
Why can't I receive email notifications from my Amazon SNS topic?
I'm not receiving email notifications from my Amazon Simple Notification Service (Amazon SNS) topic.
"I'm not receiving email notifications from my Amazon Simple Notification Service (Amazon SNS) topic.ResolutionVerify that your email endpoint is in the confirmed stateNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Use either the AWS Management Console or AWS CLI to verify the state of your email endpoint.AWS Management ConsoleOpen the Amazon SNS console.On the navigation pane, choose Topics, and then choose your topic.In the Subscriptions section of the topic page, find your email endpoint in the Endpoint column.In the Status column of your subscription, verify that the status is Confirmed. The status is confirmed when the email endpoint is successfully subscribed.Manually confirm the subscription in the Amazon SNS console. If you can't receive the confirmation email, complete the steps in the following sections.To re-request the confirmation email, select the subscription with your endpoint, and then choose Request confirmation.AWS CLIRun the list-subscriptions-by-topic AWS CLI command.Note: If there is no email endpoint in the Endpoint column, then that the endpoint was deleted.Check if email addresses can receive emails from external contactsTo check if the issue is limited to Amazon SNS, send a test email from an external provider to your destination email address. This helps you gauge what kind of traffic is allowed from external sources. Mailboxes within an organization are often limited to internal traffic.If the mailbox works and has no issues, then complete the steps in the Check for a firewall, spam filter, blockers, or filter policy section.Check for a firewall, spam filter, blockers, or filter policyComplete the following troubleshooting steps:Check with email administrators to see if the no-reply@sns.amazonaws.com address is filtered out by a firewall or spam filter.Tip: It's a best practice to add the no-reply@sns.amazonaws.com address to your mailbox allow list. For more information, see the Q: Do subscribers need to specifically configure their email settings to receive notifications from Amazon SNS? entry in Amazon SNS FAQs.If your emails are still filtered out as spam, check the mailbox rules for explicit denies that block your SNS topic email. You can also check if emails are routed to specific folders in the mailbox.To prevent individuals from unsubscribing all recipients of your SNS topic email, set up an authentication to unsubscribe.Note: You must have the required permissions to unsubscribe to your email endpoint. You can confirm the subscription with an authenticated user in the Amazon SNS console or with the AWS CLI.Check for a filter policy on the subscription:Open the Amazon SNS console.On the navigation pane, choose Subscriptions.In the search box, enter the email address or SNS topic that the email endpoint is subscribed to, and then choose your subscription in the results.For your email endpoint, choose the Subscription filter policy tab, and then look for a filter policy on the subscription in the Subscription filter policy section.Note: Amazon SNS compares the message attributes to the attributes in the filter policy when a message is sent to the endpoint. If the message attributes and the filter policy attributes don't align, the message won't be received on the email endpoint.Confirm you're not using the default AWS KMS key settingsAmazon SNS allows encryption at rest for topics. 
If the default AWS Key Management Service (AWS KMS) key is used for encryption, services (such as Amazon CloudWatch), can't publish messages to the SNS topic. The key policy of the default AWS KMS key for Amazon SNS doesn't allow these services to perform kms:Decrypt and kms:GenerateDataKey API calls. Because this key is AWS managed, you can't manually edit the policy.If you're encrypting your Amazon SNS topic, use a customer managed key. The customer managed key must include the following permissions under the Statement section of the key policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "service.amazonaws.com" }, "Action": [ "kms:GenerateDataKey*", "kms:Decrypt" ], "Resource": "*" } ]}These permissions allow the services to publish messages to encrypted SNS topics.Follow"
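To quickly list email subscriptions on a topic and spot any that never left the pending state, you can use a Python (boto3) sketch like the following. The topic ARN is a placeholder.
import boto3

sns = boto3.client("sns", region_name="us-east-1")               # example Region
topic_arn = "arn:aws:sns:us-east-1:111122223333:your-topic"      # placeholder topic ARN

paginator = sns.get_paginator("list_subscriptions_by_topic")
for page in paginator.paginate(TopicArn=topic_arn):
    for sub in page["Subscriptions"]:
        if sub["Protocol"] == "email":
            # A SubscriptionArn of "PendingConfirmation" means the endpoint never confirmed
            print(sub["Endpoint"], "->", sub["SubscriptionArn"])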
https://repost.aws/knowledge-center/sns-topic-email-notifications
How do I close my AWS account?
I want to close or cancel my AWS account.
"I want to close or cancel my AWS account.Short descriptionFor security reasons, AWS Support can't close your AWS account on your behalf. Follow the instructions in the Resolution section to prepare your account for closing, and then close your account.ResolutionBefore closing your accountTroubleshoot common issuesTroubleshoot common issues that you might encounter with your account:For unintentional charges, see I unintentionally incurred charges while using the AWS Free Tier. How do I make sure that I'm not billed again?For concerns about account compromise, see What do I do if I notice unauthorized activity in my AWS account?For error messages related to closing your account, see Why do I get an error when I try to close my AWS account?Sign in to your account, and back up resourcesReview the following procedures, and take the necessary actions:If you didn't complete the account sign-up process, then complete it.Sign in as the AWS account root user. If you sign in to an account with an AWS Identity and Access Management (IAM) user or role, then you can't close the account.Back up any resources or data that you want to keep. For instructions about how to back up a particular resource, see the AWS documentation for that service.Use AWS Transfer Family to transfer data from Amazon Simple Storage Service (Amazon S3) or Amazon Elastic File System (Amazon EFS).For AWS Organizations accounts:By default, member accounts don't have a root password. Before you can sign in as the root user, you must reset the root user password for these accounts.If your account is the management account of an organization, then be sure to close or remove all member accounts from your organization. For more information, see Removing a member account from your organization and Impacts of closing an account.To close the payer account in an organization, first delete the organization.Terminate all your resourcesClosing your account might not automatically terminate all your active resources. You might continue to incur charges for some of your active resources, even after you close your account. You're charged for any usage fees that you incurred before closure.Find all your active resources, and terminate them. See the following documentation for instructions:Find all your active resources: How do I check for active resources that I no longer need on my AWS account?Terminate all your resources: How do I terminate active resources that I no longer need on my AWS account?Make sure that your billing information is up to dateReview the following billing information, and take the necessary actions:If you purchased subscriptions with ongoing payment commitments, then you're charged for these subscriptions until the plan term ends, even after you close your account. This applies to subscriptions such as, Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances (RIs) and Savings Plans.After you close your account, your designated payment method is charged for any usage fees that you incurred before closure. If applicable, refunds are issued through the same payment method.You must add a valid credit card, debit card, or other payment method to close your account.If you reopen your closed account within 90 days, you might be charged for any AWS services that you didn't terminate before you closed the account.Pay your outstanding billsView your outstanding bills, and pay them:Open the Billing and Cost Management console.In the navigation pane, choose Payments. 
You can see your overdue payments in the Payments Due section.Choose Verify and pay next to any unpaid bills.Close your accountTo close your AWS account, do the following:Sign in to the AWS Management Console as the root user of the account.In the navigation pane, choose your account name, and then choose Account.Scroll to the Close Account section.Read and understand the terms of closing your account.Choose Close Account.Within a few minutes, you receive email confirmation that your account is closed.You can choose to sign in to your account three days after closing the account to check if you terminated all the resources. Open the AWS Billing and Cost Management console to monitor whether you continue to incur charges. If you continue to incur charges after terminating all your resources, then contact AWS Support.After closing your accountYou can still sign in and file an AWS Support case, or contact Support for 90 days.After 90 days, AWS permanently deletes any content remaining in your account, and shuts down any AWS services that you didn't shut down. However, AWS might retain service attributes as long as necessary for billing and administration purposes. AWS retains your account information as described in the Privacy Notice. You can't permanently delete your account before 90 days. You can't reopen the account after 90 days.Note: The account resources in AWS China (Beijing) and AWS China (Ningxia) Regions are subject to the policies of their operating partners. The operating partners are Sinnet in the Beijing Region and NWCD in the Ningxia Region. Account closure procedures in China might take longer than in other AWS Regions.You can't use the same email address that was associated with your closed account to create new AWS accounts.Related informationClosing an accountWhy did I receive a bill after I closed my AWS account?What do I do if I receive a bill from AWS but can't find the resources related to the charges?The administrator of an AWS account has left the company. How do I access this account?Follow"
https://repost.aws/knowledge-center/close-aws-account
How do I calculate the query charges in Amazon Redshift Spectrum?
"I want to calculate query charges in Amazon Redshift Spectrum when the data is scanned from Amazon Simple Storage Service (Amazon S3). How do I calculate the Redshift Spectrum query usage or cost, and what are some best practices to reduce the charges?"
"I want to calculate query charges in Amazon Redshift Spectrum when the data is scanned from Amazon Simple Storage Service (Amazon S3). How do I calculate the Redshift Spectrum query usage or cost, and what are some best practices to reduce the charges?Short descriptionPrerequisites:An Amazon Redshift cluster.An SQL client that's connected to your cluster to run SQL commands.Both the Amazon Redshift cluster and Amazon S3 bucket must be in the same Region.With Redshift Spectrum, you can run SQL queries directly against the data in S3. You are charged for the number of bytes scanned from S3. Additional charges (of $5 to $6.25 per TB of data scanned) can be incurred depending on the Region. Byte numbers are always rounded up to the next megabyte, with a minimum of 10 MB per query. For more information, see Amazon Redshift pricing.Note: There are no charges for Data Definition Language (DDL) statements like CREATE, ALTER, or DROP TABLE statements for managing partitions and failed queries.ResolutionTo calculate the estimated query cost (and to obtain a summary of all S3 queries that were run in Redshift Spectrum), use the SVL_S3QUERY_SUMMARY table. The s3_scanned_bytes column returns the number of bytes scanned from S3 sent to the Redshift Spectrum layer.UsageYou can run the query against SVL_S3QUERY_SUMMARY to determine the number of bytes transferred by queryID:SELECT s3_scanned_bytesFROM SVL_S3QUERY_SUMMARYWHERE query=<queryID>;To determine the sum of all bytes scanned from S3, use the following query:SELECT sum(s3_scanned_bytes)FROM SVL_S3QUERY_SUMMARY;You can also determine the sum of bytes for all queries from Redshift Spectrum in a specific time interval. The following example shows you how to calculate the sum of all bytes for queries that started running since the previous day:SELECT sum(s3_scanned_bytes)FROM SVL_S3QUERY_SUMMARYWHERE starttime >= current_date-1;If you run this query against an S3 bucket in the US East (N. Virginia) Region, Redshift Spectrum charges $5 per terabyte. If the sum for s3_scanned_bytes returns 621,900,000,000 bytes when querying SVL_S3QUERY_SUMMARY, you have 0.565614755032584 terabytes (when you convert from bytes to terabytes).621900000000 bytes = 621900000000/1024 = 607324218.75 kilobytes607324218.75 kilobytes = 607324218.75/1024 = 593090.057373046875 megabytes593090.057373046875 megabytes = 593090.057373046875 /1024 = 579.189509153366089 gigabytes579.189509153366089 gigabytes = 579.189509153366089/1024 = 0.565614755032584 terabytesIn this example, your usage is approximately 0.5657 terabytes. To calculate the usage cost of Redshift Spectrum, multiply it by the cost per terabyte:$5 * 0.5657= $2.83You can also use the following SQL query to calculate the Redshift Spectrum usage charges:SELECT round(1.0*sum(s3_scanned_bytes/1024/1024/1024/1024),4) s3_scanned_tb, round(1.0*5*sum(s3_scanned_bytes/1024/1024/1024/1024),2) cost_in_usd FROM SVL_S3QUERY_SUMMARY;In this example, the charges in Redshift Spectrum are queried against your S3 bucket for data scanned from the previous day.Note: All queries that scan up to 9.9 MB are rounded up and charged for 10 MB. There are no charges for failed or aborted queries.Additionally, system log tables (STL) retain only two to five days of log history, depending on log usage and available disk space. Therefore, it's a best practice to calculate the daily query charges and to store it in another table, retaining a record of transferred bytes. 
Here's an example:CREATE VIEW spectrum_cost ASSELECT starttime::date as date, xid, query, trim(usename) as user, CASE WHEN s3_scanned_bytes < 10000000 then 10 ELSE s3_scanned_bytes/1024/1024 end as scanned_mb, round(CASE WHEN s3_scanned_bytes < 10000000 then 10*(5.0/1024/1024) ELSE (s3_scanned_bytes/1024/1024)*(5.0/1024/1024) end,5) as cost_$ FROM svl_s3query_summary s LEFT JOIN pg_user u ON userid=u.usesysid JOIN (select xid as x_xid,max(aborted) as x_aborted from svl_qlog group by xid) q ON s.xid=q.x_xid WHERE userid>1 AND x_aborted=0AND s.starttime >= current_date-1;Note: You can also use the CREATE TABLE query to calculate and store the data in another table. If you don't want to specify a time period, remove "current_date-1".Calculate the total sum of data scanned from S3 to Redshift Spectrum since the day before. Then, calculate the total estimate of charges by running the following query:SELECT current_date-1 as query_since, SUM(scanned_mb) as total_scanned_mb, SUM(cost_$) as total_cost_$FROM spectrum_cost;Result: query_since | total_scanned_mb | total_cost_$--------------+------------------+--------------- 2020-05-15 | 5029 | 0.02515Redshift Spectrum best practicesTo reduce the query charges and to improve Redshift Spectrum's performance, consider the following best practices:Use cost controls for Redshift Spectrum and concurrency scaling features to monitor and control your usage.Use Optimized Data Formats to improve performance and lower costs. Use columnar data formats such as PARQUET and ORC to select only the columns you want to scan from S3.Load the data in S3 and use Redshift Spectrum if the data is infrequently accessed.When using multiple Amazon Redshift clusters to scale concurrency, terminate those clusters as soon as the jobs are complete.Cost controls and concurrency scaling for Redshift SpectrumBy using the cost controls and concurrency scaling feature for Redshift Spectrum, you can create daily, weekly, and monthly usage limits. When the usage limits are reached, Amazon Redshift automatically takes action based on the usage limits.To configure the cost control from the Amazon Redshift console, perform the following steps:1.    Sign in to the AWS Management Console.2.    Open the Amazon Redshift console.Note: You can also define and manage usage limits from the AWS Command Line Interface (AWS CLI) or Amazon Redshift API operations.3.    Choose Configure usage limit.4.    Update the following configuration settings:Time period (Daily/Weekly/Monthly)Usage limit (TB)Action (Alert/Log to system table/Disable feature)Note: The Action features can help you manage your usage limits.To configure the Concurrency Scaling usage limit, perform the following steps:1.    Sign in to the AWS Management Console.2.    Open the Amazon Redshift console.3.    Choose Concurrency scaling usage limit as your usage limit.4.    Update the following configuration settings:Time period (Daily/Weekly/Monthly)Usage limit (hh:mm)Action (Alert/Log to system table/Disable feature)Note: The Time period is in UTC time zone. For the Alert and Disable feature, you can also attach an Amazon Simple Notification Service (SNS) subscription to the alarm. 
Additionally, if you activate an alert using the Amazon Redshift console, an Amazon CloudWatch alarm is created automatically for those metrics.Additional cost control requirements and limitationsWhen managing your Redshift Spectrum usage and cost, be aware of the following requirements and limitations:Usage limits are available with supported versions 1.0.14677 or later.You can add up to 4 limits and actions per category (8 limits total).Redshift Spectrum is supported only in Regions where Redshift Spectrum and Concurrency Scaling are available.Only one limit per feature can use the Disable action.Usage limits persist until the usage limit definition itself or the cluster is deleted.If you create a limit in the middle of a period, the limit is measured from that point to the end of the defined period.If you are choosing log options, review the details in the STL_USAGE_CONTROL logs.Related informationManaging usage limits in Amazon RedshiftSVL_QLOGSystem tables and viewsBest practices for Amazon Redshift SpectrumFollow"
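The byte-to-terabyte conversion in the worked example above can also be checked outside the database. The following is a minimal bash sketch (using awk); the byte count and the $5-per-TB rate are the example values from this article, so substitute your own SVL_S3QUERY_SUMMARY total and your Region's price:
# Convert an s3_scanned_bytes total into terabytes and an estimated cost.
# 621900000000 bytes and the $5/TB price are the example values used above.
scanned_bytes=621900000000
price_per_tb=5
echo "$scanned_bytes" | awk -v price="$price_per_tb" \
  '{ tb = $1 / 1024 / 1024 / 1024 / 1024; printf "scanned_tb=%.4f estimated_cost_usd=%.2f\n", tb, tb * price }'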
https://repost.aws/knowledge-center/redshift-spectrum-query-charges
How do I upgrade or downgrade the SQL Server engine edition in RDS for SQL Server?
I want to upgrade or downgrade the SQL Server engine edition in Amazon Relational Database Service (Amazon RDS) for SQL Server. How can I do this?
"I want to upgrade or downgrade the SQL Server engine edition in Relational Database Service (Amazon RDS) for SQL Server. How can I do this?Short descriptionAmazon RDS for SQL Server supports Express, Web, Standard, and Enterprise editions. You can't perform a SQL Server edition change as an in-place modification using the RDS console or using the AWS Command Line Interface (AWS CLI).To upgrade your SQL Server edition, create a snapshot and then restore using the higher engine edition. To downgrade, use one of these methods:Use the native backup and restore option in RDS for SQL Server.Use AWS Database Migration Service (AWS DMS).Import and export SQL Server data using other tools.ResolutionUpgrade the SQL Server engine editionTo upgrade the SQL Server engine edition, create an RDS snapshot and then restore from that snapshot. For upgrade limitations, see Microsoft SQL Server considerations.To upgrade using a snapshot, follow these steps:1.    Create a snapshot of the original RDS for SQL Server instance.2.    Restore the snapshot taken in Step 1, to create a new RDS instance. Change the required edition to the higher edition during the restore.3.    Rename or delete the original RDS for SQL Server instance to free up the DNS endpoint name for reuse. For more information, see the section Rename the RDS instance.For detailed instructions and steps on upgrading from Standard edition to Enterprise edition, see Modify an Amazon RDS for SQL Server instance from Standard Edition to Enterprise Edition.You can use the same snapshot and restore method for these upgrades:Standard edition to Enterprise editionWeb edition to Standard edition or Enterprise editionExpress edition to Web edition, Standard edition, or Enterprise editionImportant note: Snapshot restoration while upgrading the edition creates a new RDS for SQL Server instance. The new instance has a different RDS endpoint than the snapshot source instance.Downgrade the SQL Server editionIn-place downgrading of RDS for SQL Server instance from higher to lower editions isn't supported because of limitations with SQL Server as a product. However, you can downgrade your RDS for SQL Server edition in any one of these combinations following the workaround options mentioned later:Enterprise edition to Standard, Web, or Express editionStandard edition to Web or Express editionWeb edition to Express editionTo downgrade the RDS for SQL Server edition use one of these options:Option 1: Use the native backup and restore option in RDS for SQL ServerNote: You can also use this option to move databases from lower to higher editions of RDS instances.Native backup and restore creates a full backup of the databases on the existing source RDS for SQL Server instance. Store the backups on Amazon Simple Storage Service (Amazon S3) and then restore the backup files onto a new target RDS instance.To downgrade from a source Enterprise instance to a target Standard instance, follow these steps:1.    Create a new RDS for SQL Server with Standard edition SQL Server. This is the new target instance.2.    Add the native backup and restore option on the source Enterprise and target Standard edition instances.3.    Back up each user database on the source (Enterprise) instance to an S3 bucket.4.    Run the sys.dm_dm_persisted_sku-features (Transact-SQL) query on each database on the source instance. This query checks if there are any features currently in use that are bound to the higher edition. 
Features bound to the higher edition might not work when you restore the databases on to the lower edition target instance.USE [database-name] GO SELECT feature_name FROM sys.dm_db_persisted_sku_features; GO5.    Restore the backups from the S3 bucket to the target (Standard) RDS instance.6.    Make sure to create the required logins and users on the target RDS instance databases. Also create the appropriate security group and attach the appropriate parameter-option groups. These are the same as the source RDS instance.Note: You can use the preceding steps to export and import databases across any editions of SQL Server on RDS.Option 2: Use AWS DMSNote: You can use also use this option to move databases from lower to higher editions of RDS instances.Use AWS DMS to migrate your databases. AWS DMS also replicates ongoing changes from the higher edition instance (the source endpoint) to the lower edition instance (the target endpoint).AWS DMS allows unidirectional replication, bulk-load tables, and captures data changes (if supported by the source and target RDS for SQL Server instance versions).For more information, see these topics:Using a Microsoft SQL Server database as a source for AWS DMSUsing a Microsoft SQL Server database as a target for AWS Database Migration ServiceLimitations on using SQL Server as a source for AWS DMSMigrating your SQL Server database to Amazon RDS for SQL Server using AWS DMSOption 3: Import and export SQL Server data using other toolsYou can use these additional tools to import and export your database:SQL Server Import and Export WizardGenerate and Publish Scripts WizardBulk copy (bcp utility)The instance with the lower SQL Server edition must be created and active before using these tools.Keep in mind that these tools require more effort than native backup and restore or AWS DMS. You might experience multiple data consistency or integrity errors that must be fixed. These errors occur when moving data using these tools. Thoroughly test the process in a test environment before deciding to use one of these tools.SQL Server Import and Export Wizard: Copy and create the schema of the source instance's databases and object on to the target instance. Then, use this wizard to copy one or more tables, views, or queries from one RDS for SQL Server DB instance to another data store. For more information, see SQL Server Import and Export Wizard.SQL Server Generate and Publish Scripts Wizard and bcp utility: Use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database or selected objects. You can run these scripts on a target SQL Server DB instance to recreate the scripted objects. Then, use the bcp utility to bulk export the data for the selected objects to the target DB instance. Run the bcp utility from an Amazon Elastic Compute Cloud (Amazon EC2) instance that has connectivity to both the source and target RDS instances. For more information, see SQL Server Generate and Publish Scripts Wizard and bcp utility.Note: All the options mentioned in this section can also be used to migrate databases from lower edition to higher edition RDS for SQL Server instances. However, the approach explained in the Upgrade SQL Server edition section is easier. Deciding which option to use depends on factors such as downtime, effort, complexity involved, and so on.Rename the RDS instanceThe options described for upgrading or downgrading the RDS for SQL Server edition always result in the creation of a new target RDS Instance. 
The new RDS instance has a different RDS DNS endpoint than the existing source RDS instance.Sometimes, updating the new RDS endpoint across applications and other services misses the connection string update in one or more of these components. When this occurs, you might run into issues after the edition change of the RDS for SQL Server instance.To avoid this, consider renaming the source and target RDS instances. Renaming makes sure that the target edition instance has the same RDS DNS endpoint as that of the original source edition instance.Doing this avoids making changes in the connection strings of the dependent applications or services after the edition change on the RDS for SQL Server instance.To rename the source and target RDS instances after changing the edition, follow these steps:This example assumes that the source RDS instance is rds-original with Enterprise edition. The target instance is rds-new with Standard edition.1.    Stop all incoming traffic (stop application) to the source instance rds-original.2.    Follow any of the preceding steps or options for upgrading or downgrading the SQL Server edition on the RDS Instance. After the edition successfully changes, there are two instances: the source instance is rds-original and the target instance is rds-new.3.    Modify the source instance to rename the DB identifier from rds-original to a different name, such as rds-original-old.4.    After the instance rds-original-old is in the Available state, rename the target instance DB Identifier from rds-new to the name of original instance, rds-original.5.    Verify that the instances are renamed to rds-original-old and rds-original and are in the Available state.6.    Make sure to keep the related RDS security groups that are attached to the new edition target RDS instance the same as the source instance. This makes sure that network connectivity from the existing applications remains the same.7.    Allow incoming traffic (start application) now to the instance rds-original that has the required SQL Server edition. No changes are required for the application connection strings , since RDS has the same DNS endpoint as source instance.8.    Perform the application testing to make sure that there is no impact after RDS instance edition change.9.    If everything works, create a final snapshot of the instance rds-original-old, and then delete this instance to save on costs.Note: It's a best practice to test activities first in a lower environment before implementing on the production environment. This gives you an estimate of how much time the changes take. Also, you can identify any issues that occur during the activity to help make implementation in the production environment smoother.Related informationAWS Prescriptive Guidance - Evaluating downgrading Microsoft SQL Server from Enterprise edition to Standard edition on AWSFollow"
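If you prefer the AWS CLI to the console for the snapshot-and-restore upgrade path, the outline looks similar to the following sketch. The instance and snapshot identifiers are placeholders, and the --engine value sqlserver-ee assumes an upgrade from Standard (sqlserver-se) to Enterprise edition; verify the supported edition combinations for your engine version before running it:
# Take a snapshot of the source instance (placeholder identifiers).
aws rds create-db-snapshot \
  --db-instance-identifier rds-original \
  --db-snapshot-identifier rds-original-pre-upgrade

# Restore the snapshot as a new instance with the higher edition.
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier rds-new \
  --db-snapshot-identifier rds-original-pre-upgrade \
  --engine sqlserver-ee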
https://repost.aws/knowledge-center/rds-sql-server-change-engine-edition
Why am I getting errors setting up an AWS Organizations member account as a delegated administrator for AWS Config rules?
"I followed the instructions to deploy AWS Config Rules and conformance packs using a delegated admin. However, I received an error similar to the following:"
"I followed the instructions to deploy AWS Config Rules and conformance packs using a delegated admin. However, I received an error similar to the following:An error occurred (AccessDeniedException) when calling the DeregisterDelegatedAdministrator operation: You don't have permissions to access this resource.An error occurred (InvalidInputException) when calling the RegisterDelegatedAdministrator operation: You specified an unrecognized service principal.An error occurred (ConstraintViolationException) when calling the RegisterDelegatedAdministrator operation: You have exceeded the allowed number of delegated administrators for the delegated service.ResolutionFollow these troubleshooting steps for the specific error message received.Important: Before you begin, be sure that you installed and configured the AWS Command Line Interface (AWS CLI)."An error occurred (AccessDeniedException) when calling the DeregisterDelegatedAdministrator operation: You don't have permissions to access this resource."This error means that you ran the register-delegated-administrator command from an AWS Organizations member account similar to the following:  $aws organizations register-delegated-administrator --service-principal config-multiaccountsetup.amazonaws.com --account-id member-account-IDYou can delegate an administrator only from the AWS Organizations primary account. Run the register-delegated-administrator command from the AWS Organizations primary account.  "An error occurred (InvalidInputException) when calling the RegisterDelegatedAdministrator operation: You specified an unrecognized service principal."This error can occur if your AWS Organizations organization doesn't have all features and trusted access enabled.1.    Run the enable-aws-service-access command similar to the following:$aws organizations enable-aws-service-access --service-principal=config-multiaccountsetup.amazonaws.com2.    Run the register-delegated-administrator command from the AWS Organizations primary account to delegate the member account to deploy AWS Organization conformance packs and AWS Config rules:$aws organizations register-delegated-administrator --service-principal config-multiaccountsetup.amazonaws.com --account-id member-account-ID"An error occurred (ConstraintViolationException) when calling the RegisterDelegatedAdministrator operation: You have exceeded the allowed number of delegated administrators for the delegated service."This error means that the maximum member account limit of 3 is reached for registered delegated administrators.1.    To determine which delegated administrators are registered, run the list-delegated-administrators similar to the following:$aws organizations list-delegated-administrators --service-principal=config-multiaccountsetup.amazonaws.comYou receive an output similar to the following:{ "DelegatedAdministrators": [ { "Id": "987654321098", "Arn": "arn:aws:organizations::123456789012:account/o-anz8bj0hfs/987654321098", "Email": "youremailalias@example.com", "Name": "your-account-name", "Status": "ACTIVE", "JoinedMethod": "CREATED", "JoinedTimestamp": 1557432887.92, "DelegationEnabledDate": 1590681859.773 } ]}2.    To de-register a delegated administrator, run the deregister-delegated-administrator command:$aws organizations deregister-delegated-administrator --service-principal config-multiaccountsetup.amazonaws.com --account-id member-account-ID3.    
Rerun the register-delegated-administrator command to delegate an account as an administrator:  $aws organizations register-delegated-administrator --service-principal config-multiaccountsetup.amazonaws.com --account-id member-account-IDRelated informationHow do I remove a member account from an organization in AWS Organizations when I can't sign in to the member account?How do I move accounts between organizations in AWS Organizations?Follow"
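Before registering a delegated administrator, you can also confirm from the management account that trusted access for the AWS Config multi-account setup principal is already enabled. A minimal sketch, assuming the AWS CLI is configured with management account credentials:
# List the service principals with trusted access and look for the
# AWS Config multi-account setup principal.
aws organizations list-aws-service-access-for-organization \
  --query "EnabledServicePrincipals[].ServicePrincipal" \
  --output text | tr '\t' '\n' | grep config-multiaccountsetup.amazonaws.com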
https://repost.aws/knowledge-center/config-organizations-admin
How can I copy S3 objects from another AWS account?
"I want to copy Amazon Simple Storage Service (Amazon S3) objects across AWS accounts. Then, I want to make sure that the destination account owns the copied objects."
"I want to copy Amazon Simple Storage Service (Amazon S3) objects across AWS accounts. Then, I want to make sure that the destination account owns the copied objects.ResolutionImportant: Objects in S3 aren't always automatically owned by the AWS account that uploads them. When you change Object Ownership, it's a best practice to use the Bucket owner enforced setting. However, this option turns off all bucket ACLs and ACLs on any objects in your bucket.With the Bucket owner enforced setting in S3 Object Ownership, the same bucket owner automatically owns all objects in an Amazon S3 bucket. The Bucket owner enforced feature also turns off all access control lists (ACLs). This simplifies access management for data stored in Amazon S3. However, for existing buckets, an S3 object is still owned by the AWS account that uploaded it, unless you explicitly turn off the ACLs.If your existing method of sharing objects relies on using ACLs, then identify the principals that use ACLs to access objects. For more information about how to review permissions before turning off any ACLs, see Prerequisites for turning off ACLs.If you can't turn off your ACLs, then follow these steps to take ownership of objects until you can adjust your bucket policy:1.    In the source account, create an AWS Identity and Access Management (IAM) customer managed policy that grants an IAM identity (user or role) proper permissions. The IAM user must have access to retrieve objects from the source bucket and put objects back into the destination bucket. You can use an IAM policy that's similar to the following example:{ "Version": "2012-10-17", "Statement": \[ { "Effect": "Allow", "Action": \[ "s3:ListBucket", "s3:GetObject" \], "Resource": \[ "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET", "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET/\*" \] }, { "Effect": "Allow", "Action": \[ "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl" \], "Resource": \[ "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET", "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/\*" \] } \]}Note: This example IAM policy includes only the minimum required permissions for listing objects and copying objects across buckets in different accounts. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging. If you experience an error, try performing these steps as an admin user.2.    In the source account, attach the customer managed policy to the IAM identity that you want to use to copy objects to the destination bucket.3.    In the destination account, set S3 Object Ownership on the destination bucket to bucket owner preferred. After you set S3 Object Ownership, new objects uploaded with the access control list (ACL) set to bucket-owner-full-control are automatically owned by the bucket's account.4.    In the destination account, modify the bucket policy of the destination bucket to grant the source account permissions for uploading objects. Additionally, include a condition in the bucket policy that requires object uploads to set the ACL to bucket-owner-full-control. You can use a statement that's similar to the following example:Note: Replace destination-DOC-EXAMPLE-BUCKET with the name of the destination bucket. 
Then, replace arn:aws:iam::222222222222:user/Jane with the Amazon Resource Name (ARN) of the IAM identity from the source account.{ "Version": "2012-10-17", "Id": "Policy1611277539797", "Statement": \[ { "Sid": "Stmt1611277535086", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::222222222222:user/Jane" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/\*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } }, { "Sid": "Stmt1611277877767", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::222222222222:user/Jane" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET" } \]}This example bucket policy includes only the minimum required permissions for uploading an object with the required ACL. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging5.    After you configure the IAM policy and bucket policy, the IAM identity from the source account must upload objects to the destination bucket. Make sure that the ACL is set to bucket-owner-full-control. For example, the source IAM identity must run the cp AWS CLI command with the --acl option:aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/object.txt s3://destination-DOC-EXAMPLE-BUCKET/object.txt --acl bucket-owner-full-controlNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.When you set S3 Object Ownership to bucket owner preferred, objects that you upload with bucket-owner-full-control are owned by the destination bucket's account.Important: If your S3 bucket has default encryption with AWS Key Management Service (AWS KMS) activated, then you must also modify the AWS KMS key permissions. For instructions, see My Amazon S3 bucket has default encryption using a custom AWS KMS key. How can I allow users to download from and upload to the bucket?Related informationBucket owner granting cross-account bucket permissionsHow do I change object ownership for an Amazon S3 bucket when the objects are uploaded by other AWS accounts?Using a resource-based policy to delegate access to an Amazon S3 bucket in another accountFollow"
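To copy an entire prefix instead of a single object, you can run the same copy as a recursive command from the source-account IAM identity. A minimal sketch using the placeholder bucket names from this article:
# Recursively copy a prefix across accounts, granting the destination
# bucket owner full control of each object.
aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/prefix/ \
  s3://destination-DOC-EXAMPLE-BUCKET/prefix/ \
  --recursive \
  --acl bucket-owner-full-control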
https://repost.aws/knowledge-center/copy-s3-objects-account
Why am I getting 503 Slow Down errors from Amazon S3 when the requests are within the supported request rate per prefix?
"The rate of requests to a prefix in my Amazon Simple Storage Service (Amazon S3) bucket is within the supported request rates per prefix, but I'm getting 503 Slow Down errors."
"The rate of requests to a prefix in my Amazon Simple Storage Service (Amazon S3) bucket is within the supported request rates per prefix, but I'm getting 503 Slow Down errors.ResolutionAmazon S3 supports a request rate of 3,500 PUT, COPY, POST, DELETE or 5,500 GET, HEAD requests per second per prefix in a bucket. When you create a prefix, Amazon S3 doesn't automatically assign the resources for this request rate. Instead, as the request rate for a prefix gradually increases, Amazon S3 automatically scales to handle the increased request rate.When you make requests at a high request rate that's close to the rate limit, Amazon S3 returns 503 Slow Down errors. To prevent these errors, maintain the request rate and implement a retry with exponential backoff. This allows Amazon S3 time to monitor the request patterns, and scale in the backend to handle the request rate.If there's a sudden increase in the request rate for objects in a prefix, then Amazon S3 might return 503 Slow Down errors. It returns these errors as it scales in the background to handle the increased request rate. To prevent these errors, configure your application to gradually increase the request rate. Then, use an exponential backoff algorithm to retry failed requests.If you exceed supported request rates, then it's a best practice to distribute objects and requests across multiple prefixes.Note: If the preceding resolutions don't resolve the errors, then get the request IDs for the failed requests, and contact AWS Support.Follow"
https://repost.aws/knowledge-center/s3-503-within-request-rate-prefix
How do I troubleshoot Amazon ECS tasks that take a long time to stop when the container instance is set to DRAINING?
"My Amazon Elastic Container Service (Amazon ECS) task is taking a long time to move to the STOPPED state. Or, my Amazon ECS task is stuck in the RUNNING state when the container instance is set to DRAINING. How can I resolve this issue?"
"My Amazon Elastic Container Service (Amazon ECS) task is taking a long time to move to the STOPPED state. Or, my Amazon ECS task is stuck in the RUNNING state when the container instance is set to DRAINING. How can I resolve this issue?Short descriptionWhen you set an ECS instance to DRAINING, Amazon ECS does the following:Prevents new tasks from being scheduled for placement on the container instanceStops tasks on the container instance that are in the RUNNING stateYour tasks can be stuck in the RUNNING state or take a longer time to move to the STOPPED state due to issues with configuration parameters or tasks. To troubleshoot these issues, consider the following options:Confirm that your DeploymentConfiguration parameters are set correctlyConfirm that the deregistration delay value is set correctlyConfirm that the ECS_CONTAINER_STOP_TIMEOUT value is set correctlyLook for other task-related issuesResolutionConfirm that your DeploymentConfiguration parameters are set correctlyOpen the Amazon ECS console.In the navigation pane, choose Clusters, and then choose the cluster where your container instance is draining.Choose the ECS Instances tab, and then choose DRAINING in the Status section.Choose your container instance, and then find out the service for the tasks that are draining or taking a long time to drain.Choose the Services tab, select the service, and then choose Deployments.Check the values for minimumHealthyPercent and maximumPercent.Note: Service tasks on the container instance that are in the RUNNING state are stopped and replaced according to the service's deployment configuration parameters: minimumHealthyPercent and maximumPercent.Confirm that the deregistration delay value is set correctlyImportant: The following steps apply only to services using the Application Load Balancer or Network Load Balancer. If your service is using the Classic Load Balancer, check the connection draining values.Open the Amazon ECS console.In the navigation pane, choose Clusters, and then choose the cluster where your container instance is draining.Choose the Services tab, and then select the service with the stack stuck in RUNNING.Choose Target Group Name.On the Details tab, scroll down, and then select the Deregistration delay check box.Confirm that the ECS_CONTAINER_STOP_TIMEOUT value is set correctlyConnect to your container instance using SSH.Run the docker inspect ecs-agent --format '{{json .Config.Env}}' command.Check if there is a value for ECS_CONTAINER_STOP_TIMEOUT.Note: ECS_CONTAINER_STOP_TIMEOUT is an ECS container agent parameter that defines the amount of time that Amazon ECS waits before ending a container. The time duration starts counting when a task is stopped. If you don't see the ECS_CONTAINER_STOP_TIMEOUT parameter in the output after running the command in step 2, then Amazon ECS is using the default value of 30s.Look for other task-related issuesConnect to your container instance using SSH.Verify that the Docker daemon and Amazon ECS container agent are running for either your Amazon Linux 1 AMIs or Amazon Linux 2 AMIs.Check the application logs based on the log driver set by logConfiguration.Note: For example, if your tasks are using the awslogs log driver, check your Amazon CloudWatch Logs for issues.Follow"
https://repost.aws/knowledge-center/ecs-tasks-stop-delayed-draining
How do I use the Amazon Connect StartOutboundVoiceContact API to make outbound calls to customers?
I want to use Amazon Connect to program outbound calls to contact customers. How do I automate outbound calling using the Amazon Connect StartOutboundVoiceContact API?
"I want to use Amazon Connect to program outbound calls to contact customers. How do I automate outbound calling using the Amazon Connect StartOutboundVoiceContact API?Short descriptionYou can follow the instructions in this article to create an example setup that allows you initiate calls using the StartOutboundVoiceContact API.In this example setup, your Amazon Connect contact center calls the destination number and greets the recipient with "Hello" and the name that you specify. Then, the call disconnects automatically.For other example setups, see Automating outbound calling to customers using Amazon Connect.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Create an outbound contact flowImportant: To create a contact flow, you must log in to your Amazon Connect instance as a user with sufficient permissions in their security profile.1.    Log in to your Amazon Connect instance using your access URL (https://alias.awsapps.com/connect/login -or- https://alias.awsapps.com/connect/login).Note: Replace alias with your instance's alias.2.    In the left navigation pane, hover over Routing, and then choose Contact flows.3.    On the Contact flows page, choose a template, or choose Create contact flow to design a contact flow from scratch.4.    In the contact flow designer, for Enter a name, enter a name for the contact flow. For example: Outbound calling.5.    Choose Save.For more information, see Create a new contact flow.Add a Play prompt blockTo configure the audio prompt that customers hear during the call, use a Play prompt contact block.1.    In the contact flow designer, expand Interact.2.    Drag and drop a Play prompt block onto the canvas.3.    Choose the Play prompt block title. The block's settings menu opens.4.    For Prompt, do the following: Choose Text to speech (Ad hoc). For Enter text, enter "Hello. This is a test call." Confirm that Interpret as is set to Text. Choose Save.For more information, see Add text-to-speech to prompts and Use Amazon Connect contact attributes.Add a Disconnect / hang up blockTo automatically end the call after the outgoing message is played, use a Disconnect / hang up contact block.1.    Choose Terminate/Transfer.2.    Drag and drop a Disconnect / hang up block onto the canvas to the right of the Play prompt block.Connect the contact blocksConnect all the connectors in your contact flow to a block in the following order:Entry point > Play prompt > Disconnect / hang upImportant: All connectors must be connected to a block before you can publish the contact flow.Save and publish the contact flow1.    Choose Save to save a draft of the flow.2.    Choose Publish to activate the flow immediately.Get your Amazon Connect instance ID and contact flow ID1.    In the contact flow designer, expand Show additional flow information.2.    Under ARN, copy the Amazon Resource Name (ARN). The contact flow ARN includes your Amazon Connect instance ID and your contact flow ID. 
You need these IDs to call the StartOutboundVoiceContact API.Example contact flow ARNarn:aws:connect:region:123456789012:instance/12a34b56-7890-1234-cde5-6789f0a1b2c3/contact-flow/123a45b6-c7d8-9012-34e5-6fab789c012dConfirm your IAM permissions for Amazon ConnectIf you haven't already, create and attach an AWS Identity and Access Management (AWS IAM) policy that allows you to call the connect:StartOutboundVoiceContact API.The following example JSON policy document provides the required permissions:Important: Replace the instance ARN (the "Resource" value) with your Amazon Connect instance's ARN .{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "connect:StartOutboundVoiceContact", "Resource": "arn:aws:connect:region:123456789012:instance/12a34b56-7890-1234-cde5-6789f0a1b2c3/contact/*" } ]}Call the StartOutboundVoiceContact APIRun the following command from the AWS CLI:Important: Replace awsRegion with the AWS Region of your Amazon Connect instance. Replace phoneNumber with a recipient's phone number in E.164 format. Replace contactFlowId with your contact flow ID. Replace instanceId with your Amazon Connect instance ID. Replace instancePhoneNumber with the phone number for your contact center in E.164 format. For more information, see start-outbound-voice-contact in the AWS CLI Command Reference.$ aws connect start-outbound-voice-contact --region awsRegion --destination-phone-number phoneNumber --contact-flow-id contactFlowId --instance-id instanceId --source-phone-number instancePhoneNumberThe command response returns a ContactId when the action is successful, and an error code when unsuccessful.For more information about errors that are common to the StartOutboundVoiceContact API, see the Errors section in StartOutboundVoiceContact.Related informationconnect (AWS CLI Command Reference)Create promptsSet up outbound caller IDFollow"
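To greet the recipient by name as described in the short description, you can pass a contact attribute with the call and reference it from the Play prompt text (for example, $.Attributes.CustomerName). The following is a minimal sketch; the phone numbers are E.164 placeholders, the IDs come from this article's example ARN, and the CustomerName attribute key is an assumption for illustration:
aws connect start-outbound-voice-contact \
  --region us-east-1 \
  --destination-phone-number +12065550100 \
  --contact-flow-id 123a45b6-c7d8-9012-34e5-6fab789c012d \
  --instance-id 12a34b56-7890-1234-cde5-6789f0a1b2c3 \
  --source-phone-number +12065550199 \
  --attributes CustomerName=Jane   # referenced in the prompt as $.Attributes.CustomerName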
https://repost.aws/knowledge-center/connect-outbound-calls-api
How do I resolve the "HIVE_CURSOR_ERROR" exception when I query a table in Amazon Athena?
"When I run queries on my Amazon Athena table, I get an "HIVE_CURSOR_ERROR" exception."
"When I run queries on my Amazon Athena table, I get an "HIVE_CURSOR_ERROR" exception.ResolutionYou might get this exception under one of the following conditions:The data objects in your table location are corrupt, aren't valid, or are incorrectly compressed.The records within the table data aren't valid (example: a malformed JSON record).Common HIVE_CURSOR_ERROR exceptionsYou might get one of the following HIVE_CURSOR_ERROR exceptions when there is an issue with the underlying objects:HIVE_CURSOR_ERROR: incorrect header checkHIVE_CURSOR_ERROR: invalid block typeHIVE_CURSOR_ERROR: incorrect data checkHIVE_CURSOR_ERROR: Unexpected end of input streamHIVE_CURSOR_ERROR: invalid stored block lengthsHIVE_CURSOR_ERROR: invalid distance codeIf you recently added new objects to your Amazon Simple Storage Service (Amazon S3) table location, then be sure that these objects are valid and aren't corrupt. Download the objects, and then inspect them using an appropriate tool. For example, decompress the GZIP-compressed objects to check whether the compression is valid.If your table is partitioned, check if you are able to query individual partitions. If there are new partitions in the table, they might contain objects that aren't valid.Specific HIVE_CURSOR_ERROR exceptionsHIVE_CURSOR_ERROR: Row is not a valid JSON ObjectYou get this error when a row within the table is not a valid JSON record.To resolve this issue, do the following:Recreate the table by adding the property 'ignore.malformed.json' = 'true'.Query the new table to identify the files with malformed records by running a command similar to the following:SELECT "$path" FROM example_table WHERE example_column = NULLFor more information, see Resolve JSON errors in Amazon Athena and OpenX JSON SerDe Options.HIVE_CURSOR_ERROR: Corrupted uncompressed blockYou get this error when the objects in your table location are compressed using LZO format.To avoid getting this error with LZO-compressed data, recreate the table with the INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat". For example:CREATE EXTERNAL TABLE example_table ( example_column_1 STRING, example_column_2 STRING)STORED AS INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'LOCATION 's3://example_bucket/example_prefix/'You can also resolve this error by using a different compression format.HIVE_CURSOR_ERROR: Can not read value at 0 in block 0 in fileYou get this error when you have invalid Parquet objects in the S3 path. Check if your Parquet objects contain DECIMAL data types. If Parquet objects contain DECIMAL data types and were written using Spark, they might be incompatible with Hive. To check this condition, try reading the Parquet objects from a Spark job.If you can read the Parquet objects using the Spark job, then rewrite the Parquet objects with legacy mode in Spark with the following configuration:spark.sql.parquet.writeLegacyFormat = trueFor information on Spark configuration Options for Parquet, see Configuration in Spark SQL guide.HIVE_CURSOR_ERROR: org.apache.hadoop.io.ArrayWritable cannot be cast to org.apache.hadoop.io.TextYou get this error if you used an incorrect SerDe during table definition. For example, the table might be using a JSON SerDe, and the source data includes Parquet objects.To resolve this error, check the source data and confirm that the correct SerDe is used. 
For more information, see Supported SerDes and data formats.Related informationHow do I resolve "HIVE_CURSOR_ERROR: Row is not a valid JSON Object - JSONException: Duplicate key" when reading files from AWS Config in Athena?Troubleshooting in AthenaFollow"
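One quick way to check whether a recently added GZIP object is corrupt, without keeping a local copy, is to stream it through gzip's integrity test. A minimal sketch; the bucket and key are placeholders for an object in your table location:
# gzip -t exits non-zero if the stream is not valid GZIP data.
aws s3 cp s3://DOC-EXAMPLE-BUCKET/table-prefix/part-00000.gz - | gzip -t \
  && echo "compression looks valid" \
  || echo "object is corrupt or not valid GZIP"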
https://repost.aws/knowledge-center/athena-hive-cursor-error
How do I troubleshoot issues when changing my Amazon EBS volume type?
"When I try to change my Amazon Elastic Block Store (Amazon EBS) volume type, I encounter issues. How do I troubleshoot these issues?"
"When I try to change my Amazon Elastic Block Store (Amazon EBS) volume type, I encounter issues. How do I troubleshoot these issues?Short descriptionThe following are common issues that can occur when you change your Amazon EBS volume type:The size or performance isn't within the limits of the target volume type that you're changing to.The option to change the volume type on your Amazon EBS Multi-Attach activated io1 or io2 volume is grayed out.You want to change your io1, io2, gp2, gp3, or standard volume that's also the root volume of your instance. But, you don't see the sc1 and st1 HDD-backed volume types in the dropdown list.You don't see the expected performance after changing your volume type.ResolutionThe size or performance isn't within the limits of the target volume type that you're changing toWhen you modify a volume type, the size and performance must be within the limits of the target volume type. For example, if you modify a gp3 volume that's bigger than 1024 GiB to Magnetic, then you get an error message. This error occurs because standard volumes have a maximum size of 1024 GiB. Magnetic is a previous generation volume type. It's a best practice to use a current volume type for higher performance. For more information see, Amazon EBS volume types.Also, after you increase the size of your volume, the new size is immediately available to use. However, if the volume is in the "optimizing" state, then volume performance is in between the source and target configuration specifications.The option to change the volume type on your Multi-Attach activated io1 or io2 volume is grayed outYou can't change the volume type on your Multi-Attach activated io1 or io2 volume. You can modify only the size and Provisioned IOPS for Multi-Attach activated io2 volumes. See Considerations and limitations for Multi-Attach.You don't see the sc1 and st1 HDD-backed volume types in the dropdown list for certain volume typesYou can't change the following volume types to an sc1 or st1 volume because sc1 and st1 volumes aren't supported as boot volumes:io1io2gp2gp3standardFor more information on modifying Amazon EBS volumes, see Limitations.You don't see the expected performance after changing your volume typeMake sure that your workload doesn't exceed the IOPS or throughput limits of your new volume type or instance limits. If Amazon CloudWatch doesn't show that you reached your limits, then your workload might still be micro-bursting. If your workload isn't micro-bursting and you still can't determine whether you reached your limits, then contact AWS Support to further investigate the issue.Related informationWhy is my Amazon EBS volume stuck in the Optimizing state when I modify the volume?Follow"
https://repost.aws/knowledge-center/ebs-change-volume-type-issues
How do I use persistent storage in Amazon EKS?
I want to use persistent storage in Amazon Elastic Kubernetes Service (Amazon EKS).
"I want to use persistent storage in Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionSet up persistent storage in Amazon EKS using either of the following options:Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driverAmazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driverTo use one of these options, complete the steps in either of the following sections:Option A: Deploy and test the Amazon EBS CSI driverOption B: Deploy and test the Amazon EFS CSI driverThe commands in this article require kubectl version 1.14 or greater. To see your version of kubectl, run the following command:kubectl version --client --shortNote: It's a best practice to make sure you install the latest version of the drivers. For more information, see in the GitHub repositories for the Amazon EBS CSI driver and Amazon EFS CSI driver.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.Before you complete the steps in either section, you must:1.    Install the AWS CLI.2.    Set AWS Identity and Access Management (IAM) permissions for creating and attaching a policy to the Amazon EKS worker node role CSI Driver Role.3.    Create your Amazon EKS cluster and join your worker nodes to the cluster.Note: Run the kubectl get nodes command to verify your worker nodes are attached to your cluster.4.    Run the following command to verify that your AWS IAM OpenID Connect (OIDC) provider exists for your cluster:aws eks describe-cluster --name your_cluster_name --query "cluster.identity.oidc.issuer" --output textNote: Replace your_cluster_name with your cluster name.5.    Run the following command to verify that your IAM OIDC provider is configured:`aws iam list-open-id-connect-providers | grep <ID of the oidc provider>`;Note: Replace ID of the oidc provider with your OIDC ID. If you receive a No OpenIDConnect provider found in your account error, you must create an IAM OIDC provider.6.    Install or update eksctl.7.    Run the following command to create an IAM OIDC provider:eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve Note: Replace my-cluster with your cluster name.Option A: Deploy and test the Amazon EBS CSI driverDeploy the Amazon EBS CSI driver:1.    Create an IAM trust policy file, similar to the one below:cat <<EOF > trust-policy.json{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:oidc-provider/oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<your OIDC ID>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:aud": "sts.amazonaws.com", "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa" } } } ]}EOFNote: Replace YOUR_AWS_ACCOUNT_ID with your account ID. Replace YOUR_AWS_REGION with your AWS Region. Replace your OIDC ID with the output from creating your IAM OIDC provider.2.    Create an IAM role named Amazon_EBS_CSI_Driver:aws iam create-role \ --role-name AmazonEKS_EBS_CSI_Driver \ --assume-role-policy-document file://"trust-policy.json"3.    
Attach the AWS managed IAM policy for the EBS CSI Driver to the IAM role you created:aws iam attach-role-policy \--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \--role-name AmazonEKS_EBS_CSI_Driver4.    Deploy the Amazon EBS CSI driver:Note: You can deploy the EBS CSI driver using Kustomize, Helm, or an Amazon EKS managed add-on. In the example below, the driver is deployed using the Amazon EKS add-on feature. For more information, see the aws-ebs-csi-driver installation guide.aws eks create-addon \ --cluster-name my-cluster \ --addon-name aws-ebs-csi-driver \ --service-account-role-arn arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverNote: Replace my-cluster with your cluster name and YOUR_AWS_ACCOUNT_ID with your account ID.Test the Amazon EBS CSI driver:You can test your Amazon EBS CSI driver with a sample application that uses dynamic provisioning for the pods. The Amazon EBS volume is provisioned on demand.1.    Clone the aws-ebs-csi-driver repository from AWS GitHub:git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git2.    Change your working directory to the folder that contains the Amazon EBS driver test files:cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/3.    Create the Kubernetes resources required for testing:kubectl apply -f manifests/Note: The kubectl command creates a StorageClass (from the Kubernetes website), PersistentVolumeClaim (PVC) (from the Kubernetes website), and pod. The pod references the PVC. An Amazon EBS volume is provisioned only when the pod is created.4.    Describe the ebs-sc storage class:kubectl describe storageclass ebs-sc5.    Watch the pods in the default namespace and wait for the app pod's status to change to Running. For example:kubectl get pods --watch6.    View the persistent volume created because of the pod that references the PVC:kubectl get pv7.    View information about the persistent volume:kubectl describe pv your_pv_nameNote: Replace your_pv_name with the name of the persistent volume returned from the preceding step 6. The value of the Source.VolumeHandle property in the output is the ID of the physical Amazon EBS volume created in your account.8.    Verify that the pod is writing data to the volume:kubectl exec -it app -- cat /data/out.txtNote: The command output displays the current date and time stored in the /data/out.txt file. The file includes the day, month, date, and time.Option B: Deploy and test the Amazon EFS CSI driverBefore deploying the CSI driver, create an IAM role that allows the CSI driver's service account to make calls to AWS APIs on your behalf.1.    Download the IAM policy document from GitHub:curl -o iam-policy-example.json https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json2.    Create an IAM policy:aws iam create-policy \ --policy-name AmazonEKS_EFS_CSI_Driver_Policy \ --policy-document file://iam-policy-example.json3.    Run the following command to determine your cluster's OIDC provider URL:aws eks describe-cluster --name your_cluster_name --query "cluster.identity.oidc.issuer" --output textNote: In step 3, replace your_cluster_name with your cluster name.4.    Create the following IAM trust policy, and then grant the AssumeRoleWithWebIdentity action to your Kubernetes service account. 
For example:cat <<EOF > trust-policy.json{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:oidc-provider/oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/<XXXXXXXXXX45D83924220DC4815XXXXX>:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa" } } } ]}EOFNote: In step 4, replace YOUR_AWS_ACCOUNT_ID with your account ID. Replace YOUR_AWS_REGION with your Region. Replace XXXXXXXXXX45D83924220DC4815XXXXX with the value returned in step 3.5.    Create an IAM role:aws iam create-role \ --role-name AmazonEKS_EFS_CSI_DriverRole \ --assume-role-policy-document file://"trust-policy.json"6.    Attach your new IAM policy to the role:aws iam attach-role-policy \ --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AmazonEKS_EFS_CSI_Driver_Policy \ --role-name AmazonEKS_EFS_CSI_DriverRole7.    Save the following contents to a file named efs-service-account.yaml.---apiVersion: v1kind: ServiceAccountmetadata: labels: app.kubernetes.io/name: aws-efs-csi-driver name: efs-csi-controller-sa namespace: kube-system annotations: eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKS_EFS_CSI_DriverRole8.    Create the Kubernetes service account on your cluster. The Kubernetes service account named efs-csi-controller-sa is annotated with the IAM role that you created.kubectl apply -f efs-service-account.yaml9.    Install the driver using images stored in the public Amazon ECR registry by downloading the manifest:$ kubectl kustomize "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.5" > public-ecr-driver.yamlNote: You can install the EFS CSI Driver using Helm and a Kustomize with AWS Private or Public Registry. For more information, see the AWS EFS CSI Driver documentation.10.    Edit the file 'public-ecr-driver.yaml' and annotate 'efs-csi-controller-sa' Kubernetes service account section with the ARN of the IAM role that you created:apiVersion: v1kind: ServiceAccountmetadata: labels: app.kubernetes.io/name: aws-efs-csi-driver annotations: eks.amazonaws.com/role-arn: arn:aws:iam::<accountid>:role/AmazonEKS_EFS_CSI_DriverRole name: efs-csi-controller-sa namespace: kube-systemDeploy the Amazon EFS CSI driverThe Amazon EFS CSI driver allows multiple pods to write to a volume at the same time with the ReadWriteMany mode.1.    To deploy the Amazon EFS CSI driver, apply the manifest:$ kubectl apply -f public-ecr-driver.yaml2.    If your cluster contains only AWS Fargate pods (no nodes), then deploy the driver with the following command (all Regions):kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/deploy/kubernetes/base/csidriver.yamlCreate an Amazon EFS File system1.    Get the VPC ID for your Amazon EKS cluster:aws eks describe-cluster --name your_cluster_name --query "cluster.resourcesVpcConfig.vpcId" --output textNote: In step 3, replace your_cluster_name with your cluster name.2.    Get the CIDR range for your VPC cluster:aws ec2 describe-vpcs --vpc-ids YOUR_VPC_ID --query "Vpcs[].CidrBlock" --output textNote: In step 4, replace the YOUR_VPC_ID with the VPC ID from the preceding step 3.3.    
Create a security group that allows inbound network file system (NFS) traffic for your Amazon EFS mount points:aws ec2 create-security-group --description efs-test-sg --group-name efs-sg --vpc-id YOUR_VPC_IDNote: Replace YOUR_VPC_ID with the output from the preceding step 3. Save the GroupId for later.4.    Add an NFS inbound rule so that resources in your VPC can communicate with your Amazon EFS file system:aws ec2 authorize-security-group-ingress --group-id sg-xxx --protocol tcp --port 2049 --cidr YOUR_VPC_CIDRNote: Replace YOUR_VPC_CIDR with the output from the preceding step 4. Replace sg-xxx with the security group ID from the preceding step 5.5.    Create an Amazon EFS file system for your Amazon EKS cluster:aws efs create-file-system --creation-token eks-efsNote: Save the FileSystemId for later use.6.    To create a mount target for Amazon EFS, run the following command:aws efs create-mount-target --file-system-id FileSystemId --subnet-id SubnetID --security-group sg-xxxImportant: Be sure to run the command for all the Availability Zones with the SubnetID in the Availability Zone where your worker nodes are running. Replace FileSystemId with the output of the preceding step 7 (where you created the Amazon EFS file system). Replace sg-xxx with the output of the preceding step 5 (where you created the security group). Replace SubnetID with the subnet used by your worker nodes. To create mount targets in multiple subnets, you must run the command in step 8 separately for each subnet ID. It's a best practice to create a mount target in each Availability Zone where your worker nodes are running.Note: You can create mount targets for all the Availability Zones where worker nodes are launched. Then, all the Amazon Elastic Compute Cloud (Amazon EC2) instances in the Availability Zone with the mount target can use the file system.The Amazon EFS file system and its mount targets are now running and ready to be used by pods in the cluster.Test the Amazon EFS CSI driverYou can test the Amazon EFS CSI driver by deploying two pods that write to the same file.1.    Clone the aws-efs-csi-driver repository from AWS GitHub:git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git2.    Change your working directory to the folder that contains the Amazon EFS CSI driver test files:cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/3.    Retrieve your Amazon EFS file system ID that was created earlier:aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output textNote: If the command in step 3 returns more than one result, you can use the Amazon EFS file system ID that you saved earlier.4.    In the specs/pv.yaml file, replace the spec.csi.volumeHandle value with your Amazon EFS FileSystemId from previous steps.5.    Create the Kubernetes resources required for testing:kubectl apply -f specs/Note: The kubectl command in the preceding step 5 creates an Amazon EFS storage class, PVC, persistent volume, and two pods (app1 and app2).6.    List the persistent volumes in the default namespace, and look for a persistent volume with the default/efs-claim claim:kubectl get pv -w7.    Describe the persistent volume:kubectl describe pv efs-pv8.    Test if the two pods are writing data to the file:`kubectl exec -it app1 -- tail /data/out1.txt kubectl exec -it app2 -- tail /data/out1.txt`bWait for about one minute. The output shows the current date written to /data/out1.txt by both pods.Related informationTroubleshooting Amazon EFSFollow"
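Before running the sample applications in either option, it can help to confirm that the CSI driver components are healthy. A minimal sketch; the cluster name is a placeholder and the label selector assumes the EBS CSI driver's default labels:
# Check the EBS CSI managed add-on status (if you installed it as an add-on).
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --query "addon.status"

# Confirm the controller and node pods are running.
kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver"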
https://repost.aws/knowledge-center/eks-persistent-storage
How do I install a standard Let's Encrypt SSL certificate in a Lightsail instance?
How do I install a standard SSL certificate for my website hosted in an Amazon Lightsail instance that doesn't use a Bitnami stack?
"How do I install a standard SSL certificate for my website hosted in an Amazon Lightsail instance that doesn't use a Bitnami stack?Short descriptionThe following resolution covers installing a standard Let's Encrypt SSL certificate for websites hosted in Lightsail instances that don't use a Bitnami stack. Examples of these instance blueprints include Amazon Linux 2, Ubuntu, and so on. If you have a different instance blueprint or want to install a standard certificate, see one of the following:Standard Let's Encrypt certificatesFor information installing a standard Let's Encrypt SSL certificate (not a wildcard) in a Lightsail instance with a Bitnami stack, such as WordPress, LAMP, Magento, and so on, see How do I install a standard Let's Encrypt SSL certificate in a Bitnami stack hosted on Amazon Lightsail?Wildcard Let's Encrypt certificates (for example, *.example.com)For information on installing a wildcard Let's Encrypt certificate in a Lightsail instance with a Bitnami stack, such as WordPress, Lamp, Magento, MEAN, and so on, see How do I install a wildcard Let's Encrypt SSL certificate in a Bitnami stack hosted on Amazon Lightsail?For information on installing a wildcard Let's Encrypt certificate in a Lightsail instance that doesn't use a Bitnami stack, such as Amazon Linux 2, Ubuntu, and so on, see How do I install a wildcard Let's Encrypt SSL certificate in Amazon Lightsail?ResolutionPrerequisites and limitationsThe following steps cover installing the certificate in the server. You must manually complete additional steps, such as configuring the certificate and setting up HTTPS redirection.Make sure that the domain is pointing to the Lightsail Instance either directly or through a load balancer or distribution. For the certificate verification to complete, make sure that the website URL doesn't return errors from the load balancer or distribution in the web browser.Note: This method requires the installation of the Certbot tool first. For installation instructions, see How do I install the Certbot package in my Lightsail instance for Let's Encrypt installation?1.    Stop the web service running in your instance. The following are example commands for different Linux distributions:Apache web service in Linux distributions such as Amazon Linux2, CentOS, and so onsudo service httpd stopApache web service in Linux distributions such as Ubuntu, Debian, and so onsudo service apache2 stopNGINX web servicesudo service nginx stop2.    Run the following command to install the SSL certificate. Make sure to replace example.com with your domain name.sudo certbot certonly --standalone -d example.com -d www.example.comAfter the SSL certificate generates successfully, you receive the message "Successfully received certificate". The certificate and key file locations are also provided. Save these file locations to a notepad for use in step 5.3.    Start the web service. The following are example commands for different Linux distributions:Apache web service in Linux distributions such as Amazon Linux 2, CentOS, and so onsudo service httpd startApache web service in Linux distributions such as Ubuntu, Debian, and so onsudo service apache2 startNGINX web servicesudo service nginx start4.    Set up automatic certificate renewal.If the certbot package is installed using snapd, then the renewal is configured automatically in systemd timers or cronjobs. However, because the web service must be stopped before running the Certbot command, you must automate stopping and starting web service. 
To set up this automation, run the following commands. The following example uses Apache2 as the web service. Replace the code and stop-start command according to your web service.sudo sh -c 'printf "#!/bin/sh\n service apache2 stop \n" > /etc/letsencrypt/renewal-hooks/pre/webservice.sh'sudo sh -c 'printf "#!/bin/sh\n service apache2 start \n" > /etc/letsencrypt/renewal-hooks/post/webservice.sh'sudo chmod 755 /etc/letsencrypt/renewal-hooks/*/webservice.shIf the Linux distribution is Amazon Linux 2 or FreeBSD, then the Certbot package isn't installed using snapd. In this case, you must configure the renewal manually by running the following command. The following example uses Apache2 as the web service. Replace the code and stop-start command according to your web service.echo "30 0,12 * * * root python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew --pre-hook 'service apache2 stop' --post-hook 'service apache2 start'" | sudo tee -a /etc/crontab > /dev/null5.     Only the certificate installation and renewal setup is complete. You still must configure your web server to use this certificate and setup HTTPS redirection. This configuration varies and depends on the web server setup that you have in your instance. Refer to the official documentation based on your web service for instructions on completing these steps.Follow"
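After configuring the renewal hooks, you can verify the renewal automation end to end without issuing a new certificate. The pre and post hooks are expected to run during the test, so the web service is briefly stopped and restarted:
sudo certbot renew --dry-run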
https://repost.aws/knowledge-center/lightsail-standard-ssl-certificate
How can I manage my snapshots and create backups for my Lightsail instances using AWS CLI commands?
I want to manage my snapshots and create backups for my Amazon Lightsail instances using AWS Command Line Interface (AWS CLI) commands. How can I do this?
"I want to manage my snapshots and create backups for my Amazon Lightsail instances using AWS Command Line Interface (AWS CLI) commands. How can I do this?Short descriptionFor a list of Amazon Lightsail AWS CLI commands, see the AWS CLI Command Reference and the Amazon Lightsail API Reference.The following are scenarios for using AWS CLI commands on your Lightsail instances to manage snapshots and backups:Managing manual backups:Create manual backups for your instance.List available snapshots.Managing automatic snapshots:Verify if automatic snapshots are enabled on your instances.Enable automatic snapshots.List automatic snapshots and creating a new instance from a backup with a higher bundle size or higher Lightsail plan.Important: Keep in mind the following when using AWS CLI commands:If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.JSON is the default AWS CLI output. You can use the default, or append --output json to the commands to receive output as shown in the following examples. For more information, see Controlling command output from the AWS CLI.For general information on solving AWS CLI errors, refer to Why am I receiving errors when running AWS CLI commands?The AWS CLI output displays timestamps in Unix Epoch time. Use one of the following methods to convert the timestamp into UTC:macOS:Remove the decimal point from the timestamp and any digits to the right of the decimal point, and then run the following command:# date -r 1602175741 -uThu Oct 8 16:49:01 UTC 2020Linux:Run the following command:# date -d @1602175741.603 -uThu Oct 8 16:49:01 UTC 2020Windows:Convert the timestamp using a converter, such as epochconverter.com.ResolutionManaging manual backupsCreate a manual backup for a Lightsail instanceRun the create-instance-snapshot command to create a snapshot of the Lightsail instance.The following example creates a snapshot of the instance SnapshotTestLightsailInstance1 in the eu-west-1 Region. Replace the --instance-snapshot-name, --instance-name, and --region with the appropriate values for your request.# aws lightsail create-instance-snapshot --instance-name TestLightsailInstance1 --instance-snapshot-name SnapshotTestLightsailInstance1{ "operations": [ { "id": "d3196be7-3dc6-4508-b335-16ce45f11c90", "resourceName": "SnapshotTestLightsailInstance1", "resourceType": "InstanceSnapshot", "createdAt": 1602180831.638, "location": { "availabilityZone": "all", "regionName": "eu-west-1" }, "isTerminal": false, "operationDetails": "TestLightsailInstance1", "operationType": "CreateInstanceSnapshot", "status": "Started", "statusChangedAt": 1602180831.638 }, { "id": "df237a33-bca9-4fc3-8f46-ea5d12606f5c", "resourceName": "TestLightsailInstance1", "resourceType": "Instance", "createdAt": 1602180831.638, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "isTerminal": false, "operationDetails": "SnapshotTestLightsailInstance1", "operationType": "CreateInstanceSnapshot", "status": "Started", "statusChangedAt": 1602180831.638 } ]}List available snapshotsRun the get-instance-snapshots command to list all snapshots for your Lightsail instances. The following example shows details of snapshots available in eu-west-1. 
Replace the --region with the appropriate values for your request.# aws lightsail get-instance-snapshots --region eu-west-1 --query 'instanceSnapshots[].{name:name,createdAt:createdAt,resourceType:resourceType,state:state,fromInstanceName:fromInstanceName,sizeInGb:sizeInGb}' --output table-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| GetInstanceSnapshots |+----------------+-----------------------------------------+------------------------------------------------------------------------------------------------------------+-------------------+-----------+-------------+| createdAt | fromInstanceName | name | resourceType | sizeInGb | state |+----------------+-----------------------------------------+------------------------------------------------------------------------------------------------------------+-------------------+-----------+-------------+| 1602180831.638| TestLightsailInstance1 | SnapshotTestLightsailInstance1 | InstanceSnapshot | 40 | available |+----------------+-----------------------------------------+------------------------------------------------------------------------------------------------------------+-------------------+-----------+-------------+Managing automatic snapshotsVerify if automatic snapshots are enabled on your instancesRun the following command to verify if your instance has automatic snapshots enabled and to show the defined schedule. Replace TestLightsailInstance1 with your instance name and --region with the appropriate Region.# aws lightsail get-instances --region eu-west-1 --query 'instances[].{addOns:addOns,name:name,publicIpAddress:publicIpAddress,AutoMatciSnapshotStatus:(addOns[].status),Schedule:(addOns[].snapshotTimeOfDay)}' --output text| grep -w "TestLightsailInstance1"['Enabled'] ['20:00'] [{'name': 'AutoSnapshot', 'status': 'Enabled', 'snapshotTimeOfDay': '20:00'}] TestLightsailInstance1 3.250.xx.xxEnable automatic snapshotsRun the enable-add-on command to enable automatic snapshots for your Lightsail instances. The following example creates daily automatic snapshots set to an hourly increment in UTC (08PM UTC). Replace the --resource-name, snapshotTimeOfDay, and --region with the appropriate values for your request.# aws lightsail enable-add-on --region eu-west-1 --resource-name TestLightsailInstance1 --add-on-request addOnType=AutoSnapshot,autoSnapshotAddOnRequest={snapshotTimeOfDay=20:00}{ "operations": [ { "id": "823bb162-9848-4897-b845-8f41c375801a", "resourceName": "TestLightsailInstance1", "resourceType": "Instance", "createdAt": 1602181856.652, "location": { "availabilityZone": "eu-west-1", "regionName": "eu-west-1" }, "isTerminal": false, "operationDetails": "EnableAddOn - AutoSnapshot", "operationType": "EnableAddOn", "status": "Started" } ]}List automatic snapshots and creating a new instance from the backup with higher bundle size or higher Lightsail plan1.FSPRun the command get-auto-snapshots to list all available automatic snapshots for your Lightsail instances or disk. The following example shows details of snapshots available for the instance TestLightsailInstance1. 
Replace the --resource-name and --region with the appropriate values for your request.# aws lightsail get-auto-snapshots --region eu-west-1 --resource-name TestLightsailInstance1{ "resourceName": "TestLightsailInstance1", "resourceType": "Instance", "autoSnapshots": [ { "date": "2020-10-08", "createdAt": 1602188663.0, "status": "Success", "fromAttachedDisks": [] } ]}2.FSPRun the create-instances-from-snapshot command to create one or more Lightsail instances from a manual or automatic backup. The following example creates an instance in the eu-west-1 Region using a specific backup and a higher sized bundle. Replace --instance-snapshot-name, --instance-names, bundle-id, and --region with the appropriate values for your request.# aws lightsail create-instances-from-snapshot --region eu-west-1 --instance-snapshot-name SnapshotTestLightsailInstance1 --instance-names RestoredTestLightsailInstance1-New --availability-zone eu-west-1a --bundle-id large_2_0{ "operations": [ { "id": "09f7d1bb-90f4-48dc-b304-543499e11208", "resourceName": "RestoredTestLightsailInstance1-New", "resourceType": "Instance", "createdAt": 1602182374.625, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "isTerminal": false, "operationType": "CreateInstancesFromSnapshot", "status": "Started", "statusChangedAt": 1602182374.625 } ]}The following example creates a new instance based on the specified backup and a higher size bundle:# aws lightsail get-instances --region eu-west-1 --query 'instances[].{name:name,createdAt:createdAt,blueprintId:blueprintId,blueprintName:blueprintName,publicIpAddress:publicIpAddress}' --output table |grep -i RestoredTestLightsailInstance1-New| wordpress | WordPress | 1602182374.625 | RestoredTestLightsailInstance1-New | 34.247.xx.xx |Related informationHow can I manage my Lightsail instance using AWS CLI commands? How can I manage static IP addresses on my Lightsail instances using AWS CLI commands? Enabling or disabling automatic snapshots for instances or disks in Amazon LightsailLightsail docsFollow"
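For readers who prefer the SDK over the AWS CLI, the same operations are available through boto3. The following is a minimal sketch that reuses the placeholder instance, snapshot, and Region names from the examples above.

# boto3 sketch of the snapshot workflow shown above with the AWS CLI.
# Instance, snapshot, and Region names are placeholders from the article's examples.
import boto3

lightsail = boto3.client("lightsail", region_name="eu-west-1")

# Create a manual backup (equivalent to create-instance-snapshot).
lightsail.create_instance_snapshot(
    instanceName="TestLightsailInstance1",
    instanceSnapshotName="SnapshotTestLightsailInstance1",
)

# List available manual snapshots (equivalent to get-instance-snapshots).
for snap in lightsail.get_instance_snapshots()["instanceSnapshots"]:
    print(snap["name"], snap["state"], snap["sizeInGb"])

# Enable daily automatic snapshots at 20:00 UTC (equivalent to enable-add-on).
lightsail.enable_add_on(
    resourceName="TestLightsailInstance1",
    addOnRequest={
        "addOnType": "AutoSnapshot",
        "autoSnapshotAddOnRequest": {"snapshotTimeOfDay": "20:00"},
    },
)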
https://repost.aws/knowledge-center/lightsail-aws-cli-snapshots-backups
How do I minimize downtime during required Amazon RDS maintenance?
I received a maintenance notification that says one of my Amazon Relational Database Service (Amazon RDS) DB instances requires maintenance. What are some strategies that I can use to minimize downtime?
"I received a maintenance notification that says one of my Amazon Relational Database Service (Amazon RDS) DB instances requires maintenance. What are some strategies that I can use to minimize downtime?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Occasionally, AWS performs maintenance to the hardware, operating system (OS), or database engine version for a DB instance or cluster. For more information, see Maintaining a DB instance and Upgrading a DB instance engine version.For information about pending maintenance events for your Amazon RDS DB instances, check the Events pane of the Amazon RDS console. Then, check for engine-specific maintenance events. You can run describe-pending-maintenance-actions using the AWS Command Line Interface (AWS CLI) or the Amazon RDS API for DescribeDBInstances. You can also check Amazon RDS recommendations for Pending maintenance available.Hardware maintenanceBefore maintenance is scheduled, you receive an email notification about scheduled hardware maintenance windows that includes the time of the maintenance and the Availability Zones that are affected. During hardware maintenance, Single-AZ deployments are unavailable for a few minutes. Multi-AZ deployments are unavailable for the time it takes the instance to failover (usually about 60 seconds) if the Availability Zone is affected by the maintenance. If only the secondary Availability Zone is affected, then there is no failover or downtime.OS maintenanceAfter OS maintenance is scheduled for the next maintenance window, maintenance can be postponed by adjusting your preferred maintenance window. Maintenance can also be deferred by choosing Defer Upgrade from the Actions dropdown menu. To minimize downtime, modify the Amazon RDS DB instance to a Multi-AZ deployment. For Multi-AZ deployments, OS maintenance is applied to the secondary instance first, then the instance fails over, and then the primary instance is updated. The downtime is during failover. For more information, see Maintenance for Multi-AZ deployments.DB engine maintenanceUpgrades to the database engine level require downtime. Even if your RDS DB instance uses a Multi-AZ deployment, both the primary and standby DB instances are upgraded at the same time. This causes downtime until the upgrade is complete, and the duration of the downtime varies based on the size of your DB instance. For more information, see the section for your DB engine in Upgrading a DB instance engine version.Note: If you upgrade a SQL Server DB instance in a Multi-AZ deployment, then both the primary and standby instances are upgraded. Amazon RDS performs rolling upgrades, so you have an outage only for the duration of a failover. For more information, see Multi-AZ and in-memory optimization considerations.Related informationBest practices for Amazon RDSUpgrading a read replica to reduce downtime when upgrading a MySQL databaseWhat happens to Amazon RDS and Amazon Redshift queries that are running during a maintenance window?How do I configure notifications for Amazon RDS or Amazon Redshift maintenance windows?Follow"
https://repost.aws/knowledge-center/rds-required-maintenance
How do I troubleshoot binary logging errors that I received when using AWS DMS with Aurora MySQL as the source?
"I have an Amazon Aurora DB instance that is running MySQL and binary logging is enabled. I'm using the Aurora DB instance as the source for an AWS Database Migration Service (AWS DMS) task, and I received an error. How do I troubleshoot and resolve this error?"
"I have an Amazon Aurora DB instance that is running MySQL and binary logging is enabled. I'm using the Aurora DB instance as the source for an AWS Database Migration Service (AWS DMS) task, and I received an error. How do I troubleshoot and resolve this error?Short descriptionYou must enable binary logging on the source Aurora MySQL-Compatible Edition DB writer instance to use change data capture (CDC) with an AWS DMS task that is either FULL LOAD AND CDC or CDC only. You must use the writer instance because read replicas aren't supported as a source for CDC operations. For more information, see the Limitations on using a MySQL database as a source for AWS DMS.If binary logging isn't enabled or you're connected to the reader instance, then you see a log entry similar to the following:Messages[SOURCE_CAPTURE ]I: System var 'log_bin' = 'OFF'[SOURCE_CAPTURE ]E: Error Code [10001] : Binary Logging must be enabled for MySQL server [1020418] (mysql_endpoint_capture.c:366)ResolutionIf you're connected to the reader instance, first identify the writer instance, and then connect to the writer instance with AWS DMS. It's a best practice to connect to the cluster endpoint because the cluster endpoint points to the current writer of the cluster at all times.Then, connect to the source Aurora cluster writer node by using the cluster endpoint to confirm that binary logging is enabled:mysql> show global variables like "log_bin";+---------------+-------+| Variable_name | Value |+---------------+-------+| log_bin | OFF |+---------------+-------+If the log_bin parameter is set to OFF, then check the Aurora cluster's cluster parameter group to confirm that the binlog_format parameter is set to ROW. If binlog_format isn't set to ROW, modify the parameter to enable binary logging for Aurora for MySQL.Note: This is a static parameter, so you must reboot your Aurora instance for this change to take effect.After you set the binlog_format parameter to ROW, confirm that binary logging is enabled by connecting to your Aurora instance:mysql> show global variables like "log_bin";+---------------+-------+| Variable_name | Value |+---------------+-------+| log_bin | ON |+---------------+-------+After you enable binary logging and confirm that you're using the cluster writer endpoint with AWS DMS, restart your task.Related informationUsing a MySQL-compatible database as a source for AWS DMSFollow"
https://repost.aws/knowledge-center/dms-binary-logging-aurora-mysql
Why do I still see my Amazon S3 bucket even though I deleted it?
"I deleted my Amazon Simple Storage Service (Amazon S3) bucket. However, I can still see the bucket in the Amazon S3 console or in responses to API requests. Why is this happening?"
"I deleted my Amazon Simple Storage Service (Amazon S3) bucket. However, I can still see the bucket in the Amazon S3 console or in responses to API requests. Why is this happening?ResolutionBecause of the distributed nature of Amazon S3 and itseventual consistency model, it can take some time for changes to propagate across all AWS Regions. This is why you might temporarily see your bucket in the console, or in a response to aListBuckets API request, even after you delete the bucket.Until the bucket is completely removed by Amazon S3, you might still see the bucket, but you can't perform any other actions on the bucket.Related informationBuckets overviewDeleting a BucketFollow"
https://repost.aws/knowledge-center/s3-listing-deleted-bucket
How do I use IAM policy variables with federated users?
"When I use the GetFederationToken API to generate temporary credentials, the ${aws:userName} policy variable doesn't work."
"When I use the GetFederationToken API to generate temporary credentials, the ${aws:userName} policy variable doesn't work.ResolutionWhen using the GetFederationToken API, use the ${aws:userID} policy variable instead of the ${aws:userName} policy variable. This is because the variable ${aws:userName} isn't present if the principal is a federated user. For more information, see where you can use policy variables.The following JSON IAM policy provides an example where the ${aws:userName} policy variable has been replaced with the ${aws:userID} policy variable:{ "Version":"2012-10-17", "Statement":[ { "Sid":"AllowListingOfUserFolder", "Action":[ "s3:ListBucket" ], "Effect":"Allow", "Resource":[ "arn:aws:s3:::TESTBUCKET" ], "Condition":{ "StringLike":{ "s3:prefix":[ "TESTBUCKET/${aws:userid}/*" ] } } }, { "Sid":"AllowAllS3ActionsInUserFolder", "Action":[ "s3:PutObject", "s3:GetObject", "s3:GetObjectVersion", "s3:DeleteObject" ], "Effect":"Allow", "Resource":[ "arn:aws:s3:::TESTBUCKET/${aws:userid}/*" ] } ]}The value for the aws:userid variable should be "ACCOUNTNUMBER:caller-specified-name".When calling the GetFederationToken API, the Name parameter value must follow the guidelines established in GetFederationToken. For example, if you specify the friendly name Bob, the correct format is "123456789102:Bob". This names your session and allows access to the Amazon Simple Storage Service (Amazon S3) bucket with a matching prefix.Note: This example assumes that the caller-specified name (friendly name) portion of the aws:userid variable is unique. A unique friendly name prevents the scenario where another user with the same friendly name is not granted access to resources specified in the JSON policy. For more information, see Unique identifiers.Related informationPermissions for GetFederationTokenToken vending machine for identity registration - sample java web applicationIAM policy elements: variables and tagsIAM identifiersFollow"
https://repost.aws/knowledge-center/iam-policy-variables-federated
How can I filter AWS DMS tasks by date?
I want to filter my AWS Database Migration Service (AWS DMS) tasks by date. How can I do that?
"I want to filter my AWS Database Migration Service (AWS DMS) tasks by date. How can I do that?ResolutionTo filter AWS DMS tasks by date, use table mappings. When entering your table mappings, the filter-operator parameter can have one of the following values:lte – less than or equal to one valueste – less than or equal to one value (lte alias)gte – greater than or equal to one valueeq – equal to one valuenoteq – not equal to one valuebetween – equal to or between two valuesnotbetween – not equal to or between two valuesThe following JSON example filter uses gte and date_of_record >= 2019-01-08.{ "rules": [ { "rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": { "schema-name": "testonly", "table-name": "myTable_test" }, "rule-action": "include", "filters": [ { "filter-type": "source", "column-name": "date_of_record", "filter-conditions": [ { "filter-operator": "gte", "value": "2019-01-08" } ] } ] } ]}Note: When importing data, AWS DMS uses the date format YYYY-MM-DD and the time format YYYY-MM-DD HH:MM:SS for filtering.Related informationUsing table mapping to specify task settingsUsing source filtersFiltering by time and dateFollow"
https://repost.aws/knowledge-center/dms-filter-task-by-date
How do I configure CloudFront to forward the host header to the origin?
"The origin that's configured on my Amazon CloudFront distribution uses virtual hosting. Because of this, my distribution must forward the host header to my origin server. I want to configure my distribution to forward the host header."
"The origin that's configured on my Amazon CloudFront distribution uses virtual hosting. Because of this, my distribution must forward the host header to my origin server. I want to configure my distribution to forward the host header.Short descriptionTo configure your distribution to forward the host header to the origin, take one of the following actions:Create a cache policy and an origin request policy.Edit the settings of an existing behavior in the distribution.Important: For Amazon Simple Storage Service (Amazon S3) origins, caching that's based on the host header isn't supported. For more information, see Selecting the headers to base caching on.ResolutionCreate a cache policy and an origin request policyFollow the steps to create a cache policy using the CloudFront console.Under Cache key settings, for Headers, choose Include the following headers. From the Add header dropdown list, choose Host.Complete all other settings of the cache policy based on the requirements of the behavior that you're attaching the policy to, and then choose Create.After you create the cache policy, follow the steps to attach the policies to the relevant behavior of your CloudFront distribution.Edit the settings of an existing behaviorOpen the CloudFront console, and then choose your distribution.Choose the Behaviors tab, and then choose the path to forward the host header to.Choose Edit.Under Cache key and origin requests, confirm that Legacy cache settings is selected. If it's not selected, then follow the steps in the preceding section to create a cache policy. If Legacy cache settings is selected, then complete the following:For Headers, choose Include the following headers.From the Add header dropdown list, choose Host.Choose Save Changes.Related informationCaching content based on request headersWorking with policiesFollow"
https://repost.aws/knowledge-center/configure-cloudfront-to-forward-headers
How can I be notified when changes are made to Route 53 hosted zone records?
How can I receive an email response with a custom notification when resource record sets are created or deleted from Amazon Route 53?
"How can I receive an email response with a custom notification when resource record sets are created or deleted from Amazon Route 53?Short descriptionYou can use a custom event pattern with an Amazon EventBridge or Amazon CloudWatch Events rule that triggers when ChangeResourceRecordSets API activity is logged in AWS CloudTrail. Then, route the response to an Amazon Simple Notification Service (Amazon SNS) topic.ResolutionIf you haven't already created an Amazon SNS topic with an email subscription, then follow the instructions for Getting started with Amazon SNS. This topic and subscription will be used later. This article breaks up the task into three parts:Create an EventBridge rule to match Route 53 API calls captured by CloudTrailAssociate the EventBridge rule with an SNS target for email notificationConfigure Input Transformer on the target so that the notification can be customized into a human-readable messageTo be notified when changes are made to Route 53 hosted zone records, follow all the steps for each task.Create an EventBridge RuleRoute 53 is an AWS global service that is available only in US East (N. Virginia). The EventBridge rule must be created in US East (N. Virginia).1.    Open the EventBridge console.2.    In the navigation pane, choose Rules, and then choose Create rule.3.    In Name and Description fields, enter a name and description for the rule. To receive events from AWS services, select Enable the rule on the selected eventbus.4.    Choose Rule with an event pattern. Then, choose Next.5.    Choose AWS Events or EventBridge partner events.6.    Under Event Pattern, choose the following:For Event Source, choose AWS servicesFor AWS service, choose Route 53For Event Type, choose AWS API Call via CloudTrail7.    Choose Specific Operation(s) and enter ChangeResourceRecordSets into the field. This will limit events to only match for create, delete, or updates to resource record sets.The following event pattern appears:{ "source": ["aws.route53"], "detail-type": ["AWS API Call via CloudTrail"], "detail": { "eventSource": ["route53.amazonaws.com"], "eventName": ["ChangeResourceRecordSets"] }}8.    Choose Next to proceed to the next step.Associate SNS Target with EventBridge Rule1.     In the Target types section, choose AWS Service.2.    In the Select a target dropdown list, choose SNS topic.3.    In the Topic dropdown list, choose the SNS topic you created previously.Configure Input Transformer to Customize SNS NotificationBy default, EventBridge forwards the entire CloudTrail event to the target. The SNS topic then delivers a notification as unformatted JSON. This might be difficult to read and quickly understand the contents.By using the Input Transformer, specific fields in the inbound event can be selected and then integrated into a more human-readable message. The Input Path identifies the desired fields.For this example, the eventTime, hostedZone, username, and eventID are included in the notification. The fields can be changed to align with your use case. The Input Template contains the message body of the notification and placeholders that will be dynamically updated with the desired fields.1.    Expand the Additional settings dropdown list. In the Configure target input dropdown list, choose Input transformer.2.    Choose Configure input transformer.3.    
In the Input path field, paste the following text:{ "eventTime": "$.detail.eventTime", "hostedZone": "$.detail.requestParameters.hostedZoneId", "userName": "$.detail.userIdentity.sessionContext.sessionIssuer.userName", "eventID": "$.detail.eventID"}4.    In the Template field, paste the following text:"At <eventTime>, one or more Route 53 records within Hosted Zone <hostedZone> were modified by user <userName>. To view the event directly in your Event History and review these changes, use the following link. Note that the event may take up to 15 minutes to be available in your Event History: https://console.aws.amazon.com/cloudtrail/home?region=us-east-1#/events?EventId=<eventID>"6.    Choose Confirm.7.    (Optional) Add tags to the EventBridge rule. Then, choose Next.8.    Review the rule configuration. Then, choose Create rule.After the rule is created, any changes to Route 53 resource sets will result in a notification similar to the following:"At 2022-08-16T21:02:46Z, one or more Route 53 records within Hosted Zone ZB3A123456789 were modified by user Admin. To view the event directly in your Event History and review these changes, use the following link. Note that the event may take up to 15 minutes to be available in your Event History: https://console.aws.amazon.com/cloudtrail/home?region=us-east-1#/events?EventId=04d08662-537e-4424-97c2-8bc796943b75"Related informationHow can I create a custom event pattern for an EventBridge rule?How do I set up human-readable EventBridge notifications for API calls using input transformer?Tutorial: Use input transformer to customize what EventBridge passes to the event targetTutorial: Log AWS API calls using EventBridgeFollow"
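The same rule, target, and input transformer can be created with the SDK instead of the console. The following is a minimal boto3 sketch; the SNS topic ARN is a placeholder, and the rule must be created in us-east-1. Note that the SNS topic's access policy must allow events.amazonaws.com to publish to the topic.

# Sketch: EventBridge rule plus SNS target with an input transformer.
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

events.put_rule(
    Name="route53-record-changes",
    EventPattern=json.dumps({
        "source": ["aws.route53"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["route53.amazonaws.com"],
            "eventName": ["ChangeResourceRecordSets"],
        },
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="route53-record-changes",
    Targets=[{
        "Id": "sns-email",
        "Arn": "arn:aws:sns:us-east-1:123456789012:route53-change-notifications",
        "InputTransformer": {
            "InputPathsMap": {
                "eventTime": "$.detail.eventTime",
                "hostedZone": "$.detail.requestParameters.hostedZoneId",
                "userName": "$.detail.userIdentity.sessionContext.sessionIssuer.userName",
                "eventID": "$.detail.eventID",
            },
            "InputTemplate": '"At <eventTime>, records in Hosted Zone <hostedZone> were modified by <userName>. CloudTrail event ID: <eventID>"',
        },
    }],
)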
https://repost.aws/knowledge-center/route53-change-notifications
How do I resolve WorkSpaces system errors by restoring or rebuilding my WorkSpace?
"Amazon WorkSpaces is failing, returning system errors, or failing while Windows is updating. How do I resolve these issues?"
"Amazon WorkSpaces is failing, returning system errors, or failing while Windows is updating. How do I resolve these issues?ResolutionYou can resolve critical errors or failures with your WorkSpace by using either the restore or rebuild feature. Restoring a WorkSpace is easier than rebuilding one, and it deletes less data in the process. Keep the following points in mind when deciding whether to restore or rebuild a WorkSpace:Restoring your WorkSpaceTo restore a WorkSpace, it must have a state of AVAILABLE, ERROR, UNHEALTHY, or STOPPED. A WorkSpace in the REBOOTING state cannot be restored.To restore a WorkSpace, you must have snapshots of both the root volume and user volume.When you restore a WorkSpace, both the root volume and user volume are recreated from the most recent healthy snapshot.To restore your WorkSpace, then follow the instructions at Restore a WorkSpace.Rebuilding your WorkSpaceTo restore a WorkSpace, it must have a state of AVAILABLE, ERROR, UNHEALTHY, STOPPED, or REBOOTING.To rebuild a WorkSpace, you must have a snapshot of the user volume.When you rebuild a WorkSpace, it is restored to the most recent image of the bundle that the WorkSpace was launched from. Any modifications made after the WorkSpace was launched from the image (custom system settings, installed applications) are lost.Because automatic snapshots of the D: drive are taken every 12 hours, you might be able to preserve the contents of the D: drive by rebooting the WorkSpace, waiting 12 hours, and then rebuilding the WorkSpace.Note: The file structure of an application installed to the D: drive will be restored, but if it was installed after the Workspace was launched, the application will require reinstallation.To rebuild your WorkSpace, then follow the instructions at Rebuild a WorkSpace.Related informationAdminister your WorkSpacesFollow"
https://repost.aws/knowledge-center/rebuild-workspace
How does EKS Anywhere cluster bootstrapping work?
I want to understand the bootstrapping process for Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere.
"I want to understand the bootstrapping process for Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere.ResolutionThe bootstrap clusterWhen you create an initial standalone cluster or management cluster, Amazon EKS Anywhere also creates a bootstrap cluster. This is a temporary, single node Kubernetes in Docker (KinD) cluster that's created on a separate Administrative machine to facilitate your main cluster's creation.EKS Anywhere creates a bootstrap cluster on the Administrative machine that hosts CAPI and CAPX operators. To create the bootstrap cluster, EKS Anywhere must complete the following steps:Pull the KinD node imagePrepare the nodeWrite the configurationStart the control planeInstall the CNIInstall the StorageClass on the KinD-based single node clusterCluster creationWhen the bootstrap cluster is running and properly configured on the Administrative machine, the creation of the target cluster begins. EKS Anywhere uses kubectl to apply a target cluster configuration in the following process:1.    After etcd, the control plane, and the worker nodes are ready, the target cluster receives its networking configuration.2.    The target cluster receives its default storage class installation.3.    CAPI providers are configured on the target cluster. This allows the target cluster to control and run the components that it needs to manage itself.4.    After CAPI runs on the target cluster, CAPI objects are moved from the bootstrap cluster to the target cluster's CAPI service. This happens internally with the clusterctl command.5.    The target cluster receives Kubernetes CRDs and other add-ons that are specific to EKS Anywhere.6.    The cluster configuration is saved.The bootstrapping process creates a YAML file that's located in CLUSTER_NAME/generated/CLUSTER_NAME-eks-a-cluster.yaml.When boostrapping succeeds, this YAML file moves to CLUSTER_NAME/CLUSTER_NAME-eks-a-cluster.yamlSimilarly, Kubeconfig moves from CLUSTER_NAME/generated/CLUSTER_NAME.kind.kubeconfig to CLUSTER_NAME/CLUSTER_NAME-eks-a-cluster.kubeconfig.When etcd, the control plane, and the worker nodes are ready, the workload cluster receives its networking configuration. When the cluster is active and the CAPI service is running on the new cluster, the bootstrap cluster is no longer needed. Then, the service deletes the bootstrap cluster.Package workflowsDuring the bootstrapping process, EKS Anywhere uses the following logic in its workflows for the target cluster creation, cluster upgrade, and cluster deletion.Cluster creationFor the full package workflow during cluster creation, see create.go on GitHub. 
During this workflow, EKS Anywhere uses the following logic:Setups and validationsNote: If this step fails, then either preflights failed or the registry isn't set up properly.Create a new bootstrap clusterCreate a new KinD clusterProvider specific pre-capi-install-setup on the bootstrap clusterInstall cluster-api providers on bootstrap clusterProvider specific post-setupCreate a new workload clusterWait for external etcd to be readyWait for control plane to become availableWait for workload kubeconfig generationInstall networking the on workload clusterInstall machine health checks on the bootstrap clusterWait for control plane and worker machines to be readyInstall resources on managementInstall the eks-a components taskInstall the Git ops managerMove cluster managementWrite ClusterConfigDelete the bootstrap clusterInstall curated packagesCluster upgradeFor the full package workflow during a cluster upgrade, see upgrade.go on GitHub. During this workflow, EKS Anywhere uses the following logic:Setups and validationsUpdate secretsVerify etcd CAPI components existUpgrade core componentsVerify the needed upgradePause eks-a reconciliationCreate the bootstrap clusterInstall CAPIMove management to the bootstrap clusterMove management to the workload clusterUpgrade the workload clusterDelete the bootstrap clusterUpdate the workload cluster and Git resourcesResume eks-a reconciliationWrite ClusterConfigCluster deletionFor the full package workflow during a cluster's deletion, see delete.go on GitHub. During this workflow, EKS Anywhere uses the following logic:Setups and validatationsCreate a management clusterInstall CAPIMove cluster managementDelete the workload clusterClean up the Git repositoryDelete package resourcesDelete the management clusterErrors during cluster creationIf you encounter issues or errors, then look for logs in the Administrative machine and the capc-controller-manager. View the capc-controller-manager logs with kubectl in the capc-system namespace. For further troubleshooting, check the status of the CAPI objects for your cluster, located in the eksa-system namespace.You might also find related information on errors in the logs of other CAPI managers, such as capi-kubeadm-bootstrap-controller, capi-kubeadm-control-plane-controller and capi-controller-manager. These managers work together, and you can locate each in their own namespace with the kubectl get pods -A command. For more information, see the troubleshooting guide for EKS Anywhere.For a script to fix linting errors during the bootstrapping process, see bootstrapper.go on GitHub.Related informationKinD quick start (on the KinD website)Cluster creation workflowFollow"
https://repost.aws/knowledge-center/eks-anywhere-bootstrapping-process
Why am I being billed for Elastic IP addresses when all my Amazon EC2 instances are terminated?
"I terminated all my Amazon Elastic Compute Cloud (Amazon EC2) instances, but I'm still billed for Elastic IP addresses. The Amazon EC2 On-Demand pricing page says that Elastic IP addresses are free. Why am I being billed?"
"I terminated all my Amazon Elastic Compute Cloud (Amazon EC2) instances, but I'm still billed for Elastic IP addresses. The Amazon EC2 On-Demand pricing page says that Elastic IP addresses are free. Why am I being billed?ResolutionAn Elastic IP address doesn’t incur charges as long as all the following conditions are true:The Elastic IP address is associated with an EC2 instance.The instance associated with the Elastic IP address is running.The instance has only one Elastic IP address attached to it.The Elastic IP address is associated with an attached network interface. For more information, see Network interface basics.Note: If the address is from a BYOIP address pool, then you're never charged for that address.You're charged by the hour for each Elastic IP address that doesn't meet these conditions. For pricing information, see Elastic IP addresses on the Amazon EC2 pricing page.To see the Elastic IP addresses that are provisioned to your account, open the Amazon EC2 console, and then choose Elastic IPs in the navigation pane.If you don’t need an Elastic IP address, you can stop the charges by releasing the IP address.Note: When you release the IP address, it might be provisioned automatically by another AWS account. You might be able to recover that same Elastic IP address again later, but only if no one else is using it.If you receive an error when you try to release an Elastic IP address from your Amazon EC2 instance, see How do I resolve the error "The address with allocation id cannot be released because it is locked to your account" when trying to release an Elastic IP address from my Amazon EC2 instance?Related informationElastic IP addressesFollow"
https://repost.aws/knowledge-center/elastic-ip-charges
Why does the "blended" annotation appear on some line items in my AWS bill?
I want to understand why some of the line items on my AWS bill have the "blended" annotation on them.
"I want to understand why some of the line items on my AWS bill have the "blended" annotation on them.ResolutionFor billing purposes, AWS treats all accounts in an organization as if they're one account. The pricing tiers and capacity reservations of the accounts in the organization are combined into one consolidated bill. Combining the accounts into consolidated billing can lower the effective price per hour for some services.This effective price per hour is displayed as the "blended" rate in your Cost and Usage Reports or AWS Cost Explorer. The blended rate that's shown in the console is for informational purposes only. For more information, see Blended rates and costs.An organization's management account can restrict the pricing benefit of Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances to the accounts that they’re purchased on. To restrict the pricing benefit of Amazon EC2 Reserved Instances, see Turning off shared Reserved Instances and Savings Plans discounts.Related informationWhat is AWS Billing?Understanding consolidated billsFollow"
https://repost.aws/knowledge-center/blended-rates-intro
How do I prevent "rate exceeded" ThrottlingException errors when I run monitoring scripts in Amazon EMR?
"To monitor my Amazon EMR clusters, I run scripts that make API calls. The scripts return errors similar to the following:"Rate exceeded (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ThrottlingException; Request ID: e2b6191c-gkl5-269r-u735-cryyz251a837)"How do I prevent "rate exceeded" errors?"
"To monitor my Amazon EMR clusters, I run scripts that make API calls. The scripts return errors similar to the following:"Rate exceeded (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ThrottlingException; Request ID: e2b6191c-gkl5-269r-u735-cryyz251a837)"How do I prevent "rate exceeded" errors?Short descriptionAmazon EMR throttles API calls to maintain system stability. Throttling exceptions usually occur when you run monitoring scripts at regular intervals to check clusters for a parameter. Here's an example: calling DescribeCluster every 60 seconds to check if the cluster has reached the WAITING state. The more clusters that you have and the more monitoring scripts that you run, the more likely you are to get throttling errors.ResolutionTo prevent throttling errors:Reduce the frequency of the API calls.Stagger the intervals of the API calls so that they don't all run at the same time.Implement exponential backoff when making API calls.Consider moving to an event-based architecture.To understand the source of throttling errors, use AWS CloudTrail to track Event History. CloudTrail can help identify event details such as the following:Frequent API callsRate exceeded errors and their related API callsWhether API calls are triggered by users or automationRelated informationCommon errorsManaging and monitoring API throttling in your workloadsHow CloudTrail worksFollow"
https://repost.aws/knowledge-center/emr-cluster-status-throttling-error
How do I run an AWS Batch job using Java?
I want to run AWS Batch jobs using Java. How do I set that up?
"I want to run AWS Batch jobs using Java. How do I set that up?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Prepare your environment1.    Install Java by following the download instructions on the Oracle website.2.    Install Eclipse by following the download instructions on the Eclipse Foundation website.3.    Create an AWS Batch compute environment, job definition, and job queue.4.    Verify that the job queue is assigned to a valid compute environment by running the following describe-compute-environments AWS CLI command:Important: Replace your-compute-env-name with your compute environment's name.$ aws batch describe-compute-environments --compute-environments your-compute-env-name5.    Verify that the "status" value in the command output is "VALID". If your compute environment isn't valid, make the compute environment valid before continuing.Note: You can use Java code to create an AWS Batch compute environment, job definition, and job queue. However, you must complete the steps in the Convert the Java project to a Maven project section before creating the resources. For more information, see the sample code from the AWS SDK for Java API Reference.Install the AWS Toolkit for Eclipse1.    Open the Eclipse Integrated Development Environment (IDE).2.    From the Eclipse IDE menu bar, choose Help.3.    Under Help, choose Eclipse Marketplace.4.    In Eclipse Marketplace, choose the Search tab.5.    In the Find search box, enter AWS.6.    From the search results, choose Install for AWS Toolkit for Eclipse.7.    From the Eclipse menu bar, choose Navigate.8.    Choose Preferences. Then, choose Add Access Key ID and Secret Access key.Create a new Java project1.    From the Eclipse IDE menu bar, choose File.2.    Choose New. Then, choose Project.Convert the Java project to a Maven project1.    In the Eclipse IDE, right-click the Java project that you created.2.    Choose Configure. Then, choose Convert to Maven Project. Maven creates a POM.xml file that contains information about the project and the configuration details used to build the project.3.    Add the required dependencies to the POM.xml file by adding the following code after the closing build tag in the file:<!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-batch --> <dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-java-sdk-batch</artifactId> <version>1.11.470</version></dependency> </dependencies>Important: The code for step 3 works only if you include the code after the closing build tag in the POM.xml file.Create a Java program to submit AWS Batch jobs1.    In the Eclipse IDE, choose the project that you created.2.    Inside the project, right-click the src folder.3.    Choose New. Then, choose File.4.    Name the new file BatchClient.java.Important: The .java extension name must match with the public class name in the Java program.5.    Add your AWS Batch environment details to the BatchClient.java file by entering the following code into the file:Important: Replace new-queue with your queue name. Replace us-east-1 with the AWS Region that your environment is in. 
Replace sleep30:4 with your job definition.public class BatchClient {public static void main(String[] args) { AWSBatch client = AWSBatchClientBuilder.standard().withRegion("us-east-1").build();SubmitJobRequest request = new SubmitJobRequest().withJobName("example").withJobQueue("new-queue").withJobDefinition("sleep30:4");SubmitJobResult response = client.submitJob(request);System.out.println(response);}}6.    To submit the AWS Batch job and run the Java program, choose Run from the Run menu.Note: If you receive a "SubmitJobResult can not be resolved" error, then you must import the package required for the SubmitJobResult API action. To import the package in the Eclipse IDE, do the following:In the Java code BatchClient.java, select the SubmitJobResult.Right-click Choose.Choose Source.Choose Add import.Follow"
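The Java program maps directly onto the SubmitJob API. If you want to sanity-check the job queue and job definition names outside the Eclipse project first, the same call can be made from Python with boto3; this is only an illustrative equivalent that reuses the placeholder names from the Java example.

# Sketch: the equivalent SubmitJob call via boto3 (placeholder names).
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="example",
    jobQueue="new-queue",
    jobDefinition="sleep30:4",
)
print(response["jobId"])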
https://repost.aws/knowledge-center/batch-run-jobs-java
How do I configure and attach an internet gateway for use with Elastic Load Balancing?
How do I configure and attach an internet gateway for use with Elastic Load Balancing?
"How do I configure and attach an internet gateway for use with Elastic Load Balancing?ResolutionFirst, create an internet gateway and attach it to your Amazon Virtual Private Cloud (Amazon VPC).Next, configure the internet gateway to route traffic to and from the internet:Add an entry to the route table for your subnet that points to your internet gateway.Confirm that your Amazon VPC's security groups and network access control list (ACL) allow traffic to and from the internet. For example configurations for network ACLs, see Network ACL rules .Confirm that any instances that route traffic through the internet gateway have a public IP address or an attached Elastic IP address.If you're creating a new load balancer, choose an internet-facing load balancer.Related informationInternet-facing Classic Load BalancersFollow"
https://repost.aws/knowledge-center/attach-igw-elb
How do I configure routing for my Direct Connect private virtual interface?
I created a private virtual interface (VIF) in AWS Direct Connect. How do I check if I'm routing properly over my Direct Connect connection?
"I created a private virtual interface (VIF) in AWS Direct Connect. How do I check if I'm routing properly over my Direct Connect connection?ResolutionAfter you create your private virtual interface, do the following to verify that routing is set up correctly.Verify that the virtual private gateway associated with your private virtual interface is attached to the correct Amazon Virtual Private Cloud (Amazon VPC):Sign in to the Direct Connect console.In the navigation pane, choose Virtual Interfaces.Choose the virtual interface (VIF), and then choose View details.For private VIFs attached to a virtual gateway (VGW), in General configuration choose the VGW ID.If the virtual gateway isn't attached to your VPC, follow the instructions to attach it.For private VIFs attached to a Direct Connect gateway, in General configuration choose the gateway ID.In Gateway associations, verify the Direct Connect gateway is attached to your virtual gateway.Confirm that the allowed prefixes contains the VPC CIDR.Verify that you're advertising and receiving the correct routes through Border Gateway Protocol (BGP). For more information, see Routing policies and BGP communities.Be sure that you're advertising routes to AWS that cover the networks that are communicating with your VPC.Be sure that you're receiving the VPC CIDR route from AWS.Verify that you've enabled route propagation to your subnet route tables. This step propagates the routes learned through VPN connections and Direct Connect virtual interfaces to your VPC route tables. Any changes to the routes are updated dynamically, and you don't need to manually enter or update routes.Verify that your security groups allow traffic from your local network.Sign in to the VPC console.In the navigation pane under Security, choose Security Groups.In the content pane, select the security group that's associated with your instances.Choose the Inbound Rules view.Be sure that there are rules permitting traffic from your local network over the desired ports.Choose the Outbound Rules view.Be sure that there are rules permitting traffic to your local network over the desired ports.Verify that your network access control lists (ACLs) allow traffic from your local network.Sign in to the VPC console.In the navigation pane under Security, choose Network ACLs.In the content pane, select the network ACL that's associated with your VPC and subnets.Choose the Inbound Rules view.Be sure that there are rules permitting traffic from your local network over the desired ports.Choose the Outbound Rules view.Be sure that there are rules permitting traffic to your local network over the desired ports.Verify that your Direct Connect private virtual interface is traversable using the ping utility. Security groups, network ACLs, and on-premises security allow for bidirectional connectivity tests using ping.Related informationWhich type of virtual interface should I use to connect different resources in AWS?My virtual interface BGP status For Direct Connect is down in the AWS console. What should I do?Follow"
https://repost.aws/knowledge-center/routing-dx-private-virtual-interface
How can I add certificates for multiple domains to a load balancer using AWS Certificate Manager?
I want to upload multiple certificates for different domains using Elastic Load Balancing (ELB).
"I want to upload multiple certificates for different domains using Elastic Load Balancing (ELB).Short descriptionAs of April 2018, Classic Load Balancer doesn't support adding multiple certificates.To add multiple certificates for different domains to a load balancer, do one of the following:Use a Subject Alternative Name (SAN) certificate to validate multiple domains behind the load balancer, including wildcard domains, with AWS Certificate Manager (ACM).Use either an Application Load Balancer (ALB) or Network Load Balancer (NLB), which supports multiple certificates and smart certificate selection using Server Name Indication (SNI).Note: ACM certificates can't be downloaded, and are used only with AWS services integrated with ACM.ResolutionTo use a Classic Load Balancer, follow these steps to create a SAN certificate using ACM.Open the ACM console.Note: If you've never created a certificate, choose Get started.Follow the instructions for Requesting a public certificate.In the ACM console, verify that the Status of the certificate request has changed from Pending validation to Issued.Attach the certificate to a load balancer. For instructions, see Replace the SSL certificate for your Classic Load Balancer.To add multiple certificates with an ALB, see Application Load Balancers now support multiple TLS certificates with smart selection using SNI.To add multiple certificates with an NLB, see Elastic Load Balancing: Network Load Balancers now support multiple TLS certificates using Server Name Indication (SNI).Note: The ALB and NLB certificate quota limit is 25 (excluding default certificates). This limit can be increased. For more information, see Quotas for your Application Load Balancers and Quotas for your Network Load Balancers.Related informationAWS Certificate Manager FAQsEmail validationDNS validationFollow"
https://repost.aws/knowledge-center/acm-add-domain-certificates-elb
What's the source IP address of the traffic that Elastic Load Balancing sends to my web servers?
I'm using Elastic Load Balancing for my web servers. I want to know the IP address that the load balancer uses to forward traffic to my web servers.
"I'm using Elastic Load Balancing for my web servers. I want to know the IP address that the load balancer uses to forward traffic to my web servers.Short descriptionYou can determine the IP addresses associated with an internal load balancer or an internet-facing load balancer by resolving the DNS name of the load balancer. These are the IP addresses where the clients should send the requests that are destined for the load balancer. However, Classic Load Balancers and Application Load Balancers use the private IP addresses associated with their elastic network interfaces as the source IP address for requests forwarded to your web servers. For Network Load Balancers, the source IP address of these requests depends on the configuration of its target group.These IP addresses can be used for various purposes, such as allowing the load balancer traffic on the web servers and for request processing. It's a best practice to use security group referencing on the web server's security group inbound rules for allowing load balancer traffic from Classic Load Balancers or Application Load Balancers. However, because Network Load Balancers don't support security groups, then based on the target group configurations, the IP addresses of the clients or the private IP addresses associated with the Network Load Balancers must be allowed on the web server's security group.ResolutionImportant: The IP addresses for Classic Load Balancers and Application Load Balancers change over time. Avoid using this information to statically configure your applications to point to these IP addresses.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Find private IP addresses associated with load balancer elastic network interfaces using the AWS Management Console1.    Open the Amazon Elastic Compute Cloud (Amazon EC2) console.2.    Under Load Balancing, choose Load Balancers from the navigation pane.3.    Select the load balancer that you're finding the IP addresses for.4.    On the Description tab, copy the Name.5.    Under Network & Security, choose Network Interfaces from the navigation pane.6.    Paste the load balancer name that you copied in step 4 in the search box. The filtered results show all elastic network interfaces associated with the load balancer.7.    For each of the elastic network interfaces in the filtered results:Select the elastic network interface.Choose the Details tab.Find the interface that contains an IP address for Primary private IPv4 IP. This is the primary private IP address of the elastic network interface.Find private IP addresses associated with load balancer elastic network interfaces using the AWS CLIRun the following command:aws ec2 describe-network-interfaces --filters Name=description,Values="ELB elb-name" --query 'NetworkInterfaces[*].PrivateIpAddresses[*].PrivateIpAddress' --output textReplace elb-name with one of the following:For Classic Load Balancers: Name of the load balancerFor Application Load Balancers: app/load-balancer-name/load-balancer-idFor Network Load Balancers: net/load-balancer-name/load-balancer-idFor Application Load Balancers and Network Load Balancers, use the following command to find the load-balancer-id:aws elbv2 describe-load-balancers --names load-balancer-nameThe load-balancer-id is the last field of characters that follows the trailing slash after the load balancer's name in the ARN.Follow"
https://repost.aws/knowledge-center/elb-find-load-balancer-ip
How do I troubleshoot "S3 write failed for bucket" 403 Access Denied errors from Amazon S3 when creating resource data sync for Systems Manager inventory?
I want to troubleshoot 403 Access Denied errors from Amazon Simple Storage Service (Amazon S3) when creating a resource data sync.
"I want to troubleshoot 403 Access Denied errors from Amazon Simple Storage Service (Amazon S3) when creating a resource data sync.ResolutionThere are two methods for configuring resource data sync for AWS Systems Manager inventory:Create a resource data sync for multiple accounts within the same organization.Create a resource data sync for multiple accounts that aren't within the same organization.To resolve S3 write failed for bucket errors, do the following:Troubleshooting for multiple accounts within the same organizationMake sure that the central Amazon S3 bucket policy has the required permissions to allow multiple AWS accounts to send inventory data to the bucket.To create a resource data sync for multiple accounts within the same organization, use the CreateResourceDataSync API and be sure to specify the DestinationDataSharing parameter. From AWS CloudTrail, you can check the API request for event name CreateResourceDataSync to confirm the DestinationDataSharing parameter is included in the event.Note: You can't create a resource data sync from the AWS Management Console when the resource data sync is for multiple accounts within the same organization.The following is an example AWS Command Line Interface (AWS CLI) command for CreateResourceDataSync:aws ssm create-resource-data-sync --sync-name name --s3-destination "BucketName=DOC-EXAMPLE-BUCKET,Prefix=prefix-name,SyncFormat=JsonSerDe,Region=AWS Region ID,DestinationDataSharing={DestinationDataSharingType=Organization}"Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Troubleshooting for multiple accounts that aren't within the same organizationTo create a resource data sync, confirm that the S3 bucket policy of the target S3 bucket allows the required actions from the source account.For example, you have Account A and Account B sending the inventory data to an S3 bucket in Account C.The S3 bucket policy in Account C is similar to the following example policy:{          "Version": "2012-10-17", "Statement": [ { "Sid": "SSMBucketPermissionsCheck", "Effect": "Allow", "Principal": { "Service": "ssm.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::S3_bucket_name" }, { "Sid": " SSMBucketDelivery", "Effect": "Allow", "Principal": { "Service": "ssm.amazonaws.com" }, "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::S3_bucket_name/*/accountid=AWS_AccountA_ID/*", "arn:aws:s3:::S3_bucket_name/*/accountid=AWS_AccountB_ID/*" ], "Condition": { "StringEquals": { "aws:SourceAccount": [ "AWS_AccountA_ID", "AWS_AccountB_ID" ], "s3:x-amz-acl": "bucket-owner-full-control" }, "ArnLike": { "aws:SourceArn": [ "arn:aws:ssm:*:AWS_AccountA_ID:resource-data-sync/*", "arn:aws:ssm:*:AWS_AccountB_ID:resource-data-sync/*" ] } } } ] } Note: To encrypt the resource data sync, be sure to update the AWS Key Management Service (AWS KMS) key policy and S3 bucket policy. For more information, see Walkthrough: Use resource data sync to aggregate inventory data.Follow"
https://repost.aws/knowledge-center/ssm-data-sync-s3-403-error
Is my application impacted by the migration of Amazon S3 and Amazon CloudFront certificates to Amazon Trust Services?
My application might be impacted by the migration of Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront certificates to Amazon Trust Services. I want to verify that the Amazon Trust Services Certificate Authorities (CAs) are in my trust store.
"My application might be impacted by the migration of Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront certificates to Amazon Trust Services. I want to verify that the Amazon Trust Services Certificate Authorities (CAs) are in my trust store.Short descriptionAs of March 23, 2021, AWS will start migrating the Secure Sockets Layer/Transport Layer Security (SSL/TLS) CA for Amazon S3 and CloudFront from DigiCert to Amazon Trust Services.Application traffic that matches any of the following scenarios isn't impacted by this migration:HTTP trafficHTTPS traffic to CloudFront using custom domains and certificatesHTTPS traffic to S3 buckets in AWS Regions where S3 is already using Amazon Trust Services for its certificates (eu-west-3, eu-north-1, me-south-1, ap-northeast-3, ap-east-1, or us-gov-east-1)ResolutionYou must confirm that your applications trust Amazon Trust Services as a CA if either of the following is true:You send HTTPS traffic directly to S3 buckets in Regions that aren't listed above.You send HTTPS traffic to CloudFront domains that are covered by *.cloudfront.net.If you use other AWS services, your application might already trust Amazon Trust Services. Many AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon DynamoDB, already migrated their CAs.Certificates issued by Amazon Trust Services are already included in trust stores across most web browsers, operating systems, and applications. You might not need to update your configurations to handle the migration, but there are exceptions. If you build custom certificate trust stores or use certificate pinning, then you might need to update your configurations. If Amazon Trust Services isn't in your trust store, you'll see error messages in browsers (see Example) and applications.To verify if Amazon Trust Services is in your trust store, run one of following tests from the system that you're using to connect to an Amazon S3 or CloudFront endpoint:Retrieve a test object from this test URL. Then, verify that you are either getting a 200 response or seeing the green check mark in the test image.Create an Amazon S3 bucket in one of the following AWS Regions: eu-west-3, eu-north-1, me-south-1, ap-northeast-3, ap-east-1, or us-gov-east-1. (S3 buckets in these Regions already use Amazon Trust Services certificates.) Then, retrieve a test object from the bucket over HTTPS.If any of these tests are successful, then your client is ready for migration to Amazon Trust Services.To verify that each of the four root CAs of Amazon Trust Services are included in your trust store, do the following:Choose each Test URL in the list of Amazon Trust Services certificates.Verify that the Test URLs work for you.For this migration, your application doesn't need to trust the Amazon Trust Services' root CAs directly. It's sufficient if your application trusts the Starfield Services Root CA. Amazon S3 and CloudFront will present certificate chains with an Amazon root CA that's cross-signed by the Starfield Service Root CA.If any of the first two tests fail, then the Amazon Trust Services CAs aren't in your trust store. 
Update your trust store to include the Amazon Trust Services CAs by doing one or more of the following:Upgrade your operating system or web browser.Update your application to use CloudFront with a custom domain name and your own certificate.If your application is using a custom trust store, then you must add the Amazon root CAs to your application's trust store.If you're using certificate pinning to lock down the CAs that you trust, then you must adjust your pinning to include the Amazon Trust Services CAs.Most AWS SDKs and AWS Command Line Interfaces (AWS CLIs) aren't impacted by the migration. But if you're using a version of the Python AWS SDK or AWS CLI released before October 29, 2013, then you must upgrade your certificates.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationHow to prepare for AWS move to its own Certificate AuthorityFollow"
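As a quick programmatic check of your trust store, the following is a hedged Python sketch that performs a TLS handshake against an S3 endpoint in one of the Regions listed above that already uses Amazon Trust Services certificates. The endpoint is an example; a certificate verification error indicates that the Amazon Trust Services CAs are missing from the system trust store that Python uses.

# TLS handshake test against an endpoint already signed by Amazon Trust Services.
import socket
import ssl

host = "s3.eu-north-1.amazonaws.com"
context = ssl.create_default_context()  # uses the system trust store

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            issuer = dict(field[0] for field in tls.getpeercert()["issuer"])
            print("Handshake OK, certificate issuer:", issuer.get("organizationName"))
except ssl.SSLCertVerificationError as err:
    print("The required CAs are not in this trust store:", err)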
https://repost.aws/knowledge-center/s3-cloudfront-certificate-migration
What are common issues that might occur when using native backup and restore in RDS for SQL Server?
I'm performing a native backup or restore for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. What are common errors that I might encounter during this process?
"I'm performing a native backup or restore for my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance. What are common errors that I might encounter during this process?ResolutionWhen using the RDS for SQL Server native backup and restore option, you might encounter validation errors. These errors are displayed immediately, and a task isn't created. The following are common errors and suggested fixes:Error: Aborted the task because of a task failure or a concurrent RESTORE_DB requestThis error occurs if you have space-related issues on the DB instance when restoring the backup from Amazon Elastic Compute Cloud (Amazon EC2) or on-premises:[2022-04-07 05:21:22.317] Aborted the task because of a task failure or a concurrent RESTORE_DB request.[2022-04-07 05:21:22.437] Task has been aborted[2022-04-07 05:21:22.440] There is not enough space on the disk to perform restore database operation.To resolve this error, do the following:Option 1:1.    Run the following command on the source instance (EC2 or on-premises). This command verifies the size of the database, including the data file and Tlog file. In the following example, replace [DB_NAME] with the name of your database.SELECT DB_NAME(database_id) AS DatabaseName,Name AS Logical_Name,Physical_Name, (size*8)/1024/1024 SizeGBFROM sys.master_filesWHERE DB_NAME(database_id) = '[DB_NAME]'GODatabase Size = (DB_Name size + DB_Name_Log size)2.    Compare the source instances data base size with the available storage on the DB instance. Increase the available storage accordingly and then restore the database.Option 2:Shrink the current DB log file on the source SQL Server to clear up unused space, and then perform the database backup.Use the following command to shrink the log file.DBCC SHRINKFILE (LogFileName, Desired Size in MB)Error: Aborted the task because of a task failure or a concurrent RESTORE_DB requestThe following error occurs when you have permission issues related to the AWS Identity and Access Management (IAM) role or policy associated with the SQLSERVER_BACKUP_RESTORE option:[2020-12-15 08:56:22.143] Aborted the task because of a task failure or a concurrent RESTORE_DB request.[2020-12-15 08:56:22.213] Task has been aborted[2020-12-15 08:56:22.217] Access DeniedTo resolve this error, do the following:1.    Verify the restore query to make sure that the S3 bucket and the folder prefix are correct:exec msdb.dbo.rds_restore_database @restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';2.    Verify that the IAM policy includes the following attributes:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::bucket_name" }, { "Effect": "Allow", "Action": [ "s3:GetObjectAttributes", "s3:GetObject", "s3:PutObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": "arn:aws:s3:::bucket_name/*" } ]}Note: Replace arn:aws:s3:::bucket_name with the ARN of your S3 bucket.3.    Verify that the policy is correctly associated with the role given in the SQLSERVER_BACKUP_RESTORE option.4.    
Verify that the SQLSERVER_BACKUP_RESTORE option is the option group associated with the DB instance:S3 Bucket ARNS3 folder prefix (Optional)For more information, see How do I perform native backups of an Amazon RDS DB instance that's running SQL Server?Error: Aborted the task because of a task failure or a concurrent RESTORE_DB requestThis error is commonly associated with cross account database restore.Example:Account A has an S3 bucket where the backup is stored.Account B has an RDS DB instance where the restore needs to be done.The error occurs when you have permission-related issues in an IAM role or policy associated with the option. Or, there is a permissions issue with the bucket policy associated with the S3 bucket in the cross account.[2022-02-03 15:57:22.180] Aborted the task because of a task failure or a concurrentRESTORE_DB request.[2022-02-03 15:57:22.260] Task has been aborted[2022-02-03 15:57:22.263] Error making request with Error Code Forbidden and Http Status Code Forbidden. No further error information was returned by the service.To resolve this error, do the following:1.    Verify that the IAM policy in Account B (the account where the DB instance that you will be restoring to is located) includes the following attributes:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::name_of_bucket_present_in_Account_A" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": "arn:aws:s3::: name_of_bucket_present_in_Account_A /*" }, { "Action": [ "kms:DescribeKey", "kms:GenerateDataKey", "kms:Decrypt", "kms:Encrypt" "kms:ReEncryptTo", "kms:ReEncryptFrom" ], "Effect": "Allow", "Resource": [ "arn:aws: PUT THE NAME OF THE KEY HERE", "arn:aws:s3::: name_of_bucket_present_in_Account_A /*" ] } ]}2.    
Verify that the bucket policy associated with the S3 bucket in Account A includes the following attributes:{ "Version": "2012-10-17", "Statement": [ { "Sid": "Permission to cross account", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::AWS-ACCOUNT-ID-OF-RDS:role/service-role/PUT-ROLE-NAME" /*---- Change Details here ] }, "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::PUT-BUCKET-NAME" /*---- Change Details here ] }, { "Sid": "Permission to cross account on object level", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::AWS-ACCOUNT-ID-OF-RDS:role/service-role/PUT-ROLE-NAME" /*---- Change Details here ] }, "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": [ "arn:aws:s3::: PUT-BUCKET-NAME/*" /*---- Change Details here ] } ]}For more information, see the following:Importing and exporting SQL Server databases using native backup and restoreExample 4: Bucket owner granting cross-account permission to objects it does not ownError: Cannot find server certificate with thumbprint 'XXXXXX'This error occurs when you try to restore a database with Transparent Data Encryption (TDE) from EC2 or on-premises to RDS for SQL Server:[2022-06-1511:55:22.280] Cannot find server certificate with thumbprint 'XXXXXXX'.[2022-06-15 11:55:22.280] RESTORE FILELIST is terminating abnormally.[2022-06-15 11:55:22.300] Aborted the task because of a task failure or a concurrent RESTORE_DB request.[2022-06-15 11:55:22.333] Task has been aborted[2022-06-15 11:55:22.337] Empty restore file list result retrieved.This error indicates an attempt to restore a backup of a database that is encrypted using TDE to a SQL instance other than its original server. The TDE certificate of the original server must be imported to the destination server. For more information on importing server certificates and respective limitations, see Support for Transparent Data Encryption in SQL Server.To resolve this error apart from importing certificates, do the following:There are two workarounds available to prevent this error.Option 1: The database backup is sourced from on-premises or an EC2 instance but target RDS SQL Server is in MultiAZ1.    Create a backup of the source database with TDE turned on.2.    Restore the backup as new DB with in your on-premises server.3.    Turn off TDE on the newly created database. Use the following commands to turn off TDE:Run the following command to turn off encryption on the database. In the following command, replace Databasename with the correct name for your database.USE master;GOALTER DATABASE [Databasename] SET ENCRYPTION OFF;GORun the following command to drop the DEK used for encryption. In the following command, replace Databasename with the correct name for your database.USE [Databasename];GODROP DATABASE ENCRYPTION KEY;GO4.    Create a native SQL Server backup and restore this new backup to the desired RDS instance. For more information, see How do I perform native backups of an Amazon RDS DB instance that's running SQL Server?5.    Turn TDE back on in the new RDS database.Option 2: The database is sourced from an RDS for SQL Server database that’s encrypted with TDE1.    Use a snapshot from the source instance to restore the DB in to a new instance.2.    Turn off TDE on the database created from the snapshot.3.    Create a native SQL backup and restore this new backup to the desired RDS instance.4.    
Turn TDE back on in the new RDS database.Common errors observed for native backup on RDS for SQL ServerError: Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backupThe following error occurs when you have permission issues related to the IAM role or policy associated with the SQLSERVER_BACKUP_RESTORE option.[2022-07-16 16:08:22.067]Task execution has started. [2022-07-16 16:08:22.143] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.[2022-07-16 16:08:22.147] Task has been aborted [2022-07-16 16:08:22.150] Access DeniedTo resolve this issue, do the following:1.    Verify the restore query to make sure that the S3 bucket and the folder prefix are correct:exec msdb.dbo.rds_restore_database @restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';2.    Verify that the IAM policy includes the following attributes:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::bucket_name" }, { "Effect": "Allow", "Action": [ "s3:GetObjectAttributes", "s3:GetObject", "s3:PutObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource": "arn:aws:s3:::bucket_name/*" } ]}Note: Replace arn:aws:s3:::bucket_name with the ARN of your S3 bucket.3.    Verify that the policy is correctly associated with the role shown in the SQLSERVER_BACKUP_RESTORE option.4.    Verify that the SQLSERVER_BACKUP_RESTORE option in the option group associated with the DB instance.S3 Bucket ARNS3 folder prefix (Optional)For more information, see How do I perform native backups of an Amazon RDS DB instance that's running SQL Server?Error: Write on "XXX" failed, Unable to write chunks to S3, S3 write stream upload failedThis is a known issue with RDS for SQL Server. The database size is sometimes estimated incorrectly and causes the backup procedure to fail with the following error.[2022-04-21 16:45:04.597] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Completed processing 100% of S3 chunks.[2022-04-21 16:47:05.427] Write on "XXXX" failed: 995(The I/O operation has been aborted because of either a thread exit or an application request.) A nonrecoverable I/O error occurred on file "XXXX:" 995(The I/O operation has been aborted because of either a thread exit or an application request.). BACKUP DATABASE is terminating abnormally.[2022-04-21 16:47:22.033] Unable to write chunks to S3 as S3 processing has been aborted. [2022-04-21 16:47:22.040] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Aborting S3 upload, waiting for S3 workers to clean up and exit[2022-04-21 16:47:22.053] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.[2022-04-21 16:47:22.060] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Aborting S3 upload, waiting for S3 workers to clean up and exit[2022-04-21 16:47:22.067] S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive S3 write stream upload failed. 
Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusiveThe work around for this error is to turn on database backup compression. This compresses the backup, making it easier for S3 to receive the file.Run the following command to turn on backup compression:exec rdsadmin..rds_set_configuration 'S3 backup compression', 'true';Follow"
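If you run these stored procedures from a script rather than SQL Server Management Studio, the following is a hedged pyodbc sketch of the commands discussed above (turning on S3 backup compression, starting a restore, and polling task status). The server, credentials, database name, and bucket ARN are placeholders, and the ODBC driver name depends on what is installed on your client.

# pyodbc sketch: run the rdsadmin/msdb procedures used for native backup and restore.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "UID=admin;PWD=my-password",
    autocommit=True,
)
cursor = conn.cursor()

# Work around the S3 chunk-count error described above by compressing backups.
cursor.execute("exec rdsadmin..rds_set_configuration 'S3 backup compression', 'true';")

# Start a restore from S3, then check the task status.
cursor.execute(
    "exec msdb.dbo.rds_restore_database "
    "@restore_db_name='database_name', "
    "@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';"
)
cursor.execute("exec msdb.dbo.rds_task_status @db_name='database_name';")
for row in cursor.fetchall():
    print(row)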
https://repost.aws/knowledge-center/rds-sql-server-fix-native-backup-restore
How can I verify if my AWS DMS migration task is stuck or making progress?
My AWS Database Migration Service (AWS DMS) task is stuck or it is not progressing. How do I troubleshoot why my task is not progressing?
"My AWS Database Migration Service (AWS DMS) task is stuck or it is not progressing. How do I troubleshoot why my task is not progressing?Short descriptionAlthough AWS DMS tasks are rarely stuck, they can sometimes be slow to progress. Follow the steps in this article to check if the data from your DMS task is migrated from source to target.ResolutionCheck the status of your AWS DMS taskUse the following steps below to check the status of your DMS task.Log in to the AWS DMS console.From the navigation pane, choose the Database migration tasks.Review the status of your task. The status should be as follows:During the full load phase, your task status should be Running.During change data capture (CDC) phase or ongoing replication phase of a CDC-only task, your task status should be Replication ongoing.During a full load and CDC, your task status should be Load complete, replication ongoing.Monitor Amazon CloudWatch logsCheck the migration task by monitoring the Amazon CloudWatch logs.Log in to the AWS DMS console.From the navigation pane, choose Database migration tasks, and then select your task.Choose View CloudWatch logs. This redirects you to the AWS CloudWatch console, where you can monitor the logs for your task.Note: When viewing your logs, choose Retry to refresh the logs and display the latest information with the timestamp. If you don’t see a new message in log after five minutes, proceed to the next step.Refresh the table statistics of your DMS taskRefresh the Table statistics of your AWS DMS task.Log in to the AWS DMS console.From the navigation pane, choose Database migration tasks, and then select your task.Choose Table statistics.During a full load, you can see an increase in the Full Load rows value and a change in the Load state value. During ongoing replication (CDC), you can see an increase in DMLs (Inserts, Updates, and Deletes) and DDLs.If you have a test database that has little activity, you might not see any change in your task logs or in the table statistic counters.Monitor CloudWatch metrics for rows unloading and applyingLog in to the AWS DMS console.From the navigation pane, choose Database migration tasks, and then choose your task.Choose CloudWatch metrics. This action re-directs you to the CloudWatch Console.During the full load phase of the DMS task, monitor the following metrics using CloudWatch:Choose Full load from the dropdown list in the CloudWatch console.Monitor the FullLoadThroughputRowsSource metric. This metric gives a detailed picture of the rate at which AWS DMS can unload source data into the replication instance during the full load phase.Monitor the FullLoadThroughputRowsTarget. This metric shows the rate at which the rows are leaving the replication instance to commit to the target.During the CDC phase, monitor the following metrics using CloudWatch:Choose CDC from the drop down list in the CloudWatch console.Monitor the CDCThroughputRowsSource metric. This metric gives a detailed picture of the rate at which changes are captured from the source and moved to the replication instance during the CDC phase.Monitor the CDCThroughputRowsTarget. This metric shows the rate at which the changes are moved from the replication instance to the target.Monitor the CDCLatencySource metric. This metric shows the latency between source and replication instance in seconds.Monitor the CDCLatencyTarget metric. 
This metric shows the latency between replication instance and target in seconds.You can also query the record count on the target at specific intervals to confirm that the data is being migrated to the target. This interval varies based on the load on the source, target, and replication instances, and the amount of data that is in a single record.If you see no latency on your DMS task, and no new log message appears in the task log, then enable debug logging. Make sure to do this while the task is running, and then monitor the Amazon Cloud Watch logs of the DMS task.Note: It's a best practice to enable debug logging for a short time only while you are actively troubleshooting the task. If you enable debug logging for longer, the replication instance disk space can fill up quickly and impact running tasks on the DB instance.Related informationData Migration Service metricsFollow"
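The console checks above can also be scripted. The following is a hedged Python (boto3) sketch that reads the task status and per-table statistics so you can confirm that row counts are increasing between runs; the task ARN is a placeholder.

# boto3 sketch: check DMS task status and table statistics to confirm progress.
import boto3

dms = boto3.client("dms", region_name="us-east-1")
task_arn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK"

task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]
print("Task status:", task["Status"])

stats = dms.describe_table_statistics(ReplicationTaskArn=task_arn)["TableStatistics"]
for table in stats:
    print(table["SchemaName"], table["TableName"],
          "full load rows:", table["FullLoadRows"], "inserts:", table["Inserts"])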
https://repost.aws/knowledge-center/dms-stuck-task-progress
How can I resolve DNS resolution or SSL certificate mismatch errors for my API Gateway custom domain name?
I configured a custom domain name for my Amazon API Gateway API. I am unable to connect to the domain name and receive DNS resolution or SSL certificate mismatch errors. How can I resolve this?
"I configured a custom domain name for my Amazon API Gateway API. I am unable to connect to the domain name and receive DNS resolution or SSL certificate mismatch errors. How can I resolve this?Short descriptionThere are two types of custom domain names that you can create for API Gateway APIs: Regional or (for REST APIs only) edge-optimized.ResolutionBefore creating a custom domain name for your API, you must do one of the following:Request an SSL/TLS certificate from AWS Certificate Manager (ACM).-or-Import an SSL/TLS certificate into ACM.For more information, see Getting certificates ready in AWS Certificate Manager.After you have your SSL/TLS certificate, you can follow the instructions to set up a custom domain name for my API Gateway API.To connect to a custom domain name for API Gateway APIs, you must configure Amazon Route 53 to route traffic to an API Gateway endpoint.If the DNS records for the custom domain name aren't mapped to the correct API Gateway domain name, then the SSL connection fails. This is because the default *.execute-api.<region>.amazonaws.com certificate is returned instead of the SSL/TLS certificate.To confirm that the DNS mapping is correct, run the following command from the client:$ nslookup <customdomainname>The output should be the API Gateway domain name. Make sure that the domain name matches the API Gateway domain name. If a Route 53 alias record is used for DNS mapping, then the output is the IP address. Make sure that the IP address matches the API Gateway domain name IP address.Note:When configuring Route 53, you must create either a public hosted zone or a private hosted zone. For internet-facing applications with resources that you want to make available to users, choose a public hosted zone. For more information, see Working with hosted zones.Route 53 uses records to determine where traffic is routed for your domain. Alias records provide easier DNS queries to AWS resources, while CNAME (non-alias) records can redirect DNS queries outside of AWS resources. For more information, see Choosing between alias and non-alias records.Related informationMigrating a custom domain name to a different API endpointFollow"
https://repost.aws/knowledge-center/api-gateway-custom-domain-error
How do I restore data from an Amazon OpenSearch Service domain in another AWS account?
I want to restore data from an Amazon OpenSearch Service domain in another AWS account. How can I do this?
"I want to restore data from an Amazon OpenSearch Service domain in another AWS account. How can I do this?Short descriptionTo restore data from an OpenSearch Service domain in another AWS account, you must set up cross-account access. Cross-account access must be established between your OpenSearch Service domain and the domain that you're trying to restore data from. You must also allow your domain to access the Amazon Simple Storage Service (Amazon S3) bucket that stores your data.To create this cross-account access, perform the following steps:1.    In Account A, set up the following:Source: OpenSearch Service domain with fine-grained access controlSource: Amazon S3 bucket2.    In Account B, set up your destination (OpenSearch Service domain) with fine-grained access control.Note: You don't need to create an S3 bucket in the destination (Account B). A single S3 bucket is used to restore the data across the AWS accounts. This setup also works for the OpenSearch Service domains without fine-grained access control.ResolutionNote: The examples in this article use Python and Postman code.Set up cross-account access for Account A1.    Create an S3 bucket in Account A in the same Region as the OpenSearch Service domain.2.    Create an AWS Identity Access Management (IAM) policy to provide S3 bucket access permissions:{ "Version": "2012-10-17", "Statement": [{ "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::snapshot" ] }, { "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "iam:PassRole" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::snapshot/*" ] }]}Note: Replace "arn:aws:s3:::snapshot" with your bucket ARN from Step 1.3.    Create an IAM role and select Amazon Elastic Compute Cloud (Amazon EC2) as your service.4.    Add the IAM policy (created in Step 2) to your newly created IAM role.5.    Open your IAM role and choose Trust relationships.6.    Update the trust relationship of the following policy:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com" }, "Action": "sts:AssumeRole" }]}Note: Replace "Service": "ec2.amazonaws.com" with "Service": "es.amazonaws.com". Also, record the role ARN, which you'll need for later steps.7.    Choose one of these options:Update the policy (from Step 2) to include the "iam:PassRole" permissions, attaching the policy to your IAM role. This permission allows OpenSearch service to have write access to an S3 bucket.-or-Create a new IAM policy, attaching the policy to your IAM role.Note: You can have all your permissions set under one IAM role by updating the policy. Or, if you want to create a new IAM policy and split up the permissions, you can reuse the IAM policy for another use case.Here's an example policy with the required IAM permissions:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::Account A:role/cross" }, { "Effect": "Allow", "Action": "es:ESHttpPut", "Resource": "arn:aws:es:us-east-1:Account A:domain/srestore/*" }]}This policy must be attached to the IAM user or role that is used to sign the HTTP request.Note: Replace "arn:aws:iam::Account A:role/cross" with the role that you created in Step 3. Also, update "arn:aws:es:us-east-1:Account A:domain/srestore/*" with the OpenSearch Service domain listed as the Source in Account A. The Source in Account A is used for cluster snapshots.8.    
Create an IAM user and attach the policy that you created in Step 2 (which includes the required permissions to access Amazon S3). This IAM user must have Admin access to the OpenSearch Service domain in Account A to provide access to the read/write API using FGAC. For more information about using fine-grained access control, see Map the snapshot role in OpenSearch Dashboards (if using fine-grained access control).9.    (Optional) If you're using Python code to register the S3 bucket to OpenSearch Service, launch an Amazon EC2 machine in Account A. Then, attach the IAM role created in Step 3.Note: Make sure that your security group can access the OpenSearch Service domain.Register the S3 bucket to the Source in Account ATo register the S3 bucket to the Source domain in Account A, perform the following steps:1.    Update the PUT field with a URL that includes the OpenSearch Service domain endpoint and S3 bucket name. For example:https://endpointofdomain.amazonaws.com/_snapshot/snapshot2.    Choose the Authorization tab.3.    Update the AccessKey and SecretKey of the IAM user.4.    Update the AWS Region and Service Name.5.    Choose Save.6.    Choose the Headers tab.7.    Select Content-Type for your key type.8.    Select Application/JSON for your key value.9.    Choose Save.10.    Choose the Body tab.11.    Use the following code:{ "type": "s3", "settings": { "bucket": "snapshot", "region": "us-east-1", "role_arn": "arn:aws:iam::Account A:role/cross" }}12.    Choose Send to submit the query through the OpenSearch Service console. After the registration completes, you'll receive a Status Code: 200 OK message.13.    Log in to OpenSearch Dashboards in Account A. Then, check the available data on the S3 bucket.14.    Use the following command to take a new snapshot:PUT /_snapshot/<registered_snapshot_repository>/<snapshot_name>Here's an example output:GET _cat/snapshots/casnapshottoday SUCCESS 1585190280 02:38:00 1585190284 02:38:04 3.9s 4 4 0 4This output verifies the completion of the S3 bucket registration to the OpenSearch Service domain in Account A.Set up cross-account access for Account B1.    Create a policy and IAM role in Account B, specifying the same S3 bucket ARN as Account A:{ "Version": "2012-10-17", "Statement": [{ "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::snapshot" --> S3 bucket ARN from Account A ] }, { "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "iam:PassRole" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::snapshot/*" ] }, { "Effect": "Allow", "Action": "iam:PassRole", "Resource": "arn:aws:iam::Account B:role/cross" --> Role created in Account B }, { "Effect": "Allow", "Action": "es:*", "Resource": "arn:aws:es:us-east-1:Account B:domain/restore/*" --> Destination ES domain in Account B }]}Here's an example trust policy for your role:{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": { "Service": "es.amazonaws.com" }, "Action": "sts:AssumeRole" }]}2.    Attach the IAM role that you previously created to the IAM user in Account B. The same IAM user must have administrator access to the Destination (domain with FGAC) in Account B. For more information about updating IAM user access, see Registering a manual snapshot repository.3.     
Update the S3 bucket policy for your bucket in Account A, providing Account B access to your bucket:{ "Version": "2012-10-17", "Id": "Policy1568001010746", "Statement": [{ "Sid": "Stmt1568000712531", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Account B:role/cross" --> Role which is created in Account B }, "Action": "s3:*", "Resource": "arn:aws:s3:::snapshot" }, { "Sid": "Stmt1568001007239", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::Account B:role/cross" --> Role which is created in Account B }, "Action": "s3:*", "Resource": "arn:aws:s3:::snapshot/*" }]}4.    Register the S3 bucket to your domain (in Account B).Note: You must use the authentication credentials of the IAM user in Account B. Make sure to choose OpenSearch Service as your destination.{ "type": "s3", "settings": { "bucket": "snapshot", "region": "us-east-1", "role_arn": "arn:aws:iam::Account B:role/cross" --> Role which is created in Account B. }}5.    Log in to OpenSearch Dashboards in Account B.6.    Check the snapshots from Account A that are available in the S3 bucket:GET _cat/snapshots/casnapshotHere's an example of the output:today SUCCESS 1585190280 02:38:00 1585190284 02:38:04 3.9s 4 4 0 4This output confirms that cross-account access is successfully set up in Account B.Related informationHow can I migrate data from one Amazon OpenSearch Service domain to another?Attach a bucket policy to grant cross-account permissions to Account BMigrating Amazon OpenSearch Service indices using remote reindexFollow"
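If you register the snapshot repository with Python instead of Postman, the following is a hedged sketch that signs the PUT request with SigV4. It assumes the requests and requests_aws4auth packages are installed, and the domain endpoint, Region, bucket, repository name, and role ARN are placeholders.

# Python sketch: register the S3 snapshot repository with a SigV4-signed PUT request.
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = "https://endpointofdomain.amazonaws.com"
region = "us-east-1"
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key, credentials.secret_key, region, "es",
    session_token=credentials.token,
)

payload = {
    "type": "s3",
    "settings": {
        "bucket": "snapshot",
        "region": region,
        "role_arn": "arn:aws:iam::111122223333:role/cross",
    },
}
response = requests.put(f"{host}/_snapshot/casnapshot", auth=awsauth, json=payload)
print(response.status_code, response.text)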
https://repost.aws/knowledge-center/opensearch-restore-data
How do I troubleshoot subscription filter policy issues in Amazon SNS?
My Amazon Simple Notification Service (Amazon SNS) subscription filter policy isn't working. How do I troubleshoot the issue?
"My Amazon Simple Notification Service (Amazon SNS) subscription filter policy isn't working. How do I troubleshoot the issue?ResolutionImportant: Additions or changes to a subscription filter policy require up to 15 minutes to take effect.Verify that message attributes are included in the messages published to your Amazon SNS topicSubscription filter policies can filter message attributes only, not the message body. If the MessageAttributeValue is left empty on a message, then the filter policy rejects the message.To see if your filter policy rejected messages because they didn't include attributes, review the following metric in your Amazon CloudWatch metrics for Amazon SNS:NumberOfNotificationsFilteredOut-NoMessageAttributesFor more information, see Amazon SNS message filtering. For a tutorial on how to send messages with attributes, see To publish messages to Amazon SNS topics using the AWS Management Console.Verify that the messages published to your Amazon SNS topic meet the required filter policy constraintsFor a complete list of restraints, see Filter policy constraints.Verify that your subscription filter policy's attributes are configured correctlyAfter you define a subscription filter policy's attributes, the subscription endpoint receives only messages that include those defined attributes. For more information see, Attribute string value matching and Attribute numeric value matching.To see the messages that your filter policy rejected because of mismatching or incorrectly formatted attributes, review the following CloudWatch metrics for Amazon SNS:NumberOfNotificationsFilteredOutNumberOfNotificationsFilteredOut-InvalidAttributesNote: The NumberOfNotificationsFilteredOut metric shows messages that your filter policy rejected because the message attributes didn’t match the policy attributes. The NumberOfNotificationsFilteredOut-InvalidAttributes metric shows messages that your filter policy rejected because the message attributes weren't in a valid format.Follow"
https://repost.aws/knowledge-center/sns-subscription-filter-policy-issues
How can I access an Amazon S3 bucket from an application running on an Elastic Beanstalk instance?
I want to access an Amazon Simple Storage Service (Amazon S3) bucket from an application running on an AWS Elastic Beanstalk instance.
"I want to access an Amazon Simple Storage Service (Amazon S3) bucket from an application running on an AWS Elastic Beanstalk instance.Short descriptionTo access an S3 bucket from Elastic Beanstalk, verify that your AWS Identity and Access Management (IAM) instance profile is attached to an Amazon Elastic Compute Cloud (Amazon EC2) instance. The instance must have the correct permissions for Amazon S3. Then, confirm that your S3 bucket policy doesn't deny access to the role attached to your instance profile.ResolutionValidate permissions for your instance profileOpen the Elastic Beanstalk console.Select your environment.From the navigation menu, choose Configuration.In the Configuration overview section, from the Category column, for Security, choose Modify.From the IAM instance profile menu, note the name of your instance profile.Open the IAM console.In the navigation pane, choose Roles.In the search box, enter the name of your instance profile from step 5.Verify that the role from step 8 has the required Amazon S3 permissions for the bucket that you want to access. For more information, see Identity and access management in Amazon S3 and Actions, resources, and condition keys for Amazon S3.Validate permissions for your S3 bucketOpen the Amazon S3 console.From the list of buckets, choose the bucket with the bucket policy that you want to change.Choose the Permissions tab.Choose Bucket Policy.Search for "Effect": "Deny" statements.In your bucket policy, edit or remove any "Effect": "Deny" statements that are denying the IAM instance profile access to your role. For more information, see Adding a bucket policy using the Amazon S3 console.Note: Be careful not to remove any necessary deny statements to align to the security best practice of principle of least privilege. For more information, see Amazon S3 security.Access your S3 bucketYou can now access your S3 bucket, and then use your S3 bucket to do the following tasks:Manage application versions.Note: Set the S3 URL when you create or update an application.Read or write application files or photos.Note: For more information, see Developing with Amazon S3 using the AWS SDKs, and explorers.Related informationBuckets overviewElastic Beanstalk instance profileBucket policy examplesStoring private keys securely in Amazon S3Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-s3-bucket-instance
How do I create a HAR file from my browser for an AWS Support case?
AWS Support asked me to create a HAR file from my web browser to help them troubleshoot my support case. How do I create that file?
"AWS Support asked me to create a HAR file from my web browser to help them troubleshoot my support case. How do I create that file?Short descriptionAn HTTP Archive (HAR) file is a JSON file that contains the latest network activity recorded by your browser. AWS Support can use a HAR file from your browser to investigate or replicate networking issues that you've documented in a technical support case.Important: You must have a Developer, Business, or Enterprise Support plan to open a technical support case. If AWS Support asks you for a HAR file for troubleshooting, create a HAR file in your browser and then submit that file in the AWS Support Center. HAR files can capture sensitive information, such as user names, passwords, and keys. Be sure to remove any sensitive information from a HAR file before you send it to AWS Support.ResolutionCreate a HAR file in your browserNote: These instructions were last tested on Google Chrome version 98.0.4758.102, Microsoft Edge (Chromium) version 98.0.1108.62, and Mozilla Firefox version 91.6. Because these browsers are third-party products, these instructions might not match the experience in the latest versions or in the version that you use. In another browser, such as legacy Microsoft Edge (EdgeHTML) or Apple Safari for macOS, the process to generate a HAR file might be similar, but the steps will be different.Google ChromeIn the browser, at the top right, choose Customize and control Google Chrome.Pause on More tools, and then choose Developer tools.With DevTools open in the browser, choose the Network panel.Select the Preserve log check box.Choose Clear to clear all current network requests.In the AWS Management Console, reproduce the issue from your support case. Or, follow the steps that AWS Support advised in a local setup.In DevTools, open the context (right-click) menu on any network request.Choose Save all as HAR with content, and then save the file.For more information, see Open Chrome DevTools and Save all network requests to a HAR file on the Google Developers website.Microsoft Edge (Chromium)In the browser, at the top right, choose Settings and more.Pause on More tools, and then choose Developer tools.With DevTools open in the browser, choose the Network panel.Select the Preserve log check box.Choose Clear to clear all current network requests.In the AWS Management Console, reproduce the issue from your support case. Or, follow the steps that AWS Support advised in a local setup.In DevTools, open the context (right-click) menu on any network request.Choose Save all as HAR with content, and then save the file.For more information, see Save all network requests to a HAR file on the Network Analysis Reference page of the Microsoft Docs website.Mozilla FirefoxIn the browser, at the top right, choose Open Application Menu.Choose More tools, and then choose Web Developer tools.In the Web Developer menu, choose Network. (In some versions of Firefox, the Web Developer menu is in the Tools menu.)Choose the gear icon, and then select Persist Logs.Choose the trash can icon (Clear) to clear all current network requests.In the AWS Management Console, reproduce the issue from your support case. 
Or, follow the steps that AWS Support advised in a local setup.In the Network Monitor, open the context menu (right-click) on any network request in the request list.Choose Save All As HAR, and then save the file.For more information, see Network Monitor and Network request list on the MDN Web Docs website.Edit the HAR fileOpen the HAR file in a text editor application.Use the text editor's Find and Replace tools to identify and replace all sensitive information captured in the HAR file. This includes any user names, passwords, and keys that you entered in your browser while creating the file.Save the edited HAR file with the sensitive information removed.Submit the HAR fileIn the AWS Support Center, under Open support cases, choose your support case.In your support case, choose your preferred contact option, attach the edited HAR file, and then submit.Related informationWhat browsers are supported for use with the AWS Management Console?Follow"
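If the HAR file is large, editing it by hand is error-prone. The following is a hedged Python sketch of the "Edit the HAR file" step: it loads the JSON, redacts common sensitive headers and cookies, and writes a copy. The file names and header list are placeholders; extend the list for anything else you entered while recording.

# Redact sensitive request/response data from a HAR file before submitting it.
import json

SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

with open("recording.har", encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    for section in (entry["request"], entry["response"]):
        for header in section.get("headers", []):
            if header["name"].lower() in SENSITIVE_HEADERS:
                header["value"] = "REDACTED"
        section["cookies"] = []

with open("recording-redacted.har", "w", encoding="utf-8") as f:
    json.dump(har, f, indent=2)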
https://repost.aws/knowledge-center/support-case-browser-har-file
How do I recreate a "Deleted" Amazon SNS topic subscription for an Amazon SQS queue in another AWS account?
"My Amazon Simple Queue Service (Amazon SQS) queue was subscribed to an Amazon Simple Notification Service (Amazon SNS) topic in a different AWS account. I deleted the cross-account subscription, and the topic subscription is now in "Deleted" status. How do I recreate a deleted Amazon SNS topic subscription for an Amazon SQS queue in another account?"
"My Amazon Simple Queue Service (Amazon SQS) queue was subscribed to an Amazon Simple Notification Service (Amazon SNS) topic in a different AWS account. I deleted the cross-account subscription, and the topic subscription is now in "Deleted" status. How do I recreate a deleted Amazon SNS topic subscription for an Amazon SQS queue in another account?Short descriptionIf you call the Amazon SNS Unsubscribe API from an account that doesn't own the subscription, the subscription enters Deleted status for 72 hours. While the SNS topic subscription is in Deleted status, the account that owns the subscription can't resubscribe the same endpoint to the topic.After 72 hours, Amazon SNS clears the Deleted subscription and the account that owns the subscription can resubscribe the same endpoint to the topic.If you don't want to wait 72 hours to resubscribe, you can manually recreate the subscription by doing any of the following.Note: It's a best practice to run Subscribe and Unsubscribe API calls from the same AWS account. When you call the Subscribe API, the AWS account that you use to make the call becomes the subscription owner.ResolutionImportant: The following procedures apply to HTTP and HTTPS endpoint subscribers. It doesn't apply to AWS Lambda function subscribers.Send an HTTP GET method request to the SubscribeURL in the UnsubscribeConfirmation message you received1.    In the UnsubscribeConfirmation message sent to the SQS queue after you deleted the subscription, find the SubscribeURL. Then, copy and paste the URL to a text document.2.    Send an HTTP GET method request to the SubscribeURL.HTTP GET method request examplecurl -X GET "https://sns.us-west-2.amazonaws.com/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:us-west-2:123456789012:MyTopic&Token=<token>"Call the Amazon SNS Subscribe API from the AWS account that owns the SNS topic, then confirm the subscription1.    Call the Amazon SNS Subscribe API from the AWS account that owns the SNS topic.2.    Have an AWS user with permissions to read messages from the SQS queue confirm the subscription.Create a new Amazon SNS topic to replace the current topic, then subscribe to the new topic1.    Create a new SNS topic to replace the current topic.2.    Subscribe the SQS queue to the new topic.Related informationDeleting an Amazon SNS subscription and topicSending Amazon SNS messages to an Amazon SQS queue in a different accountFanout to Amazon SQS queuesFollow"
https://repost.aws/knowledge-center/sns-cross-account-subscription
Why am I receiving an error when copying my ElastiCache snapshot?
I'm trying to copy my Amazon ElastiCache for Redis snapshot or backup to an Amazon Simple Storage Service (Amazon S3) bucket. The copy operation is stuck or failing. How can I troubleshoot this?
I'm trying to copy my Amazon ElastiCache for Redis snapshot or backup to an Amazon Simple Storage Service (Amazon S3) bucket. The copy operation is stuck or failing. How can I troubleshoot this?Short descriptionElastiCache snapshot copy operations might fail or get stuck for one of the following reasons:The destination S3 bucket is not in the same Region as the snapshot.ElastiCache doesn't have the necessary permissions to access the S3 bucket.The user doesn't have the proper Identity and Access Management (IAM) policy set to access an S3 bucket.The cluster is using data tiering.ResolutionMake sure that you followed the steps for exporting your backup.Make sure that ElastiCache has the necessary permissions to access your S3 bucket. You can review error messages to rule out S3 permission issues.Make sure that you're not using data tiering in your cluster. Data tiering is turned on by default if you're using a combination of Redis engine version 6.2 or later with the r6gd cache node family.Related informationcopy-snapshot (AWS CLI Command reference)Follow
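The following is a hedged Python (boto3) sketch of the export step referenced above. The snapshot names and bucket are placeholders, and the bucket must be in the same Region as the snapshot and have the required ElastiCache permissions.

# boto3 sketch: export an ElastiCache backup to S3 with copy_snapshot.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")
elasticache.copy_snapshot(
    SourceSnapshotName="my-redis-backup",
    TargetSnapshotName="my-redis-backup-export",
    TargetBucket="DOC-EXAMPLE-BUCKET",  # omitting TargetBucket copies within ElastiCache instead of exporting
)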
https://repost.aws/knowledge-center/elasticache-snapshot-copy-stuck
Why isn't my SC1 or ST1 EBS volume achieving the rated throughput performance?
My ST1 or SC1 Amazon Elastic Block Store (Amazon EBS) volume isn't reaching the throughput performance listed in the AWS documentation. Why is this?
"My ST1 or SC1 Amazon Elastic Block Store (Amazon EBS) volume isn't reaching the throughput performance listed in the AWS documentation. Why is this?Short descriptionWhen using HDD Amazon EBS volumes, such as SC1 and ST1, keep the following:These volumes always use a 1024 KiB I/O token regardless of the actual I/O size used by the workload on the instance. Even if the actual I/O size of the application workload is set to 16 KiB, the volume still uses the entire 1024 KiB size of the I/O token. This wastes most of the token's space. To maximize efficiency, fill the entire 1024KiB.When the I/O size of a sequential workload is greater than 32 KiB, Amazon EBS always merges the I/Os into a single I/O operation of 1024 KiB. This merging fills the entire token size.When the I/O size is smaller than 32 KiB, or if the workload is random, Amazon EBS doesn't merge the I/Os to 1024 KiB. However, Amazon EBS still uses the entire 1024 KiB token size. This leaves most of the space inside the token empty. Since the I/Os aren't merged, the instance uses more IOPS sending the same amount of data to the volume. This causes a reduction in the burst balance even though the throughput is below its baseline value.ResolutionTo enable your ST1 and SC1 EBS volumes to reach their maximum rated throughput, do the following:Set up your application to use an I/O size greater than 32 KiB.Verify that your application uses sequential workload.ExampleWhen calculating throughput, use the following formula:Throughput = I/Osize * IOPSIf the I/O size is smaller than 32 KiB, the volume hits its IOPS limit, throttling the throughput. When this happens, the volume never achieves it's rated throughput performance.For example, setting the I/O size to 16 KiB and sending 3 MiB/s of data results in the following:3MiB/s/16KiB = 192 IOPSSetting the I/O size to 32 KiB with a sequential/contiguous workload causes Amazon EBS to merge to 1024 KiB. Amazon only sends 3 IOPS, in this case, as shown in the following calculation:3MiB/s/1024Kib = 3 IOPSAssume you're using 0.5 TiB (500 GiB) of an ST1 volume. This volume ideally provides a baseline throughput performance of 20 MiB/s and can burst up to 125 MiB/s.If the volume is bursting at 125 MiB/s, and the applications I/O size is 1024 KiB, then maximum theoretical IOPS equals 125 IOPS.Throughput / IOsize = 125MiB/s / 1024 KiB = 125 IOPS.However, if the application uses an I/O size of 16 KiB, then sending 3 MiB of data uses 192 IOPS. The application can't push 192 IOPS because the volume only achieves a theoretical maximum of 125 IOPS. In this case, the volume throttles IOPS to 125. In this scenario, the actual throughput is as follows:Actual throughput = 16 KiB * 125 = 1.95 MiB/sAs shown in the preceding calculations, IOPS is throttled to 125 IOPS, so Amazon EBS throttles the throughput to 1.95 MiB/s. This throttling occurs even though the burstable theoretical throughput for the volume is 125 IOPS given that the application used 1024 KiB I/O size.Related informationI/O characteristics and monitoringAmazon EBS volume typesFollow"
https://repost.aws/knowledge-center/ebs-low-throughput-performance
How do I push Systems Manager SSM Agent logs to CloudWatch?
I want to send AWS Systems Manager SSM Agent logs to Amazon CloudWatch Logs. How can I do that?
"I want to send AWS Systems Manager SSM Agent logs to Amazon CloudWatch Logs. How can I do that?ResolutionCreate a log group in CloudWatch LogsTo create a log group in CloudWatch Logs, follow these steps:Open the CloudWatch console, and then choose Log groups from the navigation pane.Choose Create log group.For Log group name, enter a name.Choose Create.Attach permissionsThe Amazon Elastic Compute Cloud (Amazon EC2) instance must include AWS Identity and Access Management (IAM) permissions to send the logs. You must attach the CloudWatchLogsFullAccess IAM role to the instance. For instructions, see Attach an IAM role to an instance.Note: You can include these permissions with already existing permissions. You can also further narrow the permissions based on your requirements.Configure SSM Agent to send logs to CloudWatch LogsFor instructions to configure SSM Agent to send logs to CloudWatch Logs, see Sending SSM Agent logs to CloudWatch Logs.Related informationChecking SSM Agent status and starting the agentWhat is Amazon CloudWatch Logs?Getting started with CloudWatch LogsFollow"
https://repost.aws/knowledge-center/ssm-agent-logs-cloudwatch
I can't open Jupyter on my Amazon SageMaker notebook instance
I get an error when I try to open my Amazon SageMaker Jupyter notebook in my browser.
"I get an error when I try to open my Amazon SageMaker Jupyter notebook in my browser.ResolutionFirst, try the following:On the Amazon SageMaker console, and then confirm that the notebook instance status is InService. If the status is Pending, the notebook instance isn't ready yet.Clear your browser cache or try a different browser.Check the Jupyter logs for errors.If you still can't open the Jupyter notebook, restart the notebook instance. It's a best practice to regularly restart notebook instances. Restarting helps keep notebook instance software updated. When you restart, the notebook instance moves to a new underlying host. This can help resolve HTTP 503 and 504 errors in your browser.Note: The only persistent storage on the notebook instance is the /home/ec2-user/SageMaker file system. When you restart, you lose all other data.To restart a notebook instance:1.    Open the Amazon SageMaker console.2.    In the navigation pane, choose Notebook instances.3.    Select the circle next to the notebook instance name.4.    Choose the Actions dropdown list, and then choose Stop.5.    Wait for the notebook instance to reach the Stopped status.6.    Choose the Actions dropdown list, and then choose Start.7.    Open the notebook instance URL.To prevent this issue from happening again, check for the following common causes of an overloaded notebook instance.Too many open sessionsOn the Jupyter dashboard, check the Running tab. When you have a large number of active sessions and notebooks, notebooks take longer to load and might time out in the browser. To resolve this issue, shut down unnecessary notebook or terminal sessions.High CPU or memory utilization1.    Open the Jupyter dashboard, and then choose the Files tab.2.    Choose New, and then choose Terminal.3.    Check memory utilization:free -h4.    Check CPU utilization:topIf CPU or memory utilization is high and you can't free up any more resources, consider switching to a larger notebook instance type:1.    Stop the notebook instance, as explained earlier.2.    When the notebook instance reaches the Stopped status, choose the Actions dropdown list, and then choose Update settings.3.    Choose a new Notebook instance type, and then choose Save. For a list of instance types available in each Region, see Supported instance types and Availability Zones.4.    Choose the Actions dropdown list, and then choose Start.5.    Open the notebook instance URL.High disk utilization1.    Open the Jupyter dashboard, and then choose the Files tab.2.    Choose New, and then choose Terminal.3.    Run a command similar to the following to start a shell session and check disk utilization:df -h4.    Check disk utilization for filesystem /home/ec2-user/SageMaker.If disk utilization is high, then remove temporary files from the /home/ec2-user/SageMaker directory, if possible. Or, increase the Amazon Elastic Block Store (Amazon EBS) volume size:1.    Stop the notebook instance, as explained earlier.2.    When the notebook instance reaches the Stopped status, choose the Actions dropdown list, and then choose Update settings.3.    Enter a new volume size, and then choose Save. The default EBS volume size is 5 GB. You can increase the volume size up to 16 TB.4.    Choose the Actions dropdown list, and then choose Start.5.    Open the notebook instance URL.Follow"
https://repost.aws/knowledge-center/open-sagemaker-jupyter-notebook
How do I allow my IP address while blocking other IP addresses using AWS WAF?
I've set up AWS WAF and I need to allow my IP address while blocking other IP addresses using AWS WAF. How can I do this?
"I've set up AWS WAF and I need to allow my IP address while blocking other IP addresses using AWS WAF. How can I do this?ResolutionAWS WAF can inspect the source IP address of a web request against a set of IP addresses and address ranges. You can create a rule that blocks requests from all IPs except the specific IPs in an IP set.First, create an IP setOpen the AWS WAF console.In the navigation pane, choose IP sets, and then choose Create IP set.Enter an IP set name and Description - optional for the IP set. For example: MyTrustedIPs.Note: You can't change the IP set name after you create the IP set.For Region, choose the AWS Region where you want to store the IP set. To use an IP set in web ACLs that protect Amazon CloudFront distributions, you must use Global (CloudFront).For IP version, choose the version that you want to use.For IP addresses, enter one IP address or an IP address range per line that you want to allow in CIDR notation.Note: AWS WAF supports all IPv4 and IPv6 CIDR ranges except for /0.Examples:To specify the IPv4 address 10.20.0.5, enter 10.20.0.5/32.To specify the IPv6 address 0:0:0:0:0:ffff:c000:22c, enter 0:0:0:0:0:ffff:c000:22c/128.To specify the range of IPv4 addresses from 10.20.0.0 to 10.20.0.255, enter 10.20.0.0/24.To specify the range of IPv6 addresses from 2620:0:2d0:200:0:0:0:0 to 2620:0:2d0:200:ffff:ffff:ffff:ffff, enter 2620:0:2d0:200::/64.Review the settings for the IP set. If the IP set matches your specifications, choose Create IP set.Then, create an IP match ruleIn the navigation pane, under AWS WAF, choose Web ACLs.For Region, select the AWS Region where you created your web ACL.Note: Select Global if your web ACL is set up for Amazon CloudFront.Select your web ACL.Choose Rules, and then choose Add Rules, Add my own rules and rule groups.For Name, enter a name to identify this rule. For example: Block-Other-IPs.For Type, choose Regular rule.For If a request, choose doesn't match the statement (NOT).On Statement, for Inspect, choose Originates from IP address in.For IP Set, choose the IP Set you created earlier. For example: MyTrustedIPs.For IP address to use as the originating address, choose Source IP address.For Action, choose Block.Choose Add rule.Choose Save.The IP match rule blocks any IP not added to the IP set. For IPs added to an IP set, the request is evaluated by other rules below the rule. If there isn't a match, the web ACL default action is applied. For more information, see Processing order of rules and rule groups in a web ACL.Related informationHow do I use AWS WAF to block HTTP requests that don't contain a User-Agent header?Follow"
https://repost.aws/knowledge-center/waf-allow-my-ip-block-other-ip
How do I configure Amazon EMR to run a PySpark job using Python 3.4 or 3.6?
"Python 3.4 or 3.6 is installed on my Amazon EMR cluster instances, but Spark is running Python 2.7. I want to I upgrade Spark to Python 3.4 or 3.6."
"Python 3.4 or 3.6 is installed on my Amazon EMR cluster instances, but Spark is running Python 2.7. I want to I upgrade Spark to Python 3.4 or 3.6.Short descriptionIn most Amazon EMR release versions, cluster instances and system applications use different Python versions by default:Amazon EMR release versions 4.6.0-5.19.0: Python 3.4 is installed on the cluster instances. Python 2.7 is the system default.Amazon EMR release versions 5.20.0 and later: Python 3.6 is installed on the cluster instances. For 5.20.0-5.29.0, Python 2.7 is the system default. For Amazon EMR version 5.30.0 and later, Python 3 is the system default.To upgrade the Python version that PySpark uses, point the PYSPARK_PYTHON environment variable for the spark-env classification to the directory where Python 3.4 or 3.6 is installed.ResolutionOn a running clusterAmazon EMR release version 5.21.0 and laterSubmit a reconfiguration request with a configuration object similar to the following:[ { "Classification": "spark-env", "Configurations": [ { "Classification": "export", "Properties": { "PYSPARK_PYTHON": "/usr/bin/python3" } } ] }]Amazon EMR release version 4.6.0-5.20.x1.    Connect to the master node using SSH.2.    Run the following command to change the default Python environment:sudo sed -i -e '$a\export PYSPARK_PYTHON=/usr/bin/python3' /etc/spark/conf/spark-env.sh3.    Run the pyspark command to confirm that PySpark is using the correct Python version:[hadoop@ip-X-X-X-X conf]$ pysparkThe output shows that PySpark is now using the same Python version that is installed on the cluster instances. Example:Python 3.4.8 (default, Apr 25 2018, 23:50:36)Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /__ / .__/\_,_/_/ /_/\_\ version 2.3.1 /_/Using Python version 3.4.8 (default, Apr 25 2018 23:50:36)SparkSession available as 'spark'.Spark uses the new configuration for the next PySpark job.On a new clusterAdd a configuration object similar to the following when you launch a cluster using Amazon EMR release version 4.6.0 or later:[ { "Classification": "spark-env", "Configurations": [ { "Classification": "export", "Properties": { "PYSPARK_PYTHON": "/usr/bin/python3" } } ] }]Related informationConfigure SparkApache SparkPySpark documentationFollow"
https://repost.aws/knowledge-center/emr-pyspark-python-3x
How can I share an encrypted Amazon RDS DB snapshot with another account?
I have an encrypted snapshot of an Amazon Relational Database Service (Amazon RDS) DB instance. It uses the default AWS Key Management Service (AWS KMS) key. I want to share an encrypted snapshot of a DB instance with another AWS account.
"I have an encrypted snapshot of an Amazon Relational Database Service (Amazon RDS) DB instance. It uses the default AWS Key Management Service (AWS KMS) key. I want to share an encrypted snapshot of a DB instance with another AWS account.Short descriptionYou can't use the default AWS KMS encryption key to share a snapshot that's encrypted. For more information about the limitations of sharing DB snapshots, see Sharing encrypted snapshots.To share an encrypted Amazon RDS DB snapshot, complete the following steps:Add the target account to a custom (non-default) KMS key.Use the customer managed key to copy the snapshot, and then share the snapshot with the target account.Copy the shared DB snapshot from the target account.Note: You can also follow the steps in the AWSSupport-ShareRDSSnapshot AWS Systems Manager Automation document to share your snapshot. Provide a snapshot to copy and share with the target account. You can also provide the DB instance or DB cluster ID to share with snapshots. Provide an existing KMS Key, or keep it blank to create a new key. For more information, see Add a key policy statement in the local account and Run an automation.ResolutionAllow access to the target account on the AWS KMS key of the source accountLog in to the source account, and then open the AWS KMS console in the same AWS Region as the DB snapshot.Choose Customer managed keys from the navigation pane.Choose the name of your customer managed key. If you don't have a key, then choose Create key. For more information, see Creating keys.From the Key administrators section, Add the AWS Identity and Access Management (IAM) users and roles who can administer the AWS KMS key.From the Key users section, Add the IAM users and roles who can use the AWS KMS key (KMS key) to encrypt and decrypt data.In the Other AWS accounts section, choose Add another AWS account, and then enter the AWS account number of the target account. For more information, see Allowing users in other accounts to use a KMS key.Copy and share the snapshotOpen the Amazon RDS console, and then choose Snapshots from the navigation pane.Choose the name of the snapshot that you created, choose Actions, and then choose Copy Snapshot.Choose the same AWS Region that your KMS key is in, and then enter a New DB Snapshot Identifier.In the Encryption section, choose the KMS key that you created.Choose Copy Snapshot.Share the copied snapshot with the target account.Copy the shared DB snapshotLog in to the target account, and then open the Amazon RDS console.Choose Snapshots from the navigation pane.From the Snapshots pane, choose the Shared with Me tab.Select the DB snapshot that you shared.Choose Actions. Then, choose Copy Snapshot to copy the snapshot into the same AWS Region and with a KMS key from the target account.After you copy the DB snapshot, you can use the copy to launch the instance.Related informationHow can I change the encryption key used by my Amazon RDS DB instances and DB snapshots?Encrypting Amazon RDS resourcesCopying a DB snapshotFollow"
https://repost.aws/knowledge-center/share-encrypted-rds-snapshot-kms-key
How do I view my Reserved Instance utilization and coverage?
I want to see my Reserved Instance (RI) utilization and coverage.
"I want to see my Reserved Instance (RI) utilization and coverage.Short descriptionYou can use AWS Cost Explorer to generate the RI utilization and RI coverage reports. With these reports, you can dive deeper into RIs for the following services:Amazon Elastic Compute Cloud (Amazon EC2)Amazon ElastiCacheAmazon MemoryDB for RedisAmazon OpenSearch ServiceAmazon RedshiftAmazon Relational Database Service (Amazon RDS)ResolutionRI utilization and coverageYou can use your RI utilization reports and RI coverage reports to understand where your highest RI costs are. Also, how you can potentially lower RI costs.To view your RI utilization and RI coverage reports, do the following:Open the AWS Cost Management console.In the left navigation pane, choose Reports.To view the RI utilization report, choose RI Utilization.To view the RI coverage report, choose RI Coverage.In the right navigation pane for Report parameters, select the service for which you require the utilization data.For information on how to use the RI utilization reports, see How do I use the Reserved Instance utilization report in Cost Explorer?For information on how to use the RI coverage reports, see How do I use the Reserved Instance coverage report in Cost Explorer?RI recommendationsTo view RI purchase recommendations that could help reduce your costs, in the navigation pane choose Reservations, and then choose Recommendations. For more information, see Accessing Reserved Instance recommendations.Related informationReserved Instance reportsUnderstanding your reservations with Cost ExplorerReserved Instance reportingHow can I use Cost Explorer to analyze my spending and usage?Follow"
https://repost.aws/knowledge-center/ec2-ri-utilization-coverage-cost-explorer
How do I resolve an HTTP 503 Service Unavailable error in Amazon OpenSearch Service?
"When I query my Amazon OpenSearch Service domain, I get an HTTP 503 Service Unavailable error. How do I resolve this error?"
"When I query my Amazon OpenSearch Service domain, I get an HTTP 503 Service Unavailable error. How do I resolve this error?Short descriptionA load balancer sits in front of each OpenSearch Service domain. The load balancer distributes incoming traffic to the data nodes. An HTTP 503 error indicates that one or more data nodes in the cluster is overloaded. When a node is overloaded by expensive queries or incoming traffic, it doesn't have enough capacity to handle any other incoming requests.Note: You can use the RequestCount metric in Amazon CloudWatch to track HTTP response codes.ResolutionUse one of the following methods to resolve HTTP 503 errors:Provision more compute resourcesScale up your domain by switching to larger instances, or scale out by adding more nodes to the cluster. For more information, see Creating and managing OpenSearch Service domains.Confirm that you are using an instance type that is appropriate for your use case. For more information, see Choosing instance types and testing.Reduce the resource utilization for your queriesConfirm that you are following best practices for shard and cluster architecture. A poorly designed cluster can't use all available resources. Some nodes might be overloaded while other nodes sit idle. OpenSearch Service can't fetch documents from overloaded nodes. For more information about shard and cluster best practices, see Get started with OpenSearch Service: How many shards do I need?Reduce the number of concurrent requests to the domain.Reduce the scope of your query. For example, if you run a query for a specific time frame, reduce the date range. You can also filter the results by configuring the index pattern in OpenSearch Dashboards.Avoid running select * queries on large indices. Instead, use filters to query a part of the index and search as few fields as possible.Re-index and reduce the number of shards. The more shards you have in your cluster, the more likely it will result in a courier fetch error. Because each shard has its own resource allocation and overheads, a large number of shards can strain your cluster. To lower your shard count, see Why is my Amazon OpenSearch Service domain stuck in the "Processing" state?Related informationHow can I prevent HTTP 504 gateway timeout errors in Amazon OpenSearch Service?Best practices for Amazon OpenSearch ServiceTroubleshooting Amazon OpenSearch ServiceFollow"
https://repost.aws/knowledge-center/opensearch-http-503-errors
How do I run security assessments or penetration tests on AWS?
I want to run a security test or other simulated event on my AWS architecture.
"I want to run a security test or other simulated event on my AWS architecture.ResolutionTo carry out penetration tests against or from resources on your AWS account, follow the policies and guidelines at Penetration Testing. You don't need approval from AWS to run penetration tests against or from resources on your AWS account. For a list of prohibited activities, see Customer service policy for penetration testing.If you plan to run a security test other than a penetration test, see the guidelines at Other simulated events.Note: You're not permitted to conduct any security assessments of AWS infrastructure that isn't on your AWS account. You also aren't permitted to conduct security assessments of AWS services themselves. If you discover a security issue within any AWS service in the course of your security assessment, contact AWS Security immediately.To request permission for network stress-testingBefore stress-testing your network, review the Amazon EC2 Testing Policy. If your planned tests exceed the limits outlined in the policy, then submit a request using the Simulated Event form.Important: Submit the simulated event request at least 14 business days before your planned test. Provide a full description of your plan, including expected risks and outcomes.To request permission for other simulated eventsFor any other simulated events, submit a request using the Simulated Event form. Provide a full description of your planned event, including details, risks, and desired outcomes.Other simulated event types can include:Red, blue, or purple teamCapture the flagDisaster recoverySimulated phishingMalware testingFollow"
https://repost.aws/knowledge-center/penetration-testing
How do I select the best Amazon API Gateway Cache capacity to avoid hitting a rate limit?
My API Gateway is rate limiting and I want to prevent throttling. How can I select the best API Gateway Cache capacity for my workload?
"My API Gateway is rate limiting and I want to prevent throttling. How can I select the best API Gateway Cache capacity for my workload?Short descriptionAmazon API Gateway throttles requests to your API to prevent it from being overwhelmed by too many requests. Turn on API caching to reduce the number of calls made to your endpoint.There are multiple API Gateway Cache sizes available. To select the appropriate cache size, run a load test on your API and then review the Amazon CloudWatch metrics.ResolutionTurn on API Gateway cachingTurn on Amazon API Gateway caching for your API stage. The cache capacity depends on the size of your responses and workload.Note: Cache capacity affects the CPU, memory, and network bandwidth of the cache instance. As a result, cache capacity can affect the performance of your cache.After creating your cache, run a load test to determine if the cache size is high enough to prevent throttling.Run a load testRun a load test on your API. You can use AWS Distributed Load Testing to simulate the load test.Run the load test for at least 10 minutes and mirror your production traffic. When the load test is running, monitor related CloudWatch metrics using the steps in the following section.Monitor API metrics in CloudWatchOpen the CloudWatch console.In the navigation pane, select Metrics.Choose the ApiGateway metric.Monitor Latency, 4XXError, 5XXError, CacheHitCount, and CacheMissCount metrics for the API that you are load testing against.If you see an increase in Latency, 4XXError, 5XXError or CacheMissCount with a CacheHitCount decrease, then resize your API Gateway cache to a larger capacity.If you see an increase in CacheHitCount and no corresponding increase in CacheMissCount, then resize your API Gateway cache to a smaller capacity.After any changes to your cache's capacity, run the load test again until there are no sudden increases or decreases.Related informationAmazon API Gateway pricingTurn on API caching to enhance responsivenessAmazon API Gateway dimensions and metricsFollow"
https://repost.aws/knowledge-center/api-gateway-cache-capacity
Do I need to have a configuration set to get Amazon SES bounce details?
I want to get more information about my Amazon Simple Email Service (Amazon SES) messages that result in bounces. Do I need to use a configuration set to get bounce details?
"I want to get more information about my Amazon Simple Email Service (Amazon SES) messages that result in bounces. Do I need to use a configuration set to get bounce details?ResolutionNo, configuration sets (for Amazon SES event publishing) aren't the only way to get details about bounces on Amazon SES. You can set up bounce notifications in one of these ways:You can send bounce notifications to an Amazon Simple Notification Service (Amazon SNS) topic. Because Amazon SNS notifications use the JSON format, you can process the notifications programmatically.You can use email feedback forwarding, which is enabled by default. If you previously disabled email feedback forwarding, you can re-enable the feature using the Amazon SES console.Follow"
https://repost.aws/knowledge-center/ses-options-for-bounce-information
Why aren't my Amazon S3 server access logs getting delivered?
"I set up Amazon Simple Storage Service (Amazon S3) server access logging. However, the logs aren't populating the bucket that they're supposed to be delivered to."
"I set up Amazon Simple Storage Service (Amazon S3) server access logging. However, the logs aren't populating the bucket that they're supposed to be delivered to.Short descriptionIf you set up Amazon 3 server access logging but you're not seeing logs in the expected bucket, then check for the following:The Log Delivery group (delivery account) has access to the target bucket.The bucket policy of the target bucket must not deny access to the logs.Amazon S3 Object Lock must not be turned on for the target bucket.If default encryption is turned on for the target bucket, AES256 (SSE-S3) must be selected as the encryption key.Allow some time for recent logging configuration changes to take effect.ResolutionThe Log Delivery group has access to the target bucketServer access logs are delivered to the target bucket (the bucket where logs are sent to) by a delivery account called the Log Delivery group. To receive server access logs, the Log Delivery group must have write access to the target bucket. Check the target bucket's access control list (ACL) to verify if the Log Delivery group has write access.To check and modify the target bucket's ACL using the Amazon S3 console, do the following:Open the Amazon S3 console.From the list of buckets, choose the target bucket that server access logs are supposed to be sent to.Choose the Permissions tab.Choose Access Control List.Under S3 log delivery group, check if the group has access to Write objects. If the group doesn't have access to Write objects, proceed to the next step.Select Log Delivery.In the LogDelivery dialog box, under Access to the objects, select Write objects.Choose Save.The bucket policy of the target bucket must not deny access to the logsCheck the bucket policy of the target bucket. Search the bucket policy for any statements that contain "Effect": "Deny". Then, verify that the deny statement isn't preventing access logs from being written to the bucket.Note: It's a best practice to use a separate bucket for server access logs. By default, S3 buckets are private, so you don't need to use a deny statement in the bucket policy to prevent unauthorized access to the bucket. If an AWS Identity and Access Management (IAM) user or role is in the same AWS account as the bucket, and the IAM identity has permissions to the bucket in its IAM policies, then the user or role can access the bucket.Amazon S3 Object Lock must not be turned on for the target bucketCheck if the target bucket has Object Lock turned on. Object Lock prevents server access logs from getting delivered, so you must turn off Object Lock on the bucket that you want logs sent to.If default encryption is turned on for the target bucket, AES256 (SSE-S3) must be selectedIf you use default encryption on the target bucket, then confirm that AES-256 (SSE-S3) is selected as the encryption key. Encryption using AWS-KMS (SSE-KMS) is not supported. For instructions on how to configure default encryption using the Amazon S3 console, see Turning on Amazon S3 default bucket encryption.Allow some time for recent logging configuration changes to take effectTurning on server access logging for the first time, or changing the target bucket for logs, can take time to fully implement. During the hour after you turn on logging, some requests might not be logged. During the hour after you change the target bucket, some logs might still be delivered to the previous target bucket. After you make a configuration change to logging, be sure to wait around one hour after the change to verify logs. 
For more information, see Best effort server log delivery.Related informationHow are logs delivered?Follow"
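You can check most of these settings on the target bucket from the AWS CLI. The following is a minimal sketch; DOC-EXAMPLE-LOGS-BUCKET is a placeholder for your target bucket name.

# Check whether the S3 log delivery group has WRITE permission
aws s3api get-bucket-acl --bucket DOC-EXAMPLE-LOGS-BUCKET

# Confirm that default encryption uses AES256 (SSE-S3)
aws s3api get-bucket-encryption --bucket DOC-EXAMPLE-LOGS-BUCKET

# Confirm that Object Lock isn't configured (an ObjectLockConfigurationNotFoundError response indicates that Object Lock is off)
aws s3api get-object-lock-configuration --bucket DOC-EXAMPLE-LOGS-BUCKET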
https://repost.aws/knowledge-center/s3-server-access-log-not-delivered
Why can't I create a WorkSpaces BYOL image?
"I tried to create an Amazon WorkSpaces bring your own license (BYOL) image, but it failed. What can I do to fix this issue?"
"I tried to create an Amazon WorkSpaces bring your own license (BYOL) image, but it failed. What can I do to fix this issue?ResolutionFollow these steps to resolve a WorkSpaces BYOL image validation failure:1.    Connect to the virtual machine (VM) in the on-premises environment. Download and run the BYOL Checker PowerShell script. The BYOL Checker script verifies that the image you want to import meets all the requirements for a successful import. The script also provides a detailed report of any issues that must be corrected before image creation.Important: All BYOL Checker script tests must pass for WorkSpaces image validation to succeed.The BYOL Checker script generates two log files: BYOLPrevalidationlogYYYY-MM-DD_HHmmss.txt and ImageInfo.text. These files are located in the directory that contains the BYOL Checker script files. Review the log files for errors.If issues persist after running the BYOL Checker script and reviewing the log files, then perform the following steps.2.    If your imported BYOL image has a firewall turned on, verify in the VM that the firewall isn't blocking any necessary ports. Certain ports are required for downloading scripts used for image creation.3.    Verify that the applications installed on your WorkSpace are compatible with Sysprep. Run the following PowerShell command to list all the AppX packages installed:Get-AppxPackage -AllUsers | Format-List -Property PackageFullName,PackageUserInformationIf image creation fails at the Sysprep stage, see the article about Sysprep failures on the Microsoft website.4.    Make sure that the disk isn't encrypted. Image creation fails if the disk used for importing a VM is encrypted.Related informationBring Your Own Windows desktop licensesHow do I troubleshoot WorkSpaces image creation issues?Follow"
https://repost.aws/knowledge-center/workspaces-create-byol-image
How can I troubleshoot the "Could not connect to the endpoint URL" error when I run the sync command on my Amazon S3 bucket?
"I'm trying to run the cp or sync command on my Amazon Simple Storage Service (Amazon S3) bucket. However, I'm getting the "Could not connect to the endpoint URL" error message."
"I'm trying to run the cp or sync command on my Amazon Simple Storage Service (Amazon S3) bucket. However, I'm getting the "Could not connect to the endpoint URL" error message.Short descriptionTo run the cp or sync commands using the AWS Command Line Interface (AWS CLI), your machine must connect to the correct Amazon S3 endpoints. Otherwise, you get the "Could not connect to the endpoint URL" error message.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.To troubleshoot this error, check the following:Confirm that you're using the correct AWS Region and Amazon S3 endpoint.Verify that your network can connect to those Amazon S3 endpoints.Verify that your DNS can resolve to those Amazon S3 endpoints.If you're seeing this error on an Amazon Elastic Compute Cloud (Amazon EC2) instance, then check the Amazon Virtual Private Cloud (Amazon VPC) configuration.ResolutionConfirm that you're using the correct AWS Region and Amazon S3 endpointWhen you run a command using the AWS CLI, API requests are sent to the default AWS Region's S3 endpoint. Or, API requests are sent to a Region-specific S3 endpoint when the Region is specified in the command. Then, the AWS CLI can redirect the request to the bucket's Regional S3 endpoint.You can get the "Could not connect to the endpoint URL" error if there's a typo or error in the specified Region or endpoint.For example, the following command results in the error because there's an extra "e" in the endpoint name:aws s3 cp filename s3://DOC-EXAMPLE-BUCKET/ --endpoint-url https://s3-acceleratee.amazonaws.comBefore you run the cp or sync command, be sure to confirm that the associated Region and S3 endpoint are written correctly.Note: If you're using Amazon S3 Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration for the endpoint name.Verify that your network can connect to the S3 endpointsConfirm that your network's firewall allows traffic to the Amazon S3 endpoints on the port that you're using for Amazon S3 traffic.For example, the following telnet command tests the connection to the ap-southeast-2 Regional S3 endpoint on port 443:Note: Make sure to replace the Regional endpoint and the port (443 or 80) with the values associated with your use case.telnet s3.ap-southeast-2.amazonaws.com 443Verify that your DNS can resolve to the S3 endpointsTo confirm that your DNS can resolve to the Amazon S3 endpoints, use a DNS query tool such as nslookup or ping. The following example uses nslookup:nslookup s3.amazonaws.comThe following example uses ping to confirm that the DNS resolves to the S3 endpoint:ping s3.amazonaws.comIf your DNS can't resolve to the S3 endpoints, then troubleshoot your DNS configuration. If Amazon Route 53 is your DNS provider, then see Troubleshooting Amazon Route 53.If you're seeing this error on an EC2 instance, then check the VPC configurationIf the EC2 instance is in a public subnet:Check the network access control list (ACL) of the Amazon VPC that your instance is in. In the network ACL, check the outbound rule for port 443. If the outbound rule is "DENY", then change it to "ALLOW".If the network ACL restricts access to only a specific region of Amazon S3 IP address ranges, then check the configuration file of the AWS CLI. 
The configuration file must specify the correct AWS Region.If the EC2 instance is in a private subnet:Check if there is a network address translation (NAT) gateway associated with the route table of the subnet. The NAT gateway provisions an internet path to reach the Amazon S3 endpoint.If you're using a VPC endpoint for Amazon S3, then verify that the correct Region is set in the AWS CLI config file. VPC endpoints for Amazon S3 are Region-specific. If you run a sync command using --region us-west-1 when the VPC endpoint is in a different Region, then the CLI contacts https://s3.us-west-1.amazonaws.com. As a result, you receive the "Could not connect to the endpoint URL" error.Follow"
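You can also confirm which Region the AWS CLI resolves and watch the endpoint that it calls. The following is a minimal sketch; the bucket name and Region are placeholders.

# Show the Region that the CLI is configured to use
aws configure get region

# Re-run the copy with the Region stated explicitly and debug logging to see the endpoint in use
aws s3 cp filename s3://DOC-EXAMPLE-BUCKET/ --region ap-southeast-2 --debug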
https://repost.aws/knowledge-center/s3-could-not-connect-endpoint-url
How do I change object ownership for an Amazon S3 bucket when the objects are uploaded by other AWS accounts?
I'm trying to change ownership of objects uploaded by other AWS accounts in an Amazon Simple Storage Service (Amazon S3) bucket using S3 Object Ownership. How can I do this?
"I'm trying to change ownership of objects uploaded by other AWS accounts in an Amazon Simple Storage Service (Amazon S3) bucket using S3 Object Ownership. How can I do this?Short descriptionImportant: Objects in S3 are no longer always automatically owned by the AWS account that uploads it.With the Bucket owner-enforced setting in S3 Object Ownership, all objects in an Amazon S3 bucket can now be owned by the bucket owner. The Bucket owner enforced feature also turns off all access control lists (ACLs), which simplifies access management for data stored in S3.You can turn on Bucket owner enforced settings to apply ownership of all objects within a newly created bucket to the bucket owner account.ResolutionChanging object ownership of objects uploaded by other AWS accountsNote: Before you use S3 Object Ownership to change object ownership for a bucket, make sure that you have access to the s3:PutBucketOwnershipControls action. For more information about S3 permissions, see Actions, resources, and condition keys for Amazon S3.Changing object ownership to bucket owner account for new and existing objects uploaded by other accounts in Amazon S3 buckets (disable ACLs)If you're trying to change object ownership for objects in an existing Amazon S3 bucket, choose the ACLs disabled option under S3 Object Ownership. This option allows the bucket owner full control over all the objects in the S3 bucket and transfers the ownership to the bucket owner's account.When using this option, ACLs no longer affect the permission to access data in your S3 bucket. This option changes the ownership of all objects in the bucket, including the objects that exist currently and any objects that you add after setting the ACLs disabled option. To define access control, use a bucket policy.Note: If your existing ACLs grant access to an external AWS account or any other group, then the Bucket owner enforced setting won't work. To apply the Bucket owner enforced setting, your bucket ACL must give full control only to the bucket owner. Before turning on the Bucket owner enforced setting, see Prerequisites for disabling ACLs.Changing object ownership to bucket owner account for new objects uploaded by other accounts in Amazon S3 buckets (enable ACLs)Under S3 Object Ownership settings, from the list of ACLs enabled options, choose the Bucket owner preferred option. With this setting, new objects that are written by other accounts with the bucket-owner-full-control canned ACL are automatically owned by the bucket owner rather than the object writer. However, the Bucket owner preferred setting doesn't affect the ownership of existing objects. Also, ACLs can be updated and used to grant permissions. For more information about the Bucket owner preferred setting and ACLs, see Enforcing ownership of Amazon S3 objects in a multi-account environment.Changing object ownership to the AWS account that uploaded it (enable ACLs)To transfer object ownership to the AWS account that uploaded the object, turn on the Object writer option from the list of ACLs that are turned on under S3 Object Ownership. This option makes sure that the AWS account that uploaded the object owns the object. The object owner then has full control over the object, and can grant other users access to the object using ACLs.Related informationControlling ownership of objects and disabling ACLs for your bucketHow can I copy all objects from one Amazon S3 bucket to another bucket?How can I add bucket-owner-full-control ACL to my objects in Amazon S3?Follow"
https://repost.aws/knowledge-center/s3-change-object-ownership
How can I identify the usage associated with an API key for API Gateway?
How can I get the usage associated with an API key for Amazon API Gateway?
"How can I get the usage associated with an API key for Amazon API Gateway?ResolutionFollow these instructions to get the API key usage using either the AWS Management Console or the AWS Command Line Interface (AWS CLI).Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Use the AWS Management ConsoleOpen the API Gateway console.In the navigation pane, choose APIs.Choose your API, and then choose Usage Plans.Choose your Usage Plan.Choose Actions, and then choose Export Usage Data.Choose the export From and To date range.For Export as, choose JSON or CSV, and then choose Export.For more information, see Create, configure, and test usage plans with the API Gateway console.Use the AWS CLIYou can use the AWS CLI command get-usage to get the usage data of a usage plan in a date range similar to the following:aws apigateway get-usage --usage-plan-id <usage-plan-id> --start-date "20xx-xx-xx" --end-date "20xx-xx-xx" --key-id <api-key-id>Note: The usage date range can't exceed 90 days.For more information, see Create, configure, and test usage plans using the API Gateway CLI and REST API.Related informationBest practices for API keys and usage plansFollow"
https://repost.aws/knowledge-center/api-gateway-usage-key
How do I resolve "no space left on device" stage failures in Spark on Amazon EMR?
"I submitted an Apache Spark application to an Amazon EMR cluster. The application fails with a "no space left on device" stage failure similar to the following:Job aborted due to stage failure: Task 31 in stage 8.0 failed 4 times, most recent failure: Lost task 31.3 in stage 8.0 (TID 2036, ip-xxx-xxx-xx-xxx.compute.internal, executor 139): org.apache.spark.memory.SparkOutOfMemoryError: error while calling spill() on org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@1a698b89 : No space left on device"
"I submitted an Apache Spark application to an Amazon EMR cluster. The application fails with a "no space left on device" stage failure similar to the following:Job aborted due to stage failure: Task 31 in stage 8.0 failed 4 times, most recent failure: Lost task 31.3 in stage 8.0 (TID 2036, ip-xxx-xxx-xx-xxx.compute.internal, executor 139): org.apache.spark.memory.SparkOutOfMemoryError: error while calling spill() on org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@1a698b89 : No space left on deviceShort descriptionSpark uses local disks on the core and task nodes to store intermediate data. If the disks run out of space, then the job fails with a "no space left on device" error. Use one of the following methods to resolve this error:Add more Amazon Elastic Block Store (Amazon EBS) capacity.Add more Spark partitions.Use a bootstrap action to dynamically scale up storage on the core and task nodes. For more information and an example bootstrap action script, see Dynamically scale up storage on Amazon EMR clusters.ResolutionAdd more EBS capacityFor new clusters: use larger EBS volumesLaunch an Amazon EMR cluster and choose an Amazon Elastic Compute Cloud (Amazon EC2) instance type with larger EBS volumes. For more information about the amount of storage and number of volumes allocated for each instance type, see Default Amazon EBS storage for instances.For running clusters: add more EBS volumes1.    If larger EBS volumes don't resolve the problem, attach more EBS volumes to the core and task nodes.2.    Format and mount the attached volumes. Be sure to use the correct disk number (for example, /mnt1 or /mnt2 instead of /data).3.    Connect to the node using SSH.4.    Create a /mnt2/yarn directory, and then set ownership of the directory to the YARN user:sudo mkdir /mnt2/yarnchown yarn:yarn /mnt2/yarn5.    Add the /mnt2/yarn directory inside the yarn.nodemanager.local-dirs property of /etc/hadoop/conf/yarn-site.xml. Example:<property> <name>yarn.nodemanager.local-dirs</name> <value>/mnt/yarn,/mnt1/yarn,/mnt2/yarn</value></property>6.    Restart the NodeManager service:sudo stop hadoop-yarn-nodemanagersudo start hadoop-yarn-nodemanagerAdd more Spark partitionsDepending on how many core and task nodes are in the cluster, consider increasing the number of Spark partitions. Use the following Scala code to add more Spark partitions:val numPartitions = 500val newDF = df.repartition(numPartitions)Related informationHow can I troubleshoot stage failures in Spark jobs on Amazon EMR?Follow"
https://repost.aws/knowledge-center/no-space-left-on-device-emr-spark
How do I troubleshoot issues with the API server endpoint of my Amazon EKS cluster?
"I can't run kubectl commands. Also, I changed the endpoint access setting from public to private on my Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Now, my cluster is stuck in the Failed state."
"I can't run kubectl commands. Also, I changed the endpoint access setting from public to private on my Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Now, my cluster is stuck in the Failed state.Short descriptionIf you have issues with your Kubernetes API server endpoint, complete the steps in one of the following sections:You can't run kubectl commands on the new or existing clusterYou can't run kubectl commands on the cluster after you change the endpoint access from public to privateYour cluster is stuck in the Failed state and you can't change the endpoint access setting from public to privateNote: To set up access to the Kubernetes API server endpoint, see Modifying cluster endpoint access.ResolutionYou can't run kubectl commands on the new or existing cluster1.    Confirm that you're using the correct kubeconfig files to connect with your cluster. For more information, see Organizing cluster access using kubeconfig files (from the Kubernetes website).2.    Check each cluster for multiple contexts in your kubeconfig files.Example output:kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'Cluster name Servernew200.us-east-2.eksctl.io https://D8DC9092A7985668FF67C3D1C789A9F5.gr7.us-east-2.eks.amazonaws.comIf the existing kubeconfig files don't have the correct cluster details, then use the following command to create one with the correct details:aws eks update-kubeconfig --name cluster name --region regionNote: Replace cluster name with your cluster's name and region with your AWS Region.3.    Use the telnet on port 443 to validate the API server endpoint connectivity from your device.Example output:echo exit | telnet D8DC9092A7985668FF67C3D1C789A9F5.gr7.us-east-2.eks.amazonaws.com 443Trying 18.224.160.210...Connected to D8DC9092A7985668FF67C3D1C789A9F5.gr7.us-east-2.eks.amazonaws.com.Escape character is '^]'.Connection closed by foreign host.If the telnet isn't working, then use the following steps to troubleshoot:Check the DNS resolverIf the API server isn't resolving, then there's an issue with the DNS resolver.Run the following command from the same device where the kubectl commands failed:nslookup APISERVER endpointNote: Replace APISERVER endpoint with your APISERVER endpoint.Check if you restricted public access to the API server endpointIf you specified CIDR blocks to limit access to the public API server endpoint, then it's a best practice to also activate private endpoint access.4.    Check the API server endpoint access behavior. See Modifying cluster endpoint access.You can't run kubectl commands on the cluster after you change the endpoint access from public to private1.    Confirm that you're using a bastion host or connected networks, such as peered VPCs, AWS Direct Connect, or VPNs, to access the Amazon EKS API endpoint.Note: In private access mode, you can access the Amazon EKS API endpoint only from within the cluster's VPC.2.    Check whether security groups or network access control lists are blocking the API calls.If you access your cluster across a peered VPC, then confirm that the control plane security groups allow access from the peered VPC to the control plan security groups at port 443. Also, verify that the peered VPCs have port 53 open to each other. 
Port 53 is used for DNS resolution.Your cluster is stuck in the Failed state and you can't change the endpoint access setting from public to privateYour cluster might be in the Failed state because of a permissions issue with AWS Identity and Access Management (IAM).1.    Confirm that the IAM role for the user is authorized to perform the AssociateVPCWithHostedZone action.Note: If the action isn't blocked, then check whether the user's account has AWS Organizations policies that are blocking the API calls and causing the cluster to fail.2.    Confirm that the IAM user's permission isn't implicitly or explicitly blocked at any level above the account.Note: IAM user permission is implicitly blocked if it's not included in the Allow policy statement. It's explicitly blocked if it's included in the Deny policy statement. Permission is blocked even if the account administrator attaches the AdministratorAccess IAM policy with */* permissions to the user. Permissions from AWS Organizations policies override the permissions for IAM entities.Related informationAmazon EKS security group requirements and considerationsDNS resolution for EKS clusters using private endpointsAmazon EKS enables network access restrictions to Kubernetes cluster public endpointsAmazon EKS cluster endpoint access controlFollow"
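You can review and change the endpoint access configuration with the AWS CLI. The following is a minimal sketch; the cluster name and CIDR are placeholders.

# Review the current endpoint access settings and allowed public CIDRs
aws eks describe-cluster --name my-cluster \
    --query 'cluster.resourcesVpcConfig.{publicAccess:endpointPublicAccess,privateAccess:endpointPrivateAccess,publicCidrs:publicAccessCidrs}'

# Turn on private access (and optionally restrict public access to specific CIDRs)
aws eks update-cluster-config --name my-cluster \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true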
https://repost.aws/knowledge-center/eks-api-server-endpoint-failed
Why does my VPC CNI plugin fail to reach the API Server in Amazon EKS?
My Amazon VPC Container Network Interface (CNI) plugin fails to reach the API Server in Amazon Elastic Kubernetes Service (Amazon EKS).
"My Amazon VPC Container Network Interface (CNI) plugin fails to reach the API Server in Amazon Kubernetes Service (Amazon EKS).Short descriptionIf the ipamD daemon tries to connect to the API Server before the kube-proxy added the Kubernetes Service port, then the connection between the ipamD and the API Server times out. To troubleshoot this situation, check the ipamD and the kube-proxy logs, and then compare the timestamp of each.You can also add an init container. The init container waits for the kube-proxy to create the Kubernetes Service port. The aws-node pods then finish the initialization to avoid a timeout.ResolutionCheck the ipamD and kube-proxy logsipamD logs:If the connection between the ipamD and the API Server times out, then you receive the following error:"Failed to create client: error communicating with apiserver:kube-proxy logs:The kube-proxy creates iptables routes for Kubernetes API Server endpoints on the worker node. After the kube-proxy creates the route, you see the following message:Adding new service port \"default/kubernetes:https\"Compare the timestamps between the logsipamD logs:{"level":"error","ts":"2021-09-22T10:40:49.735Z","caller":"aws-k8s-agent/main.go:28","msg":"Failed to create client: error communicating with apiserver: Get https://10.77.0.1:443/version?timeout=32s: dial tcp 10.77.0.1:443: i/o timeout"}kube-proxy logs:{"log":"I0922 10:41:15.267648 1 service.go:379] Adding new service port \"default/kubernetes:https\" at 10.77.0.1:443/TCP\n","stream":"stderr","time":"2021-09-22T10:40:49.26766844Z"}In the ipamD logs, you can see that the ipamD daemon tried to connect to the API Server at 2021-09-22T10:40:49.735Z. The connection timed out and failed. In the kube-proxy logs, you can see that the kube-proxy added the Kubernetes Service port at 2021-09-22T10:41:15.26766844Z.Add an init containerTo add an init container, complete the following steps:1.    Modify the aws-node specification so that the DNS is resolved for the Kubernetes Service name:$ kubectl -n kube-system edit daemonset/aws-nodeOutput:   initContainers: - name: init-kubernetes-api image: busybox:1.28 command: ['sh', '-c', "until nc -zv ${KUBERNETES_PORT_443_TCP_ADDR} 443; do echo waiting for kubernetes Service endpoint; sleep 2; done"]2.    Verify that the aws-node pods created the init containers:$ kubectl get pods -n kube-system -wOutput:    ... kube-proxy-smvfl 0/1 Pending 0 0s aws-node-v68bh 0/1 Pending 0 0s kube-proxy-smvfl 0/1 Pending 0 0s aws-node-v68bh 0/1 Pending 0 0s aws-node-v68bh 0/1 Init:0/1 0 0s kube-proxy-smvfl 0/1 ContainerCreating 0 0s kube-proxy-smvfl 1/1 Running 0 6s aws-node-v68bh 0/1 PodInitializing 0 9s aws-node-v68bh 0/1 Running 0 16s aws-node-v68bh 1/1 Running 0 53sFollow"
https://repost.aws/knowledge-center/eks-vpc-cni-plugin-api-server-failure
Why can't I register my EC2 instance running SUSE to the SUSE update infrastructure so that I can install or update packages?
I want to install or update packages on my Amazon Elastic Compute Cloud (Amazon EC2) SUSE instance. I'm unable to register my EC2 SUSE instance to the SUSE update infrastructure.
"I want to install or update packages on my Amazon Elastic Compute Cloud (Amazon EC2) SUSE instance. I'm unable to register my EC2 SUSE instance to the SUSE update infrastructure.Short descriptionTo troubleshoot SUSE registration failure, use the AWSSupport-CheckSUSERegisration automation document. This automation document does the following:Verifies security group configurations.Verifies network access control list (network ACL) configurations.Verifies route table configurations.Verifies that the cloud-regionsrv-client package is up to date.Verifies that the base product symbolic link is correct.Verifies that there aren't multiple entries for smt-ec2.susecloud.net in the /etc/hosts file.Verifies that your EC2 instance can access the Instance Metadata Service (IMDS).Verifies that your EC2 instance has a billing code or AWS Marketplace product codes.Determines if your EC2 instance is behind an SSL proxy.Determines if the regional server's IPs, SMT server's IPs, and smt-ec2.susecloud.net are allowed from the SSL interception if there is any proxy.Determines if the proxy can resolve smt-ec2.susecloud.net to an SMT server IP address.Determines if SMT servers are accessible over HTTP.Determines if SMT servers are accessible over HTTPS.Determines if the smt-ec2.susecloud.net URL is accessible over HTTPS.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Before you beginKeep in mind that the instance you want to troubleshoot using the automation document must be a managed instance in the AWS Systems Manager console.Copy the instance ID of the EC2 instance you want to troubleshoot. You need the instance ID to run the automation document.(Optional) Create and specify an AWS Identity and Access Management (IAM) role for automation. If you don't specify this role, AWS Systems Manager Automation uses the permissions of the user who runs this document. For more information about creating roles for automation, see Use IAM to configure roles for Automation.Run the AWSSupport-TroubleshootSUSERegistration automation from the Systems Manager consoleOpen the document in the AWS Systems Manager console. Be sure to open the document in the Region where your resources are located.In the navigation pane, choose Automation.Choose Execute automation.Enter AWSSupport-TroubleshootSUSERegistration in the search field, and then press Enter.Select AWSSupport-TroubleshootSUSERegistration in the search results.In the documents list, choose AWSSupport-TroubleshootSUSERegistration. The document owner is Amazon.In the Description section, verify that Document version is set to Default version at runtime.Select Execute Automation.In the Execute automation document section, choose Simple execution.In the Input parameters section, specify the following parameters:For InstanceID, specify or select the ID of the instance you want to troubleshoot.(Optional) For AutomationAssumeRole, specify the IAM role for this run. If a role isn't specified, AWS Systems Manager Automation uses the permissions of the user who runs this document.Choose Execute.To monitor the run progress, choose the running Automation, and then choose the Steps tab. When the run finishes, choose the Descriptions tab, and then choose View output to view the results. 
To view the output of individual steps, choose the Steps tab, and then choose View Outputs beside a step.Run the AWSSupport-TroubleshootSUSERegistration automation from the AWS Command Line Interface (AWS CLI)In the following command, replace i-xxxxxxxxxxxxxxxx with the EC2 instance that you want to troubleshoot. Replace us-east-1 with your instance's Region.aws ssm start-automation-execution --document-name "AWSSupport-TroubleshootSUSERegistration" --document-version "\$DEFAULT" --parameters '{"InstanceId":["i-xxxxxxxxxxxxxxxx"],"AutomationAssumeRole":[""]}' --region us-east-1Follow"
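After the automation starts, you can also check its status and output from the AWS CLI. The following is a minimal sketch; the instance ID and Region are placeholders.

# Start the automation and capture the execution ID
EXECUTION_ID=$(aws ssm start-automation-execution \
    --document-name "AWSSupport-TroubleshootSUSERegistration" \
    --parameters '{"InstanceId":["i-xxxxxxxxxxxxxxxx"]}' \
    --region us-east-1 \
    --query 'AutomationExecutionId' --output text)

# Check the overall status and outputs when the run finishes
aws ssm get-automation-execution \
    --automation-execution-id "$EXECUTION_ID" \
    --region us-east-1 \
    --query 'AutomationExecution.{Status:AutomationExecutionStatus,Outputs:Outputs}'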
https://repost.aws/knowledge-center/suse-update-infrastructure-registration
How can I use an SSH tunnel to access OpenSearch Dashboards from outside of a VPC with Amazon Cognito authentication?
My Amazon OpenSearch Service cluster is in a virtual private cloud (VPC). I want to use an SSH tunnel to access OpenSearch Dashboards from outside the VPC with Amazon Cognito authentication.
"My Amazon OpenSearch Service cluster is in a virtual private cloud (VPC). I want to use an SSH tunnel to access OpenSearch Dashboards from outside the VPC with Amazon Cognito authentication.Short descriptionBy default, Amazon Cognito restricts OpenSearch Dashboards access to AWS Identity and Access Management (IAM) users in the VPC. You access an Amazon OpenSearch Service domain from another VPC by setting up an OpenSearch Service-managed VPC endpoint (powered by AWS PrivateLink). You can also access OpenSearch Dashboards from outside the VPC using an SSH tunnel.Important: Be sure that accessing OpenSearch Dashboards from outside the VPC is compliant with your organization's security requirements.Access Dashboards from outside the VPC using an SSH tunnel:1.    Create an Amazon Cognito user pool and identity pool.2.    Create an Amazon Elastic Compute Cloud (Amazon EC2) instance in a public subnet. This subnet must be in the same VPC as your OpenSearch Service domain.3.    Use a browser add-on such as FoxyProxy to configure a SOCKS proxy.4.    Create an SSH tunnel from your local machine to the EC2 instance.Note: You can also use an NGINX proxy or Client VPN to access Dashboards from outside of a VPC with Amazon Cognito authentication. For more information, see How can I access OpenSearch Dashboards from outside of a VPC using Amazon Cognito authentication?5.    (Optional) If fine-grained access control (FGAC) is turned on, add an Amazon Cognito authenticated role.ResolutionCreate an Amazon Cognito user pool and identity pool1.    Create an Amazon Cognito user pool.2.    Configure a hosted user pool domain.3.    In the Amazon Cognito console navigation pane, choose Users and groups.4.    Choose Create user, and then complete the fields. Be sure to enter an email address. Then, select the Mark email as verified check box.5.    Choose the Groups tab, and then choose Create group. For Precedence, enter 0. For more information, see Creating a new group in the AWS Management Console.6.    Open the Amazon Cognito console again.7.    Choose Manage Identity Pools, and then choose Create new identity pool.8.    Enter a name for your identity pool, and select the check box to Enable access to unauthenticated identities. Then choose Create Pool.9.    When you are prompted for access to your AWS resources, choose Allow. This creates the two default roles associated with your identity pool—one for unauthenticated users and one for authenticated users.10.    Configure your OpenSearch Service domain to use Amazon Cognito authentication for OpenSearch Dashboards:For Cognito User Pool, choose the user pool that you created in step 1.For Cognito Identity Pool, choose the identity pool that you created in step 8.11.    Configure your OpenSearch Service domain to use an access policy similar to the following. 
Replace these values:account-id with your AWS account IDidentity-name with the name of your Amazon Cognito identity pooldomain-name with the name of your domainRegion with the Region where your domain resides, such as us-east-1{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:sts::account-id:assumed-role/Cognito_identity-nameAuth_Role/CognitoIdentityCredentials" }, "Action": "es:*", "Resource": "arn:aws:es:Region:account-id:domain/domain-name/*" } ]}For example, the following access policy uses these values:AWS account ID: 111122223333Amazon Cognito identity pool name: MyIdentityPooldomain name: MyDomainRegion: us-east-1{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:sts::111122223333:assumed-role/Cognito_MyIdentityPoolAuth_Role/CognitoIdentityCredentials" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:111122223333:domain/MyDomain/*" } ]}Create an EC2 instance and configure security group rules1.    Launch an EC2 instance in a public subnet of the VPC that your OpenSearch Service domain is in. On the Configure Instance Details page, be sure that Auto-assign Public IP is set to Enable.Note: In the following steps, the EC2 instance is referred to as tunnel_ec2.2.    Add inbound rules to the security group associated with the tunnel_ec2 instance. These rules must allow traffic to ports 8157 and 22 from the IP address of the machine that you access the OpenSearch Service dashboard from.3.    Add an inbound rule to the security group associated with the OpenSearch Service domain. This rule must allow traffic from the private IP address of the tunnel_ec2 instance.Configure the SOCKS proxy1.    Add FoxyProxy Standard to Google Chrome.2.    Open FoxyProxy, and then choose Options.3.    In the Proxy mode drop-down list, choose Use proxies based on their pre-defined patterns and priorities.4.    Choose Add New Proxy.5.    Select the General tab and enter a Proxy Name such as "Dashboards Proxy."6.    On the Proxy Details tab, be sure that Manual Proxy Configuration is selected and then complete the following fields:For Host or IP Address, enter localhost.For Port, enter 8157.Select SOCKS proxySelect SOCKS v5.7.    Choose the URL Patterns tab.8.    Choose Add new pattern and then complete the following fields:For Pattern Name, enter a name such as "VPC Endpoint."For URL pattern, enter the VPC endpoint for Dashboards. Be sure that accessing the URL is allowed. Be sure that Wildcards is selected.9.     Choose Save.Create the SSH tunnel1.    Run this command from the local machine that you use to access the Dashboards dashboard. Replace these items:mykeypair.pem: the name of the .pem file for the key pair that you specified when you launched the tunnel_ec2 EC2 instance.public_dns_name: the public DNS of your tunnel_ec2 EC2 instance. For more information, see View DNS hostnames for your EC2 instance.ssh -i "mykeypair.pem" ec2-user@public_dns_name -ND 81572.    Enter the Dashboards endpoint in your browser. The Amazon Cognito login page for Dashboards appears.(Optional) If FGAC is turned on, add an Amazon Cognito authenticated roleIf fine-grained access control (FGAC) is turned on for your OpenSearch Service cluster, you might encounter a "missing role" error. To resolve the "missing role" error, perform the following steps:1.    Sign in the OpenSearch Service console.2.    From the navigation pane, under Managed clusters, choose Domains.3.    Choose Actions.4.    Choose Modify master user.5.    
Choose Set IAM ARN as your master user.6.    In the IAM ARN field, add the Amazon Cognito authenticated ARN role.7.    Choose Submit.For more information about fine-grained access control, see Tutorial: IAM master user and Amazon Cognito.Related informationConfiguring Amazon Cognito authentication for OpenSearch DashboardsLaunching your Amazon OpenSearch Service domains with a VPCFollow"
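If you prefer to add the security group rules from the AWS CLI, the following is a minimal sketch; the security group ID and your local machine's public IP are placeholders, and the ports match the inbound rules described above (22 and 8157).

# Allow SSH from your local machine's public IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32

# Allow the SOCKS proxy port from the same IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 8157 --cidr 203.0.113.10/32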
https://repost.aws/knowledge-center/opensearch-outside-vpc-ssh
Why am I receiving errors when running AWS CLI commands?
I'm receiving errors when running AWS Command Line Interface (AWS CLI) commands on my resource. How do I troubleshoot this?
"I'm receiving errors when running AWS Command Line Interface (AWS CLI) commands on my resource. How do I troubleshoot this?ResolutionVerify that you have the latest version of the AWS CLI installed. The AWS CLI is updated frequently. You can't access new released features when running an older CLI version. For information on how to update your version of the AWS CLI, see the General: Ensure you're running a recent version of the AWS CLI section in Troubleshooting AWS CLI errors.Make sure that the AWS Identity and Access Management (IAM) role or IAM user has the correct permissions to run the relevant commands. For instructions on how to do this, see Why am I receiving the error message "You are not authorized to perform this operation" when I try to launch an EC2 instance?Make sure that the time on your host machine is correct.For Linux, see Setting the time for your Linux instance.For Windows, see Setting the time for a Windows instance.Make sure that you're using the correct AWS Security Token Service (AWS STS) token format. For more information, see Why did I receive the IAM error, "AWS was not able to validate the provided access credentials" in some AWS Regions?Make sure that you're using the correct credentials to make the API call. If there are multiple sets of credentials on the instance, credential precedence might affect which credentials the instance uses to make the API call. Verify the set of credentials that you're using by running the aws sts get-caller-identity command. For more information, see Why is my Amazon EC2 instance using IAM user credentials instead of role credentials?Make sure that the AWS CLI program file has run permission on Linux or macOS. For more information, see the I get access denied errors section in Troubleshooting AWS CLI errors.Related informationTroubleshooting AWS CLI errorsWhy can't I run AWS CLI commands on my EC2 instance?Follow"
https://repost.aws/knowledge-center/troubleshoot-cli-errors
How can I resolve issues with switching IAM roles using the AWS Management Console?
I tried to switch AWS Identity and Access Management (IAM) roles using the AWS Management Console and received an error similar to the following:"Invalid information in one or more fields. Check your information or contact your administrator".
"I tried to switch AWS Identity and Access Management (IAM) roles using the AWS Management Console and received and error similar to the following:"Invalid information in one or more fields. Check your information or contact your administrator".Short descriptionThis error can occur because of:Incorrect AssumeRole action permissionsIncorrect IAM trust policyExplicit deny from policiesIncorrect Account ID or role nameRequiring external ID to switch rolesIncorrect trust policy conditionsResolutionFollow these instructions to verify the IAM policy configuration to switch IAM roles for your scenario.Missing or incorrect AssumeRole action permissionsTo switch to an IAM role, the IAM entity must have AssumeRole action permission. The IAM entity must have a policy with AssumeRole action permission similar to the following:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::account_id_number:role/role-name-you-want-to-assume" }Make sure that the resource matches the Amazon Resource Name (ARN) of the IAM role that you want to switch to. For more information, see Granting a user permissions to switch roles.IAM role trust policy doesn't trust the IAM user’s account IDThe IAM role trust policy defines the principals that can assume the role Verify that the trust policy lists the IAM user’s account ID as the trusted principal entity. For example, an IAM user named Bob with account ID 111222333444 wants to switch to an IAM role named Alice for account ID 444555666777. The account ID 111222333444 is the trusted account, and account ID 444555666777 is the trusting account. The IAM role Alice has a trust policy that trusts Bob similar to the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Principal": { "AWS": “<111222333444> }, "Condition": {} } ]}Note: It's a best practice to follow the principle of least privilege and specify the complete ARN for only the roles that the user needs.For more information, see Modifying a role trust policy (console).Explicit deny from service control policies (SCPs) or an IAM policyIf your AWS account is a part of an AWS Organizations, then your management account might have SCPs. Make sure that there is no explicit deny from the SCPs for the AssumeRole action. Check for SCPs that deny API actions based on AWS Regions. AWS Security Token Service (AWS STS) is a global service that must be included in the global service exclusion list. Make sure that there isn't any explicit deny from the IAM policies, because "deny" statements take precedence over "allow" statements.For more information, see Deny access to AWS based on the requested AWS Region.Verify the AWS account ID and IAM role nameVerify that the account ID and IAM role name are correct on the switch role page. The account ID is a 12-digit identifier, and the IAM role name is the name of the role that you want to assume.For more information, see Things to know about switching roles in the console.Requiring external ID to switch to the IAM roleAdministrators can use an external ID to give third-party access to AWS resources. You can't switch IAM roles in the AWS Management Console to a role that requires an ExternalId condition key value. 
You can switch to these IAM roles only by calling the AssumeRole API, which supports the ExternalId key.For more information, see How to use an external ID when granting access to your AWS resources to a third party.Valid conditions on the IAM role trust policyVerify that you meet all the conditions that are specified in the IAM role's trust policy. A condition can specify an expiration date, an external ID, or that requests must come only from specific IP addresses. In the following example policy, if the current date is any time after the specified date, then the condition is false. The policy can't grant permissions to assume the IAM role."Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "arn:aws:iam::account_id_number:role/role-name-you-want-to-assume", "Condition": { "DateLessThan" : { "aws:CurrentTime" : "2016-05-01T12:00:00Z" } }Related informationHow do I provide IAM users with a link to assume an IAM role?How do I set up an IAM user and sign in to the AWS Management Console using IAM credentials?How do I allow users from another account to access resources in my account through IAM?What's the difference between an AWS Organizations service control policy and an IAM policy?Follow"
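As a quick diagnostic, you can also reproduce the role switch outside the console and read the exact API error. The following Python sketch is not part of the original steps; the role ARN and session name are placeholder values. It calls the AssumeRole API with boto3, which surfaces the same permission, trust policy, and condition failures described above.
import boto3
from botocore.exceptions import ClientError

# Placeholder ARN: replace with the role that you want to switch to.
ROLE_ARN = "arn:aws:iam::444555666777:role/role-name-you-want-to-assume"

sts = boto3.client("sts")
try:
    response = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="switch-role-test")
    # Success means the permissions, trust policy, and conditions all allow the switch.
    print("AssumeRole succeeded; temporary credentials expire at:", response["Credentials"]["Expiration"])
except ClientError as error:
    # AccessDenied here usually maps to a missing sts:AssumeRole permission,
    # a trust policy that doesn't trust your account, an SCP deny, or an unmet condition.
    print("AssumeRole failed:", error.response["Error"]["Code"], "-", error.response["Error"]["Message"])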
https://repost.aws/knowledge-center/iam-switch-role
How do I troubleshoot SMTP connectivity or timeout issues with Amazon SES?
My Amazon Simple Email Service (Amazon SES) Simple Mail Transfer Protocol (SMTP) connection is timing out. How do I resolve SMTP connectivity or timeout errors with Amazon SES?
"My Amazon Simple Email Service (Amazon SES) Simple Mail Transfer Protocol (SMTP) is timing out. How do I resolve SMTP connectivity or timeout errors with Amazon SES?Short descriptionTimeout connections typically indicate that your client can't establish a TCP connection to the public Amazon SES endpoint. To resolve SMTP connectivity or timeout errors with Amazon SES, first troubleshoot the application's TCP connection. If the TCP connection is successful, then troubleshoot the SSL/TLS negotiations.Important: By default, Amazon Elastic Compute Cloud (Amazon EC2) restricts Amazon Virtual Private Cloud (Amazon VPC) egress traffic on port 25 for all Amazon EC2 instances. For applications that require traffic on SMTP port 25, you can request to remove this restriction.ResolutionTroubleshoot the application's TCP connection1.    Run the following telnet, netcat (nc), or Test-NetConnection commands. Replace email-smtp.us-east-1.amazonaws.com with the Amazon SES SMTP endpoint that you're using:telnet email-smtp.us-east-1.amazonaws.com 587telnet email-smtp.us-east-1.amazonaws.com 25telnet email-smtp.us-east-1.amazonaws.com 465nc -vz email-smtp.us-east-1.amazonaws.com 587nc -vz email-smtp.us-east-1.amazonaws.com 25nc -vz email-smtp.us-east-1.amazonaws.com 465-or-In PowerShell, run the following command to connect to the Amazon SES SMTP server:Test-NetConnection -Port 587 -ComputerName email-smtp.us-west-2.amazonaws.com2.    Note the output. If the connection is successful, then proceed to the Troubleshoot SSL/TLS negotiations section. If the connection is unsuccessful, then proceed to step 3.Successful connectionThe telnet command returns an output similar to the following:Trying 35.170.126.22...Connected to email-smtp.us-east-1.amazonaws.com.Escape character is '^]'.220 email-smtp.amazonaws.com ESMTP SimpleEmailService-d-A12BCD3EF example0mJncW410pSauThe PowerShell command returns an output similar to the following:ComputerName : email-smtp.us-west-2.amazonaws.comRemoteAddress : 198.51.100.126RemotePort : 587InterfaceAlias : EthernetSourceAddress : 203.0.113.46TcpTestSucceeded : TrueUnsuccessful connection (timeout)The telnet command returns an output similar to the following:Trying 18.232.32.150...telnet: connect to address 18.232.32.150: Connection timed outThe PowerShell command returns an output similar to the following:WARNING: Ping to 52.39.11.136 failed with status: TimedOutComputerName : email-smtp.us-west-2.amazonaws.comRemoteAddress : 35.155.47.104RemotePort : 587InterfaceAlias : Ethernet 2SourceAddress : 10.0.0.140PingSucceeded : FalsePingReplyDetails (RTT) : 0 msTcpTestSucceeded : False3.    For unsuccessful connections, confirm that your local firewall rules, routes, and access control lists (ACLs) allow traffic on the SMTP port that you're using. Also, confirm that your sending application has access to the internet.For example, if you're using an EC2 instance to send emails and connect to the SMTP endpoint, then verify the following:The security group outbound (egress) rules must allow traffic to the SMTP server on TCP port 25, 587, or 465.The network ACL outbound (egress) rules must allow traffic to the SMTP server on TCP port 25, 587, or 465.The network ACL inbound (ingress) rules must allow traffic from the SMTP server on TCP ports 1024-65535.The EC2 instance must have internet connectivity.Troubleshoot SSL/TLS negotiationsIf your TCP connection is successful but you're still having connectivity or timeout issues, check if there are problems with SSL/TLS.1.    
From an EC2 Linux instance, run the openssl command. For Amazon EC2 Windows instances, see Test your connection to the Amazon SES SMTP interface using the command line, and choose the PowerShell tab.openssl s_client -crlf -connect email-smtp.us-east-1.amazonaws.com:465 openssl s_client -crlf -starttls smtp -connect email-smtp.us-east-1.amazonaws.com:587Note: Replace email-smtp.us-east-1.amazonaws.com with the Amazon SES SMTP endpoint that you're using. Modifying the location of the default certificate authority (CA) might cause problems when you run the preceding commands. When you install OpenSSL, make sure that you identify the location of the default CA bundle file.2.    Note the output. The expected responses are SMTP 220 and SMTP 250.3.    If you don't get the expected output, check the following:Verify that the SSL/TLS certificate store is configured correctly.Confirm that your sending application has the correct path to the certificate.Verify that the Amazon SES certificate is installed on your server.Note: You can test whether the correct certificates are installed. For instructions, go to About the Amazon Trust Services Migration, and review the About the certificates section.Related informationUsing the Amazon SES SMTP interface to send emailFollow"
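If Python is available on the client, the following sketch is another way to run the same TCP and STARTTLS checks using only the standard library smtplib module. It assumes port 587 and the us-east-1 endpoint from the examples above; replace both with the values that you're using.
import smtplib
import socket

HOST = "email-smtp.us-east-1.amazonaws.com"  # replace with your Amazon SES SMTP endpoint
PORT = 587

try:
    with smtplib.SMTP(HOST, PORT, timeout=10) as server:
        print(server.ehlo())      # expect a 250 response after the 220 banner
        print(server.starttls())  # expect (220, b'Ready to start TLS')
        print(server.ehlo())      # re-identify over the encrypted connection
except (socket.timeout, OSError) as error:
    # A timeout or connection error here points back to the TCP, firewall,
    # security group, and network ACL checks described above.
    print("Connection failed:", error)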
https://repost.aws/knowledge-center/smtp-connectivity-timeout-issues-ses
How do I create an EMR cluster with EBS volume encryption?
"I want to turn on Amazon Elastic Block Store (Amazon EBS) encryption in Amazon EMR. Or, I want to use an AWS Key Management Service (AWS KMS) key to encrypt an EBS volume that's attached to my EMR cluster."
"I want to turn on Amazon Elastic Block Store (Amazon EBS) encryption in Amazon EMR. Or, I want to use an AWS Key Management Service (AWS KMS) key to encrypt an EBS volume that's attached to my EMR cluster.Short descriptionAmazon EBS encryption integrates with AWS KMS to provide the encryption keys that protect your data. Beginning with Amazon EMR version 5.24.0, you can choose to turn on EBS encryption. The EBS encryption option encrypts the EBS root device volume and attached storage volumes. For considerations and limitations, see Local disk encryption.There are two options to encrypt EBS volumes on your EMR cluster:Turn on encryption by default for EBS volumes at the account level.Create a KMS key and Amazon EMR security configuration to encrypt EBS volumes for a specific EMR cluster.ResolutionTurn on encryption by default for EBS volumes at the account levelFor more information, see Encryption by default.Create a KMS key and Amazon EMR security configuration to encrypt EBS volumes for a specific EMR clusterTo use this option, do the following:Create a KMS key.Create and configure the Amazon EMR security configuration.Provision an EMR cluster with the security configuration.Step 1: Create a KMS keyIf you don’t have a KMS key ready for this purpose, then do the following to create the key:Open the AWS KMS console.To change the AWS Region, use the Region selector in the upper-right corner of the page.In the navigation pane, choose Customer managed keys.Choose Create key.To create a symmetric encryption KMS key, for Key type choose Symmetric.In Key usage, the Encrypt and decrypt option is selected for you.Choose Next.Enter an alias for the key.Choose Next.Choose the Key administrator.Choose Next.Select the Amazon EMR service role. The default role is EMR_DefaultRole.Select the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role. The default role for the instance profile is EMR_EC2_DefaultRole.Choose Next.Choose Finish.If you're using a custom Amazon EMR service role, then add the following policy to the role before provisioning the EMR cluster.{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey", "kms:CreateGrant", "kms:ListGrants" ], "Resource": [ "arn:aws:kms:region:account-id:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" ] }]}Step 2: Create and configure the Amazon EMR security configurationOpen the Amazon EMR console.Choose Security configurations.Choose Create.Under Local disk encryption, choose Enable at-rest encryption for local disks.For Key provider type, choose AWS KMS.For AWS KMS customer master key, choose the key ARN of your KMS key.Select Encrypt EBS volumes with EBS encryption.Choose Create.Step 3: Provision an EMR cluster with the security configurationIf you create your EMR cluster using the EMR console, then in Step 4: Security, choose the security configuration that you just created.When creating EMR clusters through other methods, specify the security configuration using the configuration you just created.Follow"
https://repost.aws/knowledge-center/emr-create-cluster-with-ebs-encryption
How do I automatically confirm users in Amazon Cognito?
I want to confirm users and then verify their email addresses and phone numbers automatically without using one-time-passwords (OTPs).
"I want to confirm users and then verify their email addresses and phone numbers automatically without using one-time-passwords (OTPs).Short descriptionWhen a user sign ups with an Amazon Cognito user pool, they generally must have their email address or phone number verified. This is usually done by sending an OTP to a user's email address or phone number for verification. A user can also be automatically confirmed without OTP verification.These are the high-level steps to automatically confirm a user without using an OTP with the user's email address or phone number:Create an AWS Lambda function.Create an Amazon Cognito user pool with a pre sign-up Lambda trigger.Sign up the user in Amazon Cognito. Verify the user attributes by using the AWS Management Console or an AWS API.ResolutionFollow these steps to automatically confirm a user and their attributes without OTP verification.Create a Lambda function1.    Use Amazon Cognito Events to create a Lambda function that handles the event that creates an Amazon Cognito user. The following Python code confirms the user and their attributes, such as the email address and phone number.Example Python user confirmation code:import jsondef lambda_handler(event, context): # Confirm the user event['response']['autoConfirmUser'] = True # Set the email as verified if it is in the request if 'email' in event['request']['userAttributes']: event['response']['autoVerifyEmail'] = True # Set the phone number as verified if it is in the request if 'phone_number' in event['request']['userAttributes']: event['response']['autoVerifyPhone'] = True # Return to Amazon Cognito return event2.    Set up a testing event in the Lambda function with data that's relevant to the pre sign-up Lambda trigger. The following example includes the test event for the sample Python code from step 1.Example JSON test event:{ "request": { "userAttributes": { "email": "email@example.com", "phone_number": "5550100" } }, "response": {}}Create an Amazon Cognito user pool1.    Create a new Amazon Cognito user pool or select an existing user pool.2.    In the selected user pool, add the pre sign-up Lambda trigger by selecting the Lambda function you created.The pre sign-up Lambda trigger can be used to add custom logic and validate the new user. When a new user signs up with your app, Amazon Cognito passes that event information to the Lambda function. (The example Lambda function is in step 1 of the Create a Lambda function section.) The Lambda function returns the same event object to Amazon Cognito with any changes in the response. The following is the output response for the test event from step 2 of the Create a Lambda function section.Example JSON test event response:{ "request": { "userAttributes": { "email": "email@example.com", "phone_number": "5550100" } }, "response": { "autoConfirmUser": true, "autoVerifyEmail": true, "autoVerifyPhone": true }}Note: If a new user signs up with a preexisting phone number or email address alias, the alias moves to the new user. Then, the previous user's phone number or email address is marked as unverified. To prevent these changes, invoke the ListUsers API to list the attributes for all users from the user pool. Review the existing user attributes and compare them to the new user attributes to make sure that no unexpected changes take place.5.    
Verify that the pre sign-up Lambda trigger is configured in your user pool.Sign up the Amazon Cognito userSign up as a new user by using the Amazon Cognito hosted UI or invoking the SignUp API.Using the Amazon Cognito hosted UI1.    In the Amazon Cognito hosted UI, sign up as a new user. Make sure to provide all required attributes. Then, after you sign up, you go to a callback URL without any verification.2.    Verify your user attributes.Account status: Enabled/CONFIRMEDemail_verified: truephone_number_verified: trueUsing the AWS CLI1.    In the AWS Command Line Interface (AWS CLI), create a user by invoking the SignUp API.Important: In the example AWS CLI commands, replace all instances of example strings with your values. (For example, replace "example_client_id" with your client ID.)Example sign-up command:$ aws cognito-idp sign-up --client-id example_client_id --secret-hash example_secret_hash --username example_user_name --password example_password --user-attributes Name="email",Value="email@example.com" Name="phone_number",Value="5550100"2.    Compute the secret hash using the app client ID, the client secret, and the user name of the user in the Amazon Cognito user pool.3.    Install Python.4.    Save the following example Python script as a .py file.Important: Replace the following values before running the example script. For username, enter the user name of the user in the user pool. For AppClientId, enter your user pool's app client ID. Then, for AppClientSecret, enter your app client secret. For help, run the following command: $ python3 secret_hash.py --help.Example Python script:import base64, hashlib, hmac, argparseparser = argparse.ArgumentParser()parser.add_argument("--username", required=True)parser.add_argument("--appclientid", required=True)parser.add_argument("--appclientsecret", required=True)args = parser.parse_args()message = bytes(args.username + args.appclientid, 'utf-8')key = bytes(args.appclientsecret, 'utf-8')secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()print('SecretHash: {}'.format(secret_hash))5.    Use the following command to obtain the computed secret hash from the Python script.Example command:$ python3 secret_hash.py --username example_user_name --appclientid example_app_client_id --appclientsecret example_app_client_secretAn automatically confirmed user example1.    Generate a secret hash by running a Python script that uses the user name, app client ID, and client secret.$ python3 secret_hash.py --username example_user_name --appclientid 11122223333 --appclientsecret je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEYOutput:SecretHash: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY2.    Create an Amazon Cognito user by invoking the SignUp API.$ aws cognito-idp sign-up --client-id 7morqrabcdEXAMPLE_ID --secret-hash wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --username example_user_name --password Password1@ --user-attributes Name='email',Value='email@example.com' Name='phone_number',Value='5550100'Output:{ "UserConfirmed": true, "UserSub": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"}3.    
To verify the status of user attributes, invoke the AdminGetUser API.$ aws cognito-idp admin-get-user --user-pool-id us-east-1_111122223333 --username example_user_nameOutput:{ "Username": "example_user_name", "UserAttributes": [ { "Name": "sub", "Value": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111" }, { "Name": "email_verified", "Value": "true" }, { "Name": "phone_number_verified", "Value": "true" }, { "Name": "phone_number", "Value": "5550100" }, { "Name": "email", "Value": "email@example.com" } ], "UserCreateDate": "2022-12-12T11:54:12.988000+00:00", "UserLastModifiedDate": "2022-12-12T11:54:12.988000+00:00", "Enabled": true, "UserStatus": "CONFIRMED"}The final output shows that the email address and phone number attributes are verified. The UserStatus is set to Confirmed without any external verification.Follow"
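For completeness, the following boto3 sketch combines the secret hash calculation with the SignUp call from the CLI example above. It assumes an app client that has a client secret; the client ID, client secret, user name, password, and attribute values are placeholders.
import base64
import hashlib
import hmac

import boto3

# Placeholder values: replace with your app client settings and user details.
APP_CLIENT_ID = "example_app_client_id"
APP_CLIENT_SECRET = "example_app_client_secret"
USERNAME = "example_user_name"

def secret_hash(username, client_id, client_secret):
    # Same HMAC-SHA256 calculation as the standalone secret_hash.py script above.
    message = bytes(username + client_id, "utf-8")
    key = bytes(client_secret, "utf-8")
    return base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()

client = boto3.client("cognito-idp")
response = client.sign_up(
    ClientId=APP_CLIENT_ID,
    SecretHash=secret_hash(USERNAME, APP_CLIENT_ID, APP_CLIENT_SECRET),
    Username=USERNAME,
    Password="example_password",
    UserAttributes=[
        {"Name": "email", "Value": "email@example.com"},
        {"Name": "phone_number", "Value": "5550100"},
    ],
)
# With the pre sign-up trigger in place, UserConfirmed is expected to be true.
print(response["UserConfirmed"], response["UserSub"])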
https://repost.aws/knowledge-center/cognito-automatically-confirm-users
How can I schedule an Amazon Athena query?
I want to schedule queries in Amazon Athena.
"I want to schedule queries in Amazon Athena.Short descriptionScheduling queries is useful in many scenarios, such as running periodic reporting queries or loading new partitions on a regular interval. Here are some of the ways that you can schedule queries in Athena:Create an AWS Lambda function, using the SDK of your choice, to schedule the query. For more information about the programming languages that Lambda supports, see AWS Lambda FAQs. Then, create an Amazon EventBridge rule to schedule the Lambda function. This is the method explained in the Resolution.If you're using Athena in an ETL pipeline, use AWS Step Functions to create the pipeline and schedule the query.On a Linux machine, use crontab to schedule the query.Use an AWS Glue Python shell job to run the Athena query using the Athena boto3 API. Then, define a schedule for the AWS Glue job.ResolutionFollow these steps to schedule an Athena query using a Lambda function and an EventBridge rule:1.    Create an AWS Identity and Access Management (IAM) service role for Lambda. Then, attach a policy that allows access to Athena, Amazon Simple Storage Service (Amazon S3), and Amazon CloudWatch Logs. For example, you can add AmazonAthenaFullAccess and CloudWatchLogsFullAccess to the role. AmazonAthenaFullAccess allows full access to Athena and includes basic permissions for Amazon S3. CloudWatchLogsFullAccess allows full access to CloudWatch Logs.2.    Open the Lambda console.3.    Choose Create function.4.    Be sure that Author from scratch is selected, and then configure the following options:For Name, enter a name for your function.For Runtime, choose one of the Python options.For Role, choose Use an existing role, and then choose the IAM role that you created in step 1.5.    Choose Create function.6.    Paste your code in the Function code section. The following example uses Python 3.7. Replace the following values in the example:default: the Athena database nameSELECT * FROM default.tb: the query that you want to schedules3://AWSDOC-EXAMPLE-BUCKET/: the S3 bucket for the query outputimport boto3# Query string to executequery = 'SELECT * FROM database.tb'# Database to execute the query againstDATABASE = 'database'# Output location for query resultsoutput='s3://OUTPUTBUCKET/'def lambda_handler(event, context): # Initiate the Boto3 Client client = boto3.client('athena') # Start the query execution response = client.start_query_execution( QueryString=query, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': output } ) # Return response after starting the query execution return response7.    Choose Deploy.8.    Open the Amazon EventBridge console.9.    In the navigation pane, choose Rules, and then choose Create rule.10.    Enter a name and description for the rule.11.    For Define pattern, select Schedule.12.    Select Cron expression, and then enter a cron expression.13.    For Select event bus, select AWS default event bus.14.    In the Select Targets section, do the following:For Target, select Lambda function from the dropdown list. For Function, select the name of your Lambda function from the dropdown list.15.    Choose Create.If you're scheduling multiple queries, note that there are quotas for the number of calls to the Athena API per account. For more information, see Per account API call quotas.Related informationTutorial: Schedule AWS Lambda functions using EventBridgeCreating an Amazon EventBridge rule that runs on a scheduleFollow"
https://repost.aws/knowledge-center/schedule-query-athena
How do I troubleshoot AWS Replication Agent installation failure on my EC2 Linux instance?
I'm installing the AWS Replication Agent for AWS Application Migration Service or AWS Elastic Disaster Recovery. The installation failed on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.
"I'm installing the AWS Replication Agent for AWS Application Migration Service or AWS Elastic Disaster Recovery. The installation failed on my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.ResolutionIdentify the errorThe AWS Replication Agent installer log shows errors starting at the end of the log. Run the following command to view the last page of the installer log to determine the error. Then, review the following section that pertains to the error.less +G aws_replication_installer.logThe following resolution covers the most common AWS Replication Agent installation errors on Linux operating systems.libz.so .1: failed to map segment from shared object: Operation not permittedError example./aws-replication-installer-64bit: error while loading shared libraries: libz.so .1: failed to map segment from shared object: Operation not permittedThe installation script uses the /tmp directory. If noexec is set on /tmp, then libz.so can't map segments. When this occurs, you receive this operation not permitted error.To resolve this error, do the following:1.    Run the following command to unmount /tmp:# umount /tmp2.    Run the following command to mount the volume with exec permission:# sudo mount /tmp -o remount, execThe security token included in the request is expiredError examplebotocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the GetAgentInstallationAssetsForDrs operation: The security token included in the request is expired [installation_id: 1a9af9d3-9485-4e02-965e-611929428c61, agent_version: 3.7.0, mac_addresses: 206915885515739,206915885515740, _origin_client_type: installer]This error is often caused by an expired AWS Identify and Access Management (IAM) role. When the IAM role expires, API calls to the Application Migration Service or Elastic Disaster Recovery endpoint fail.To resolve this issue, refresh the IAM role, or install the role with an access key or secret access key. For more information, see:Application Migration Service: Generating the required AWS credentialsElastic Disaster Recovery: Generating the required AWS credentialsrmmod: ERROR: Module aws_replication_driver is not currently loadedError examplermmod: ERROR: Module aws_replication_driver is not currently loaded insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not availableThis error occurs when secure boot is turned on in the source instance. Secure boot isn't supported by Application Migration Service or Elastic Disaster Recovery.To resolve this error, turn off secure boot in the source instance.ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED]Error examplessl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997) - urllib.error.URLError: <urlopen error unknown url type: https>Note: In addition to this error, you can resolve most urllib/SSL errors using the following method.This error might occur if the client is using an older OS version with Python 3.10 or newer. Python 3.10 added the PEP 644 – Require OpenSSL 1.1.1 or newer proposal.Older OS versions don't have the newest OpenSSL library that supports Python 3.10. 
So the AWS Replication Agent installation fails to verify the SSL certificate to the Application Migration Service or Elastic Disaster Recovery endpoint.To avoid this error, use an older version of Python, such as version 2.7 or 3.8.botocore.exceptions.CredentialRetrievalErrorError example:botocore.exceptions.CredentialRetrievalError: Error when retrieving credentials from cert: Oct 17, 2022 9:38:54 AM com.amazonaws.cloudendure.credentials_provider.SharedMain createAndSaveJksThis error might occur if you modify the AWS Replication Agent role, AWSElasticDisasterRecoveryAgentRole/ AWSApplicationMigrationAgentRole.To resolve this error, make sure that the AWS Replication Agent role is as follows:Application Migration Service{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "PrincipalGroup": { "AWS": "svc:mgn.amazonaws.com" }, "Action": [ "sts:AssumeRole", "sts:SetSourceIdentity" ], "Condition": { "StringLike": { "sts:SourceIdentity": "s-*", "aws:SourceAccount": "AWSACCOUNTIDHERE" } } } ]}Elastic Disaster Recovery{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "PrincipalGroup": { "AWS": "svc:drs.amazonaws.com" }, "Action": [ "sts:AssumeRole", "sts:SetSourceIdentity" ], "Condition": { "StringLike": { "aws:SourceAccount": "AWSACCOUNTIDHERE", "sts:SourceIdentity": "s-*" } } } ]}stderr: A dependency job for aws-replication.target failed.Error example:stderr: A dependency job for aws-replication.target failed. See 'journalctl -xe' for detailsThe following are the two possible causes for this error:The /var directory has permissions of 754.There was an issue creating a Linux group for the aws-replication user.To resolve the /var issue, run chmod 755 for the /var directory.To resolve the Linux group issue, do the following:1.    Fully uninstall AWS Replication Agent.2.    Run the following commands to delete the aws-replication user and aws-replication group:# userdel aws-replication # groupdel aws-replication3.    Reinstall AWS Replication Agent.For more information and installation prerequisites, see:Application Migration Service: Supported operating systemsElastic Disaster Recovery: Supported operating systemsException in thread "main" com.amazonaws.services.drs.model.InternalServerExceptionError example:Exception in thread "main" com.amazonaws.services.drs.model.InternalServerException: An unexpected error has occurred (Service: Drs; Status Code: 500; Error Code: InternalServerException; Request ID: 4f4a76cb-aaec-44cc-a07a-c3579454ca55; Proxy: null)This error occurs if the client turns off the AWS STS endpoint. If the AWS STS endpoint is turned off, then Application Migration Service can't call STS to assume the role in the client account. The same is true for Elastic Disaster Recovery.To resolve this error, turn on the AWS STS endpoint in the client. For more information, see Activating and deactivating AWS STS in an AWS Region.insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not availableThis error occurs if the operating system has secure boot turned on. Application Migration Service and Elastic Disaster Recovery don't support Linux operating systems with secure boot turned on.To resolve this error, turn off secure boot for the Linux operating system. 
On most operating systems, turn off secure boot in the hypervisor.insmod: ERROR: could not insert module ./aws-replication-driver.ko: Cannot allocate memoryError example:insmod: ERROR: could not insert module ./aws-replication-driver.ko: Cannot allocate memoryrmmod: ERROR: Module aws_replication_driver is not currently loaded]2023-03-16 10:27:08,416 ERROR Exception during agent installation Traceback (most recent call last): File "cirrus/installer_shared/installer_main.py", line 308, in run_agent_installer_command_linux File "shared/installer_utils/command_utils.py", line 161, in runshared.installer_utils.command_utils.RunException: command: /tmp/tmp_tThis error occurs if the Linux operating system doesn't have sufficient memory for agent installation.To resolve this error, make sure that your operating system has at least 300 MB of free memory.Unexpected error while making agent driver! Are kernel linux headers installed correctly?Error example:Unexpected error while making agent driver! Are kernel linux headers installed correctly?Installation returned with code 1Installation failed due to unspecified error:During agent installation, the installation downloads a matching kernel-devel package from the package repository configured in your Linux operating system. This error occurs when the agent installation workflow can't install the matching kernel-devel package to the Linux OS's running kernel.To resolve this error, review the installation log to verify that there was an issue accessing the repository. Then, download the kernel-devel package manually from the internet. After downloading the package, run the installation again.You can download the matching kernel-devel/linux-headers package from the following sites:RHEL, CentOS, Oracle, and SUSE package directoryDebian package directory on the debian.org website.Ubuntu package directory on the Ubuntu packages website.The AWS Replication Agent also installs dependencies required for the installation, such as make gcc perl tar gawk rpm. For more information see, Linux installation requirements.Follow"
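For the expired security token error above, it can help to confirm that the credentials or IAM role on the source server are currently valid before rerunning the installer. The following is a minimal Python sketch (not part of the installer) that calls STS GetCallerIdentity from the same machine using boto3.
import boto3
from botocore.exceptions import ClientError, NoCredentialsError

try:
    identity = boto3.client("sts").get_caller_identity()
    # Success means the credentials or instance role on this server are currently valid.
    print("Account:", identity["Account"])
    print("Caller ARN:", identity["Arn"])
except (ClientError, NoCredentialsError) as error:
    # Expired or missing credentials here will also cause the agent installation to fail.
    print("Credential check failed:", error)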
https://repost.aws/knowledge-center/mgn-linux-fix-replication-agent-install
How do I troubleshoot issues when setting up Cluster Autoscaler on an Amazon EKS cluster?
I want to troubleshoot issues when launching Cluster Autoscaler on my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I want to troubleshoot issues when launching Cluster Autoscaler on my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionMake sure that you verify the following before you start:You installed or updated eksctl to the latest version.You replaced the placeholder values in code snippets with your own values.Note: The --region variable isn't always used in the commands because the default value for your AWS Region is used. Check the default value by running the AWS Command Line Interface (AWS CLI) configure command. If you need to change the AWS Region, then use the --region flag.Note: If you receive errors when running AWS CLI commands, then confirm that you're running a recent version of the AWS CLI.ResolutionCluster Autoscaler pod is in a CrashLoopBackOff statusCheck the Cluster Autoscaler pod status by running the following command:kubectl get pods -n kube-system | grep cluster-autoscalerThe following is an example of a Cluster Autoscaler pod that's in a CrashLoopBackOff status:NAME READY STATUS RESTARTS AGEcluster-autoscaler-xxxx-xxxxx 0/1 CrashLoopBackOff 3 (20s ago) 99sView the Cluster Autoscaler pod logs by running the following command:kubectl logs -f -n kube-system -l app=cluster-autoscalerIf the logs indicate that there are AWS Identity and Access Management (IAM) permissions issues, then do the following:Check that an OIDC provider is associated with the Amazon EKS cluster.Check that the Cluster Autoscaler service account is annotated with the IAM role.Check that the correct IAM policy is attached to the preceding IAM role.Check that the trust relationship is configured correctly.Note: The following is an example of a log indicating IAM permissions issues:Failed to create AWS Manager: cannot autodiscover ASGs: AccessDenied: User: xxx is not authorized to perform: autoscaling: DescribeTags because no identity-based policy allows the autoscaling:DescribeTags action status code: 403, request id: xxxxxxxxImportant: Make sure to check all given AWS CLI commands and replace all instances of example strings with your values. For example, replace example-cluster with your cluster.Check that an OIDC provider is associated with the EKS cluster1.    Check that you have an existing IAM OpenID Connect (OIDC) provider for your cluster by running the following command:oidc_id=$(aws eks describe-cluster --name example-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)2.    Check that an IAM OIDC provider with your cluster's ID is already in your account by running the following command:aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4Note: If output is returned, then you already have an IAM OIDC provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster at the next step.3.    
Create an IAM OIDC identity provider for your cluster by running the following command:eksctl utils associate-iam-oidc-provider --cluster example-cluster --approveCheck that the Cluster Autoscaler service account is annotated with the IAM roleCheck that the service account is annotated with the IAM role by running the following command:kubectl get serviceaccount cluster-autoscaler -n kube-system -o yamlThe following is the expected outcome:apiVersion: v1kind: ServiceAccountmetadata: annotations: eks.amazonaws.com/role-arn: arn:aws:iam::012345678912:role/<cluster_auto_scaler_iam_role> name: cluster-autoscaler namespace: kube-systemCheck that the correct IAM policy is attached to the preceding IAM roleFor an example, see the following:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingInstances", "autoscaling:SetDesiredCapacity", "autoscaling:DescribeAutoScalingGroups", "autoscaling:DescribeTags", "autoscaling:DescribeLaunchConfigurations", "ec2:DescribeLaunchTemplateVersions", "ec2:DescribeInstanceTypes", "autoscaling:TerminateInstanceInAutoScalingGroup" ], "Resource": "*" } ]}Check that the trust relationship is configured correctlyFor an example, see the following:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<example_awsaccountid>:oidc-provider/oidc.eks.<example_region>.amazonaws.com/id/<example_oidcid>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.eks.<example_region>.amazonaws.com/id/<example_oidcid>:aud": "sts.amazonaws.com", "oidc.eks.<example_region>.amazonaws.com/id/<example_oidcid>:sub": "system:serviceaccount:kube-system:cluster-autoscaler" } } } ]}Restart the Cluster Autoscaler pod when any change is made to the service account role or IAM policy.If the logs indicate any networking issues (for example, I/O timeout), then do the following:Note: The following is an example of a log that indicates networking issues:Failed to create AWS Manager: cannot autodiscover ASGs: WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post https://sts.region.amazonaws.com/: dial tcp: i/o timeout1.    Check that the Amazon EKS cluster is configured with the required networking setup. Verify that the worker node subnet has a route table that can route traffic to the following endpoints, either on global or Regional endpoints:Amazon Elastic Compute Cloud (Amazon EC2)AWS Auto ScalingAWS Security Token Service (AWS STS)2.    Make sure that the subnet network access control list (network ACL) or the worker node security group isn't blocking traffic communicating to these endpoints.3.    If the Amazon EKS cluster is private, then check the setup of the relevant Amazon Virtual Private Cloud (VPC) endpoints. For example, Amazon EC2, AWS Auto Scaling, and AWS STS.Note: The security group of each VPC endpoint is required to allow the Amazon EKS worker node security group. 
It's also required to allow the Amazon EKS VPC CIDR block on 443 port on the ingress traffic.Cluster Autoscaler isn't scaling in or scaling out nodesIf your Cluster Autoscaler isn't scaling in or scaling out nodes, then check the following:Check the Cluster Autoscaler pod logs.Check the Auto Scaling group tagging for the Cluster Autoscaler.Check the configuration of the deployment manifest.Check the current number of nodes.Check the pod resource request.Check the taint configuration for the node in node group.Check whether the node is annotated with scale-down-disable.Check the Cluster Autoscaler pod logsTo view the pod logs and identify the reasons why your Cluster Autoscaler isn't scaling in or scaling out nodes, run the following command:kubectl logs -f -n kube-system -l app=cluster-autoscalerCheck whether the pod that's in a Pending status contains any scheduling rules, such as the affinity rule, by running the following describe pod command:kubectl describe pod <example_podname> -n <example_namespace>Check the events section from the output. This section shows information about why a pod is in a pending status.Note: Cluster Autoscaler respects nodeSelector and requiredDuringSchedulingIgnoredDuringExecution in nodeAffinity, assuming that you labeled your node groups accordingly. If a pod can't be scheduled with nodeSelector or requiredDuringSchedulingIgnoredDuringExecution, then Cluster Autoscaler considers only node groups that satisfy those requirements for expansion. Modify the scheduling rules defined on pods or nodes accordingly so that a pod is scheduled on a node.Check the Auto Scaling group tagging for the Cluster AutoscalerThe node group’s corresponding Auto Scaling group must be tagged for the Cluster Autoscaler to discover the Auto Scaling group as follows:Tag 1:key: k8s.io/cluster-autoscaler/example-clustervalue: ownedTag 2:key: k8s.io/cluster-autoscaler/enabledvalue: trueCheck the configuration of the deployment manifestTo check the configuration of the Cluster Autoscaler deployment manifest, run the following command:kubectl -n kube-system edit deployment.apps/cluster-autoscalerCheck that the manifest is configured with the correct node-group-auto-discovery argument as follows:containers:- command ./cluster-autoscaler --v=4 --stderrthreshold=info --cloud-provider=aws --skip-nodes-with-local-storage=false --expander=least-waste --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/example-cluster --balance-similar-node-groups --skip-nodes-with-system-pods=falseCheck the current number of nodesTo check whether the current number of nodes has reached the managed node group's minimum or maximum values, run the following command:aws eks describe-nodegroup --cluster-name <example-cluster> --nodegroup-name <example-nodegroup>If the minimum or maximum values are reached, then modify the values with the new workload requirements.Check the pod resource requestTo check whether the pod resource request can't be fulfilled by the current node instance types, run the following command:kubectl -n <example_namespace> get pod <example_podname> -o yaml | grep resources -A6To get the resource request fulfilled, either modify the pod resource requests or create a new node group. 
When creating a new node group, make sure that the nodes' instance type can fulfill the resource requirement for pods.Check the taint configuration for the node in node groupCheck whether taints are configured for the node and whether the pod can tolerate the taints by running the following command:kubectl describe node <example_nodename> | grep taint -A2If the taints are configured, then remove the taints defined on the node. If the pod can't tolerate taints, then define tolerations on the pod so that the pod can be scheduled on the node with the taints.Check whether the node is annotated with scale-down-disableTo check that the node is annotated with scale-down-disable, run the following command:kubectl describe node <example_nodename> | grep scale-down-disableThe following is the expected outcome:cluster-autoscaler.kubernetes.io/scale-down-disabled: trueIf scale-down-disable is set to true, then remove the annotation for the node to be able to scale down by running the following command:kubectl annotate node <example_nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled-For more information on troubleshooting, see Cluster Autoscaler FAQ on the GitHub website.Follow"
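To double-check the Auto Scaling group tagging requirement described above, the following boto3 sketch (the cluster name is a placeholder) lists the Auto Scaling groups in the Region and reports any that are missing the two Cluster Autoscaler auto-discovery tags.
import boto3

CLUSTER_NAME = "example-cluster"  # replace with your Amazon EKS cluster name
REQUIRED_TAG_KEYS = {
    "k8s.io/cluster-autoscaler/" + CLUSTER_NAME,
    "k8s.io/cluster-autoscaler/enabled",
}

autoscaling = boto3.client("autoscaling")
paginator = autoscaling.get_paginator("describe_auto_scaling_groups")

for page in paginator.paginate():
    for group in page["AutoScalingGroups"]:
        tag_keys = {tag["Key"] for tag in group["Tags"]}
        missing = REQUIRED_TAG_KEYS - tag_keys
        if missing:
            print(group["AutoScalingGroupName"], "is missing tags:", sorted(missing))
        else:
            print(group["AutoScalingGroupName"], "has the required Cluster Autoscaler tags")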
https://repost.aws/knowledge-center/amazon-eks-troubleshoot-autoscaler
How do I configure a secondary private IPv4 address on my EC2 instance?
I want to configure a secondary private IPv4 address on my Amazon Elastic Compute Cloud (Amazon EC2) instance.
"I want to configure a secondary private IPv4 address on my Amazon Elastic Compute Cloud (Amazon EC2) instance.Short descriptionYou can assign a secondary private IPv4 address to an Amazon EC2 instance when you launch the instance. If you already launched the EC2 instance, then you can assign a secondary private IP address to the network interface.Resolution1.    Assign the secondary IPv4 address to your instance.2.    Configure the instance operating system to recognize the new IP address.For Windows Server, see Configure a secondary private IPv4 address for your Windows instance.For Amazon Linux, see Configure the operating system on your instance to recognize secondary private IPv4 addressesFor other Linux distributions, see the network configuration documentation for your distribution. For example, for Ubuntu, see Network configuration on the Ubuntu website.3.    (Optional) If you need a public IP, you can associate an Elastic IP address with your new secondary private IP address.Note: If you have more than one Elastic IP address on an EC2 instance, then charges apply. For instructions on releasing an Elastic IP address, see Release an Elastic IP address.Related informationMultiple IP addressesConnect your VPC to other networksAssign multiple IPv6 addressesFollow"
https://repost.aws/knowledge-center/secondary-private-ip-address
How do I share WorkSpaces images or BYOL images with other AWS accounts?
I want to share an Amazon WorkSpaces image or a WorkSpaces bring your own license (BYOL) image to another Amazon Web Services (AWS) account in the same AWS Region. How can I do that?
"I want to share an Amazon WorkSpaces image or a WorkSpaces bring your own license (BYOL) image to another Amazon Web Services (AWS) account in the same AWS Region. How can I do that?ResolutionYou can share custom WorkSpaces images across AWS accounts within the same Region. After a WorkSpaces image is shared, the recipient account can copy the image to other Regions as needed. You can self-manage WorkSpaces image transfers using the WorkSpaces console or the AWS Command Line Interface (AWS CLI).BYOL images can be shared only with other accounts with the same AWS payer account ID. To copy a BYOL image to another Region, the destination Region must be set up for BYOL images.Share an image using the WorkSpaces consoleYou can use the WorkSpaces console to share or unshare an image with other accounts in the same Region. For instructions, see Share or unshare a custom WorkSpaces image.Share an image using the AWS CLIYou can share or unshare images programmatically using API calls and the AWS CLI.Important: The commands in the following process require version 2 of the AWS CLI. For installation instructions, see Installing or updating the latest version of the AWS CLI.To copy a WorkSpaces image to a different account within the same Region, follow these steps:1.    From the source account, identify the image ID for the source image. Run the following command, replacing region-code with the WorkSpaces Region code:aws workspaces describe-workspace-images --region region-codeThen, note the ImageId from the output.2.    From the source account, call the UpdateWorkspaceImagePermission API to share the source image with the target account. Run the following command, replacing ImageId with the output from step 1, region-code with the WorkSpaces Region code, and target-account with the target account number:aws workspaces update-workspace-image-permission --image-id ImageId --region region-code --shared-account-id target-account --allow-copy-image3.    (Optional) From the source account, call the DescribeWorkspaceImagePermissions API to see the permissions and verify that the image is shared with the target account. Run the following command, replacing ImageId and region-code with your values:aws workspaces describe-workspace-image-permissions --image-id ImageId --region region-code4.    (Optional) From the target account, call the DescribeWorkspaceImages API to see the shared image. Run the following command, replacing ImageId and region-code with your values:aws workspaces describe-workspace-images --image-ids ImageId --region region-code --image-type SHARED5.    From the target account, call the CopyWorkspaceImage API to copy the shared image. Run the following command, replacing ImageId and region-code with your values. Also, replace new-image-name with the name that you want to use for the image on the target account:aws workspaces copy-workspace-image --source-image-id ImageId --source-region region-code --name new-image-name --region region-codeThe target account can now see the new image in the WorkSpaces console. The image state moves from Pending to Available after the workflow is complete, which typically takes about 15 minutes.Related informationHow do I create a WorkSpaces image?Copy a custom WorkSpaces imageFollow"
https://repost.aws/knowledge-center/workspaces-copy-image