How do I troubleshoot HTTP 403 errors from API Gateway?
"When I call my Amazon API Gateway API, I get a 403 error. How do I troubleshoot 403 errors from API Gateway?"
"When I call my Amazon API Gateway API, I get a 403 error. How do I troubleshoot 403 errors from API Gateway?Short descriptionAn HTTP 403 response code means that a client is forbidden from accessing a valid URL. The server understands the request, but it can't fulfill the request because of client-side issues.API Gateway APIs can return 403 responses for any of the following reasons:IssueResponse headerError messageRoot causeAccess denied"x-amzn-errortype" = "AccessDeniedException""User is not authorized to access this resource with an explicit deny"The caller isn't authorized to access an API that's using an API Gateway Lambda authorizer.Access denied"x-amzn-errortype" = "AccessDeniedException""User: <user-arn> is not authorized to perform: execute-api:Invoke on resource: <api-resource-arn> with an explicit deny"The caller isn't authorized to access an API that's using AWS Identity and Access Management (IAM) authorization. Or, the API has an attached resource policy that explicitly denies access to the caller. For more information, see IAM authentication and resource policy.Access denied"x-amzn-errortype" = "AccessDeniedException""User: anonymous is not authorized to perform: execute-api:Invoke on resource:<api-resource-arn>"The caller isn't authorized to access an API that's using IAM authorization. Or, the API has an attached resource policy that doesn't explicitly allow the caller to invoke the API. For more information, see IAM authentication and resource policy.Access denied"x-amzn-errortype" = "AccessDeniedException""The security token included in the request is invalid."The caller used IAM keys that aren't valid to access an API that's using IAM authorization.Missing authentication token"x-amzn-errortype" = "MissingAuthenticationTokenException""Missing Authentication Token"An authentication token wasn't found in the request.Authentication token expired"x-amzn-errortype" = "InvalidSignatureException""Signature expired"The authentication token in the request has expired.API key isn't valid"x-amzn-errortype" = "ForbiddenException""Invalid API Key identifier specified"The caller used an API key that's not valid for a method that requires an API key.Signature isn't valid"x-amzn-errortype" = "InvalidSignatureException""The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method."The signature in the request doesn't match that on the server when accessing an API that's using IAM authorization.AWS WAF filtered"x-amzn-errortype" = "ForbiddenException""Forbidden"The request is blocked by web application firewall filtering when AWS WAF is activated in the API.Resource path doesn't exist"x-amzn-errortype" = "MissingAuthenticationTokenException""Missing Authentication Token"A request with no "Authorization" header is sent to an API resource path that doesn't exist. For more information, see How do I troubleshoot 403 "Missing Authentication Token" errors from an API Gateway REST API endpoint?Resource path doesn't exist"x-amzn-errortype" = "IncompleteSignatureException""Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. 
Authorization=allow"A request with an "Authorization" header is sent to an API resource path that doesn't exist.Invoking a private API using public DNS names incorrectly"x-amzn-errortype" = "ForbiddenException""Forbidden"Invoking a private API from within an Amazon Virtual Private Cloud (Amazon VPC) using public DNS names incorrectly. For example: the "Host" or "x-apigw-api-id" header is missing in the request. For more information, see Invoking your private API using endpoint-specific public DNS hostnames.Invoking a REST API that has a custom domain name using the default execute-api endpoint"x-amzn-errortype" = "ForbiddenException""Forbidden"The caller uses the default execute-api endpoint to invoke a REST API after deactivating the default endpoint. For more information, see Disabling the default endpoint for a REST APIInvoking an API Gateway custom domain name that requires mutual Transport Layer Security (TLS) using a client certificate that's not valid."x-amzn-errortype" = "ForbiddenException""Forbidden"The client certificate presented in the API request isn't issued by the custom domain name's truststore, or it isn't valid. For more information, see How do I troubleshoot HTTP 403 Forbidden errors from an API Gateway custom domain name that requires mutual TLS?Invoking a custom domain name without a base path mapping"x-amzn-errortype" = "ForbiddenException""Forbidden"The caller invokes a custom domain without a base path being mapped to an API.   For more information, see Setting up custom domain names for REST APIs.Invoking an API with custom domain enabled when the domain URL includes the stage"x-amzn-errortype" = "MissingAuthenticationTokenException""Missing Authentication Token"An API mapping specifies an API, a stage, and optionally a path to use for the mapping. Therefore, when an API's stage is mapped to a custom domain, you no longer need to include the stage in the URL. For more information, see Working with API mappings for REST APIs.Stage in request URL is not valid"x-amzn-errortype" = "ForbiddenException""Forbidden"The caller's request URL includes a stage that doesn't exist. Verify that the stage exists and the spelling of the request URL. For more information, see Invoking a REST API in Amazon API Gateway.ResolutionConsider the source of the errorIf the 403 error was reported from other resources, there might be another cause for the error. For example:If the error was reported in a web browser, then that error might be caused by an incorrect proxy setting. The proxy server returns a 403 error if HTTP access isn't allowed.If there's another AWS service in front of the API, then that service can reject the request with a 403 error in the response. For example: Amazon CloudFront.Identify what's causing the errorIf you haven't done so already, set up Amazon CloudWatch access logging for your API. Then, view your API's execution logs in CloudWatch to determine if requests are reaching the API.Note: HTTP APIs don't support execution logging. To troubleshoot 403 errors returned by a custom domain name that requires mutual TLS and invokes an HTTP API, you must do the following:1.    Create a new API mapping for your custom domain name that invokes a REST API for testing only.2.    Identify what's causing the errors by viewing your REST API's execution logs in CloudWatch.3.    
After the error is identified and resolved, reroute the API mapping for your custom domain name back to your HTTP API.Confirm that the requested resource exists in the API definitionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.Verify the following using either the API Gateway console or the AWS CLI:The API is deployed with the latest API definition.The requested resource exists in the API definition.Use curl to get request and response detailsIf the error can be reproduced, use the curl -v command to get more details between the client and the API similar to the following:curl -X HTTP_VERB -v https://{api_id}.execute-api.{region}.amazonaws.com/{stage_name}/{resource_name}Note: For more information, see the curl project website.Verify that the request header is correctIf the error is the result of an API key that's not valid, then verify that the "x-api-key" header was sent in the request.Verify that the DNS setting on any interface Amazon VPC endpoints is set correctlyNote: Confirm the following for APIs invoked from an Amazon VPC that has an interface VPC endpoint only.Verify that the DNS setting of the interface endpoint is set correctly based on the type of API that you're using.Keep in mind the following:To invoke a Regional API from inside an Amazon VPC, private DNS names must be deactivated on the interface endpoint. Then, the endpoint's hostname can be resolved by a public DNS. For more information, see Creating a private API in Amazon API Gateway.To invoke a private API from inside an Amazon VPC using the API's private DNS name, private DNS names must be activated on the interface endpoint. Then, the interface endpoint's hostname can be resolved to the Amazon VPC's local subnet resources. For more information, see How to invoke a private API.Note: You don't need to set up a private DNS if you're invoking the private API using either of the following:The private API's public DNS name.-or-An Amazon Route 53 alias.Review the API's resource policyReview your API's resource policy to verify the following:(For APIs invoked from an Amazon VPC with an interface VPC endpoint) The API's resource policy grants the Amazon VPC or the interface endpoint access to the API.The resource policy's resource specifications and formatting are correct.Note: There's no validation of the resource specification when saving a resource policy. For examples, see API Gateway resource policy examples.The caller is allowed to invoke the API endpoint by the authentication type that you've defined for the API. For more information, see How API Gateway resource policies affect authorization workflow.Review HTTP request and response messagesReproduce the error in a web browser, if possible. Then, use the browser's network tools to capture the HTTP request and response messages and analyze them to determine where the error occurred.Note: For offline analysis, save the messages in an HTTP Archive (HAR) file.Related informationCommon errors - Amazon API GatewayHow do I allow only specific IP addresses to access my API Gateway REST API?How do I troubleshoot issues when connecting to an API Gateway private API endpoint?How do I turn on Amazon CloudWatch Logs for troubleshooting my API Gateway REST API or WebSocket API?Follow"
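As a quick way to map a failing request to the table of 403 causes above, a small Python sketch like the following (assuming a hypothetical invoke URL and auth header) prints the status code and the x-amzn-errortype response header:

# Minimal sketch: send a test request and print the status code and the
# "x-amzn-errortype" header so the response can be matched against the table
# of 403 causes above. The URL and any auth headers are placeholders.
import urllib.request
import urllib.error

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/resource"  # hypothetical invoke URL

req = urllib.request.Request(url, method="GET")
# Add the auth headers your API requires, for example an API key:
# req.add_header("x-api-key", "YOUR_API_KEY")

try:
    with urllib.request.urlopen(req) as resp:
        print("Status:", resp.status)
except urllib.error.HTTPError as err:
    # 4xx/5xx responses raise HTTPError; the headers still carry the error type.
    print("Status:", err.code)
    print("x-amzn-errortype:", err.headers.get("x-amzn-errortype"))
    print("Body:", err.read().decode())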
https://repost.aws/knowledge-center/api-gateway-troubleshoot-403-forbidden
How can I grant regular Amazon Redshift database users permission to view data from other users in the system tables of my cluster?
I am not able to view data generated by other users in system tables in my Amazon Redshift cluster. How can I view the tables?
"I am not able to view data generated by other users in system tables in my Amazon Redshift cluster. How can I view the tables?Short descriptionBy default, Amazon Redshift regular users don't have permission to view data from other users. Only Amazon Redshift database superusers have permission to view all databases.ResolutionYou can add the SYSLOG ACCESS parameter with UNRESTRICTED access for the regular user to view data generated by other users in system tables.Note: Regular users with SYSLOG ACCESS can't view superuser tables. Only superusers can view other superuser tables.1.    Connect to the Amazon Redshift database as a superuser.2.    Run the SQL command ALTER USER similar to the following:test=# ALTER USER testuser WITH SYSLOG ACCESS UNRESTRICTED;ALTER USERNote: The required privileges for ALTER USER are superuser, users with the ALTER USER privilege, and users that want to change their own passwords.The regular user now has SYSLOG ACCESS with UNRESTRICTED access.3.    Disconnect from the Amazon Redshift database as the superuser.4.    Connect to the Amazon Redshift database as the regular user that has SYSLOG ACCESS with UNRESTRICTED access.5.    Test the regular users access by running a SQL command similar to the following:test=> select * from stv_inflight;-[ RECORD 1 ]--------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------userid | 100slice | 12811query | 3036label | default xid | 35079530pid | 1073746910starttime | 2022-09-15 07:09:15.894317text | select * from my_schema.dw_lu_tiers_test a, my_schema.dw_lu_tiers_test; suspended | 0insert_pristine | 0concurrency_scaling_status | 0-[ RECORD 2 ]--------------+---------------------------------------------------------------------------------------------------------------------------------------------------userid | 181slice | 12811query | 3038label | defaultxid | 35079531pid | 1073877909starttime | 2022-09-15 07:09:17.694285text | select * from stv_inflight; suspended | 0insert_pristine | 0concurrency_scaling_status | 0In the example output, note that the regular user now has access to view another users table.Follow"
https://repost.aws/knowledge-center/amazon-redshift-system-tables
How can I troubleshoot the error "nfs: server 127.0.0.1 not responding" when mounting my EFS file system?
My Amazon Elastic File System (Amazon EFS) server isn't responding and hangs with the error message "nfs: server 127.0.0.1 not responding". How can I troubleshoot this?
"My Amazon Elastic File System (Amazon EFS) server isn't responding and hangs with the error message "nfs: server 127.0.0.1 not responding". How can I troubleshoot this?Short descriptionThe following are common reasons why you might see the server not responding error:The NFS client can't connect to the EFS server.A reboot or shutdown of the instance occurred. Or. any other disconnection from the EC2 instance occurred. These occurrences cause a network disconnection between the NFS client and the EFS server. This behavior isn't conformant with the TCP RFC. Disconnections might cause responses from Amazon EFS to an Amazon Elastic Compute Cloud (Amazon EC2) instance or an NFS client to be blocked for multiple minutes.The noresvport mount option wasn't used when mounting the file system using an NFS client.There might be an issue with the kernel version causing EFS mount failure. For example, there are a number of known NFS client issues with RHEL6 that cause symptoms similar to unresponsive file systems. In earlier kernel versions of RHEL6.X the file system might become unavailable and fail to remount. NFS connection hangs might occur in Amazon EFS if you're running:RHEL or CentOS 7.6 or later (kernel version of 3.10.0-957).Any other Linux distribution with kernel version 4.16 through 4.19.Resolution1.    Use the noresvport mount option when mounting your file system. This option makes sure that the NFS client uses the new TCP source port when a network connection must be reestablished. Using noresvport makes sure that the EFS file system has uninterrupted availability after a network recovery event.$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport mount-target-ip:/ mntIf you're using the EFS mount helper, then the noresvport option is present by default. If you're using NFS to mount, then you must add this parameter explicitly. For more information, see Recommended NFS mount options.2.    Check the kernel version. There might be issues with the particular kernel version, such as RHEL or CentOS 7.6 or later (kernel version of 3.10.0-957), that might cause the file system mount failure. If you're running one of these kernel versions, reboot to recover access to the file system. To confirm that the kernel version is the issue, verify the output from the ps command when you're unable to run ls:$ ps auxwwwm | grep <mount_point_IP>If the kernel version is faulty, then upgrade the kernel. It's a best practice to use the current generation Linux NFS4v.1 client or later for better performance.3.    Verify that the client can connect to the server by running the following command:telnet <ip-of-efs> 2049Review the NFS client logs (EC2 instance OS logs) under /var/log/messages for errors. The logs might be under the /var/log/syslog or /var/log/dmesg directory, depending on your OS.Also, if you mounted the file system using the EFS mount helper, review the EFS util logs under the /var/log/amazon/efs directory. The EFS mount helper has a built-in logging mechanism.4.    Verify that you can connect to your EC2 instance.5.    Verify if EC2 is being overloaded due to resource over-utilization. You can do this by monitoring EC2 metrics in Amazon CloudWatch, such as CPUUtilization and network-related metrics. Resources might include CPU, memory, application-level issues, and so on.Memory over-utilization: This might occur when the RAM is overutilized. 
Over-utilization means that the instance is running out of memory space, for example when an application starts consuming more RAM. Over-utilization causes Out Of Memory (OOM) errors. When initiated, these errors terminate processes that have a high OOM score or are consuming more memory. When OOM errors occur, the instance can become inaccessible.
To temporarily resolve OOM errors, reboot the system to free up memory space. For a longer-term solution, monitor system resource usage using tools such as "atop" and "top". Then, move to a different instance type that better suits your workload. For more information, see Why is my EC2 Linux instance becoming unresponsive due to over-utilization of resources?
Network performance: Review the network performance of the instance. Sometimes, even if CloudWatch metrics show low network utilization, there might be micro-bursting. Micro-bursting sends a high amount of traffic from a workload within a few seconds and typically lasts for less than a minute. This burst is obscured in CloudWatch graphs and Amazon Elastic Block Store (Amazon EBS) stats because the smallest interval used within these tools is one minute. Monitor micro-bursting behavior using tools such as sar, nload, or iftop. For more information, see Why is my Amazon Elastic Compute Cloud (Amazon EC2) instance exceeding its network limits when average utilization is low?
6.    Review the EFS CloudWatch metrics and verify whether throttling occurs at the EFS level, which means that EFS is performing beyond its capacity. If you're using Bursting Throughput mode, then review the BurstBalance CloudWatch metric to determine if the burst balance is depleted. Also, review the permitted throughput CloudWatch metrics to determine if you're using higher throughput than the provisioned amount. For more information on burst credits, see How do Amazon EFS burst credits work?
If your applications need nearly continuous throughput, use Provisioned Throughput mode. Before switching from Bursting Throughput to Provisioned Throughput mode, consider how much throughput to provision. To determine the minimum amount of provisioned throughput needed, check the average throughput usage for your file system over the previous two weeks. Note the highest peak amount, rounded up to the next megabyte. For more information, see What throughput modes are available in EFS and what is the right throughput mode for my workload?
Related information
Troubleshooting mount issues
Troubleshooting Amazon EFS"
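To check the burst-credit situation described in step 6 programmatically, a minimal boto3 sketch like the following reads the BurstBalance metric for a file system; the file system ID and Region are placeholders:

# Minimal sketch: pull the EFS BurstBalance metric from CloudWatch to check
# whether burst credits are depleted. The file system ID is a placeholder.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],  # assumed ID
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)

# Print the minimum burst balance for each 5-minute period, oldest first.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])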
https://repost.aws/knowledge-center/efs-fix-nfs-server-not-responding
How do I configure Lambda functions as targets for Application Load Balancers and troubleshoot related issues?
I need to configure AWS Lambda functions as targets for Application Load Balancers and know how to troubleshoot issues I might encounter.
"I need to configure AWS Lambda functions as targets for Application Load Balancers and know how to troubleshoot issues I might encounter.ResolutionElastic Load Balancing supports using Lambda functions as targets to process requests from Application Load Balancers. For more information, see Using AWS Lambda with an Application Load Balancer.Step 1: Create a Lambda function1.    Open the Functions page of the Lambda console.2.    Choose Create function.3.    Choose Author from scratch.4.    Enter a Function name.5.    In the Runtime dropdown, choose Python 3.9 as the runtime for this scenario.6.    For Execution role, choose Create a new role with basic Lambda permissions.Note: For more information about execution roles, see Lambda execution role.7.    Choose Create function.8.     After the function is created, choose the Code tab. In the Code source section, replace the existing function code with the following code:import jsondef lambda_handler(event, context): return { "statusCode": 200, "statusDescription": "200 OK", "headers": { "Content-Type": "text/html" }, "isBase64Encoded": False, "body": "<h1>Hello from Lambda!</h1>" }9.    Choose Deploy.Step 2: Create a target group for the Lambda functionNote: For more information, see Step 1: Configure a target group.1.    Open the Amazon EC2 console.2.    In the navigation pane, under Load Balancing, choose Target Groups.3.    Choose Create target group.4.    Under Basic configuration, for Choose a target type, choose Lambda function.5.    For Target group name, type a name for the target group.6.    (Optional) To turn on health checks, in the Health checks section, choose Enable.7.    (Optional) Add one or more tags as follows:Expand the Tags section.Choose Add tag.Enter the tag key and the tag value.8.    Choose Next.9.    Choose a Lambda function as the target.-or-Choose Add a function later to specify a Lambda function later.10.    Choose Create target group.Note: Load balancer permissions to invoke a Lambda function are granted differently depending on the method used to create a target group and register a function. For more information, see Permissions to invoke the Lambda function.Step 3: Configure a load balancer and a listenerTo configure a load balancer and a listener, follow the steps in Step 3: Configure a load balancer and a listener.Step 4: Test the load balancerTo test the load balancer, follow the steps in Step 4: Test the load balancer. If the setup is working, the browser displays the message "Hello from Lambda!"Note: If you haven't turned on health checks for your Lambda function, the health status is unavailable. You can test the load balancer without a health check as it doesn't affect the Lambda functions as targets for Application Load Balancers.Limits of Lambda functions as targetsFor more information about the limits of Lambda functions as targets, see Lambda functions as targets and review the information under Limits.Lambda target groups are limited to a single Lambda function target. For more information, see Prepare the Lambda function.Common errors of Lambda functions as targets"The connection has timed out"This error indicates that the security groups for your load balancer don't allow traffic on the listener port. To resolve this error, manage your security groups and make sure your security group's Inbound rules allow incoming traffic on listener ports. Outbound rules aren't required for security groups because security groups are stateful. 
Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules."The target group could not be found"This error indicates that the target group was deleted. To resolve this error, delete the resource policy with the deleted target group. Deleting the resource policy removes the trigger.1.    Open the Functions page of the Lambda console.2.    Choose the Lambda function related to the target group.3.    Choose the Configuration tab and then choose Permissions.4.    Scroll down to the Resource-based policy statements section and then select the policy that you want to remove.5.    Choose Delete and then choose Delete in the warning alert to confirm that you want to permanently delete the policy statement from the resource policy.You can also use the following remove-permission AWS Command Line Interface (AWS CLI) command to remove the resource-based policy:Note: In the following command, replace EXAMPLE_FUNCTION with your Lambda function name and EXAMPLE_ID with your statement ID.aws lambda remove-permission --function-name EXAMPLE_FUNCTION --statement-id EXAMPLE_IDNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI."An error occurred (AccessDenied) when calling the RegisterTargets operation: elasticloadbalancing principal does not have permission to invoke arn <Lambda ARN> from target group <Target Group ARN>"When a request to a Lambda function fails, the load balancer stores a reason code in the access log's error_reason field. The load balancer also increments the corresponding Amazon CloudWatch metric. For more information, see Error reason codes.Register a Lambda function as a target using the AWS CLI. Use the add-permission AWS CLI command to grant Elastic Load Balancing permission to invoke your Lambda function.Known errors of Lambda functions as targets"New metrics related to this feature (LambdaUserError, LambdaInternalError, LambdaTargetProcessedBytes and StandardProcessedBytes) are not available in ELB console monitoring panel."Access the new Lambda metrics from the Amazon CloudWatch console."The new ModifyTargetGroup API allows to configure 120 Seconds health check timeout value, but ELB console does not allow value higher than 60 seconds."To configure a health check timeout greater than 60 seconds, call the ModifyTargetGroup API through the AWS CLI. You can configure the value to a maximum of 120 seconds.Example modify-target-group command:Note: In the following command, replace EXAMPLE_TARGET_GROUP_ARN with your target group ARN and EXAMPLE_REGION with your AWS Region code.aws elbv2 modify-target-group \--target-group-arn EXAMPLE_TARGET_GROUP_ARN \ --health-check-timeout-seconds 120 \--region EXAMPLE_REGIONRelated informationLambda functions as targetsLambda function versioning and aliasesTraffic shifting using aliasesFollow"
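For the AccessDenied case above, a minimal boto3 sketch that grants Elastic Load Balancing permission to invoke the function and then registers it with the target group might look like the following; the function name, statement ID, and ARNs are placeholders:

# Minimal sketch: grant Elastic Load Balancing permission to invoke the Lambda
# function, then register the function with the target group. All names and
# ARNs below are placeholders.
import boto3

lambda_client = boto3.client("lambda")
elbv2 = boto3.client("elbv2")

target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/0123456789abcdef"
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:example-function"

lambda_client.add_permission(
    FunctionName="example-function",
    StatementId="elb-invoke-example",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=target_group_arn,   # restrict the permission to this target group
)

elbv2.register_targets(
    TargetGroupArn=target_group_arn,
    Targets=[{"Id": function_arn}],  # Lambda target groups take the function ARN as the target ID
)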
https://repost.aws/knowledge-center/lambda-troubleshoot-targets-for-albs
How do I troubleshoot a 502: "The request could not be satisfied" error from CloudFront?
"I configured an Amazon CloudFront distribution with a custom domain. When requesting the alternative Canonical Name record (CNAME) domain over CloudFront, I get a 502 Error response with the message "The request could not be satisfied.""
"I configured an Amazon CloudFront distribution with a custom domain. When requesting the alternative Canonical Name record (CNAME) domain over CloudFront, I get a 502 Error response with the message "The request could not be satisfied."Short descriptionA 502 error occurs when CloudFront is unable to connect to the origin. See the following sections for the causes of the error, and how to troubleshoot.ResolutionCloudFront can't establish a TCP connection with the origin serverBy default, CloudFront connects to the origin over port 80 (for HTTP) and port 443 (for HTTPS). If the origin doesn't allow traffic over these ports, or blocks the CloudFront IP address's connection, then the TCP connection fails. The failure produces a 502 error. To resolve, confirm that the CloudFront distribution's Protocol setting is set to the correct port for HTTP or HTTPS connections.To test port connectivity, run the following command:telnet ORIGIN_DOMAIN/ORIGIN_IP PORTNote: For ORIGIN_DOMAIN, enter the ID of your origin domain. For ORIGIN_IP, enter the IP address of your origin. For PORT, enter the port number you're using to connect to the origin.SSL/TLS negotiation with the origin server failedIf the SSL/TLS transaction fails, then the connection between CloudFront and the origin fails and produces a 502 error. See the following sections for causes of an SSL/TLS transaction failure, and how to resolve them.SSL certificate doesn't match the domain nameThe SSL certificate at the origin must include or cover one of the following domain names:The origin domain name in the certificate's Common Name field or Subject Alternative Names field.The host header's domain name for incoming viewer host headers that are forwarded to the origin in the CloudFront distribution.To check for the Common Name and Subject Alternative Names in the certificate, run the following command:$ openssl s_client -connect DOMAIN:443 -servername SERVER_DOMAIN | openssl x509 -text | grep -E '(CN|Alternative)' -A 2Note: For DOMAIN, enter the origin domain name. For SERVER_DOMAIN enter the origin domain name. Or, if the viewer host header is forwarded to the origin), for SERVER_DOMAIN, enter the incoming host header value.If the following are true, then configure the cache policy or origin request policy to include the host header:The SSL certificate's common name or subject alternate name (SAN) includes the viewer host header value.The host header is not forwarded to the origin.Origin Certificate is expired, not trusted, or self-signedThe certificate installed the custom origin must be signed by a trusted Certificate Authority. The certificate authorities trusted by CloudFront are found on the Mozilla included CA certificate list on the Mozilla website.CloudFront doesn't support self-signed certificates for SSL set up with the origin. Self-signed certificates are issued by the organizations themselves or generated locally on a web-server instead of being issued by a trusted Certificate Authority.To check if your origin certificate is expired, run the following OpenSSL command. In the output, find the Not Before and Not After parameters. Confirm that the current date and time is within the certificate's validity period.$ openssl s_client -connect DOMAIN:443 -servername SERVER_DOMAIN | openssl x509 -text | grep Validity -A 3Note: For DOMAIN, enter the origin domain name. For SERVER_DOMAIN enter the origin domain name. 
Or, if the viewer host header is forwarded to the origin, for SERVER_DOMAIN, enter the incoming host header value.
Missing intermediate CA certificates or an incorrect order of intermediate certificates causes HTTPS communication with the origin to fail. To check the certificate chain, run the following command:
$ openssl s_client -showcerts -connect DOMAIN:443 -servername SERVER_DOMAIN
Note: For DOMAIN, enter the origin domain name. For SERVER_DOMAIN, enter the origin domain name. Or, if the viewer host header is forwarded to the origin, for SERVER_DOMAIN, enter the incoming host header value.
The origin's cipher suite isn't supported by CloudFront
SSL/TLS transactions between CloudFront and the origin fail if they can't negotiate a common cipher suite. To confirm that you're using a supported cipher suite, see Supported protocols and ciphers between CloudFront and the origin.
You can also use an SSL server test tool to check whether your origin supports any of the ciphers on that list.
CloudFront can't resolve the origin IP address
If CloudFront can't resolve the origin domain, then it returns a 502 error. To troubleshoot this, use a dig or nslookup command to check whether the origin domain resolves to an IP address.
Linux:
$ dig ORIGIN_DOMAIN_NAME
Windows:
nslookup ORIGIN_DOMAIN_NAME
Note: For ORIGIN_DOMAIN_NAME, enter the origin domain name.
If successful, the command returns the IP address of the origin domain name. Use a DNS checker tool to check for DNS resolution across different geographic locations.
The error is caused by an upstream origin
The custom origin defined in the CloudFront distribution can be a proxy, a Content Delivery Network (CDN) hostname, or a load balancer connected to the actual origin. If any of these intermediary services fails to connect to the origin, then a 502 error is returned to CloudFront. To resolve this, work with your service provider.
The Lambda@Edge function associated with the CloudFront distribution failed validation
If the Lambda@Edge function returns a response to CloudFront that's not valid, then CloudFront returns a 502 error. To resolve this, check your Lambda@Edge function for the following common issues:
Returned JSON object
Missing required fields
Invalid object in the response
Adding or updating disallowed or read-only headers
Exceeding the maximum body size
Characters or values that aren't valid
For more information, see Testing and debugging Lambda@Edge functions."
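As an alternative to the openssl commands above, a small Python (stdlib ssl) sketch can print the origin certificate's validity dates and subject alternative names; the origin domain and server name are placeholders:

# Minimal sketch: connect to the origin over TLS and print the certificate's
# validity period and subject alternative names. Domain values are placeholders.
import socket
import ssl

origin = "origin.example.com"          # assumed origin domain
server_name = "origin.example.com"     # or the forwarded viewer host header

context = ssl.create_default_context()
# A certificate verification error here already points to an expired,
# self-signed, or mismatched certificate.
with socket.create_connection((origin, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=server_name) as tls:
        cert = tls.getpeercert()

print("Not before:", cert["notBefore"])
print("Not after: ", cert["notAfter"])
print("Subject alternative names:", cert.get("subjectAltName", ()))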
https://repost.aws/knowledge-center/cloudfront-502-errors
How do I troubleshoot issues related to scheduled tasks in Amazon ECS?
"I have scheduled my Amazon Elastic Container Service (Amazon ECS) task to run periodically. However, my Amazon ECS task isn't triggered. I'm not getting execution logs or the history of the tasks in the cluster."
"I have scheduled my Amazon Elastic Container Service (Amazon ECS) task to run periodically. However, my Amazon ECS task isn't triggered. I'm not getting execution logs or the history of the tasks in the cluster.ResolutionWhen you use a scheduled Amazon ECS task, Amazon CloudWatch Events calls the RunTask API to Amazon ECS to run the tasks on your behalf.Your scheduled Amazon ECS task might not be invoked due to the following reasons:The Amazon EventBridge time or cron expression is configured incorrectly.The EventBridge rule doesn't invoke the target.The RunTask API failed to run.The container exit due to application issues or resource constraints.Check whether the EventBridge cron expression is configured incorrectlyTo get the EventBridge cron expression, run the following AWS Command Line Interface (AWS CLI) command:$ aws events describe-rule --name "example-rule" --region example-regionIn the output of the command, you can view the configured EventBridge cron expression in the parameter ScheduleExpression. Be sure that you set the schedule for the rule in UTC+0 time zone.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Check whether the rule doesn't invoke the targetUse the Amazon CloudWatch metrics generated by EventBridge to view the rule performance. Invocation datapoints indicate that the target was invoked by the rule. If FailedInvocations data points are present, then there is an issue invoking the target. FailedInvocations represent a permanent failure and might be the result of incorrect permissions or a misconfiguration of the target.To review the CloudWatch metrics for the EventBridge rule, do the following:Open the CloudWatch console.In the navigation pane, choose Metrics, and then choose All metrics.Choose Events.Choose By Rule Name.Select the TriggerRules, Invocations, and FailedInvocations (if available) metrics for the EventBridge rule that's configured to run the ECS task.Choose the Graphed metrics tab.For all metrics listed, select SUM for Statistic.If FailedInvocations data points are present, there might be an issue related to inadequate target permissions. Be sure that EventBridge has access to invoke your ECS task. Verify that the EventBridge AWS Identity and Access Management (IAM) role has the required permissions. For more information, see Amazon ECS CloudWatch Events IAM Role.Check whether the RunTask action failed to runTo verify if the RunTask API failed to run, search in AWS CloudTrail event history for RunTask within the time range of when the scheduled ECS task was expected to be invoked.To find if the scheduled task wasn't invoked because the RunTask action failed, do the following:Open the AWS CloudTrail console.In the navigation pane, choose Event history.In the Event history page, for Lookup attributes, select Event name.For Enter an event name, enter RunTask.Choose the time range in the time range filter based on when the scheduled ECS task was expected to run.Note: The preset values for time range are 30 minutes, 1 hour, 3 hours, and 12 hours. 
To specify a custom time range, choose Custom.From the results list, choose the event that you want to view.Scroll to Event record on the Details page to view the JSON event record.Look for errorMessage or responseElements.failures.reason elements in the JSON event record.These elements in the JSON event record display the reason for the scheduled ECS task not being invoked.For examples of RunTask API failure reasons and their causes, see API failure reasons.Check whether the container exited after the task ranThe Amazon ECS tasks might be stopped even after the task runs successfully due to application issues or resource constraints. For more information, see How do I troubleshoot issues with containers exiting in my Amazon ECS tasks?Related informationHow do I troubleshoot Amazon ECS tasks on Fargate that stop unexpectedly?Follow"
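To run the EventBridge rule and FailedInvocations checks described above from a script, a minimal boto3 sketch might look like this; the rule name is a placeholder:

# Minimal sketch: read the rule's schedule expression and state, then sum the
# FailedInvocations metric for the last 24 hours. The rule name is a placeholder.
from datetime import datetime, timedelta
import boto3

events = boto3.client("events")
cloudwatch = boto3.client("cloudwatch")

rule_name = "example-rule"  # assumed rule name

rule = events.describe_rule(Name=rule_name)
print("ScheduleExpression:", rule.get("ScheduleExpression"))
print("State:", rule.get("State"))

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="FailedInvocations",
    Dimensions=[{"Name": "RuleName", "Value": rule_name}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)
print("FailedInvocations (last 24h):", sum(p["Sum"] for p in resp["Datapoints"]))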
https://repost.aws/knowledge-center/ecs-scheduled-task-issues
Why can't I view conversation logs for Amazon Lex in CloudWatch?
I am unable to view the conversation logs for Amazon Lex in Amazon CloudWatch.
"I am unable to view the conversation logs for Amazon Lex in Amazon CloudWatch.Short descriptionThere are several reasons why you might not see your Amazon Lex conversation logs in CloudWatch. For example, you might not have the right permissions configured to allow Amazon Lex use CloudWatch logs. Or, you might have enabled COPPA on your bot which doesn't allow you to use the conversation logs feature.Use the troubleshooting steps in this article to find the root cause of this issue.ResolutionAdd an IAM role and policy to Amazon LexCheck if you have granted the correct permissions to allow your Amazon Lex bot to log to CloudWatch. To log conversation logs, Amazon Lex needs to use CloudWatch logs and access Amazon Simple Storage Service (Amazon S3) buckets to store conversation logs. Follow these steps to add the required AWS Identity and Access Management (IAM) roles and policies using the Amazon Lex console.1.    Open the Amazon Lex console, and choose the bot that you want to edit.2.    Choose Settings, and then choose Conversation logs.3.    Choose the settings icon, and then choose IAM role.4.    Add an IAM role with trust relationship similar to this:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lex.amazonaws.com" //For V2 "Service": "lexv2.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}Attach an IAM policy to the role that allows logging of conversation text to CloudWatch logs:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:region:account-id:log-group:log-group-name:*" } ]}6.    Add an IAM policy to the role that allows audio logging to an S3 bucket:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::bucket-name/*" } ]}Review your COPPA settingsCheck if COPPA is enabled for your bot. If you enabled COPPA, then you can't use the conversation logs feature for that bot.Using Amazon Lex V1To check this setting, check the general settings of your bot using the Amazon Lex console.1.    Open the Amazon Lex console, and then choose Return to the V1 console.2.    Choose the bot that you want to edit.2.    Choose Settings, and then choose General.3.    Choose COPPA.Use Amazon Lex V21.    Open the Amazon Lex V2 console, and choose bot versions.2.    Choose the version that you want to use, and then choose COPPA.3.    If COPPA is enabled for a version you want to use, you can't disable it. Instead, go to Draft versions, and choose COPPA. You can now change COPPA to no, and publish a new version.Further troubleshooting steps1.    Check that the log group you're using is in the same Region as your Amazon Lex bot.2.    Check that the bot alias that you're using and the alias that you specified logging for are the same. Conversation logs are configured according to bot alias, so it's important that they match.3.    Check that you aren't using $LATEST alias or a test bot that Amazon Lex provides for testing. You can't log conversations for either of these.4.    Check that you haven't enabled AI services opt out policies in your AWS organization. If you enable opt-out policies, then Amazon Lex doesn't log conversation logs.Related informationConversation logsIAM policies for conversation logsMonitoring with conversation logsFollow"
https://repost.aws/knowledge-center/lex-conversation-logs-cloudwatch
I'm trying to upload a large file using the Amazon S3 console. Why is the upload failing?
"I'm trying to upload a large file (1 GB or larger) to Amazon Simple Storage Service (Amazon S3) using the console. However, the upload persistently fails and I'm getting timeout errors. How do I resolve this?"
"I'm trying to upload a large file (1 GB or larger) to Amazon Simple Storage Service (Amazon S3) using the console. However, the upload persistently fails and I'm getting timeout errors. How do I resolve this?ResolutionFor large files, Amazon S3 might separate the file into multiple uploads to maximize the upload speed. The Amazon S3 console might time out during large uploads because of session timeouts. Instead of using the Amazon S3 console, try uploading the file using the AWS Command Line Interface (AWS CLI) or an AWS SDK.Note: If you use the Amazon S3 console, the maximum file size for uploads is 160 GB. To upload a file that is larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API.AWS CLIFirst, install and configure the AWS CLI. Be sure to configure the AWS CLI with the credentials of an AWS Identity and Access Management (IAM) user or role. The IAM user or role must have the correct permissions to access Amazon S3.Important: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.To upload a large file, run the cp command:aws s3 cp cat.png s3://docexamplebucketNote: The file must be in the same directory that you're running the command from.When you run a high-level (aws s3) command such as aws s3 cp, Amazon S3 automatically performs a multipart upload for large objects. In a multipart upload, a large file is split into multiple parts and uploaded separately to Amazon S3. After all the parts are uploaded, Amazon S3 combines the parts into a single file. A multipart upload can result in faster uploads and lower chances of failure with large files.For more information on multipart uploads, see How do I use the AWS CLI to perform a multipart upload of a file to Amazon S3?AWS SDKFor a programmable approach to uploading large files, consider using an AWS SDK, such as the AWS SDK for Java. For an example operation, see Upload an object using the AWS SDK for Java.Note: For a full list of AWS SDKs and programming toolkits for developing and managing applications, see Tools to build on AWS.Related informationUsing Amazon S3 with the AWS CLIFollow"
https://repost.aws/knowledge-center/s3-large-file-uploads
How do I integrate Amazon Connect with Zendesk?
I want to use Amazon Connect with Zendesk. How do I set that up?
"I want to use Amazon Connect with Zendesk. How do I set that up?Short descriptionInstall and configure the Amazon Connect for Zendesk app in your Zendesk Support account, then integrate the app with Amazon Connect. After integration, you can create contact flows to use Amazon Connect with Zendesk ticketing.For this setup, you need the following:An Amazon Connect instance.A Zendesk Support account with a Zendesk Talk Partner Edition plan, or a Zendesk trial account.ResolutionGet your instance's access URL1.    Open the Amazon Connect console.2.    Under Instance Alias, choose your instance's alias.3.    On the Overview pane, copy the Access URL for your instance.Important: If the URL looks like the following, then copy through the .com only: https://instance-name.awsapps.com/connect/login (For example: https://instance-name.awsapps.com)-or-If the URL looks like the following, then copy the complete URL: https://instance-name.my.connect.awsInstall the Amazon Connect for Zendesk app1.    Sign in to your Zendesk account.2.    On the Amazon Connect app page of the Zendesk Marketplace website, choose Install.3.    In the APP INSTALLATION dialog box, confirm that your Zendesk account is selected. Then, choose Install.4.    In your Zendesk, under INSTALLATION, do the following:For Amazon Connect URL, paste the access URL for your instance that you copied earlier.For Default entry point phone number, enter a number.For Contact Flow attribute containing Zendesk Ticket Number, copy the value "zendesk_ticket". You need this attribute value later in this setup.(Optional) Update the other settings based on your use case.Note: You can change these settings in your Zendesk at any time after installation.5.    Choose Install.Integrate Zendesk with Amazon Connect1.    Refresh the Zendesk account's browser tab. The Amazon Connect app icon appears at the top-right of your Zendesk instance.2.    Select the Amazon Connect app icon at the top-right corner. Two URLs appear.3.    Copy the two URLs one at a time. Then, add them to the Approved domains list in the Approved origins section of your Amazon Connect instance by doing the following:Open the Amazon Connect console. Then, in the left navigation pane, choose Approved origins.Under Approved origins, choose Add domain. An Add domain text box appears.In the Add domain text box, under Enter domain URL, paste one of the URLs that you copied earlier.Choose Add.Important: You can add one URL at a time to the Approved domains list only. You must repeat step 3 of this section to add the second URL.4.    In a new browser tab, log in to the Amazon Connect dashboard either as a created user or by choosing Log in for emergency access.5.    In your Zendesk, select the Amazon Connect app icon. The Amazon Connect Contact Control Panel (CCP) opens in Zendesk.(Optional) Create a contact flow for Zendesk ticketingFollow these steps to create an example contact flow. This contact flow prompts callers for input (for example, an account number or ticket number). Then, the flow automatically generates a Zendesk ticket with the caller's information.Note: It's a best practice to include Check hours of operation and Check staffing blocks before transferring a call to an agent. These blocks verify that the call is within working hours and that agents are staffed to service the call. However, these blocks are not required to achieve the desired functionality of this setup. For more information, see Best practices for Amazon Connect.Create a contact flow1.    
In the Amazon Connect console, under Access URL, choose the access URL for your instance.2.    Log in to your instance using the administrator account.3.    In the left navigation bar, pause on Routing. Then, choose Contact flows.4.    Under Contact flows, choose a template, or choose Create contact flow to design a contact flow from scratch. For more information, see Create a new contact flow.5.    In the contact flow designer, next to Save, choose the arrow icon. Then, choose Save as.Note: This option saves your contact flow as a new version, instead of overwriting the original.6.    In the Save as text box, enter a New name and Description for your contact flow.7.    Choose Save as.Add a Store customer input block1.    In the contact flow designer, choose Interact. Then, drag and drop a Store customer input block onto the canvas.2.    Choose the block title (Store customer input). The block's settings menu opens.3.    Under Prompt, do the following:Choose Text to speech (Ad hoc).For Enter text, enter a message that asks the customer for their input. For example, "Please enter a valid Zendesk ticket number followed by #." For more information, see Create prompts.4.    Under Customer input, do the following:Choose Custom.(Optional) For Maximum Digits, enter a maximum number of characters that callers can enter.(Optional) For Delay between entry, enter a number of seconds that callers have between entering each digit before timeout.For Encrypt entry (recommended), select the check box and configure encryption. For more information, see Encrypt customer input and Creating a secure IVR solution with Amazon Connect.5.    Choose Save.For more information, see Use Amazon Connect contact attributes.Add a Set contact attributes block1.    In the contact flow designer, choose Set. Then, drag and drop a Set contact attributes block onto the canvas.2.    Choose the block title (Set contact attributes). The block's settings menu opens.3.    Under Attribute to save, choose Use attribute. Then, do the following:For Destination key, enter "zendesk_ticket".For Type, choose System.For Attribute, choose Stored customer input.4.    Choose Save.For more information, see Use Amazon Connect contact attributes.Set up call transfersAdd a Set working queue block and a Transfer to queue block.For more information, see Set up contact transfers.Add a Disconnect / hang up blockFrom the contact flow designer, choose Terminate / Transfer. Then, drag and drop a Disconnect / hang up block onto the canvas.Connect the blocksIn the contact flow designer, drag the arrows from each block to the block that you want to perform the next action. All connectors must be connected to a block before you can publish the contact flow.As you connect the contact blocks, make sure that you do the following for each block:Connect Start or Success to the next block in the following order:Entry point > Store customer input > Set contact attributes > Set working queue > Transfer to queueConnect Error and At capacity to the Disconnect / hang up block.Save and publish the contact flow1.    Choose Save to save a draft of the flow.2.    Choose Publish to activate the flow immediately.For more information, see Create a new contact flow.Related informationZendesk (AWS Partner Network)Setup pre-built integrationsFollow"
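The approved-origins step above can also be scripted. The following boto3 sketch assumes the Amazon Connect AssociateApprovedOrigin API; the instance ID and both Zendesk URLs are placeholders, not values from this article:

# Minimal sketch: add the Zendesk URLs to the Connect instance's approved
# origins. The instance ID and URLs are placeholders, and the API call is an
# assumed alternative to the console steps above.
import boto3

connect = boto3.client("connect")

instance_id = "11111111-2222-3333-4444-555555555555"  # assumed Connect instance ID
zendesk_origins = [
    "https://example.zendesk.com",          # assumed first URL shown by the app
    "https://example-assets.zdassets.com",  # assumed second URL shown by the app
]

for origin in zendesk_origins:
    # One origin per call, mirroring the console's one-at-a-time limit.
    connect.associate_approved_origin(InstanceId=instance_id, Origin=origin)
    print("Added approved origin:", origin)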
https://repost.aws/knowledge-center/connect-integrate-zendesk
How do I provide IAM users with a link to assume an IAM role?
How do I provide AWS Identity and Access Management (IAM) users links to an IAM role?
"How do I provide AWS Identity and Access Management (IAM) users links to an IAM role?Short descriptionYou can provide the IAM user with the URL link of the IAM role that you want them to assume.ResolutionFollow these instructions to get the IAM role link that you want the IAM user to assume. The following example uses IAM user Bob.1.    If you haven't already done so, follow the instructions for creating an IAM user in your AWS account. Then, follow the instructions for creating an IAM role.2.    Choose the role name. In the Summary pane, in Link to switch roles in console, copy the link. The link looks similar to the following: https://signin.aws.amazon.com/switchrole?roleName=YOURROLE&account=1234567890123.    Provide the link to IAM user Bob.4.    Bob opens the IAM console, and then pastes the link into the browser window.5.    The Switch Role page opens for Bob. In Display Name, enter Bob.6.    (Optional) Choose a Color for Bob.7.    Choose Switch Role.Bob now assumes the IAM role that you provided.For more information, see Things to know about switching roles in the console.Related informationGranting permissions to an IAM user to switch rolesSwitching to a role (console)Follow"
https://repost.aws/knowledge-center/iam-user-role-link
"How do I troubleshoot "Neither the global service principal states.amazonaws.com, nor the regional one is authorized to assume the provided role" errors in AWS Step Functions?"
"When I try to run my AWS Step Functions state machine, I receive the following error: "Neither the global service principal states.amazonaws.com, nor the regional one is authorized to assume the provided role." How do I troubleshoot the issue?"
"When I try to run my AWS Step Functions state machine, I receive the following error: "Neither the global service principal states.amazonaws.com, nor the regional one is authorized to assume the provided role." How do I troubleshoot the issue?ResolutionVerify that the AWS Identity and Access Management (IAM) role that your state machine assumes has the required trust relationships configuredOne of the following must be listed as a trusted entity in the IAM role's trust policy:An AWS Regional endpoint: states.<region>.amazonaws.comThe AWS global endpoint: states.amazonaws.comTo review and edit the trust policy of the IAM role that your state machine assumes, follow the instructions in Modifying a role trust policy (console). For more information, see How AWS Step Functions works with IAM.Note: When the StartExecution API action is called, Step Functions uses the IAM role that's associated with the state machine during the duration of the action's runtime. If the IAM role that the state machine assumes is changed during the action's runtime, then the IAM role isn't used on that API action.Verify that the IAM role that your state machine assumes still exists1.    Open the Step Functions console.2.    In the left navigation pane, choose State machines.3.    Select the name of your state machine.4.    In the Details section, choose the link under IAM role ARN. If the IAM role exists, the role opens in the IAM console. If the IAM role doesn't exist, the IAM console opens a page that says No Entity Found.If the IAM role that your state machine assumes doesn't exist, create a new IAM role with a different name that includes the required permissions. Then, configure your state machine to assume the new IAM role that you created. For more information, see How AWS Step Functions works with IAM.Important: The new IAM role that you create must have a different name than the previous IAM role.Follow"
https://repost.aws/knowledge-center/step-functions-iam-role-troubleshooting
Why are emails failing to deliver when I send Amazon SES emails using the SendTemplatedEmail operation?
"I'm using the SendTemplatedEmail operation to send messages from my Amazon Simple Email Service (Amazon SES) account. However, Amazon SES doesn't deliver some emails."
"I'm using the SendTemplatedEmail operation to send messages from my Amazon Simple Email Service (Amazon SES) account. However, Amazon SES doesn't deliver some emails.ResolutionWhen you use an email template, Amazon SES validates that the template data you send includes the required variables in the template. If the template data contains non-compliant variables or is missing variables, then Amazon SES can't deliver the email. This is called a Rendering Failure.Use Amazon Simple Notification Service (Amazon SNS) to set up Rendering Failure event notifications. Review the Rendering Failure event notifications to find out why Amazon SES doesn't deliver an email when you use the SendTemplatedEmail operation.After you set up Rendering Failure event notifications, you receive an Amazon SNS notification when a templated email delivery fails. The notification error message includes information about the template variable that led to the Rendering Failure.For example, the following template contains the variables name and favoritecolor:{ "Template": { "TemplateName": "ExampleTemplate", "SubjectPart": "Hello, {{name}}!", "HtmlPart": "<h1>Hello {{name}},</h1><p>Your favorite color is {{favoritecolor}}.</p>", "TextPart": "Dear {{name}},\r\nYour favorite color is {{favoritecolor}}." }}If you send the following template data, then Amazon SES can't deliver the email. This is because the favoritecolor variable is missing from the template.Important: Including extra variables that aren't present in the template, such as favoritenumber, doesn't cause an error. However, all variables that you include in the template must have an exact case-sensitive counterpart in the template data. See the following example:"TemplateData": "{ \"name\":\"Jane\", \"favoritenumber\": \"10\" }"With Rendering Failure event notifications, you receive a failure notification that's similar to the following message:{ "eventType": "Rendering Failure", "mail": { "timestamp": "2019-09-09T04:38:19.788Z", "source": "sender@example.com", "sourceArn": "arn:aws:ses:us-west-2:1234567890123:identity/sender@example.com", "sendingAccountId": "1234567890123", "messageId": "01010161a734a0eb-a706827a-3bda-490f-8eaa-63cf4b00d10c-000000", "destination": [ "receiver@example.com" ], "headersTruncated": false, "tags": { "ses:configuration-set": [ "RenderFailure" ] } }, "failure": { "errorMessage": "Attribute 'favoritecolor' is not present in the rendering data.", "templateName": "ExampleTemplate" }}To avoid Rendering Failures, follow these guidelines:Check the capitalization of variable names in your template data. Variable names in the template are case sensitive.Verify that your template data includes all the variables in the template.Related informationUsing templates to send personalized email with the Amazon SES APIFollow"
https://repost.aws/knowledge-center/ses-sendtemplatedemail-delivery-failure
How do I resolve the "cannotpullcontainererror" error for my Amazon ECS tasks on Fargate?
I want to resolve the "cannotpullcontainererror" error so that I can start my Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.
"I want to resolve the "cannotpullcontainererror" error so that I can start my Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.Short descriptionThe "cannotpullcontainererror" error can prevent tasks from starting. To start an Amazon ECS task on Fargate, your Amazon Virtual Private Cloud (Amazon VPC) networking configurations must allow your Amazon ECS infrastructure to access the repository where the image is stored. Without the correct networking, the image can't be pulled by Amazon ECS on Fargate and the container can't start.ResolutionConfirm that your VPC networking configuration allows your Amazon ECS infrastructure to reach the image repositoryThe route tables associated to the subnets that your task is created in must allow your Amazon ECS infrastructure to reach the repository endpoint. The endpoint can be reached through an internet gateway, NAT gateway, or VPC endpoints.If you're not using AWS PrivateLink, then complete the following steps:Open the Amazon VPC console.In the navigation pane, choose Subnets.Select the subnet that your ECS Fargate task is using.Choose the Route Table tab.In the Destination column, confirm that the default route (0.0.0.0/0) of the route table allows public internet access. This access can be either through a NAT gateway or an internet gateway.Important: The NAT gateway or internet gateway must be the target of the default route. For example route tables, see Example routing options. If you're not using a NAT gateway or internet gateway, then make sure that your custom configuration allows public internet access.If you're using an internet gateway (public subnets), then confirm that the task has a public IP assigned to it. To do this, launch your ECS task with Auto-assign public IP set to ENABLED in the VPC and security groups section when you create the task or service.If you're using PrivateLink, confirm that the security groups for your VPC endpoints allow the Fargate infrastructure to use them.Note: Amazon ECS tasks hosted on Fargate using version 1.3.0 or earlier require the com.amazonaws.region.ecr.dkrAmazon Elastic Container Registry (Amazon ECR) VPC endpoint and the Amazon Simple Storage Service (Amazon S3) gateway endpoint. Amazon ECS tasks hosted on Fargate using version 1.4.0 or later require both the com.amazonaws.region.ecr.dkr and com.amazonaws.region.ecr.api Amazon ECR VPC endpoints and the Amazon S3 gateway endpoint.Open the Amazon VPC console.In the navigation pane, choose Endpoints.Select the endpoint from the list of endpoints, and then choose the Subnets tab. The VPC endpoints com.amazonaws.region.ecr.dkr and com.amazonaws.region.ecr.api for Amazon ECR will display in the list of subnets and associated with the Fargate subnets. You also see the Amazon S3 gateway on the list of subnets.Note: If a subnet isn't listed, choose Manage Subnets. Next, select the subnet based on its Availability Zone. 
Then, choose Modify Subnets.Choose the Policy tab, and then confirm that the correct policy requirements are met.To confirm that the security group attached to the com.amazonaws.region.ecr.api and com.amazonaws.region.ecr.dkr VPC endpoints allows incoming connections on port 443 from the Amazon ECS tasks for Fargate, select the endpoint from the list of endpoints.Choose the Security Groups tab.For Group ID, choose the security group ID.Choose the Inbound rules tab, and then confirm that you can see the rule that allows 443 connections from your ECS tasks on Fargate.Check the VPC DHCP Option SetOpen the Amazon VPC console.In the navigation pane, choose Your VPCs.Select the VPC that contains your Fargate task.On the Details tab, note the setting for DHCP options set.In the navigation pane, choose DHCP Options Sets.Select the DHCP options set that you noted in step 4.Choose Actions, and then choose View details.Confirm that Domain name servers is set to AmazonProvidedDNS. If it isn't set to AmazonProvidedDNS, then configure conditional DNS forwarding.Check the task execution role permissionsOpen the IAM console.In the navigation pane, choose Roles.Select the task execution role that your Fargate tasks are using.Confirm that the task execution role has the permissions to pull an image from Amazon ECR.Check that the image existsOpen the Amazon ECR console.Select the Amazon ECR repository that your Fargate task should be pulling the image from.Confirm that the URI and the tag in Amazon ECR are the same as what's specified in the task definition.Note: If you're not using Amazon ECR, then make sure that you see image:tag in the specified image repository.Follow"
https://repost.aws/knowledge-center/ecs-fargate-pull-container-error
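If you launch tasks programmatically, the public IP assignment mentioned above maps to the assignPublicIp field of the network configuration. The following boto3 sketch is illustrative only; the cluster name, task definition, subnet, and security group IDs are hypothetical placeholders.

import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",                     # hypothetical cluster name
    launchType="FARGATE",
    taskDefinition="my-task-def:1",           # hypothetical task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],       # public subnet
            "securityGroups": ["sg-0123456789abcdef0"],
            # Needed when pulling the image over an internet gateway;
            # not required when the subnets use VPC endpoints instead.
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])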
Why is my AWS DMS task that uses PostgreSQL as the source failing with all of the replication slots in use?
"I have an AWS Database Migration Service (AWS DMS) task that uses an Amazon Relational Database Service (Amazon RDS) DB instance that is running PostgreSQL as the source. My task is failing, all of the replication slots are in use, and I received an error message. Why is my task failing, and how do I resolve these errors?"
"I have an AWS Database Migration Service (AWS DMS) task that uses an Amazon Relational Database Service (Amazon RDS) DB instance that is running PostgreSQL as the source. My task is failing, all of the replication slots are in use, and I received an error message. Why is my task failing, and how do I resolve these errors?Short descriptionFor Amazon RDS for PostgreSQL instances, AWS DMS uses native replication slots to perform the logical replication for change data capture (CDC).The number of replication slots that a PostgreSQL instance has is controlled by the max_replication_slots parameter. By default, there are five replication slots for RDS PostgreSQL instances. If you exceed the maximum number of replication slots, then you see log entries like these:Messages[SOURCE_CAPTURE ]E: Failed (retcode -1) to execute statement [1022502] (ar_odbc_stmt.c:2579)[SOURCE_CAPTURE ]E: RetCode: SQL_ERROR SqlState: 53400 NativeError: 1 Message: ERROR: all replication slots are in use;To resolve these errors, remove the used replication slots or increase the value of the max_replication_slots parameter.ResolutionRemove used replication slotsIf you are running multiple AWS DMS tasks or you have old tasks running on same DB instance, then remove used replication slots. These replication slots continue to occupy space, so by removing then you make the slots available for new tasks.First, identify the maximum number of replication slots. Then, remove or "drop" the used replication slots.Run this query to check the maximum number of replication slots:SELECT * FROM pg_replication_slots; slot_name | plugin | slot_type | datoid | database | active | xmin | catalog_xmin | restart_lsn -----------------+---------------+-----------+--------+----------+--------+--------+--------------+-------------old_and_used_slot | test_decoding | logical | 12052 | postgres | f | | 684 | 0/16A4408Run this query to drop a used replication slot:SELECT pg_drop_replication_slot('old_and_used_slot');Note: Replace old_and_used_slot with the name of your replication slot.Increase the value of the max_replication_slots parameterModify the DB parameter in the custom DB parameter groups that is attached to the Amazon RDS DB instance. Then, increase the value of the max_replication_slots parameter. This is a static parameter, so be sure to reboot the DB instance after changing the parameter value.After you remove the used replication slots or increase the value of the max_replication_slots parameter, restart the task.Related informationPostgreSQL on Amazon RDSUsing a PostgreSQL database as an AWS DMS sourcePostgreSQL documentation for Logical decoding examplesFollow"
https://repost.aws/knowledge-center/dms-postgresql-fail-slots-in-use
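The same cleanup can be scripted. The following Python sketch uses psycopg2 (an assumed driver choice; any PostgreSQL client works) to list replication slots and drop only the ones that are inactive. The connection details are placeholders, and dropping a slot that an active task still needs would break that task, so treat this as illustrative.

import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="example-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # List all replication slots and whether they are active.
    cur.execute("SELECT slot_name, active FROM pg_replication_slots;")
    for slot_name, active in cur.fetchall():
        print(slot_name, "active" if active else "inactive")
        # Drop only slots that are no longer in use.
        if not active:
            cur.execute("SELECT pg_drop_replication_slot(%s);", (slot_name,))

conn.close()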
How do I rebalance the uneven shard distribution in my Amazon OpenSearch Service cluster?
"The disk space in my Amazon OpenSearch Service domain is unevenly distributed across the nodes. As a result, the disk usage is heavily skewed."
"The disk space in my Amazon OpenSearch Service domain is unevenly distributed across the nodes. As a result, the disk usage is heavily skewed.Short descriptionDisk usage can be heavily skewed because of the following reasons:Uneven shard sizes in a cluster. Although OpenSearch Service evenly distributes the number of shards across nodes, varying shard sizes require different amounts of disk space.Available disk space on a node. For more information, see Disk-based shard allocation on the Elasticsearch website.Incorrect shard allocation strategy. For more information, see Demystifying OpenSearch Service shard allocation.To rebalance the shard allocation in your OpenSearch Service cluster, consider the following approaches:Check the shard allocation, shard sizes, and index sharding strategy.Be sure that shards are of equal size across the indices.Keep shard sizes between 10 GB to 50 GB for better performance.Add more data nodes to your OpenSearch Service cluster.Update your sharding strategy.Delete the old or unused indices to free up disk space.ResolutionCheck the shard allocation, shard sizes, and index sharding strategyTo check the number of shards allocated to each node and the amount of disk space used on each node, use the following API:GET _cat/allocation?vTo check the shards allocated to each node and the size of each shard, use the following API:GET _cat/shards?vNote: This API shows that the size of shards can vary for different indices.The uneven sharding strategy for indices can cause data skewness. In this case, shards of bigger indices reside on only a few nodes. To check the sharding strategy for indices. use the following API:GET _cat/indices?vBe sure that shards are of equal size across the indicesIf the index size varies significantly, then use the rollover index API to create a new index when certain index sizes are reached. Or, you can use the Index State Management (ISM) to create a new index for OpenSearch Service versions 7.1 and later. For more information about rolling an alias using ISM, see rollover on the Open Distro website.Keep shard sizes between 10 GB to 50 GB for better performanceIf you have a large class of instances, then use the Petabyte scale for Amazon OpenSearch Service to determine shard sizes. For example, an OpenSearch Service domain with several i3.16xlarge.search instances can support shard sizes of up to 100 GB because there are more resources available. For more information about sharding strategy, see Choosing the number of shards.Add more data nodes to your OpenSearch Service clusterIf your OpenSearch Service cluster has reached high disk usage levels, then add more data nodes to your cluster. The addition of data nodes also adds more resources to improve cluster performance.Note: OpenSearch Service doesn't automatically rebalance the cluster when there's a lack of available storage space. As a result, if a data node runs out of unused storage space, then the cluster blocks any writes. For more information about disk space management, see How do I troubleshoot low storage space in my Amazon OpenSearch Service domain?Update your sharding strategyBy default, Amazon OpenSearch Service has a sharding strategy of 5:1, where each index is divided into five primary shards. Within each index, each primary shard also has its own replica. 
OpenSearch Service automatically assigns primary shards and replica shards to separate data nodes, and makes sure that there's a backup in case of failure.To modify OpenSearch Service default behavior, design your indices so that shards are distributed equally by size:For existing indices, use the reindex API to change the number of primary shards. The _reindex API can be used to merge smaller indices into a bigger index, or it can be used to split up the bigger index. When the bigger index is split into more primary shards, the shard sizes are decreased.For new indices, use the index template API to define the number of primary and replica shards.Then, update the indices settings for your shards. For more information, see Update index settings API on the Elasticsearch website.Delete the old or unused indices to free up disk spaceOpenSearch Service or Elasticsearch version 6.8 or later support ISM. With ISM, you can define custom management policies so that old or unused indices are deleted after an established duration.Related informationCalculating storage requirementsGet started with Amazon OpenSearch Service: How many shards do I need?Follow"
https://repost.aws/knowledge-center/opensearch-rebalance-uneven-shards
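If you prefer to run the _cat checks above outside of OpenSearch Dashboards, you can call them over HTTPS. The following Python sketch uses the requests library with basic authentication; the domain endpoint and master-user credentials are placeholders, and domains that rely on IAM-based access need a signed request instead.

import requests

# Placeholder domain endpoint and fine-grained access control credentials.
endpoint = "https://search-mydomain-abc123.us-east-1.es.amazonaws.com"
auth = ("master-user", "master-password")

# Disk usage and shard counts per node.
print(requests.get(f"{endpoint}/_cat/allocation?v", auth=auth).text)

# Shard-level detail, including the size of each shard per index.
print(requests.get(f"{endpoint}/_cat/shards?v", auth=auth).text)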
How do I deploy artifacts to Amazon S3 in a different AWS account using CodePipeline and a canned ACL?
I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider and a canned Access Control List (ACL)?
"I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider and a canned Access Control List (ACL)?ResolutionNote: The following example procedure assumes the following:You have two AWS accounts: A development account and a production account.The input bucket in the development account is called codepipeline-input-bucket (with versioning activated).The default artifact bucket in the development account is called codepipeline-us-east-1-0123456789.The output bucket in the production account is called codepipeline-output-bucket.You're deploying artifacts from the development account to an S3 bucket in the production account.You're using a canned ACL to provide the bucket owner in the production account with access to the objects owned by the development account.Note: To deploy artifacts and set the production account as object owner, see How do I deploy artifacts to Amazon S3 in a different account using CodePipeline?Create a CodePipeline in the development account1.    Open the CodePipeline console. Then, choose Create pipeline.2.    For Pipeline name, enter a name for your pipeline. For example: crossaccountdeploy.Note: The Role name text box is populated automatically with the service role name AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy. You can also choose another, existing service role.3.    Expand the Advanced settings section.4.    For Artifact store, choose Default location.Note: You can select Custom location if that's necessary for your use case.5.    For Encryption key, select Default AWS Managed Key.6.    Choose Next.7.    On the Add source stage page, for Source provider, choose Amazon S3.8.    For Bucket, enter the name of your development input S3 bucket. For example: codepipeline-input-bucket.Important: The input bucket must have versioning activated to work with CodePipeline.9.    For S3 object key, enter sample-website.zip.Important: To use an example AWS website instead of your own website, see Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider. Then, search for "sample static website" in the Prerequisites of the 1: Deploy Static Website Files to Amazon S3 section.10.    For Change detection options, choose Amazon CloudWatch Events (recommended).11.    Choose Next.12.    On the Add build stage page, choose Skip build stage. Then, choose Skip.13.    On the Add deploy stage page, for Deploy provider, choose Amazon S3.14.    For Region, choose the AWS Region that your output S3 bucket is in. For example: US East (N. Virginia).15.    For Bucket, enter the name of your production output S3 bucket. For example: codepipeline-output-bucket.16.    Select the Extract file before deploy check box.Note: If needed, enter a path for Deployment path.17.    Expand Additional configuration.18.    For Canned ACL, choose bucket-owner-full-control.Note: The bucket-owner-full-control gives the bucket owner in the production account full access to the objects deployed and owned by the development account. For more information, see Canned ACL.19.    Choose Next.20.    Choose Create pipeline. The pipeline runs, but the source stage fails. 
The following error appears: "The object with key 'sample-website.zip' does not exist."The Upload the sample website to the input bucket section of this article describes how to resolve this error.Configure a CodePipeline service role with an AWS Identity and Access Management (IAM) policy that adds S3 access for the output bucket of the production account1.    Open the IAM console in the development account.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. Then, enter the following policy into the JSON editor:Important: Replace codepipeline-output-bucket with your production output S3 bucket's name.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::codepipeline-output-bucket/*" }, { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::codepipeline-output-bucket" } ]}4.    Choose Review policy.5.    For Name, enter a name for the policy. For example: prodbucketaccess.6.    Choose Create policy.7.    In the navigation pane, choose Roles.8.    From the list of roles, choose AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy. This is the CodePipeline service role.Note: You can use your own service role, if required for your use case.9.    Choose Attach policies.10.    Select the policy that you created (prodbucketaccess). Then, choose Attach policy to grant CodePipeline access to the production output S3 bucket.Configure the output bucket in the production account to allow access from the development account1.    Open the Amazon S3 console in the production account.2.    In the Bucket name list, choose your production output S3 bucket. For example: codepipeline-output-bucket.3.    Choose Permissions. Then, choose Bucket Policy.4.    In the text editor, enter the following policy, and then choose Save:Important: Replace dev-account-id with your development environment's AWS account ID. Replace codepipeline-output-bucket with your production output S3 bucket's name.{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::dev-account-id:root" }, "Action": "s3:Put*", "Resource": "arn:aws:s3:::codepipeline-output-bucket/*" }, { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::dev-account-id:root" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::codepipeline-output-bucket" } ]}Upload the sample website to the input bucket1.    Open the Amazon S3 console in the development account.2.    In the Bucket name list, choose your development input S3 bucket. For example: codepipeline-input-bucket.3.    Choose Upload. Then, choose Add files.4.    Select the sample-website.zip file that you downloaded.5.    Choose Upload to run the pipeline. When the pipeline runs, the following occurs:The source action selects the sample-website.zip from the development input S3 bucket (codepipeline-input-bucket). Then, the source action places the zip file as a source artifact inside the default artifact bucket in the development account (codepipeline-us-east-1-0123456789).In the deploy action, the CodePipeline service role (AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy) uses its access to deploy to the production output S3 bucket (codepipeline-output-bucket). The deploy action also applies the canned ACL bucket-owner-full-control.Note: The development account is the owner of the extracted objects in the production output S3 bucket (codepipeline-output-bucket). 
The bucket owner in the production account also has full access to the deployed artifacts.Follow"
https://repost.aws/knowledge-center/codepipeline-artifacts-s3-canned-acl
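The production-account bucket policy in step 4 can also be applied programmatically. The following boto3 sketch attaches the same cross-account policy shown above; the bucket name and development account ID are placeholders, and the call must run with credentials from the production account.

import json
import boto3

s3 = boto3.client("s3")  # credentials from the production account

bucket = "codepipeline-output-bucket"   # placeholder bucket name
dev_account_id = "111111111111"         # placeholder development account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{dev_account_id}:root"},
            "Action": "s3:Put*",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{dev_account_id}:root"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))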
How can I use IAM policies to grant user-specific access to specific S3 folders?
I want to use IAM user policies to restrict access to specific folders within Amazon Simple Storage Service (Amazon S3) buckets.
"I want to use IAM user policies to restrict access to specific folders within Amazon Simple Storage Service (Amazon S3) buckets.Short descriptionYou can use AWS Identity and Access Management (IAM) user policies to control who has access to specific folders in your Amazon S3 buckets.ResolutionSingle-user policy - This example policy allows a specific IAM user to see specific folders at the first level of the bucket and then to take action on objects in the desired folders and subfolders. This example uses an IAM user named David and a bucket named my-company with the following structure:/home/Adele/ /home/Bob/ /home/David/ /restricted/ /root-file.txt{ "Version":"2012-10-17", "Statement": [ { "Sid": "AllowUserToSeeBucketListInTheConsole", "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"], "Effect": "Allow", "Resource": "*" }, { "Sid": "AllowRootAndHomeListingOfCompanyBucket", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::my-company"], "Condition":{"StringEquals":{"s3:prefix":["","home/"],"s3:delimiter":["/"]}} }, { "Sid": "AllowListingOfUserFolder", "Action": ["s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::my-company"], "Condition":{"StringLike":{"s3:prefix":["home/David/*"]}} }, { "Sid": "AllowAllS3ActionsInUserFolder", "Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::my-company/home/David/*"] } ]}The Amazon S3 console uses the slash (/) as a special character to show objects in folders. The prefix (s3:prefix) and the delimiter (s3:delimiter) help you organize and browse objects in your folders.Multiple-user policy - In some cases, you might not know the exact name of the resource when you write the policy. For example, you might want to allow every user to have their own objects in an Amazon S3 bucket, as in the previous example. However, instead of creating a separate policy for each user that specifies the user's name as part of the resource, you can create a single group policy that works for any user in that group.You can do this by using policy variables, which allow you to specify placeholders in a policy. When the policy is evaluated, the policy variables are replaced with values that come from the request itself.This example shows a policy for an Amazon S3 bucket that uses the policy variable ${aws:username}:{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowUserToSeeBucketListInTheConsole", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": "*" }, { "Sid": "AllowRootAndHomeListingOfCompanyBucket", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::my-company" ], "Condition": { "StringEquals": { "s3:prefix": [ "", "home/" ], "s3:delimiter": [ "/" ] } } }, { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::my-company" ], "Condition": { "StringLike": { "s3:prefix": [ "home/${aws:username}/*" ] } } }, { "Sid": "AllowAllS3ActionsInUserFolder", "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::my-company/home/${aws:username}/*" ] } ]}Note: Only StringLike recognizes an asterisk (*) as wildcard. StringEquals doesn't. For more information, see string condition operators.Related informationControlling access to a bucket with user policiesAmazon S3 condition key examplesBucket policy examplesFollow"
https://repost.aws/knowledge-center/iam-s3-user-specific-folder
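If you prefer to create the multiple-user policy with code, the following boto3 sketch creates an abbreviated version of it (only the two user-folder statements from the example above) as a managed policy and attaches it to a group. The policy and group names are hypothetical, and the ${aws:username} policy variable is passed through as a literal string because IAM, not Python, resolves it at request time.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::my-company"],
            "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::my-company/home/${aws:username}/*"],
        },
    ],
}

# Create the managed policy and attach it to a group of application users.
policy = iam.create_policy(
    PolicyName="S3UserFolderAccess",            # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_group_policy(
    GroupName="s3-home-users",                  # hypothetical group name
    PolicyArn=policy["Policy"]["Arn"],
)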
Why can't I connect to an Amazon EC2 instance within my Amazon VPC from the internet?
I'm unable to connect to an Amazon Elastic Compute Cloud (Amazon EC2) instance within an Amazon Virtual Private Cloud (Amazon VPC) from the internet. How can I fix this?
"I'm unable to connect to an Amazon Elastic Compute Cloud (Amazon EC2) instance within an Amazon Virtual Private Cloud (Amazon VPC) from the internet. How can I fix this?Short descriptionProblems connecting to Amazon EC2 instances in Amazon VPC are usually related to the configuration of security groups, network access control lists (ACLs), or route tables.ResolutionBefore you start, be sure that your Amazon EC2 instance is passing system status checks and instance status checks.Check security groupsBe sure that the security groups associated with the elastic network interface of the instance allow connections from the required ports.Important: In a production environment, enable only a specific IP address or range of addresses to access your instance. For testing purposes, you can specify a custom IP address of 0.0.0.0/0 to enable all IP addresses to access your instance using SSH or RDP.Note: You don't need to configure security group egress rules, because security groups are stateful.Check network ACLsCheck your network ACLs for the following:Be sure that the network ACLs associated with your VPC subnet allow traffic through the required ports.Note: For more information, see Recommended network ACL rules for your VPC and Adding and deleting rules.Be sure that both inbound and outbound traffic are allowed.Note: Network ACLs are stateless. Responses to allowed inbound traffic are subject to the rules for outbound traffic, but responses to allowed outbound traffic are subject to the rules for inbound traffic.Be sure to open only ephemeral ports in outbound ACLs.Note: It's a best practice to allow only the ports that you need.Important: If you're still not certain what is blocking traffic from accessing your instance, consider enabling VPC flow logs. Flow logs capture IP address traffic that flows through your VPC. If you see rejected traffic in your flow logs, be sure to check your security groups and network ACL settings again.Check route tablesTo check if an internet gateway is attached to your VPC, complete the following steps:Sign in to the Amazon VPC console.On the navigation pane, in the Virtual Private Cloud section, choose Internet Gateways.In the search box, search for the internet gateway attached to your VPC. You can also use the search bar on the page to search for your Attached VPC ID (for example, vpc-xxxxxxxx).Note the ID of the internet gateway (for example, igw-xxxxxxxx).If an internet gateway is already attached to your VPC, complete the following steps:Check your VPC's route tables for a route to your internet gateway. Look for a route entry whose Target is the ID internet gateway attached to your VPC (for example, igw-xxxxxxxx), and whose Destination is 0.0.0.0/0.If the route doesn't exist, add a route entry with the internet gateway as the Target and 0.0.0.0/0 as the Destination.Be sure that the subnet route table also has a route entry to the internet gateway. If this entry doesn't exist, the instance is in a private subnet and is inaccessible from the internet.Note: Be sure that operating system-level (OS-level) route tables allow traffic from the internet. Use the command route -n (Linux instances) or netstat -rn (Linux or Windows instances), depending on your configuration.Check IP addressesCheck if a public IP address is assigned to your VPC instance, or an Elastic IP address is attached to the network interface of the instance. 
If a public IP address or elastic IP address isn't assigned to the network interface of the instance, then assign one.Note: For more information, see Working with IP addresses and Working with elastic IP addresses.Be sure that the OS-level software or firewalls on the instance allow traffic through the required ports.Related informationComparison of security groups and network ACLsFollow"
https://repost.aws/knowledge-center/vpc-connect-instance
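The route-table check can be automated. The following boto3 sketch (the subnet ID is a placeholder) looks up the route tables explicitly associated with a subnet and reports what the 0.0.0.0/0 default route points at, so you can confirm it targets an internet gateway for a public subnet.

import boto3

ec2 = boto3.client("ec2")
subnet_id = "subnet-0123456789abcdef0"  # placeholder subnet ID

# Route tables explicitly associated with the subnet. If this list is empty,
# the subnet implicitly uses the VPC's main route table.
response = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
)

for table in response["RouteTables"]:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            target = (
                route.get("GatewayId")
                or route.get("NatGatewayId")
                or route.get("TransitGatewayId")
                or "unknown"
            )
            print(f"{table['RouteTableId']}: default route -> {target}")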
How do I delete Amazon S3 objects and buckets?
I have Amazon Simple Storage Service (Amazon S3) objects and buckets that I don't need. How do I delete these resources?
"I have Amazon Simple Storage Service (Amazon S3) objects and buckets that I don't need. How do I delete these resources?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.To delete individual S3 objects, you can use the Amazon S3 console, the AWS CLI, or an AWS SDK.To delete multiple S3 objects using a single HTTP request, you can use the AWS CLI, or an AWS SDK.To empty an S3 bucket of its objects, you can use the Amazon S3 console, AWS CLI, lifecycle configuration rule, or AWS SDK.To delete an S3 bucket (and all the objects that it contains), you can use the Amazon S3 console, AWS CLI, or AWS SDK.Related informationBuckets overviewAmazon S3 objects overviewFollow"
https://repost.aws/knowledge-center/s3-delete-objects-and-buckets
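As a sketch of the SDK option mentioned above, the following Python (boto3) example empties a bucket, including any object versions and delete markers left over from versioning, and then deletes the bucket itself. The bucket name is a placeholder, and the operation permanently removes data, so treat it as illustrative.

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("doc-example-bucket")  # placeholder bucket name

# Delete all current objects, then any remaining object versions and
# delete markers (a no-op for buckets that never had versioning enabled).
bucket.objects.all().delete()
bucket.object_versions.all().delete()

# The bucket must be empty before it can be deleted.
bucket.delete()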
How can I configure a CloudWatch subscription filter to invoke my Lambda function?
I want to configure an Amazon CloudWatch subscription filter to invoke my AWS Lambda function.
"I want to configure an Amazon CloudWatch subscription filter to invoke my AWS Lambda function.Short descriptionWith Amazon CloudWatch Logs, you can use a subscription filter that sends log data to your Lambda function. CloudWatch Logs subscription filters are base64 encoded and compressed with the GZIP format.Before you create your Lambda function, calculate the volume of log data that will be generated. Be sure to create a function that can manage the volume amount. If the function doesn't have enough volume, then the log stream is throttled. For more information, see Lambda quotas.Note: Streaming large amounts of CloudWatch Logs data might result in high usage charges. It's a best practice to use AWS Budgets to track spending and usage. For instructions, see How can I use AWS Budgets to track my spending and usage?ResolutionCreate a CloudWatch Logs subscription filter that sends log data to your AWS Lambda function.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.1.    To provide CloudWatch Logs permission to invoke your Lambda function, run the AWS CLI command add-permission similar to the following:aws lambda add-permission \ --function-name "helloworld" \ --statement-id "helloworld" \ --principal "logs.amazonaws.com" \ --action "lambda:InvokeFunction" \ --source-arn "arn:aws:logs:region:123456789123:log-group:YourLogGroup:*" \ --source-account "123456789012"Important: Replace "helloworld" with your Lambda function name, "YourLogGroup" with your log group, and the example account number with your account.2.    Create a subscription filter using the AWS CLI command put-subscription-filter to send log events that contains a keyword. In the following example, the keyword "ERROR" is used for the Lambda function:Important: Replace "YourLogGroup" with your log group and the example account number with your account.aws logs put-subscription-filter \ --log-group-name YourLogGroup \ --filter-name demo \ --filter-pattern "ERROR" \ --destination-arn arn:aws:lambda:region:123456789123:function:helloworldThe CloudWatch log group "YourLogGroup" invokes the Lambda function when it receives a log event that contains the keyword "ERROR" similar to the following:{ "awslogs": { "data": "H4sIAAAAAAAAAHWPwQqCQBCGX0Xm7EFtK+smZBEUgXoLCdMhFtKV3akI8d0bLYmibvPPN3wz00CJxmQnTO41whwWQRIctmEcB6sQbFC3CjW3XW8kxpOpP+OC22d1Wml1qZkQGtoMsScxaczKN3plG8zlaHIta5KqWsozoTYw3/djzwhpLwivWFGHGpAFe7DL68JlBUk+l7KSN7tCOEJ4M3/qOI49vMHj+zCKdlFqLaU2ZHV2a4Ct/an0/ivdX8oYc1UVX860fQDQiMdxRQEAAA==" }}Related informationFilter and pattern syntaxFollow"
https://repost.aws/knowledge-center/lambda-cloudwatch-filter
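On the receiving side, the Lambda function has to undo the base64 encoding and GZIP compression described above before it can read the log events. Here is a minimal Python handler sketch:

import base64
import gzip
import json

def lambda_handler(event, context):
    # CloudWatch Logs delivers the payload base64-encoded and GZIP-compressed.
    payload = base64.b64decode(event["awslogs"]["data"])
    log_data = json.loads(gzip.decompress(payload))

    # logGroup, logStream, and the matched log events are now plain JSON.
    print("Log group:", log_data["logGroup"])
    for log_event in log_data["logEvents"]:
        print(log_event["timestamp"], log_event["message"])

    return {"processedEvents": len(log_data["logEvents"])}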
Why is my AWS Glue ETL job running for a long time?
My AWS Glue job is running for a long time.-or-My AWS Glue straggler task is taking a long time to complete.
"My AWS Glue job is running for a long time.-or-My AWS Glue straggler task is taking a long time to complete.Short descriptionSome common reasons why your AWS Glue jobs take a long time to complete are the following:Large datasetsNon-uniform distribution of data in the datasetsUneven distribution of tasks across the executorsResource under-provisioningResolutionEnable metricsAWS Glue provides Amazon CloudWatch metrics that can be used to provide information about the executors and the amount of done by each executor. You can enable CloudWatch metrics on your AWS Glue job by doing one of the following:Using a special parameter: Add the following argument to your AWS Glue job. This parameter allows you to collect metrics for job profiling for your job run. These metrics are available on the AWS Glue console and the CloudWatch console.Key: --enable-metricsUsing the AWS Glue console: To enable metrics on an existing job, do the following:Open the AWS Glue console.In the navigation pane, choose Jobs.Select the job that you want to enable metrics for.Choose Action, and then choose Edit job.Under Monitoring options, select Job metrics.Choose Save.Using the API: Use the AWS Glue UpdateJob API with --enable-metrics as the DefaultArguments parameter to enable metrics on an existing job.Note: AWS Glue 2.0 doesn't use YARN that reports metrics. This means that you can't get some of the executor metrics, such as numberMaxNeededExecutors and numberAllExecutor, for AWS Glue 2.0.Enable continuous loggingIf you enable continuous logging in your AWS Glue job, then the real-time driver and executor logs are pushed to CloudWatch every five seconds. With this real-time logging information, you can get more details on the running job. For more information, see Enabling continuous logging for AWS Glue jobs.Check the driver and executor logsIn the driver logs, check for tasks that ran for a long time before they were completed. For example:2021-04-15 10:53:54,484 ERROR executionlogs:128 - g-7dd5eec38ff57a273fcaa35f289a99ecc1be6901:2021-04-15 10:53:54,484 INFO [task-result-getter-1] scheduler.TaskSetManager (Logging.scala:logInfo(54)): Finished task 0.0 in stage 7.0 (TID 139) in 4538 ms on 10.117.101.76 (executor 10) (13/14)...2021-04-15 12:11:30,692 ERROR executionlogs:128 - g-7dd5eec38ff57a273fcaa35f289a99ecc1be6901:2021-04-15 12:11:30,692 INFO [task-result-getter-3] scheduler.TaskSetManager (Logging.scala:logInfo(54)): Finished task 13.0 in stage 7.0 (TID 152) in 4660742 ms on 10.117.97.97 (executor 11) (14/14)In these logs, you can view that a single task took 77 minutes to complete. Use this information to review why that particular task is taking a long time. You can do so by using the Apache Spark web UI. The Spark UI provides well-structured information for different stages, tasks and executors.Enable the Spark UIYou can use the Spark UI to troubleshoot Spark jobs that run for a long time. By launching the Spark history server and enabling the Spark UI logs, you can get information on the stages and tasks. You can use the logs to learn how the tasks are executed by the workers. You can enable Spark UI using the AWS Glue console or the AWS Command Line Interface (AWS CLI). 
For more information, see Enabling the Apache Spark web UI for AWS Glue jobs.After the job is complete, you might see driver logs similar to the following:ERROR executionlogs:128 - example-task-id:example-timeframe INFO [pool-2-thread-1] s3n.MultipartUploadOutputStream (MultipartUploadOutputStream.java:close(414)): close closed:false s3://dox-example-bucket/spark-application-1626828545941.inprogressAfter analyzing the logs for the job, you can launch the Spark history server either on an Amazon Elastic Compute Cloud (Amazon EC2) instance or using Docker. Open the UI, and navigate to the Executor tab to check whether a particular executor is running for a longer time. If so, the uneven distribution of work and under-utilization of available resources could be caused by a data skew in the dataset. In the Stages tab, you can get more information and statistics on the stages that took a long time. You can find details on whether these stages involved shuffle spills that are expensive and time-consuming.Capacity planning for data processing units (DPUs)If all executors contribute equally to the job, but the job still takes a long time to complete, then consider adding more workers to your job to improve the speed. DPU capacity planning can help you to avoid the following:Under-provisioning, which might result in slower execution timesOver-provisioning, which incurs higher costs but provides results in the same amount of timeFrom CloudWatch metrics, you can get information on the number of executors used currently and the maximum number of executors needed. The number of DPUs needed depends on the number of input partitions and the worker type requested.Keep the following in mind when you define the number of input partitions:If the Amazon Simple Storage Service (Amazon S3) files aren't splittable, then the number of partitions is equal to the number of input files.If the Amazon S3 files are splittable, and the data is unstructured/semi-structured, then the number of partitions is equal to the total file size / 64 MB. If the size of each file is less than 64 MB, then the number of partitions is equal to the number of files.If the Amazon S3 files are splittable and the data is structured, then the number of partitions is equal to the total file size / 128 MB.Do the following to calculate the optimal number of DPUs:For example, suppose that the number of input partitions is 428. Then, you can calculate the optimal number of DPUs using the following formula:Maximum number of executors needed = Number of input partitions / Number of tasks per executor = 428/4 = 107Keep in mind the following:The Standard worker type supports 4 tasks per executorG.1X supports 8 tasks per executorG.2X supports 16 tasks per executorThe Standard worker type has two executors in one node, and one of these executors is a driver in Spark. Therefore, you need 108 executors.The number of DPUs needed = (No. of executors / No. of executors per node) + 1 DPU = (108/2) + 1 = 55.Related informationSpecial parameters used by AWS GlueMonitoring jobs using the Apache Spark web UIMonitoring for DPU capacity planningFollow"
https://repost.aws/knowledge-center/glue-job-running-long
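The DPU arithmetic in the capacity-planning section can be wrapped in a small helper. The following Python sketch simply restates the formulas above (tasks per executor by worker type, one extra executor for the driver, then DPUs from executors per node); it's an estimation aid, not an AWS API, and the default of two executors per node matches the Standard worker example in this article.

# Tasks per executor by AWS Glue worker type, as described above.
TASKS_PER_EXECUTOR = {"Standard": 4, "G.1X": 8, "G.2X": 16}

def estimate_dpus(input_partitions: int, worker_type: str = "Standard",
                  executors_per_node: int = 2) -> int:
    """Estimate the DPUs needed for a Glue job from the input partition count."""
    max_executors = -(-input_partitions // TASKS_PER_EXECUTOR[worker_type])  # ceiling division
    total_executors = max_executors + 1          # one executor acts as the Spark driver
    return (total_executors // executors_per_node) + 1

# Example from the article: 428 input partitions on Standard workers -> 55 DPUs.
print(estimate_dpus(428, "Standard"))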
How do I associate a target network with a Client VPN endpoint?
I need to enable my clients to establish a VPN session with an AWS Client VPN endpoint so that they can access network resources. How do I associate a target network with a Client VPN endpoint?
"I need to enable my clients to establish a VPN session with an AWS Client VPN endpoint so that they can access network resources. How do I associate a target network with a Client VPN endpoint?Short descriptionA target network is a subnet in a VPC. When you associate a subnet with a Client VPN endpoint, clients can establish a VPN session. You can associate multiple subnets with a Client VPN endpoint. Note that all subnets must be from the same VPC.Before associating a target network to a Client VPN endpoint, consider the following:The clients will be able to establish a VPN connection to the Client VPN endpoint only after a target network is associated with the Client VPN endpoint.Associating a single target network allows you to establish a VPN session with the Client VPN endpoint. However, it's a best practice to associate at least two target networks from two different Availability Zones for redundancy.The subnet that you associate as the target must have a CIDR block with at least a /27 bitmask (for example, 192.168.0.0/27). Also, there must be at least eight available IP addresses in the subnet.When you associate a subnet with a Client VPN endpoint, the local route of the VPC in which the associated subnet is provisioned is automatically added to the Client VPN endpoint's route table.ResolutionAssociate a target network with a Client VPN endpointOpen the Amazon VPC console.In the navigation pane, choose Client VPN Endpoints.Select the Client VPN endpoint to associate with the target network.Choose Associations, and then choose Associate.For VPC, choose the VPC in which the subnet is provisioned.For Subnet to associate, choose the subnet to associate with the Client VPN endpoint.Choose Associate.Apply a security group to a target networkWhen you associate the first target network with a Client VPN endpoint, the default security group of the VPC is applied in the associated subnet. After you associate the first target network, you can change the security groups that are applied to the Client VPN endpoint. The security group rules that are required depend on the type of VPN access you want to configure.Open the Amazon VPC console.In the navigation pane, choose Client VPN Endpoints.Select the Client VPN endpoint to which you plan to apply the security groups.Choose Security Groups, select the current security group, and then choose Apply Security Groups.Select the new security groups in the list, and then choose Apply Security Groups.(Optional) Disassociate a target network from a Client VPN endpointAfter making sure that there are no clients connected to the Client VPN endpoint, you can disassociate unwanted target networks. You need at least one target network for the clients to establish a connection to the Client VPN endpoint. When you disassociate all target networks, the Client VPN endpoint removes the route that was automatically created when the target networks were associated.Open the Amazon VPC console.In the navigation pane, choose Client VPN Endpoints.Select the Client VPN endpoint with which the target network is associated.Choose Associations.Select the target network to disassociate.Choose Disassociate, and then choose Yes, Disassociate.Follow"
https://repost.aws/knowledge-center/client-vpn-associate-target-network
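The console steps above have direct API equivalents. The following boto3 sketch associates a subnet with a Client VPN endpoint and then replaces the security groups applied to the endpoint for that VPC; the endpoint, subnet, VPC, and security group IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

endpoint_id = "cvpn-endpoint-0123456789abcdef0"   # placeholder endpoint ID
subnet_id = "subnet-0123456789abcdef0"            # placeholder subnet ID
vpc_id = "vpc-0123456789abcdef0"                  # placeholder VPC ID
security_group_id = "sg-0123456789abcdef0"        # placeholder security group ID

# Associate the target network (subnet) with the Client VPN endpoint.
association = ec2.associate_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    SubnetId=subnet_id,
)
print("Association ID:", association["AssociationId"])

# Apply the security groups to use for this endpoint in the VPC.
ec2.apply_security_groups_to_client_vpn_target_network(
    ClientVpnEndpointId=endpoint_id,
    VpcId=vpc_id,
    SecurityGroupIds=[security_group_id],
)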
How do I resolve the error "FAILED: ParseException line 1:X missing EOF at '-' near 'keyword'" in Athena?
"When I run an MSCK REPAIR TABLE or SHOW CREATE TABLE statement in Amazon Athena, I get an error similar to the following: "FAILED: ParseException line 1:X missing EOF at '-' near 'keyword'"."
"When I run an MSCK REPAIR TABLE or SHOW CREATE TABLE statement in Amazon Athena, I get an error similar to the following: "FAILED: ParseException line 1:X missing EOF at '-' near 'keyword'".ResolutionYou get this error when the database name specified in the DDL statement contains a hyphen ("-"). AWS Glue allows database names with hyphens. However, underscores (_) are the only special characters that Athena supports in database, table, view, and column names.In the following example, the database name is alb-database1. When you run MSCK REPAIR TABLE or SHOW CREATE TABLE, Athena returns a ParseException error:Your query has the following error(s):FAILED: ParseException line 1:7 missing EOF at '-' near 'alb'This query ran against the "alb-database1" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: cc5c1234-4c12-4dcb-a123-bff954b305eb.To resolve this issue, recreate the database with a name that doesn't contain any special characters other than underscore (_).Related informationNames for tables, databases, and columnsFollow"
https://repost.aws/knowledge-center/parse-exception-missing-eof-athena
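A quick way to avoid the error going forward is to create databases and tables with underscores instead of hyphens. For example, the following boto3 sketch runs the DDL through the Athena API; the query result location is a placeholder bucket.

import boto3

athena = boto3.client("athena")

# Underscores are the only special characters that Athena supports in names.
athena.start_query_execution(
    QueryString="CREATE DATABASE IF NOT EXISTS alb_database1",
    ResultConfiguration={
        "OutputLocation": "s3://doc-example-bucket/athena-results/"  # placeholder
    },
)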
How do I resolve ORA-00018 or ORA-00020 errors for an Amazon RDS for Oracle DB instance?
I'm trying to connect as the primary user or DBA user on an Amazon Relational Database Service (Amazon RDS) for Oracle DB instance. But I receive one of these errors:ORA-00018 maximum sessions exceededORA-00020 maximum processes exceededHow do I resolve these errors?
"I'm trying to connect as the primary user or DBA user on an Amazon Relational Database Service (Amazon RDS) for Oracle DB instance. But I receive one of these errors:ORA-00018 maximum sessions exceededORA-00020 maximum processes exceededHow do I resolve these errors?Short descriptionThese errors can be caused by either a planned scaling exercise or an unplanned event that causes a large number of DB connections. In these cases, many client sessions are initiated against a DB instance, and one of the following database limits is reached:PROCESSES – the maximum number of user processes allowed.SESSIONS – the maximum number of user sessions allowed.If the maximum connections are reached because of a planned scaling exercise, then increase SESSIONS or PROCESSES, or both. This accommodates your application’s new scale. These two parameters are not dynamic, so modify the parameters and then reboot the instance.If the maximum connections are reached because of an unplanned event, then identify the cause of the event and take appropriate action.For example, your application might overwhelm the database when response times increase because of locking or block contention. In this case, increasing SESSIONS or PROCESSES only increases the number of connections before maxing out again. This can worsen issues caused by contention. This in turn can prevent the Amazon RDS monitoring system from logging in, performing health checks, or taking corrective action, such as rebooting.ResolutionOracle user PROFILEOne of the most common causes for the ORA-18 and ORA-20 is the large number of idle connections. Idle connections remain in a database without being properly closed by either the application or by database administrators. The increase of IDLE connections can cause the database to reach the maximum limit of SESSIONS/PROCESSES parameter. As a result, no new connections are allowed. It's a best practice to set a PROFILE for application connections with a limited IDLE_TIME value.In Oracle databases, each user is assigned to a PROFILE. An Oracle PROFILE is a set of resources that is assigned to each user attached to this PROFILE. One of those resources is IDLE_TIME. IDLE_TIME specifies in minutes the amount of continuous inactivity that's allowed during a session before the query is killed by the database. Long-running queries and other operations are not subject to this limit.This example shows how to create a profile with maximum IDLE_TIME of 30 minutes and assign it to an application user. Any IDLE connection for more than 30 minutes is automatically killed by the database:Create a PROFILE with a limited IDLE_TIME parameter:SQL> select count(*) from v$session where type= 'BACKGROUND';Assign this PROFILE to a specific user: SQL> ALTER USER <username> PROFILE <profile_name>;Scaling RDS instance sizeThe maximum number of SESSIONS/PROCESSES might be reached because of an increase of the incoming workload from application users. By default, in RDS for Oracle, both parameter limits are calculated based on a pre-defined formula that depends on the DB instance class memory. It's not a best practice to manually modify SESSIONS/PROCESSES parameter in this case. 
Instead, scale up/down the instance based on the workload.The RDS default settings for the PROCESSES and SESSIONS parameters are calculated using these formulas:PROCESSES= LEAST({DBInstanceClassMemory/9868951}, 20000)SESSIONS = Oracle Default = (1.5 * PROCESSES) + 22Manually setting either the PROCESSES or SESSIONS parameter beyond its default limit might cause an increase in memory consumption. As a result, the database might crash due to out-of-memory issues. Also, manually setting PROCESSES or SESSIONS might cause a configuration mismatch when scaling up/down the instance. This happens because the PROCESSES and SESSIONS parameters don't depend on the allocated instance memory anymore.LICENSE_MAX_SESSIONSThe LICENSE_MAX_SESSIONS parameter specifies the maximum number of concurrent user sessions allowed. This doesn't apply to Oracle background processes or to users with RESTRICTED SESSION privileges, including users with the DBA role. Set LICENSE_MAX_SESSIONS to a lower value than SESSIONS and PROCESSES. This causes the client connections to get an ORA-00019 error instead of an ORA-18 or ORA-20 error. An ORA-00019 error doesn’t apply to users who have RESTRICTED SESSION privileges. So, primary and RDSADMIN users are able to log on to the DB instance and perform administrative troubleshooting and corrective actions. Also, Amazon RDS monitoring can continue to connect to the database by using RDSADMIN to perform health checks.Note: LICENSE_MAX_SESSIONS was originally intended to limit the usage based on the number of concurrent sessions. Oracle no longer offers licensing based on the number of concurrent sessions, and LICENSE_MAX_SESSIONS is deprecated. But, you can still use the parameter if you use Oracle versions up to 19c. Also, application users shouldn't be granted either the DBA role or RESTRICTED SESSION privileges. For more information, see the Oracle documentation for LICENSE_MAX_SESSIONS.LICENSE_MAX_SESSIONS is a dynamic parameter, so it can be set without restarting the DB instance. For more information, see Working with parameter groups.The LICENSE_MAX_SESSIONS parameter can be set as a formula, similar to the PROCESSES parameter. It's a best practice to set the LICENSE_MAX_SESSIONS parameter to be based on a formula rather than a static value. This helps to avoid misconfiguration when scaling up/down the instance size. For example:To set it to the same value as the PROCESSES parameter: LICENSE_MAX_SESSIONS= LEAST({DBInstanceClassMemory/9868951}, 20000)To set it to 4/5 of the PROCESSES parameter: LICENSE_MAX_SESSIONS= LEAST({DBInstanceClassMemory/12336188}, 20000)Using DEDICATED sessionsIf you use DEDICATED sessions, your client connections might exceed the limit of the PROCESSES parameter (ORA-20). If your client connections exceed the limit, set the value of LICENSE_MAX_SESSIONS below PROCESSES:LICENSE_MAX_SESSIONS = maximum number of client connections only.PROCESSES = LICENSE_MAX_SESSIONS + all background processes, including parallel queries, DBA users including primary users, and a buffer. A buffer allows for unexpected background processes that might occur later. To see how many background processes you have now, run a query like this:SQL> select count(*) from v$session where type= 'BACKGROUND';Note: SESSIONS, which defaults to (1.5 * PROCESSES) + 22, should be sufficient. 
For more information, see the Oracle documentation for SESSIONS.To manually connect to your instance and verify these parameter values, run a command like this:SQL> select name, value from v$parameter where upper(name) in ('SESSIONS','PROCESSES','LICENSE_MAX_SESSIONS');NAME VALUE------------------------------ ------------------------------processes 84sessions 148license_max_sessions 0Using SHARED sessionsIf you use SHARED sessions, your client connections might exceed the limit of the SESSIONS parameter (ORA-18). If your client connections exceed the limit, set the PROCESSES parameter to a higher value.LICENSE_MAX_SESSIONS = maximum number of client connections only.PROCESSES = all background processes, including parallel queries, DBA users (including primary users), and a buffer. Be sure to include settings of SHARED_SERVERS and DISPATCHERS with the count of background processes.SESSIONS = (1.5 * PROCESSES) + 22If you use SHARED servers, and you receive a max processes (ORA-20) error instead of a max sessions (ORA-18) error, then your dispatchers might be overwhelmed. When dispatchers are overwhelmed, the connections are forced to come in as DEDICATED. Increase the number of DISPATCHERS to allow more sessions to connect shared. The SHARED_SERVERS parameter might also need to be increased.To check if you are using SHARED or DEDICATED servers, run a command like this:SQL> select decode(server, 'NONE', 'SHARED', server) as SERVER, count(*) from v$session group by decode(server, 'NONE', 'SHARED',server)Related informationBest practices for working with OracleFollow"
https://repost.aws/knowledge-center/rds-ora-error-troubleshoot
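Because LICENSE_MAX_SESSIONS is dynamic, you can set it through the API without a reboot. The following boto3 sketch applies the formula-based value discussed above to a custom DB parameter group; the parameter group name is a placeholder, and passing a formula as the value is an assumption based on the formula-style defaults shown in this article.

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="my-oracle-params",   # placeholder custom parameter group
    Parameters=[
        {
            "ParameterName": "license_max_sessions",
            # Same formula style as the PROCESSES default discussed above.
            "ParameterValue": "LEAST({DBInstanceClassMemory/9868951}, 20000)",
            "ApplyMethod": "immediate",        # dynamic parameter, no reboot needed
        }
    ],
)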
How can I create and use partitioned tables in Amazon Athena?
I want to create partitioned tables in Amazon Athena and use them to improve my queries.
"I want to create partitioned tables in Amazon Athena and use them to improve my queries.Short descriptionBy partitioning your Athena tables, you can restrict the amount of data scanned by each query, thus improving performance and reducing costs. Partitioning divides your table into parts and keeps related data together based on column values. Partitions act as virtual columns and help reduce the amount of data scanned per query.Consider the following when you create a table and partition the data:You must store your data on Amazon Simple Storage Service (Amazon S3) buckets as a partition.Include the partitioning columns and the root location of partitioned data when you create the table.Choose the appropriate approach to load the partitions into the AWS Glue Data Catalog. The table refers to the Data Catalog when you run your queries.Use partition projection for highly partitioned data in Amazon S3.ResolutionHere are a few things to keep in mind when you create a table with partitions.Store on Amazon S3The data must be partitioned and stored on Amazon S3. The partitioned data might be in either of the following formats:Hive style format (Example: s3://doc-example-bucket/example-folder/year=2021/month=01/day=01/myfile.csv)Note: The path includes the names of the partition keys and their values (Example: year=2021)Non-Hive style format (Example: s3://doc-example-bucket/example-folder/2021/01/01/myfile.csv)Include partitioning information when creating the tableThe CREATE TABLE statement must include the partitioning details. Use PARTITIONED BY to define the partition columns and LOCATION to specify the root location of the partitioned data. Run a query similar to the following:CREATE EXTERNAL TABLE doc-example-table (first string,last string,username string)PARTITIONED BY (year string, month string, day string)STORED AS parquetLOCATION 's3://doc-example-bucket/example-folder'Replace the following in the query:doc-example-table with the name of the table that you are creatingdoc-example-bucket with the name of the S3 bucket where you store your tableexample-folder with the name of your S3 folderfirst, last, and username with the names of the columnsyear, month, and day with the names of the partition columnsLoad partitions into the Data Catalog with an approach that's appropriate for your use caseAfter creating the table, add the partitions to the Data Catalog. You can do so using one of the following approaches:Use the MSCK REPAIR TABLE query for Hive style format data: The MSCK REPAIR TABLE command scans a file system, such as Amazon S3, for Hive-compatible partitions. The command compares them to the partitions that are already present in the table and then adds the new partitions to the Data Catalog. Run a command similar to the following:Note: This approach isn't a best practice if you have more than a few thousand partitions. Your DDL queries might face time out issues. For more information, see Why doesn't my MSCK REPAIR TABLE query add partitions to the AWS Glue Data Catalog?MSCK REPAIR TABLE doc-example-tableUse the ALTER TABLE ADD PARTITION query for both Hive style and non-Hive style format data: The ALTER TABLE ADD PARTITION command adds one or more partitions to the Data Catalog. In the command, specify the partition column name and value pairs along with the Amazon S3 path where the data files for that partition are stored. 
You can add one partition or a batch of partitions per query by running commands similar to the following:ALTER TABLE doc-example-table ADD PARTITION (year='2021', month='01', day='01') LOCATION 's3://doc-example-bucket/example-folder/2021/01/01/'ALTER TABLE doc-example-table ADD PARTITION (year='2021', month='01', day='01') LOCATION 's3://doc-example-bucket/example-folder/2021/01/01/'PARTITION (year='2020', month='06', day='01') LOCATION 's3://doc-example-bucket/example-folder/2020/06/01/'Use the AWS Glue crawler for both Hive and non-Hive style format data: You can use the Glue crawler to automatically infer table schema from your dataset, create the table, and then add the partitions to the Data Catalog. Or, you can use the crawler to only add partitions to a table that's created manually with the CREATE TABLE statement. To use the crawler to add partitions, when you define the crawler, specify one or more existing Data Catalog tables as the source of the crawl instead of specifying data stores. The crawler crawls the data stores specified by the Data Catalog tables. No new tables are created. Instead, the manually created tables are updated, and new partitions are added. For more information, see Setting crawler configuration options.Use partition projection for highly partitioned data in Amazon S3: When you have highly partitioned data in Amazon S3, adding partitions to the Data Catalog can be impractical and time consuming. Queries against a highly partitioned table don't complete quickly. In such cases, you can use the partition projection feature to speed up query processing of highly partitioned tables and automate partition management. In partition projection, partition values and locations are calculated from configuration rather than read from a repository, such as the Data Catalog. This means that there's no need to add partitions to the Data Catalog with partition projection. Because in-memory operations are usually faster than remote operations, partition projection can reduce the runtime of queries against highly partitioned tables. The partition projection feature currently works with enumerated values, integers, dates, or injected partition column types. For more information, see Partition Projection with Amazon Athena.Related informationWhy do I get zero records when I query my Amazon Athena table?Follow"
https://repost.aws/knowledge-center/athena-create-use-partitioned-tables
Why did my AWS DMS task fail when using Binary Reader for Amazon RDS for Oracle?
Why did my AWS Database Migration Service (AWS DMS) task fail when using Binary Reader for Amazon Relational Database Service (Amazon RDS) for Oracle?
"Why did my AWS Database Migration Service (AWS DMS) task fail when using Binary Reader for Amazon Relational Database Service (Amazon RDS) for Oracle?Short descriptionDuring the change data capture (CDC) phase, Oracle provides two methods to read the redo logs: Oracle LogMiner and Binary Reader. The Oracle LogMiner is an SQL interface that accesses the online and archived redo logs. The Binary Reader is an AWS DMS feature that reads and parses the redo logs directly.When using Binary Reader for migrations with a lot of changes, there is lower impact on the Oracle source when compared to using Oracle LogMiner. This is because the archive logs are copied and then parsed on the replication instance. For migrations that have a lot of changes, Binary Reader usually has better CDC performance than Oracle LogMiner. Be sure to provision sufficient network bandwidth to avoid network performance bottlenecks.If the prerequisites aren't met when using Binary Reader, you can receive two types of errors:Permissions errorsExtra connection attribute errorsResolutionPermissions errorsAWS DMS uses the Binary Reader by creating directories on the source database. So, the AWS DMS user account must have the required privileges to access the source Oracle endpoint and create necessary directories. If AWS DMS doesn't have permissions, then you see log entries similar to this:Messages"[SOURCE_CAPTURE ]E: OCI error 'ORA-00604: error occurred at rMeecursive SQL level 1 ORA-20900: Invalid path used for directory: /rdsdbdata/log/arch ORA-06512: at "RDSADMIN.RDSADMIN", line 321 ORA-06512: at line 2' [1022307] (oradcdc_bfilectx.c:164)"To resolve these errors, use the Amazon RDS primary user as the AWS DMS user. The directories are created automatically when an AWS DMS task starts running. If the directories aren't created, first log in to the Oracle database using the primary user. 
Then, run these commands to test if these directories can be created:SQL> exec rdsadmin.rdsadmin_master_util.create_archivelog_dir; SQL> exec rdsadmin.rdsadmin_master_util.create_onlinelog_dir;Check the results by querying the all_directories table:SQL> select directory_path from all_directories where directory_name in ('ONLINELOG_DIR','ARCHIVELOG_DIR');DIRECTORY_PATH--------------------------------------------------------------------------------/rdsdbdata/log/arch/rdsdbdata/log/onlinelogAfter you create the required directories ONLINELOG_DIR and ARCHIVELOG_DIR, restart your AWS DMS task.Extra connection attribute errorsIf you use the Binary Reader but you're missing the necessary extra connection attributes for your Oracle source, then you see this log entry:Messages"[TASK_MANAGER ]E: ORA-00604: error occurred at recursive SQL level 1 ORA-20900: Invalid path used for directory: awsdms_dir_test ORA-06512: at "RDSADMIN.RDSADMIN", line 321 ORA-06512: at line 2 ; Invalid RDS Oracle binary reader db settings, replacePathPrefix should be set to TRUE and usePathPrefix should be set to '/rdsdbdata/log/'; Invalid RDS Oracle binary reader db settings, useAlternateFolderForOnline should be set to TRUE; Invalid RDS Oracle binary reader db setting, oraclePathPrefix should not be empty; Invalid RDS Oracle binary reader db settings; Failed while preparing stream component 'st_0_4MGMBIOJCILNOU3UHICCDBCNFQ'.; Cannot initialize subtask; Stream component 'st_0_4MGMBIOJCILNOU3UHICCDBCNFQ' terminated [1020418] (replicationtask.c:2680)"To use the Binary Reader to capture change data for an Amazon RDS for Oracle source, add these extra connection attributes to the source endpoint:useLogMinerReader=N;useBfile=Y;replacePathPrefix=true;usePathPrefix=/rdsdbdata/log/;useAlternateFolderForOnline=true;oraclePathPrefix=/rdsdbdata/db/ORCL_A/;accessAlternateDirectly=falseRelated informationUser account privileges required on an AWS-managed Oracle source for AWS DMSAccessing online and archived redo logsMigrate an on-premises Oracle database to Amazon RDS for PostgreSQL using an Oracle bystander and AWS DMSFollow"
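As a hedged example, the extra connection attributes can also be applied with the AWS SDK instead of the console. The following Python (boto3) sketch assumes a placeholder source endpoint ARN; note that modify_endpoint overwrites the existing attribute string, so include every attribute that you need.

import boto3

dms = boto3.client("dms", region_name="us-east-1")  # Region is an assumption

extra_attributes = (
    "useLogMinerReader=N;useBfile=Y;replacePathPrefix=true;"
    "usePathPrefix=/rdsdbdata/log/;useAlternateFolderForOnline=true;"
    "oraclePathPrefix=/rdsdbdata/db/ORCL_A/;accessAlternateDirectly=false"
)

# The endpoint ARN below is a placeholder; replace it with your source endpoint ARN.
dms.modify_endpoint(
    EndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:EXAMPLEENDPOINT",
    ExtraConnectionAttributes=extra_attributes,
)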
https://repost.aws/knowledge-center/dms-task-binary-reader-rds-oracle
Why are the emails that I send using Amazon SES failing with the error message "Email rejected per DMARC policy"?
I'm sending emails from Amazon Simple Email Services (Amazon SES) using a verified email address. But the emails are failing with the error message "Email rejected per DMARC policy".
"I'm sending emails from Amazon Simple Email Services (Amazon SES) using a verified email address. But the emails are failing with the error message "Email rejected per DMARC policy".Short descriptionThe following are common reasons why you can receive a Domain-based Message Authentication and Conformance (DMARC) failure:You have a "reject" DMARC policy on your domain, and your email address isn't authenticated through Sender Policy Framework (SPF) or DomainKeys Identified Mail (DKIM). To comply with DMARC, you must authenticate your email messages through SPF or DKIM, or both.Your email address is verified, but your domain isn't. To resolve this issue, you must verify your domain using DKIM to comply with DMARC.ResolutionAuthenticate your email messages through SPF or DKIMYou have a DMARC policy on your domain that's similar to the following:v=DMARC1; p=reject; rua=mailto:dmarcreports@example.com;When you have a DMARC policy that controls your domain's outbound email traffic and your email address isn't authenticated, your email is rejected.To resolve this issue, authenticate your email identity with DKIM or SPF, and comply with DMARC. For step-by-step instructions, see What do I do if my Amazon SES emails fail DMARC validation for SPF alignment or DKIM alignment?Verify your domain using DKIM to comply with DMARCIf you're sending email using an email address that's verified on a domain that's not verified, then your email fails DMARC with no DKIM authentication. Follow these instructions to verify your domain, Verifying a DKIM domain identity with your DNS provider. For methods on how to authenticate your email through DKIM see, Authenticating email with DKIM in Amazon SES.Note:You can't authenticate an email address that's on a domain that's not verified.When you send email using a separately verified email address on a DKIM-configured domain, Amazon SES automatically authenticates the message. However, if you turned off DKIM on a separately verified email address that's on a DKIM-configured domain, then your message isn't authenticated.Related informationWhy are the emails that I send using Amazon SES getting marked as spam?How do I enable DKIM for Amazon SES?Why is DKIM domain failing to verify on Amazon SES?Troubleshooting DKIM problems in Amazon SESFollow"
https://repost.aws/knowledge-center/amazon-ses-send-email-failure-dmarc
How do I troubleshoot Amazon MQ broker connection issues?
I can't connect to my Amazon MQ broker using the web console or wire-level endpoints. How do I troubleshoot the issue?
"I can't connect to my Amazon MQ broker using the web console or wire-level endpoints. How do I troubleshoot the issue?Short descriptionAmazon MQ broker connection issues can occur for many reasons and can result in many types of errors. The following are the most common reasons why Amazon MQ returns a connection-related error:Misconfigured local firewalls and routing tablesMisconfigured security groups and network access control lists (NACLs)Connecting to the wrong port numberDNS resolution issuesUnsupported SDK clientsIncorrect broker visibility settingsMisconfigured SSL/TLSOut-of-date broker endpoint certificatesThe following are examples of some of the error messages that Amazon MQ can return when there's a broker connection issue:SSLHandshakeExceptionJMSExceptionUnknownHostExceptionSocketTimeoutExceptionResolutionTo troubleshoot Amazon MQ broker connection issues, do the following.Verify that your local firewalls and routing tables are configured correctlyFor brokers with public accessibilityConfirm the following:Your application has internet access.Your local firewalls and route tables allow public internet traffic.Your subnet NACLs and broker's security group allow communication through ports that are supported by Amazon MQ.The broker in your Amazon Virtual Private Cloud (Amazon VPC) is in a public subnet with a default route to an internet gateway. For more information, see Enable internet access in the Amazon VPC User Guide.For private brokersConfirm the following:AWS Cloud service-specific configurations associated with your Amazon MQ broker are set up correctly.Your local firewalls and route tables allow inbound and outbound traffic between any associated virtual private networks (VPNs).Your subnet NACLs and broker's security group allow communication through ports that are supported by Amazon MQ.Test your Amazon MQ broker's network connectivityFollow the instructions in step 5 of I can't connect to my broker web console or endpoints in the Amazon MQ Developer Guide.Note: If the nslookup command request times out, or no values are present in the answer section of the command output, do the following:Clear your local DNS cache.Restart your local networking equipment and client.Verify that your local DNS servers are configured correctly.Verify that you're using a supported SDK clientMake sure that you're using the most recent SDK client implementation that's compatible with the broker engine that you're using.For Amazon MQ for ActiveMQ, you must use the AMQP 1.0 protocol. For Amazon MQ for RabbitMQ, you must use the AMPQ 0-9-1 protocol.Make sure that you're using SSL/TLS to communicate with AWS resourcesNote: Amazon MQ doesn't support Mutual Transport Layer Security (TLS) authentication currently.The use of incorrect SSL/TLS versions and unsupported cipher suits results in SSL errors. For more information, see Data protection in Amazon MQ in the Amazon MQ Developer Guide.For a list of supported ciphers for RabbitMQ and ActiveMQ, see Encryption in transit in the Amazon MQ Developer Guide.Note: Some client libraries require SSL to be activated explicitly. 
The following are examples of explicit SSL activation for the Java and Ruby programming languages.(Java) Explicit SSL activation code exampleConnectionFactory factory=new ConnectionFactory();factory.useSslProtocol()(Ruby) Explicit SSL activation code exampleconn = Bunny.new(:tls => true, :verify_peer => false)Verify that your Amazon MQ broker endpoint certificate is up to dateFor more information, see the SSL exceptions section of the Amazon MQ Developer Guide.For instructions on how to tell if the Amazon Trust Services Certificate Authorities (CAs) are in your trust store, see the following article: How to prepare for AWS's move to its own certificate authority.Follow"
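For a Python client connecting to an Amazon MQ for RabbitMQ broker, the following pika sketch shows explicit TLS activation on the AMQPS port 5671; the broker endpoint, user name, and password are placeholders.

import ssl
import pika

# The endpoint and credentials below are placeholders.
params = pika.ConnectionParameters(
    host="b-1234abcd-56ef-78gh-90ij-example.mq.us-east-1.amazonaws.com",
    port=5671,
    credentials=pika.PlainCredentials("example-user", "example-password"),
    ssl_options=pika.SSLOptions(ssl.create_default_context()),
)

connection = pika.BlockingConnection(params)
print("Connection open:", connection.is_open)
connection.close()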
https://repost.aws/knowledge-center/mq-broker-connection-issues
How do I troubleshoot high latency issues when using ElastiCache for Redis?
How do I troubleshoot high latency issues when using Amazon ElastiCache for Redis?
"How do I troubleshoot high latency issues when using Amazon ElastiCache for Redis?Short descriptionThe following are common reasons for elevated latencies or time-out issues in ElastiCache for Redis:Latency caused by slow commands.High memory usage leading to increased swapping.Latency caused by network issues.Client side latency issues.ElastiCache cluster events.ResolutionLatency caused by slow commandsRedis is mostly single-threaded. So, when a request is slow to serve, all other clients must wait to be served. This waiting adds to command latencies. Redis commands also have a time complexity defined using the Big O notation.Use Amazon CloudWatch metrics provided by ElastiCache to monitor the average latency for different classes of commands. It's important to note that common Redis operations are calculated in microsecond latency. CloudWatch metrics are sampled every 1 minute, with the latency metrics showing an aggregate of multiple commands. So, a single command can cause unexpected results, such as timeouts, without showing significant changes in the metric graphs. In these situations use the SLOWLOG command to help determine what commands are taking longer to complete. Connect to the cluster and run the slowlog get 128 command in the redis-cli to retrieve the list. For more information, see How do I turn on Redis Slow log in an ElastiCache for Redis cache cluster?You might also see an increase in the EngineCPUUtilization metric in CloudWatch due to slow commands blocking the Redis engine. For more information, see Why am I seeing high or increasing CPU usage in my ElastiCache for Redis cluster?Examples of complex commands include:KEYS in production environments over large datasets as it sweeps the entire keyspace searching for a specified pattern.Long-running LUA scripts.High memory usage leading to increased swappingRedis starts to swap pages when there is increased memory pressure on the cluster by using more memory than what is available. Latency and timeout issues increase when memory pages are transferred to and from the swap area. The following are indications in CloudWatch metrics of increased swapping:Increasing of SwapUsage.Very low FreeableMemory.High BytesUsedForCache and DatabaseMemoryUsagePercentage metrics.SwapUsage is a host-level metric that indicates the amount of memory being swapped. It's normal for this metric to show non-zero values because it's controlled by the underlying operating system and can be influenced by many dynamic factors. These factors include OS version, activity patterns, and so on. Linux proactively swaps idle keys (rarely accessed by clients) to disk as an optimization technique to free up memory space for more frequently used keys.Swapping becomes a problem when there isn't enough available memory. When this happens, the system starts moving pages back and forth between disk and memory. Specifically, SwapUsage less than a few hundred megabytes doesn't negatively impact Redis performance. There are performance impacts if the SwapUsage is high and actively altering and there isn't enough memory available on the cluster. For more information, see:Why am I seeing high or increasing memory usage in my ElastiCache cluster?Why does swapping occur in ElastiCache?Latency caused by NetworkNetwork latency between the client and the ElastiCache clusterTo isolate network latency between the client and cluster nodes, use TCP traceroute or mtr tests from the application environment. 
Or, use a debugging tool such as the AWSSupport-SetupIPMonitoringFromVPC AWS Systems Manager document (SSM document) to test connections from the client subnet.The cluster is hitting network limitsAn ElastiCache node shares the same network limits as that of corresponding type Amazon Elastic Compute Cloud (Amazon EC2) instances. For example, the node type of cache.m6g.large has the same network limits as the m6g.large EC2 instance. For information on checking the three key network performance components of bandwidth capability, packet-per-second (PPS) performance, and connections tracked, see Monitor network performance for your EC2 instance.To troubleshoot issues with network limits on your ElastiCache node, see Troubleshooting - Network-related limits.TCP/SSL handshake latencyClients connect to Redis clusters using a TCP connection. Creating a TCP connection takes a few milliseconds. The extra milliseconds create additional overhead on Redis operations run by your application and extra pressure on the Redis CPU. It's important to control the volume of new connections when your cluster is using the ElastiCache in-transit encryption feature due to the extra time and CPU utilization needed for a TLS handshake. A high volume of connections rapidly opened (NewConnections) and closed might impact the node’s performance. You can use connection pooling to cache established TCP connections into a pool. The connections are then reused each time a new client tries to connect to the cluster. You can implement connection pooling using your Redis client library (if supported), with a framework available for your application environment, or build it from the ground up. You can also use aggregated commands such as MSET/MGET as an optimization technique.There are a large number of connections on the ElastiCache nodeIt's a best practice to track the CurrConnections and NewConnections CloudWatch metrics. These metrics monitor the number of TCP connections accepted by Redis. A large number of TCP connections might lead to the exhaustion of the 65,000 maxclients limit. This limit is the maximum concurrent connections you can have per node. If you reach the 65,000 limit, you receive the ERR max number of clients reached error. If more connections are added beyond the limit of the Linux server, or of the maximum number of connections tracked, then additional client connections result in connection timed out errors. For information on preventing a large number of connections, see Best practices: Redis clients and Amazon ElastiCache for Redis.Client side latency issuesLatency and timeouts might originate from the client itself. Verify the memory, CPU, and network utilization on the client side to determine if any of these resources are hitting their limits. If the application is running on an EC2 instance, then leverage the same CloudWatch metrics discussed previously to check for bottlenecks. Latency might happen in an operating system that can't be monitored thoroughly by default CloudWatch metrics. Consider installing a monitoring tool inside the EC2 instance, such as atop or the CloudWatch agent.If the timeout configuration values set up on the application side are too small, you might receive unnecessary timeout errors. Configure the client-side timeout appropriately to allow the server sufficient time to process the request and generate the response. For more information, see Best practices: Redis clients and Amazon ElastiCache for Redis.The timeout error received from your application reveals additional details. 
These details include whether a specific node is involved, the name of the Redis data type that's causing timeouts, the exact timestamp when the timeout occurred, and so on. This information helps you to find the pattern of the issue. Use this information to answer questions such as the following:Do timeouts happen frequently during a specific time of day?Did the timeout occur at one client or more clients?Did the timeout occur at one Redis node or at more nodes?Did the timeout occur at one cluster or more clusters?Use these patterns to investigate the most likely client or ElastiCache node. You can also use your application log and VPC Flow Logs to determine if the latency happened on the client side, ElastiCache node, or network.Synchronization of RedisSynchronization of Redis is initiated during backup, replacement, and scaling events. This is a compute-intensive workload that can cause latencies. Use the SaveInProgress CloudWatch metric to determine if synchronization is in progress. For more information, see How synchronization and backup are implemented.ElastiCache cluster eventsCheck the Events section in the ElastiCache console for the time period when latency was observed. Check for background activities such as node replacement or failover events that could be caused by ElastiCache Managed Maintenance and service updates, or for unexpected hardware failures. You receive notification of scheduled events through the PHD dashboard and email.The following is a sample event log:Finished recovery for cache nodes 0001Recovering cache nodes 0001Failover from master node <cluster_node> to replica node <cluster_node> completedRelated informationMonitoring best practices with Amazon ElastiCache for Redis using Amazon CloudWatchDiagnosing latency issues - RedisTroubleshooting ElastiCache for RedisFollow"
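To make the connection pooling and slow log checks above concrete, here is a small redis-py sketch; the endpoint is a placeholder, and TLS settings depend on whether in-transit encryption is enabled on your cluster.

import redis

# Reuse a single connection pool instead of opening a new TCP connection per request.
pool = redis.ConnectionPool(
    host="example-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    max_connections=50,
)
client = redis.Redis(connection_pool=pool)

# Retrieve the 128 most recent slow log entries to find long-running commands.
for entry in client.slowlog_get(128):
    print(entry["id"], entry["duration"], entry["command"])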
https://repost.aws/knowledge-center/elasticache-redis-correct-high-latency
How do I troubleshoot issues with the Amazon Connect CTI Adapter for Salesforce?
I want to troubleshoot common issues with the Amazon Connect CTI Adapter for Salesforce.
"I want to troubleshoot common issues with the Amazon Connect CTI Adapter for Salesforce.Short descriptionThe following are the most common issues with the Amazon Connect CTI Adapter for Salesforce:The Amazon Connect Contact Control Panel (CCP) doesn't open in Salesforce.The CTI Flows aren't running as expected.The recordings, transcripts, or Contact Lens data is missing.The CTRs aren't importing from Amazon Connect to Salesforce.ResolutionThe following resolution applies to version 5.19 of the Amazon Connect Salesforce CTI Adapter. To check your version, complete the following steps:Open the Salesforce service console.Choose Setup.Choose Installed Packages, and then choose Amazon Connect Universal Package.The Amazon Connect CCP doesn't open in SalesforceDon't use a private browsing window, such as an incognito window, as it blocks the necessary cookies to open the CCP.If you're using a SAML setup, then confirm the configuration of your single sign on settings.To troubleshoot the CCP not opening, see Why doesn't the Amazon Connect CCP open in Salesforce after I've set up the Amazon Connect CTI Adapter?If the issue still persists, then take one of the following actions:Confirm the Toolkit for Amazon Connect lists the correct instance URLTo check your Toolkit for Amazon Connect, complete the following steps:Open the Salesforce service console.Choose Setup.Choose Custom code, and then choose Custom settings.Choose the Toolkit for Amazon Connect. The URL listed must match the Amazon Connect instance ID.Review the status of clickjack protectionIf clickjack protection is turned on, then you might see errors. To turn off clickjack protections, complete the following steps:Open the Salesforce service console.Choose Setup.Choose Security, and then choose Session Settings.Under Clickjack Protection, turn off the following settings:Enable clickjack protection for customer Visualforce pages with standard headersEnable clickjack protection for customer Visualforce pages with headers disabledIf you still can't open the Amazon Connect CCP in Salesforce, then complete the following steps:Create a HAR file to capture browser related network issues.Create a case with AWS Support.Attach the HAR file, the event, and an export of the CTI Flow for the event to the support case.The CTI Flows aren't running as expectedConfigure the CTI Flow, and initiate the event. For additional information, see Appendix C: CTI Flow sources and events.If configuring the CTI doesn't resolve the issue, then complete the following steps:Create a HAR file to capture browser related issues. Also, capture the browser console logs.Create a case with AWS Support.Attach the HAR file, browser console logs, the event, and an export of the CTI Flow for the event to the support case.The recordings, transcripts, or Contact Lens data is missingFor additional troubleshooting, see Why can't I see or play call recordings after setting up the Amazon Connect CTI Adapter for Salesforce?To check your recordings, transcripts, and Contact Lens data, open the Salesforce service console. Then, choose AC Contact Channel Analytics.Note: Salesforce Lambda moves only the contact trace records that are calls.RecordingsAll recordings are directly streamed from the Amazon Connect instance. 
If your recordings aren't showing in the Salesforce service console, then use the Amazon Connect instance.To get the recording for a contact ID, use the following example URL:Note: Replace instance-name with your Amazon Connect instance ID and contact-id with the contact ID.https://instance-name.my.connect.aws/get-recording?format=wav&callLegId=contact-idIf using the preceding URL doesn't resolve the issue, then complete the following steps:Create a HAR file to capture the loading of the AC Contact Channel Analytics page for the contact ID where the recording isn't showing.Create a case with AWS Support.Attach the HAR file to the support case.Transcripts and Contact Lens dataIf your transcripts aren't showing in the Salesforce service console, check the following attribute settings and AWS Lambda function transcript:To move a transcript to Salesforce, set the postcallTranscribeEnabled attribute to true in your Amazon Connect contact flow. Also, set the postcallTranscribeLanguage attribute to the desired language, such as EN-US.Review the Lambda function transcript for function timeouts, throttles, or errors that might be causing the import issue. The Lambda functions responsible for moving the transcript to Salesforce are:sfExecuteTranscriptionStateMachinesfSubmitTranscribeJobsfGetTranscribeJobStatussfProcessTranscriptionResultIf there isn't an issue on the Lambda function level, then change the LambdaLoggingLevel to DEBUG, and review the logs for any issues. To view DEBUG level logs, see the AWS CloudFormation console for the Amazon Connect CTI Adapter stack. For more information, see Viewing AWS CloudFormation stack data and resources on the AWS Management Console.If your Contact Lens data isn't showing in the Salesforce service console, then check the following attribute settings and Contact Lens data:To move Contact Lens data to Salesforce, set the contactLensImportEnabled and postcallRecordingImportEnabled to true in your Amazon Connect contact flow.Note: If you turn on redaction of data, then set the postcallRedactedRecordingImportEnabled attribute to true instead of the postcallRecordingImportEnabled attribute.Review the Contact Lens data Lambda functions for function timeouts, throttles, or errors that might be causing the import issue. The Lambda function responsible for moving the Contact Lens data to Salesforce is sfProcessContactLens.If there isn't an issue on the Lambda function level, then change the LambdaLoggingLevel to DEBUG, and review the logs for any issues. To view DEBUG level logs, see the AWS CloudFormation console for the Amazon Connect CTI Adapter stack. For more information, see Viewing AWS CloudFormation stack data and resources on the AWS Management Console.If reviewing the Lambda DEBUG logs doesn't resolve the transcript or Contact Lens data issue, then complete the following steps:Create a case with AWS Support.Attach the Lambda logs for three or more occurrences of the issue to the support case.The CTRs aren't importing from Amazon Connect to SalesforceTo check your CTRs, open the Salesforce service console. Then, choose AC Contact Trace Records.Complete the steps in Contact trace report import. Then, check your attribute settings and Lambda functions:If you set up the Salesforce Lambdas manually, then set the postcallCTRImportEnabled attribute to true.Review the Lambda functions for function timeouts, throttles, or errors that might be causing the import issue. 
The Lambda functions that are responsible for moving the CTRs are sfContactTraceRecord and sfCTRTrigger.If there isn't an issue on the Lambda function level, then change the LambdaLoggingLevel to DEBUG, and review the logs for any issues. To view DEBUG level logs, see the AWS CloudFormation console for the Amazon Connect CTI Adapter stack. For more information, see Viewing AWS CloudFormation stack data and resources on the AWS Management Console.If reviewing the Lambda DEBUG logs doesn't resolve the issue, then complete the following steps:Create a case with AWS Support.Attach the Lambda logs for three or more occurrences of the issue to the support case.Follow"
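When you review the Lambda DEBUG logs, you can also pull them programmatically instead of browsing the console. The following Python (boto3) sketch assumes the standard /aws/lambda/<function-name> log group naming and uses the sfProcessContactLens function named above as an example; confirm the actual log group name for your deployment.

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # Region is an assumption

# Search the last hour of logs for errors, timeouts, or throttles.
response = logs.filter_log_events(
    logGroupName="/aws/lambda/sfProcessContactLens",
    filterPattern="?ERROR ?Task ?Throttl",
    startTime=int((time.time() - 3600) * 1000),
)
for event in response["events"]:
    print(event["timestamp"], event["message"].strip())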
https://repost.aws/knowledge-center/connect-salesforce-cti-adapter-issues
Why am I getting an HTTP 307 Temporary Redirect response from Amazon S3?
"When I send requests to an Amazon Simple Storage Service (Amazon S3) bucket, Amazon S3 returns a 307 Temporary Redirect response. Why am I receiving this error?"
"When I send requests to an Amazon Simple Storage Service (Amazon S3) bucket, Amazon S3 returns a 307 Temporary Redirect response. Why am I receiving this error?ResolutionAfter you create an Amazon S3 bucket, up to 24 hours can pass before the bucket name propagates across all AWS Regions. During this time, you might receive the 307 Temporary Redirect response for requests to Regional endpoints that aren't in the same Region as your bucket. For more information, see Temporary request redirection.To avoid the 307 Temporary Redirect response, send requests only to the Regional endpoint in the same Region as your S3 bucket:If you're using the AWS Command Line Interface (AWS CLI) to access the bucket, configure the AWS CLI. Your AWS CLI must reside in the same Region as your Amazon S3 bucket.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.If you're using an Amazon CloudFront distribution with an Amazon S3 origin, CloudFront forwards requests to the default S3 endpoint ( s3.amazonaws.com). The default S3 endpoint is in the us-east-1 Region. If you must access Amazon S3 within the first 24 hours of creating the bucket, you can change the origin domain name of the distribution. The domain name must include the Regional endpoint of the bucket. For example, if the bucket is in us-west-2, you can change the origin domain name from awsexamplebucketname.s3.amazonaws.com to awsexamplebucket.s3.us-west-2.amazonaws.com.Tip: To reduce the number of DNS redirects and DNS propagation issues, specify the AWS Region of your bucket in all HTTP requests. For example, if you're using the AWS CLI, include the --region parameter in your request to specify the AWS Region.Related informationAWS service endpointsFollow"
https://repost.aws/knowledge-center/s3-http-307-response
How do I perform Git operations on an AWS CodeCommit repository with an instance role on Amazon EC2 instances for Amazon Linux 2?
I want to perform Git operations on an AWS CodeCommit repository. And I want to use an instance role on Amazon Elastic Compute Cloud (Amazon EC2) instances for Amazon Linux 2.
"I want to perform Git operations on an AWS CodeCommit repository. And I want to use an instance role on Amazon Elastic Compute Cloud (Amazon EC2) instances for Amazon Linux 2.Short descriptionUse the AWS Command Line Interface (AWS CLI) credential helper for Git operations on a CodeCommit repository using an instance role on your EC2 instance.Note: Using a credential helper is the only connection method for CodeCommit repositories that doesn't require an AWS Identity and Access Management (IAM) user.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1.    Create an IAM role for your EC2 instance, and then attach the following example IAM policy to the role. Replace arn:aws:codecommit:us-east-1:111111111111:testrepo with the ARN of your CodeCommit repository.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "codecommit:GitPull", "codecommit:GitPush" ], "Resource": "arn:aws:codecommit:us-east-1:111111111111:testrepo" } ]}Note: The policy for step 1 allows the IAM role to perform Git pull and push actions on the CodeCommit repository. For more examples on using IAM policies for CodeCommit, see Using identity-based policies (IAM Policies) for CodeCommit.2.    Attach the IAM role that you created in step 1 to an EC2 instance.3.    Install Git on your EC2 instance.Note: For more information, see Downloads on the Git website.4.    To set up the credential helper on the EC2 instance, run the following commands:$ git config --global credential.helper '!aws codecommit credential-helper $@' $ git config --global credential.UseHttpPath trueNote: The commands in step 4 specify the use of the Git credential helper with the AWS credential profile. The credential profile allows Git to authenticate with AWS to interact with CodeCommit repositories. To authenticate, Git uses HTTPS and a cryptographically signed version of your EC2 instance role.5.    To configure your name and email address explicitly, run the following commands:$ git config --global user.email "testuser@example.com"$ git config --global user.name "testuser"Note: Your name and email address are automatically configured based on your user name and hostname.6.    To clone the repository to the EC2 instance, run the following command:$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/testrepo7.    Create a commit in your CodeCommit repository.Note: If you're using Windows, see Setup steps for HTTPS connections to AWS CodeCommit repositories on Windows with the AWS CLI credential helper.Related informationSetup steps for HTTPS connections to AWS CodeCommit repositories on Linux, macOS, or Unix with the AWS CLI credential helperHow do I perform Git operations on an AWS CodeCommit repository with an instance role on Amazon EC2 instances for Windows?Follow"
https://repost.aws/knowledge-center/codecommit-git-repositories-ec2
Why is my Linux instance not booting after I changed its type to a Nitro-based instance type?
"I changed my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance to a Nitro-based instance type, and now it doesn't boot."
"I changed my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance to a Nitro-based instance type, and now it doesn't boot.Short descriptionHere are some common reasons why a Linux instance might not boot after you change it to a Nitro-based type:The Elastic Network Adapter (ENA) enaSupport attribute is disabled for the instance.The ENA module isn't installed on the instance.The NVMe module isn't installed on the instance, or, if installed, the NVMe module isn't loaded in the initramfs image of the instance.You are trying to mount the file systems at boot time in the "/etc/fstab" file using a device name. Amazon Elastic Block Store (Amazon EBS) volumes are exposed as NVMe devices to these instance types, and the device names are changed. To avoid this, mount the file systems using UUID/Label. For more information, see Amazon EBS and NVMe on Linux instances.To resolve these issues, confirm that ENA is turned on and that your Linux instance meets the Nitro-based instance module and file system mounting requirements.Or, you can also run the AWSSupport-MigrateXenToNitroLinux Systems Manager Automation runbook. This runbook migrates an Amazon EC2 Linux Xen without manual configuration. For more information, see AWSSupport-MigrateXenToNitroLinux.ResolutionMake sure that ENA is turned on1.    To confirm that ENA is turned on, see Test whether enhanced networking is turned on, and then follow the instructions under Instance Attribute (enaSupport).2.    If ENA isn't turned on, run the modify-instance-attribute action. For more information, see Turn on enhanced networking on the Amazon Linux AMI.Run the NitroInstanceChecks scriptThe NitroInstanceChecks script checks your instance and provides a pass/fail status of these requirements:Verifies that the NVMe module is installed on your instance. If it is installed, then the script verifies that the module is loaded in the initramfs image.Verifies that the ENA module is installed on your instance.Analyzes /etc/fstab and looks for block devices being mounted using device names.This script is supported on the following OS versions:Red Hat derivatives: Red Hat Linux, Red Hat Enterprise Linux, CentOSAmazon Linux, Amazon Linux 2Debian derivatives: Debian, UbuntuNote: For more information on the ENA driver on Red Hat, see How do I install and activate the latest ENA driver for enhanced network support on an Amazon EC2 instance running Red Hat 6/7?To run the NitroInstanceChecks script:1.    Take a snapshot of your volume or create an Amazon Machine Image (AMI) of an instance before making any changes so that you have a backup.2.    Change your instance type to its original type.3.    Download the script to your instance and make it executable:# chmod +x nitro_check_script.sh4.    Run the script as a root user or sudo:# sudo ./nitro_check_script.sh5.    At the prompt, type y or n (or No): Type y for the script to regenerate and modify the /etc/fstab file, and then replace the device name of each partition with its UUID. The original fstab file is saved as /etc/fstab.backup.$(date +%F-%H:%M:%S). For example, /etc/fstab.backup.2019-09-01-22:06:05. Type n or No to print the correct /etc/fstab file in the output, but not replace it.A successful output looks like this:------------------------------------------------OK NVMe Module is installed and available on your instanceOK ENA Module is installed and available on your instanceOK fstab file looks fine and does not contain any device names.------------------------------------------------6.    
After all the requirements are met, change the instance to a Nitro-based instance type.Follow"
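If you prefer to check and turn on the enaSupport attribute from code rather than the console, the following Python (boto3) sketch shows one way to do it; the instance ID and Region are placeholders, and the instance must be stopped before you modify the attribute.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption
instance_id = "i-0123456789abcdef0"  # placeholder instance ID

# Check whether ENA support is already turned on.
attribute = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
if not attribute["EnaSupport"].get("Value", False):
    # Turn on ENA support; the instance must be in the stopped state first.
    ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})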
https://repost.aws/knowledge-center/boot-error-linux-nitro-instance
How do I set up Okta as a SAML identity provider in an Amazon Cognito user pool?
I want to use Okta as a Security Assertion Markup Language 2.0 (SAML 2.0) identity provider (IdP) in an Amazon Cognito user pool. How do I set that up?
"I want to use Okta as a Security Assertion Markup Language 2.0 (SAML 2.0) identity provider (IdP) in an Amazon Cognito user pool. How do I set that up?Short descriptionAmazon Cognito user pools allow sign-in through a third party (federation), including through an IdP, such as Okta. For more information, see Adding user pool sign-in through a third party and Adding SAML identity providers to a user pool.A user pool integrated with Okta allows users in your Okta app to get user pool tokens from Amazon Cognito. For more information, see Using tokens with user pools.ResolutionCreate an Amazon Cognito user pool with an app client and domain nameCreate a user pool.Note: During creation, the standard attribute email is selected by default. For more information, see Configuring user pool attributes.Create an app client in your user pool. For more information, see Add an app to enable the hosted web UI.Note: When adding an app client, clear the Generate client secret check box. In certain authorization flows, such as the authorization code grant flow and token refresh flow, authorization servers use an app client secret to authorize a client to make requests on behalf of a user. For the implicit grant flow used in this setup, an app client secret isn't required.Add a domain name for your user pool.Sign up for an Okta developer accountNote: If you already have an Okta developer account, sign in.On the Okta Developer signup webpage, enter the required information, and then choose SIGN UP. The Okta Developer Team sends a verification email to the email address that you provided.In the verification email, find the sign-in information for your account. Choose ACTIVATE MY ACCOUNT, sign in, and finish creating your account.Create a SAML app in OktaOpen the Okta Developer Console. For more information about the console, see Okta’s Redesigned Admin Console and Dashboard.In the navigation menu, expand Applications, and then choose Applications.Choose Create App Integration.In the Create a new app integration menu, choose SAML 2.0 as the Sign-in method.Choose Next.For more information, see Prepare your integration in the Build a Single Sign-On (SSO) Integration guide on the Okta Developer website.Configure SAML integration for your Okta appOn the Create SAML Integration page, under General Settings, enter a name for your app.(Optional) Upload a logo and choose the visibility settings for your app.Choose Next.Under GENERAL, for Single sign on URL, enter https://yourDomainPrefix.auth.region.amazoncognito.com/saml2/idpresponse.Note: Replace yourDomainPrefix and region with the values for your user pool. You can find these values in the Amazon Cognito console on the Domain name page for your user pool.For Audience URI (SP Entity ID), enter urn:amazon:cognito:sp:yourUserPoolId.Note: Replace yourUserPoolId with your Amazon Cognito user pool ID. 
You can find this value in the Amazon Cognito console on the General settings page for your user pool.Under ATTRIBUTE STATEMENTS (OPTIONAL), add a statement with the following information:For Name, enter the SAML attribute name http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.For Value, enter user.email.For all other settings on the page, leave them as their default values or set them according to your preferences.Choose Next.Choose a feedback response for Okta Support.Choose Finish.For more information, see Create your integration in the Build a Single Sign-On (SSO) Integration guide on the Okta Developer website.Assign a user to your Okta applicationOn the Assignments tab for your Okta app, for Assign, choose Assign to People.Choose Assign next to the user that you want to assign.Note: If this is a new account, the only option available is to choose yourself (the admin) as the user.(Optional) For User Name, enter a user name, or leave it as the user's email address, if you want.Choose Save and Go Back. Your user is assigned.Choose Done.For more information, see Assign users in the Build a Single Sign-On (SSO) Integration guide on the Okta Developer website.Get the IdP metadata for your Okta applicationOn the Sign On tab for your Okta app, find the Identity Provider metadata hyperlink. Right-click the hyperlink, and then copy the URL.For more information, see Specify your integration settings in the Build a Single Sign-On (SSO) Integration guide on the Okta Developer website.Configure Okta as a SAML IdP in your user poolIn the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under Federation, choose Identity providers.Choose SAML.Under Metadata document, paste the Identity Provider metadata URL that you copied.For Provider name, enter Okta. For more information, see Choosing SAML identity provider names.(Optional) Enter any SAML identifiers (Identifiers (Optional)) and enable sign-out from the IdP (Okta) when your users sign out from your user pool (Enable IdP sign out flow).Choose Create provider.For more information, see Creating and managing a SAML identity provider for a user pool (AWS Management Console).Map email address from IdP attribute to user pool attributeIn the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under Federation, choose Attribute mapping.On the attribute mapping page, choose the SAML tab.Choose Add SAML attribute.For SAML attribute, enter the SAML attribute name http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.For User pool attribute, choose Email from the list.For more information, see Specifying identity provider attribute mappings for your user pool.Change app client settings for your user poolIn the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under App integration, choose App client settings.On the app client page, do the following:Under Enabled Identity Providers, select the Okta and Cognito User Pool check boxes.For Callback URL(s), enter a URL where you want your users to be redirected after they log in. For testing, you can enter any valid URL, such as https://www.example.com/.For Sign out URL(s), enter a URL where you want your users to be redirected after they log out. 
For testing, you can enter any valid URL, such as https://www.example.com/.Under Allowed OAuth Flows, be sure to select at least the Implicit grant check box.Under Allowed OAuth Scopes, be sure to select at least the email and openid check boxes.Choose Save changes.For more information, see App client settings terminology.Construct the endpoint URLUsing values from your user pool, construct this login endpoint URL:https://yourDomainPrefix.auth.region.amazoncognito.com/login?response_type=token&client_id=yourClientId&redirect_uri=redirectUrlBe sure to do the following:Replace yourDomainPrefix and region with the values for your user pool. You can find these values in the Amazon Cognito console on the Domain name page for your user pool.Replace yourClientId with your app client's ID, and replace redirectUrl with your app client's callback URL. You can find these values in the Amazon Cognito console on the App client settings page for your user pool.For more information, see How do I configure the hosted web UI for Amazon Cognito? and LOGIN endpoint.Test the endpoint URLEnter the constructed login endpoint URL in your web browser.On your login endpoint webpage, choose Okta.Note: If you're redirected to your app client's callback URL, you're already logged in to your Okta account in your browser. The user pool tokens appear in the URL in your web browser's address bar.On the Okta Sign In page, enter the user name and password for the user that you assigned to your app.Choose Sign in.After logging in, you're redirected to your app client's callback URL. The user pool tokens appear in the URL in your web browser's address bar.(Optional) Skip the Amazon Cognito hosted UIIf you want your users to skip the Amazon Cognito hosted web UI when signing in to your app, use this endpoint URL instead:https://yourDomainPrefix.auth.region.amazoncognito.com/oauth2/authorize?response_type=token&identity_provider=samlProviderName&client_id=yourClientId&redirect_uri=redirectUrl&scope=allowedOauthScopesBe sure to do the following:Replace yourDomainPrefix and region with the values for your user pool. You can find these values in the Amazon Cognito console on the Domain name page for your user pool.Replace samlProviderName with the name of the SAML provider in your user pool (Okta).(Optional) If you added an identifier for your SAML IdP earlier in the Identifiers (optional) field, you can replace identity_provider=samlProviderName with idp_identifier=idpIdentifier, replacing idpIdentifier with your custom identifier string.Replace yourClientId with your app client's ID, and replace redirectUrl with your app client's callback URL. You can find these values in the Amazon Cognito console on the App client settings page for your user pool.Replace allowedOauthScopes with the specific scopes that you want your Amazon Cognito app client to request. For example, scope=email+openid.For more information, see How do I configure the hosted web UI for Amazon Cognito? and AUTHORIZATION endpoint.Related informationSAML user pool IdP authentication flowHow do I set up a third-party SAML identity provider with an Amazon Cognito user pool?How do I set up Okta as an OpenID Connect identity provider in an Amazon Cognito user pool?Follow"
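As a convenience, the login endpoint URL can be assembled in code. The following Python sketch uses the same placeholder values described above; substitute your own domain prefix, Region, app client ID, and callback URL.

from urllib.parse import urlencode

# Placeholder values from this article; replace them with your own.
domain_prefix = "yourDomainPrefix"
region = "us-east-1"
client_id = "yourClientId"
redirect_uri = "https://www.example.com/"

query = urlencode({
    "response_type": "token",
    "client_id": client_id,
    "redirect_uri": redirect_uri,
})
login_url = f"https://{domain_prefix}.auth.{region}.amazoncognito.com/login?{query}"
print(login_url)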
https://repost.aws/knowledge-center/cognito-okta-saml-identity-provider
How do I create an Amazon RDS event subscription?
"I want to be notified when there's an event (for example, creation, failure, or low storage) on my Amazon Relational Database Service (Amazon RDS) resources. How do I subscribe to Amazon RDS event notifications?"
"I want to be notified when there's an event (for example, creation, failure, or low storage) on my Amazon Relational Database Service (Amazon RDS) resources. How do I subscribe to Amazon RDS event notifications?ResolutionTo subscribe to Amazon RDS event notifications, see Subscribing to Amazon RDS event notification. For a list of event categories and messages that you can subscribe to, see Amazon RDS event categories and event messages.It's a best practice to test your event subscription after its status becomes active. You can test your subscription by initiating one action in the Amazon RDS resources that you specify in the subscription that triggers the event. For example, you can test a subscription with a Multi-AZ failover or a reboot. If your subscription works as expected, you receive a notification message about the event.Related informationUsing Amazon RDS event notificationFollow"
https://repost.aws/knowledge-center/create-rds-event-subscription
How do I set up SAML 2.0-based authentication for my Amazon Connect instance using IAM Identity Center?
I want to set up SAML 2.0-based authentication for my Amazon Connect instance using AWS IAM Identity Center (successor to AWS Single Sign-On). How do I do that?
"I want to set up SAML 2.0-based authentication for my Amazon Connect instance using AWS IAM Identity Center (successor to AWS Single Sign-On). How do I do that?Short descriptionTo set up SAML 2.0-based authentication for your Amazon Connect instance, do the following:Create an Amazon Connect instance that uses SAML 2.0-based authentication.Create an IAM Identity Center cloud application to connect to your Amazon Connect instance.Create an AWS Identity and Access Management (IAM) identity provider (IdP)Create an IAM policy for your Amazon Connect instance that allows the GetFederationToken action.Create an IAM role that grants federated users access to your Amazon Connect instance.Map your Amazon Connect instance's user attributes to IAM Identity Center attributes.Create users in IAM Identity Center and assign them to your IAM Identity Center cloud application.Test your setup by logging in to Amazon Connect using your IdP and one of the IAM Identity Center user credentials that you created.Important: Make sure that you follow these steps in the same AWS Region that your Amazon Connect instance is in.ResolutionCreate an Amazon Connect instance that uses SAML 2.0-based authenticationFollow the instructions in Create an Amazon Connect instance. When you configure the instance, make sure that you do the following:When configuring identity management for your instance, choose SAML 2.0-based authentication.When specifying the administrator for your instance, select Add a new admin. Then, provide a name for the user account in Amazon Connect.Note: The password for this user is managed through your IdP.When configuring telephony options for your instance, accept the default options.When configuring the data storage settings for your instance, accept the default options.Create an IAM Identity Center cloud application to connect to your Amazon Connect instanceFollow the instructions in Add and configure a cloud application in the IAM Identity Center user guide. When you configure your cloud application, make sure that you do the following:Choose Amazon Connect as the cloud application's service provider.Under IAM Identity Center metadata, download the IAM Identity Center and the IAM Identity Center Certificate.Note: You need these files to set up an IAM IdP. If you use an IdP other than IAM Identity Center, you must get the SAML metadata files from that IdP.Under Application properties, accept the default Relay state.Create an IAM IdPFollow the instructions in Creating and managing an IAM identity provider (console). When you create the IdP, make sure that you do the following:For Provider name, enter ConnectIAM Identity Center.For Metadata document, choose the IAM Identity Center SAML metadata file that you downloaded in the previous step.Important: Make a note of the IdP's Amazon Rsource Name (ARN). 
You need it to map your Amazon Connect instance's user attributes to IAM Identity Center attributes.Create an IAM policy for your Amazon Connect instance that allows the GetFederationToken actionUse the following JSON template to create an IAM policy named Connect-SSO-Policy.Important: Replace <connect instance ARN> with your Amazon Connect instance's ARN.{ "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Action": "connect:GetFederationToken", "Resource": [ "<connect instance ARN>/user/${aws:userid}" ] } ]}For more information, see Creating IAM policies and GetFederationToken.Create an IAM role that grants federated users access to your Amazon Connect instanceFollow the instructions in Creating a role for SAML in the AWS IAM user guide. When you create the IAM role, make sure that you do the following:For SAML provider, enter Connect-SSO.Choose Allow programmatic and AWS Management Console access.For Policy, choose the Connect-SSO-Policy that you created in the previous step.For Role name, enter Connect-SSO.Important: Make note of the IAM role's ARN. You need it to map your Amazon Connect instance's user attributes to IAM Identity Center attributes.Map your Amazon Connect instance's user attributes to IAM Identity Center attributesFollow the instructions in Map attributes in your application to IAM Identity Center attributes. When you map your attributes, make sure that add the following attributes and values:Important: Replace <IAM role ARN> with your IAM role's ARN. Replace <IAM IdP ARN> with your IAM IdP's ARN.AttributeValueSubject${user:email}https://aws.amazon.com/SAML/Attributes/RoleSessionName${user:email}https://aws.amazon.com/SAML/Attributes/Role<IAM role ARN>,<IAM IdP ARN>For more information, see Attribute mappings.Create users in IAM Identity Center and assign them to your IAM Identity Center cloud applicationFollow the instructions in Manage identities in IAM Identity Center.Test your setup by logging in to Amazon Connect using your IdP and one of the IAM Identity Center user credentials that you createdFollow the instructions in How to sign in to the user portal in the IAM Identity Center user guide.Related informationTroubleshoot SAML with Amazon ConnectConfigure IAM Identity Center using Microsoft Azure Active Directory for Amazon ConnectConfigure IAM Identity Center for Amazon Connect using Okta Follow"
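If you script the IdP creation instead of using the console, the following Python (boto3) sketch creates the SAML provider from the downloaded metadata file; the file name and the Connect-SSO provider name are assumptions based on the steps above.

import boto3

iam = boto3.client("iam")

# The metadata file name is an assumption; use the file you downloaded from
# the IAM Identity Center cloud application configuration.
with open("iam-identity-center-saml-metadata.xml") as metadata_file:
    metadata = metadata_file.read()

response = iam.create_saml_provider(
    SAMLMetadataDocument=metadata,
    Name="Connect-SSO",
)
print(response["SAMLProviderArn"])  # use this ARN in the attribute mapping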
https://repost.aws/knowledge-center/connect-saml-2-authentication-aws-sso
How can I reactivate my suspended AWS account?
"My AWS account is suspended, and I want to regain access to my account and its services."
"My AWS account is suspended, and I want to regain access to my account and its services.ResolutionNote: If you closed your AWS account within the past 90 days and you want to reopen it, see Can I reopen my closed AWS account? If the suspended account is a member account in an organization, then contact the owner of the management account.If your account is suspended due to outstanding charges, you can reinstate your account by paying all past due charges. You can pay your past due charges in the Billing and Cost Management console.Here are a few important things to keep in mind if your account is suspended due to outstanding charges:Resources on the account can be deleted at any time while your account is suspended.If your account isn't reactivated within 30 days of suspension, then your account will be closed.If your account isn't reactivated within 90 days of closure, then your account is terminated.Terminated accounts can't be reopened, and all resources on the account are lost.To pay the past due charges on an account, first verify that your current payment information is accurate:Check Payment methods to confirm that the information associated with your payment method is correct.If your default payment method is no longer valid, add a new payment method, and then set it as the default payment method.Then, follow these steps to pay your outstanding charges:Open the Billing and Cost Management console.On the navigation pane, choose Payments.You can view your outstanding invoices on the Payments Due tab.On the Payments Due tab, select the invoice that you want to pay, and then choose Complete payment.On the Complete a payment page, confirm that the summary matches what you want to pay, and then choose Verify and Pay.If you paid your past due charges in full with a credit card, then services automatically reactivate within a few minutes.If you paid your past due charges in full with a different payment method, then contact AWS Support to reactivate your account.Sometimes account services can take up to 24 hours to reactivate an account. If you have paid your past due charges in full and your account isn't reactivated within 24 hours, then contact AWS Support.If your account was suspended by AWS, then you might need to provide additional information so AWS can review your reinstatement request. Check your email and spam folder to see if AWS needs any information from you to complete the reactivation process. Then, respond with the requested information and your account will be reviewed for reinstatement.If you have additional questions, or can't provide the requested information, then contact AWS Support. If you can't sign in to your account, then contact the AWS Account Verification team using the AWS Account Verification support form.Related informationManaging your paymentsMaking payments, checking unapplied funds, and viewing your payment historyClosing an accountWhat do I do if I'm having trouble signing in to or accessing my AWS account?Follow"
https://repost.aws/knowledge-center/reactivate-suspended-account
How do I resolve the error "An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation" in Amazon ECS?
"When I try to run the AWS Command Line Interface (AWS CLI) command execute-command in Amazon Elastic Container Service (Amazon ECS), I get the following error:"An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later""
"When I try to run the AWS Command Line Interface (AWS CLI) command execute-command in Amazon Elastic Container Service (Amazon ECS), I get the following error:"An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later"Short descriptionYou might get this error due to the following reasons:The Amazon ECS task role doesn't have the required permissions to run the execute-command command.The AWS Identity and Access Management (IAM) role or user that's running the command doesn't have the required permissions.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.ResolutionCheck the Amazon ECS task role permissionsYou get this error when the Amazon ECS task role doesn't have the required permissions. You might resolve this error by creating an IAM policy with the required permissions and then attaching the policy to the Amazon ECS task role.1.    Create the following IAM policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssmmessages:CreateControlChannel", "ssmmessages:CreateDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:OpenDataChannel" ], "Resource": "*" } ]}Note: Be sure that these permissions are not denied at the AWS Organizations level.2.    Attach the policy to the Amazon ECS task role.There might be delays in propagating these changes at the task level. Therefore, wait for some time after attaching the policy to the task role, and then try running the execute-command command.Check the IAM user or role permissionsBe sure that the IAM user or role that's running the execute-command command has the following permissions:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ecs:ExecuteCommand", "Resource": "arn:aws:ecs:example-region:example-arn:cluster/example-cluster/*" } ]}If you're still getting the error, run the amazon-ecs-exec-checker script. This script allows you to check and validate your AWS CLI environment and the Amazon ECS cluster or task. The script also notifies you about the prerequisite that's not met.Related informationEnabling and using ECS ExecFollow"
https://repost.aws/knowledge-center/ecs-error-execute-command
How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
How to configure IAM task roles in Amazon ECS to resolve an "Access Denied" error message when my application makes AWS API calls.
"How to configure IAM task roles in Amazon ECS to resolve an "Access Denied" error message when my application makes AWS API calls.Short descriptionIf you don't configure IAM task roles correctly, you can receive "Access Denied" error messages when your application makes AWS API calls.To avoid this error, provide your AWS Identity and Access Management (IAM) task role in the task definition for Amazon Elastic Container Service (Amazon ECS). Your tasks can use this IAM role for AWS API calls. The IAM task role must have all the permissions required by your application. If a task can't find the IAM task role due to configuration issues, then the Amazon Elastic Compute Cloud (Amazon EC2) instance role is used.ResolutionTo correctly configure IAM roles for your task, check the following:Confirm that the ECS container agent is runningTo confirm that the ECS container agent is running, run the following command:docker psTurn on IAM roles in your ECS container agent configuration file1.    Open your /etc/ecs/ecs.config file.2.    To turn on IAM roles for tasks in containers with bridge and default network modes, set ECS_ENABLE_TASK_IAM_ROLE to true. See the following example:ECS_ENABLE_TASK_IAM_ROLE=true3.    To turn on IAM roles for tasks in containers with the host network mode, set ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST to true. See the following example:ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true4.    To update the configuration file, restart the AECS container agent by running either of the following commands:For Amazon ECS-optimized Amazon Linux AMIs:sudo stop ecssudo start ecsFor Amazon ECS-optimized Amazon Linux 2 AMIs:sudo systemctl restart ecsConfirm that your IAM policy has the correct trust relationship with your Amazon ECS tasksTo confirm that the IAM role has the correct trust relationship, update your IAM policy as follows:{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}Verify proxy settings for the ECS container agentIf you're using HTTP_PROXY on your Amazon ECS container agent configuration, apply the following NO_PROXY setting:NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sockConfirm that you're using the right AWS SDKThe application running in your container must use a version of the AWS SDK no older than the July 2016 version.To update your AWS SDK, see Tools to build on AWS.Meet the requirements for non-Amazon ECS Optimized AMIsIf you're using a non-Amazon ECS Optimized AMI, set the required rules for iptables.Note: If you restart the instance, the rules for iptables are reset to the default. To avoid a reset, run one of the following commands to save the rules:For Amazon ECS-optimized Amazon Linux AMIs:sudo service iptables saveFor Amazon ECS-optimized Amazon Linux 2 AMIs:sudo iptables-save | sudo tee /etc/sysconfig/iptables && sudo systemctl enable --now iptablesMake the credential path environment variable available to non-PID 1 processesThe environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is available only to PID 1 processes within a container. If the container is running multiple processes or init processes (such as wrapper script, start script, or supervisord), the environment variable is unavailable to non-PID 1 processes.To set your environment variable so that it's available to non-PID 1 processes, export the environment variable in the .profile file. 
For example, run the following command to export the variable in the Dockerfile for your container image:RUN echo 'export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)' >> /root/.profileNow additional processes can access the environment variable.Note: There's a dependency on the strings and grep commands when you export the environment variable.Related informationTroubleshooting IAM roles for tasksAdditional configuration for Windows IAM roles for tasksFollow"
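To check whether the task role credentials are actually reachable from inside the container, you can query the credentials endpoint. This is a minimal sketch, not part of the article; run it inside the container (for example, through docker exec or ECS Exec).
# Print the relative URI that the agent injected for this task
echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
# Query the credentials endpoint; a JSON response with AccessKeyId, SecretAccessKey,
# and Token indicates that the task role is being delivered correctly
curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"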
https://repost.aws/knowledge-center/ecs-iam-task-roles-config-errors
How do I install a wildcard Let's Encrypt SSL certificate in Amazon Lightsail?
How do I install a wildcard SSL certificate for my website in an Amazon Lightsail instance?
"How do I install a wildcard SSL certificate for my website in an Amazon Lightsail instance?Short descriptionThe following resolution covers installing a wildcard Let's Encrypt SSL certificate for websites hosted in a Lightsail instance that doesn't use a Bitnami stack. Examples of these instance blueprints include Amazon Linux 2, Ubuntu, and so on. If you have a different instance blueprint or want to install a standard certificate, see one of the following:Standard Let's Encrypt certificatesFor information on installing a standard Let's Encrypt SSL certificate (not a wildcard) in a Lightsail instance that doesn't use a Bitnami stack, such as Amazon Linux 2, Ubuntu, and so on, see How do I install a standard Let's Encrypt SSL certificate in a Lightsail instance?For information on installing a standard Let's Encrypt SSL certificate (not a wildcard) in a Lightsail instance with a Bitnami stack, such as WordPress, LAMP, Magento, and so on, see How do I install a standard Let's Encrypt SSL certificate in a Bitnami stack hosted on Amazon Lightsail?Wildcard Let's Encrypt certificates (for example, *.example.com)For information on installing a wildcard Let's Encrypt certificate in a Lightsail instance with a Bitnami stack, such as WordPress, Lamp, Magento, MEAN, and so on, see How do I install a wildcard Let's Encrypt SSL certificate in a Bitnami stack hosted on Amazon Lightsail?ResolutionThe steps used to install a wildcard Let's Encrypt SSL certificate on your Lightsail instance depend on which DNS provider your domain uses. To determine which method to use, verify if your DNS provider is listed in the Cerbot DNS list in DNS Plugins. Then, select the appropriate method to use:Method 1: Use this method if your domain uses one of the listed DNS providers.Method 2: Use this method if your domain is not using any of the listed DNS providers.Method 1Prerequisites and limitationsThe following steps cover installing the certificate in the server. You must manually complete additional steps, such as configuring the webserver to use the certificate and setting up HTTPS redirection.The domain must be using one of the DNS providers listed in the Certbot DNS List.Note: This method requires the installation of the Certbot tool before beginning. For installation instructions, see How do I install the Certbot package in my Lightsail instance for Let's Encrypt installation?In the following example, the DNS provider is Amazon Route 53. For instructions for other supported DNS providers, see DNS Plugins.1.    Create an AWS Identify and Access Management (IAM) user with programmatic access. For the minimum permissions required to be attached to the IAM user for Certbot to complete the DNS challenge, see certbot-dns-route-53.2.    Run the following commands in the instance to open the /root/.aws/credentials file in nano editor.sudo mkdir /root/.awssudo nano /root/.aws/credentials3.    Copy the following lines to the file. Then save the file by pressing ctrl+x, then y, and then ENTER.In the following command, replace aws_access_key_id with the access key ID created in step 1. Replace aws_secret_access_key with the secret access key created in step 1.[default]aws_access_key_id = AKIA************Eaws_secret_access_key = 1yop**************************l4.    Create a Let's Encrypt certificate in the server. 
Replace example.com with your domain name.If your domain uses Amazon Route 53 as the DNS provider, run the following command:sudo certbot certonly --dns-route53 -d example.com -d *.example.comAfter the SSL certificate generates successfully, you receive the message "Successfully received certificate". The certificate and key file locations are also provided. Save these file locations to a notepad for use in step 6.5.    Set up automatic certificate renewal.If the Certbot package was installed using snapd, then renewal is configured automatically through systemd timers or cron jobs.If the OS distribution is Amazon Linux 2 or FreeBSD, then the Certbot package isn't installed using snapd. In this case, you must configure the renewal manually by running the following command:echo "30 0,12 * * * root python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew" | sudo tee -a /etc/crontab > /dev/null6.    At this point, only the certificate installation and renewal setup are complete. You still must configure your web server to use this certificate and set up HTTPS redirection. This configuration varies and depends on the web server setup you have in your instance. Refer to your web server's documentation for instructions on completing these steps.Method 2Prerequisites and limitationsThe following steps cover installing the certificate in the server. You must manually complete additional steps, such as configuring the web server to use the certificate and setting up HTTPS redirection.Automatic certificate renewal isn't supported in this method.Note: This method requires the installation of the Certbot tool before beginning. For installation instructions, see How do I install the Certbot package in my Lightsail instance for Let's Encrypt installation?1.    This method requires adding TXT records in the domain's DNS provider. This process might take some time, so it's a best practice to run the commands in Linux GNU Screen to prevent the session from timing out. To start a Screen session, enter the following command:screen -S letsencrypt2.    Enter the following command to start Certbot in interactive mode. This command tells Certbot to use a manual authorization method with DNS challenges to verify domain ownership. Replace example.com with your domain name.sudo certbot certonly --manual --preferred-challenges dns -d example.com -d *.example.com3.    You receive a prompt to verify that you own the specified domain by adding TXT records to the DNS records for your domain. Let's Encrypt provides either a single or multiple TXT records that you must use for verification.4.    When you see a TXT record on the screen, first add the provided record in your domain's DNS. DO NOT PRESS ENTER until you confirm that the TXT record is propagated to internet DNS. Also, DO NOT PRESS CTRL+D as it will terminate the current screen session.5.    To confirm the TXT record has been propagated to internet DNS, look it up at DNS Text Lookup. Enter the following text into the text box and choose TXT Lookup to run the check. Be sure to replace example.com with your domain._acme-challenge.example.com6.    If your TXT records have propagated to the internet’s DNS, you see the TXT record value on the page. You can now go back to the screen and press ENTER.Note: If you're removed from the shell, use the command screen -r SESSIONID to get back in. Get the Session ID by running the screen -ls command.7.    If the Certbot prompt asks you to add another TXT record, complete steps 4-7 again.8.    
After the SSL certificate generates successfully, you receive the message "Successfully received certificate". The certificate and key file locations are also provided. Save these file locations to a notepad for use in the next step.9.    Only the certificate installation is complete. You still must configure your web server to use this certificate and set up HTTPS redirection. This configuration varies and depends on the web server setup you have in your instance. Refer to your web server's documentation for instructions on completing these steps.Follow"
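Before pressing ENTER in Method 2, and to confirm what Certbot issued afterward, you can also check the TXT record and the installed certificates from the instance itself. This is a minimal sketch, not from the article; example.com is a placeholder for your domain.
# Confirm that the _acme-challenge TXT record has propagated (compare the value to what Certbot displayed)
dig +short TXT _acme-challenge.example.com
# List the certificates that Certbot manages, including file paths and expiry dates
sudo certbot certificates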
https://repost.aws/knowledge-center/lightsail-wildcard-ssl-certificate
How do memory and computing power affect AWS Lambda cost?
I want to understand how memory and computing power affect AWS Lambda cost.
"I want to understand how memory and computing power affect AWS Lambda cost.Short descriptionMemory is available to Lambda developers to control the performance of a function. The amount of memory allocated to a Lambda function is between 128 MB and 10,240 MB. The Lambda console defaults new functions to 128 MB, and many developers choose 128 MB for their functions.However, it's a best practice to choose 128 MB only for simple Lambda functions. For example, functions that transform and route events to other AWS services. If the function performs any of the following actions, then it has a higher memory allocation:Imports libraries.Imports Lambda layers.Interacts with data loaded from Amazon Simple Storage Service (Amazon S3).Interacts with data loaded from Amazon Elastic File System (Amazon EFS).ResolutionLambda function pricingLambda charges are based on the number of requests for your functions and the duration that it takes for your code to run. Lambda counts a request each time it invokes in response to an event notification. For example, from Amazon Simple Notification Service (SNS) or Amazon EventBridge. Additionally, Lambda counts a request each time it starts in response to an invoke call. For example, from Amazon API Gateway, or using the AWS SDK, including test invokes from the Lambda console.Duration calculates from the time that your code begins to run until it returns or stops, rounded up to the nearest 1 millisecond. For more information, see AWS Lambda Pricing. The price depends on the amount of memory that you allocate to your function. The amount of memory also determines the amount of virtual CPU available to a function. Adding more memory proportionally increases the amount of CPU, which then increases the available computational power. If a function is CPU-bound, network-bound, or memory-bound, then changing the memory setting can improve performance. An increase in memory size initiates an equivalent increase in CPU available to your function.Effect of memory power on Lambda costThe Lambda service charges for the total amount of gigabyte-seconds consumed by a function. An increase of memory affects the overall cost if the total duration stays constant. Gigabyte-seconds are the product of total memory (in gigabytes) and duration (in seconds). However, if you increase the memory available, the duration decreases. As a result, the overall cost increase is negligible or even decreases.For example, 1000 invocations of a function that computes prime numbers has the following average durations at different memory levels:MemoryDurationCost128 MB11.722 s$0.024628512 MB6.678 s$0.0280351024 MB3.194 s$0.0268301536 MB1.465 s$0.02463In this example, at 128 MB, the function takes 11.722 seconds on average to complete, at a cost of $0.024628 for 1000 invocations. When the memory is increased to 1536 MB, the duration average drops to 1.465 seconds, so the cost is $0.024638. For a one-thousandth of a cent cost difference, the function has a 10-fold improvement in performance.If memory consumption approaches the configured maximum, then monitor functions with Amazon CloudWatch and set alarms. This helps to identify memory-bound functions. For CPU-bound and IO-bound functions, monitor the duration to provide more insight. In these cases, an increase of memory helps resolve the compute or network bottlenecks. For more information, see Monitoring and observability.Follow"
https://repost.aws/knowledge-center/lambda-memory-compute-cost
How do I resolve 504 HTTP errors in Amazon EKS?
I get HTTP 504 (Gateway timeout) errors when I connect to a Kubernetes Service that runs in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
"I get HTTP 504 (Gateway timeout) errors when I connect to a Kubernetes Service that runs in my Amazon Elastic Kubernetes Service (Amazon EKS) cluster.Short descriptionYou get HTTP 504 errors when you connect to a Kubernetes Service pod that's located in an Amazon EKS cluster configured for a load balancer.To resolve HTTP 503 errors, see How do I resolve HTTP 503 (Service unavailable) errors when I access a Kubernetes Service in an Amazon EKS cluster?To resolve HTTP 504 errors, complete the following troubleshooting steps.ResolutionVerify that your load balancer's idle timeout is set correctlyThe load balancer established a connection to the target, but the target didn't respond before the idle timeout period elapsed. By default, the idle timeout for the Classic Load Balancer and Application Load Balancer is 60 seconds.1.    Review the Amazon CloudWatch metrics for your Classic Load Balancer or Application Load Balancer.Note: At least one request has timed out when:The latency data points are equal to your currently configured load balancer timeout value.There are data points in the HTTPCode_ELB_5XX metric.2.    Modify the idle timeout for your load balancer so that the HTTP request can complete within the idle timeout period. Or configure your application to respond quicker.To modify the idle timeout for your Classic Load Balancer, update the service definition to include the service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout annotation.To modify the idle timeout for your Application Load Balancer, update the Ingress definition to include the alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds annotation.Verify that your backend instances have no backend connection errorsIf a backend instance closes a TCP connection before it's reached the idle timeout value, then the load balancer fails to fulfill the request.1.    Review the CloudWatch BackendConnectionErrors metrics for your Classic Load Balancer and the target group's TargetConnectionErrorCount for your Application Load Balancer.2.    Activate keep-alive settings on your backend worker node or pods, and set the keep-alive timeout to a value greater than the load balancer's idle timeout.To see if the keep-alive timeout is less than the idle timeout, verify the keep-alive value in your pods or worker node. See the following example for pods and nodes.For pods:$ kubectl exec your-pod-name -- sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probesFor nodes:$ sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probesOutput:net.ipv4.tcp_keepalive_time = 7200net.ipv4.tcp_keepalive_intvl = 75net.ipv4.tcp_keepalive_probes = 9Verify that your backend targets can receive traffic from the load balancer over the ephemeral port rangeThe network access control list (ACL) for the subnet doesn't allow traffic from the targets to the load balancer nodes on the ephemeral ports (1024-65535).You must configure security groups and network ACLs to allow data to move between the load balancer and the backend targets. For example, depending on the load balancer type, these targets can be IP addresses or instances.You must configure the security groups for ephemeral port access. To do so, connect the security group egress rule of your nodes and pods to the security group of your load balancer. 
For more information, see Security groups for your Amazon Virtual Private Cloud (Amazon VPC) and Add and delete rules.Related informationI receive HTTP 5xx errors when connecting to web servers running on EC2 instances configured to use Classic Load Balancing. How do I troubleshoot these errors?HTTP 504: Gateway timeoutMonitor your Classic Load BalancerMonitor your Application Load BalancersTroubleshoot a Classic Load Balancer: HTTP errorsFollow"
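The annotations named above can be applied with kubectl. This is a minimal sketch, not from the article; the Service and Ingress names and the 120-second value are placeholders.
# Classic Load Balancer created by a Service of type LoadBalancer
kubectl annotate service my-service \
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout="120" --overwrite
# Application Load Balancer created by the AWS Load Balancer Controller for an Ingress
kubectl annotate ingress my-ingress \
  alb.ingress.kubernetes.io/load-balancer-attributes="idle_timeout.timeout_seconds=120" --overwrite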
https://repost.aws/knowledge-center/eks-http-504-errors
How can I configure CloudFront to serve my content using an alternate domain name over HTTPS?
I want to configure CloudFront to serve my content using an alternate domain name over HTTPS. How can I do this?
"I want to configure CloudFront to serve my content using an alternate domain name over HTTPS. How can I do this?Short descriptionBy default, you can use CloudFront domain names only to serve content over HTTPS. However, you can associate your own domain name with CloudFront to serve your content over HTTPS.To associate your own domain name with CloudFront, add analternate domain names (CNAME).ResolutionRequest an SSL certificate in AWS Certificate Manager (ACM) or import your own certificateTo use an Amazon-issued certificate, see Requesting a public certificate.When using a public certificate, keep in mind:You must request the certificate in the US East (N. Virginia) Region.You must have permission to use and request the ACM certificate.To use an imported certificate, see Importing certificates into AWS Certificate Manager.When using an imported certificate, keep in mind:Your key length must be 1024 or 2048 bits and cannot exceed 2048 bits.You must import the certificate in the US East (N. Virginia) Region.You must have permission to use and import the SSL/TLS certificate.The certificate must be issued from a trusted CA listed at Mozilla Included CA Certificate List.The certificate must be in X.509 PEM format.For more information, see Requirements for using SSL/TLS certificates with CloudFront. Note: It is a best practice to import your certificate to ACM. However, you can also import your certificate in IAM certificate store.Attach an SSL certificate and alternate domain names to your distributionAccess the CloudFront console.Select the distribution that you want to update.On the General tab, choose Edit.For Alternate Domain Names (CNAMEs), add the applicable alternate domain names. Separate domain names with commas, or type each domain name on a new line.Note: The alternate domain that you are trying to add must not have a DNS record that points to a different CloudFront distribution.For SSL Certificate, choose Custom SSL Certificate. Then, choose a certificate from the list.Note: Up to 100 certificates are available in the dropdown list. If you have more than 100 certificates and the certificate that you want isn't listed, enter the certificate Amazon Resource Name (ARN). If you previously uploaded a certificate to the IAM certificate store but it isn't available in the dropdown list, confirm that you correctly uploaded the certificate.If you want CloudFront to serve your HTTPS content using dedicated IP addresses, turn on Legacy Client support.Note: When using Legacy Client support, you incur additional charges if you associate your SSL/TLS certificate with a distribution where the setting is turned on. For more information, see Amazon CloudFront Pricing.Choose Save changes.Configure CloudFront to require HTTPS between viewers and CloudFrontAccess the CloudFront console.On the Behaviors tab, choose the cache behavior that you want to update. Then, choose Edit.For Viewer Protocol Policy, choose:Redirect HTTP to HTTPS. Viewers can use both protocols, but HTTP requests are automatically redirected to HTTPS requests. -or-**HTTPS Only.**Viewers can access your content only if they're using HTTPS. 
If a viewer sends an HTTP request instead of an HTTPS request, CloudFront returns HTTP status code 403 (Forbidden) and does not return the file.Choose Save changes.Repeat steps 1-4 for each additional cache behavior that you want to require HTTPS for between viewers and CloudFront.Create DNS records to point your domain to CloudFront distributionUsing Amazon Route 53Create an alias resource record set. With an alias resource record set, there are no charges for Route 53 queries. Additionally, you can create an alias resource record set for the root domain name (example.com), which DNS doesn’t allow for CNAMEs. For more information, see Configuring Amazon Route 53 to route traffic to a CloudFront distribution.Using another DNS service providerUse the method provided by your DNS service provider to add a CNAME record for your domain. The CNAME record will redirect DNS queries from your alternate domain name (for example: www.example.com) to the CloudFront domain name for your distribution (for example: example.cloudfront.net).Follow"
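If you prefer the AWS CLI for the certificate step, the request can also be made there. This is a minimal sketch, not from the article; the domain names are placeholders, DNS validation is chosen as an example, and the certificate must be requested in us-east-1 for use with CloudFront.
# Request a public certificate for the alternate domain name (and optionally a wildcard)
aws acm request-certificate \
  --domain-name www.example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS \
  --region us-east-1
# Confirm that the certificate was issued before attaching it to the distribution
aws acm list-certificates --region us-east-1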
https://repost.aws/knowledge-center/cloudfront-https-content
How do I analyze my Application Load Balancer access logs using Amazon Athena?
I want to analyze my Application Load Balancer access logs with Amazon Athena.
"I want to analyze my Application Load Balancer access logs with Amazon Athena.I want to analyze my Application Load Balancer access logs with Amazon Athena.Short descriptionElastic Load Balancing doesn't activate access logging by default. When you activate access logging, you specify an Amazon Simple Storage Service (Amazon S3) bucket. All Application Load Balancer and Classic Load Balancer access logs are stored in the Amazon S3 bucket. To troubleshoot or analyze the performance of your load balancer, use Athena to analyze the access logs in Amazon S3.Note: Although you can use Athena to analyze access logs for Application Load Balancers and Classic Load Balancers, this resolution applies only to Application Load Balancers.ResolutionCreate a database and table for Application Load Balancer logsTo analyze access logs in Athena, create a database and table with the following steps:1.    Open the Athena console.2.    To create a database, run the following command in the Query Editor. It's a best practice to create the database in the same AWS Region as the S3 bucket.create database alb_db3.    In the database that you created in step 2, create an alb_logs table for the Application Load Balancer logs. For more information, see Creating the table for Application Load Balancer logs.Note: For better query performance, you can create a table with partition projection. In partition projection, Athena calculates partition values and locations from configuration rather than read from a repository, such as the AWS Glue Data Catalog. For more information, see Partition projection with Amazon Athena.CREATE EXTERNAL TABLE IF NOT EXISTS alb_logs ( type string, time string, elb string, client_ip string, client_port int, target_ip string, target_port int, request_processing_time double, target_processing_time double, response_processing_time double, elb_status_code int, target_status_code string, received_bytes bigint, sent_bytes bigint, request_verb string, request_url string, request_proto string, user_agent string, ssl_cipher string, ssl_protocol string, target_group_arn string, trace_id string, domain_name string, chosen_cert_arn string, matched_rule_priority string, request_creation_time string, actions_executed string, redirect_url string, lambda_error_reason string, target_port_list string, target_status_code_list string, classification string, classification_reason string ) PARTITIONED BY ( day STRING ) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe' WITH SERDEPROPERTIES ( 'serialization.format' = '1', 'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) (.*) (- |[^ ]*)\" \"([^\"]*)\" ([A-Z0-9-_]+) ([A-Za-z0-9.-]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^\"]*)\" ([-.0-9]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^ ]*)\" \"([^\s]+?)\" \"([^\s]+)\" \"([^ ]*)\" \"([^ ]*)\"') LOCATION 's3://your-alb-logs-directory/AWSLogs/1111222233334444/elasticloadbalancing//' TBLPROPERTIES ( "projection.enabled" = "true", "projection.day.type" = "date", "projection.day.range" = "2022/01/01,NOW", "projection.day.format" = "yyyy/MM/dd", "projection.day.interval" = "1", "projection.day.interval.unit" = "DAYS", "storage.location.template" = "s3://your-alb-logs-directory/AWSLogs/1111222233334444/elasticloadbalancing//${day}" )Note: Replace the table name and S3 locations according to your use case.Or, use the following query to create a table with partitions:CREATE EXTERNAL TABLE IF NOT EXISTS 
alb_logs_partitioned ( type string, time string, elb string, client_ip string, client_port int, target_ip string, target_port int, request_processing_time double, target_processing_time double, response_processing_time double, elb_status_code string, target_status_code string, received_bytes bigint, sent_bytes bigint, request_verb string, request_url string, request_proto string, user_agent string, ssl_cipher string, ssl_protocol string, target_group_arn string, trace_id string, domain_name string, chosen_cert_arn string, matched_rule_priority string, request_creation_time string, actions_executed string, redirect_url string, lambda_error_reason string, target_port_list string, target_status_code_list string, classification string, classification_reason string ) PARTITIONED BY(day string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe' WITH SERDEPROPERTIES ( 'serialization.format' = '1', 'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) (- |[^ ]*)\" \"([^\"]*)\" ([A-Z0-9-]+) ([A-Za-z0-9.-]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^\"]*)\" ([-.0-9]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^ ]*)\" \"([^\s]+?)\" \"([^\s]+)\" \"([^ ]*)\" \"([^ ]*)\"') LOCATION 's3://my_log_bucket/AWSLogs/1111222233334444/elasticloadbalancing/us-east-1/'Then, use the ALTER TABLE ADD PARTITION command to load the partitions:ALTER TABLE alb_logs_partitioned ADD PARTITION (day = '2022/05/21')  LOCATION's3://my_log_bucket/AWSLogs/1111222233334444/elasticloadbalancing/us-east-1/2022/05/21/'Note: It's not a best practice to use an AWS Glue crawler on the Application Load Balancer logs.4.    Under Tables in the navigation pane, choose Preview table from the menu. You can view the data from the Application Load Balancer access logs in the Results window.5.    Use the Query editor to run SQL statements on the table. 
You can save queries, view previous queries, or download query results in .csv file format.Example queriesIn the following examples, be sure to modify the table name, column values, and other variables to fit your query:View the first 100 access log entries in chronological orderUse this query for analysis and troubleshooting:SELECT * FROM alb_logs ORDER by time ASC LIMIT 100List all client IP addresses that accessed the Application Load Balancer, and how many times they accessed the Application Load Balancer Use this query for analysis and troubleshooting:SELECT distinct client_ip, count() as count from alb_logs GROUP by client_ip ORDER by count() DESC;List the average amount of data (in kilobytes) that's passing through the Application Load Balancer in request or response pairsUse this query for analysis and troubleshooting:SELECT (avg(sent_bytes)/1000.0 + avg(received_bytes)/1000.0) as prewarm_kilobytes from alb_logs;List all targets that the Application Load Balancer routes traffic to and the number of routed requests per target, by percentage distribution   Use this query to identify potential target traffic imbalances:SELECT target_ip, (Count(target_ip)* 100.0 / (Select Count(*) From alb_logs)) as backend_traffic_percentage FROM alb_logs GROUP by target_ip ORDER By count() DESC;List the times that a client sent a request to the Application Load Balancer and then closed the connection before the idle timeout elapsed (HTTP 460 error)Use this query to troubleshoot HTTP 460 errors:SELECT * from alb_logs where elb_status_code = '460';List the times that a client request wasn't routed because the listener rule forwarded the request to an empty target group (HTTP 503 error)Use this query to troubleshoot HTTP 503 errors:SELECT * from alb_logs where elb_status_code = '503';List clients, in descending order, by the number of times that each client visited a specified URLUse this query to analyze traffic patterns:SELECT client_ip, elb, request_url, count(*) as count from alb_logs GROUP by client_ip, elb, request_url ORDER by count DESC;List the 10 URLs that Firefox users accessed most frequently, in descending orderUse this query to analyze traffic distribution and patterns:SELECT request_url, user_agent, count(*) as count FROM alb_logs WHERE user_agent LIKE '%Firefox%' GROUP by request_url, user_agent ORDER by count(*) DESC LIMIT 10;List clients, in descending order, by the amount of data (in megabytes) that each client sent in their requests to the Application Load BalancerUse this query to analyze traffic distribution and patterns:SELECT client_ip, sum(received_bytes/1000000.0) as client_datareceived_megabytes FROM alb_logs GROUP by client_ip ORDER by client_datareceived_megabytes DESC;List each time in a specified date range when the target processing time was more than 5 secondsUse this query to troubleshoot latency in a specified time frame:SELECT * from alb_logs WHERE (parse_datetime(time,'yyyy-MM-dd''T''HH:mm:ss.SSSSSS''Z')     BETWEEN parse_datetime('2018-08-08-00:00:00','yyyy-MM-dd-HH:mm:ss')     AND parse_datetime('2018-08-08-02:00:00','yyyy-MM-dd-HH:mm:ss')) AND (target_processing_time >= 5.0);Count the number of HTTP GET requests received by the load balancer grouped by the client IP addressUse this query to analyze incoming traffic distribution:SELECT COUNT(request_verb) AS count, request_verb, client_ip FROM alb_logs_partitioned WHERE day = '2022/05/21' GROUP by request_verb, client_ip;Follow"
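The same queries can be submitted without opening the console by using the Athena CLI. This is a minimal sketch, not from the article; the results bucket is a placeholder, and the database matches the alb_db example above.
# Submit a query and capture its execution ID
aws athena start-query-execution \
  --query-string "SELECT * FROM alb_logs ORDER BY time ASC LIMIT 100" \
  --query-execution-context Database=alb_db \
  --result-configuration OutputLocation=s3://your-athena-results-bucket/ \
  --query 'QueryExecutionId' --output text
# Fetch the results once the query finishes (replace the ID with the value returned above)
aws athena get-query-results --query-execution-id 11111111-2222-3333-4444-555555555555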
https://repost.aws/knowledge-center/athena-analyze-access-logs
How do I pass CommaDelimitedList parameters to nested stacks in AWS CloudFormation?
I want to pass CommaDelimitedList parameters to nested stacks in AWS CloudFormation.
"I want to pass CommaDelimitedList parameters to nested stacks in AWS CloudFormation.Short descriptionYou can't pass values of type CommaDelimitedList to a nested stack. Instead, use the Fn::Join intrinsic function in your parent stack to convert type CommaDelimitedList to type String.ResolutionThe following example shows you how to pass a list of SecurityGroupIds from a parent stack to a nested stack.1.    Open the JSON or YAML file of your parent stack, and then set the Type of SecurityGroupIds to CommaDelimitedList.In the Resources section of the JSON file, the Fn::Join function returns the combined string. In the Resources section of the YAML file, the !Join function returns the combined string. In both JSON and YAML files, the combined string converts the SecurityGroupIds parameter type from CommaDelimitedList to String.Example parent JSON file:{ "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { "SubnetId": { "Type": "AWS::EC2::Subnet::Id" }, "SecurityGroupIds": { "Type": "List<AWS::EC2::SecurityGroup::Id>" }, "KeyName": { "Type": "AWS::EC2::KeyPair::KeyName" }, "ImageId": { "Type": "String" } }, "Resources": { "Instance": { "Type": "AWS::CloudFormation::Stack", "Properties": { "TemplateURL": "https://s3.amazonaws.com/cloudformation-templates-us-east-2/nested.yml", "Parameters": { "SubnetId": { "Ref": "SubnetId" }, "SecurityGroupIds": { "Fn::Join": [ ",", { "Ref": "SecurityGroupIds" } ] }, "KeyName": { "Ref": "KeyName" }, "ImageId": { "Ref": "ImageId" } } } } }}Example parent YAML file:AWSTemplateFormatVersion: 2010-09-09Parameters: SubnetId: Type: 'AWS::EC2::Subnet::Id' SecurityGroupIds: Type: 'List<AWS::EC2::SecurityGroup::Id>' KeyName: Type: 'AWS::EC2::KeyPair::KeyName' ImageId: Type: StringResources: Instance: Type: 'AWS::CloudFormation::Stack' Properties: TemplateURL: 'https://s3.amazonaws.com/cloudformation-templates-us-east-2/nested.yml' Parameters: SubnetId: !Ref SubnetId SecurityGroupIds: !Join - ',' - !Ref SecurityGroupIds KeyName: !Ref KeyName ImageId: !Ref ImageIdNote: If you pass two subnets, such as ["subnet-aaaa, subnet-bbbb"], the output of Fn::Join is {"subnet-aaaa, subnet-bbbb"}.2.    In the JSON or YAML file of your nested stack, set the Type of SecurityGroupIds to CommaDelimitedList.Example nested JSON file:{ "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { "SubnetId": { "Type": "String" }, "SecurityGroupIds": { "Type": "CommaDelimitedList" }, "KeyName": { "Type": "String" }, "ImageId": { "Type": "String" } }, "Resources": { "Ec2instance": { "Type": "AWS::EC2::Instance", "Properties": { "ImageId": { "Ref": "ImageId" }, "KeyName": { "Ref": "KeyName" }, "SecurityGroupIds": { "Ref": "SecurityGroupIds" }, "SubnetId": { "Ref": "SubnetId" } } } }}Example nested YAML file:AWSTemplateFormatVersion: 2010-09-09Parameters: SubnetId: Type: String SecurityGroupIds: Type: CommaDelimitedList KeyName: Type: String ImageId: Type: StringResources: Ec2instance: Type: 'AWS::EC2::Instance' Properties: ImageId: !Ref ImageId KeyName: !Ref KeyName SecurityGroupIds: !Ref SecurityGroupIds SubnetId: !Ref SubnetIdNote: In the nested stack, the combined string from the parent stack is passed to SecurityGroupIds as CommaDelimitedList. For example, the value {"sg-aaaaa, sg-bbbbb"} is converted back to ["sg-aaaaa", "sg-bbbbb"]. Therefore, SecurityGroupIds must be directly referenced by SecurityGroupIds: !Ref SecurityGroupIds and not as a list of strings.Related informationWorking with nested stacksAWS::CloudFormation::StackFollow"
https://repost.aws/knowledge-center/cloudformation-parameters-nested-stacks
Why aren't my EMR Spot Instances being provisioned during a cluster resize?
My Amazon EMR Spot Instances aren't being provisioned during a resize of my EMR cluster.
"My Amazon EMR Spot Instances aren't being provisioned during a resize of my EMR cluster.ResolutionAmazon Elastic Compute Cloud (Amazon EC2) might interrupt your Spot Instance at any time for the following reasons:Lack of Spot capacity.The request constraints can't be met.The Spot Price is higher than the designated maximum price.Your Spot account quota is exhausted. If this is the case, then you can request an increase.For more information, see Why did Amazon EC2 terminate my Spot Instance?Note: It's a best practice to use Spot Instances for workloads that are stateless, fault-tolerant, and flexible enough to withstand interruptions.Also, Spot Instances and On-Demand Instances might not be resized because the bootstrap scripts were modified or contain errors.Check the logs for the bootstrap script at /emr/instance-controller/log/bootstrap-actions or s3://cluster_id/node-failed/bootstrap-actions/stderr.gz. The logs show the error STARTUP_SCRIPT_FAILED_RET_CODE.For example, the following bootstrap action log shows that bootstrap action 1 (emr_bootstrap_actions.sh) failed:Another app is currently holding the yum lock; waiting for it to exit... The other application is: yum Memory : 125 M RSS (444 MB VSZ) Started: Tue Jul 19 05:36:36 2022 - 00:03 ago State : Running, pid: 7914Error: Package: falcon-sensor-4.18.0-6403.amzn2.x86_64 (/falcon-sensor-4.18.0-6403.amzn2.x86_64) Requires: systemdIf you see the preceding error, then the following actions happen:All of the new replacement nodes terminate.The node stops provisioning new replacement instances.The core node instance group goes into arrested mode as shown in the following example:"state": "ARRESTED", "message": "Instance group ig-2JN5xxxxxxxx in Amazon EMR cluster j-37H4xxxxxxx (emr-xxxxx-spark-cluster) was arrested at for the following reason: Error provisioning instances."=====Related informationSpot Instance interruptionsSpot request statusSpot Instance best practicesWhy is my Spot Instance terminating even though the maximum price is higher than the Spot price?Follow"
https://repost.aws/knowledge-center/emr-spot-instance-provisioning
How do I use AWS WAF with AWS Global Accelerator to block Layer 7 HTTP method and headers from accessing my application?
"Using the AWS WAF-enabled Application Load Balancer with AWS Global Accelerator, I want to block requests to my application if the request method is POST or if the user-agent header value matches curl/7.79."
"Using the AWS WAF-enabled Application Load Balancer with AWS Global Accelerator, I want to block requests to my application if the request method is POST or if the user-agent header value matches curl/7.79.Short descriptionYou can use AWS WAF and the Application Load Balancer with Global Accelerator to block access to the Layer 7 HTTP method and headers. In this architecture, AWS WAF uses the web access control list (web ACL) rules with the Application Load Balancer. The load balancer becomes an endpoint to the Global Accelerator.Note: AWS Global Accelerator itself doesn't support AWS WAF.The web ACL rule associated with the load balancer evaluates incoming traffic and forwards only the rule-compliant requests to the endpoint.ResolutionThe web ACL rule provides fine-grained control over all of the HTTP(S) web requests to your protected resources. Use the rule to configure a string or a regex match with one or more request attributes, such as the Uniform Resource Identifier (URI), query string, HTTP method, or header key.PrerequisitesMake sure you have the following traffic flow configuration for Global Accelerator, Application Load Balancer, and AWS WAF:User --> Global Accelerator --> Application Load Balancer with AWS WAF --> EC2 instanceNote: In this setup, the user accesses the application by making a request to the accelerator. The accelerator routes user traffic to the Application Load Balancer and AWS WAF associated with it. AWS WAF evaluates and either blocks or allows the user request that has the Layer 7 HTTP method or the user-agent header value.Create a rule-based web ACLUse the following 3-step process to create a rule-based web ACL. For more information, see Creating a web ACL.Create a web ACLNavigate to the AWS WAF Console to create web ACL.Choose Create web ACL.Name the web ACL. Select Region of the Application Load Balancer.Associate the Application Load Balance with the web ACL.Choose Next.Add a custom rule to the web ACLContinue to configure as follows:Choose Add Rules. Select from the drop down Add my own rules and rule groups.Under Rule builder, add a rule.Name the rule (for example, deny_User-Agent_with_POST).Under Type, select Regular rule.Configure the match criteria for the ruleComplete the remaining steps:Select matches at least one of the statements (OR).Under statement1 complete as follows:Inspect: single headerHeader field name: User-AgentMatch type: Exactly matches stringString to match: curl/7.79.0Under statement2 complete as follows:Inspect: HTTP methodMatch type: Exactly matches stringString to match: POSTChoose Block for Action.Test the results with user-agent header valueAccess the application using the Global Accelerator's URL and user-agent header value curl/7.79.0, with the GET request method.curl http://<your Global Accelerator URL> -v -H "User-Agent:curl/7.79.0"> GET / HTTP/1.1> Host: <your Global Accelerator DNS>> User-Agent:curl/7.79.0< HTTP/1.1 403 Forbidden < Server: awselb/2.0<<html><head><title>403 Forbidden</title></head><body><center><h1>403 Forbidden</h1></center></body></html>Note: Replace <your Global Accelerator URL> with your Global Accelerator URL. 
Replace <your Global Accelerator DNS> with your DNS.Notice that AWS WAF blocked the request and the Application Load Balancer responded with 403 Forbidden message.Test the results with POST requestAccess the application using the Global Accelerator's URL and user-agent header value curl/7.79.1, with the POST request method.curl -X POST http://<your Global Accelerator URL> --user "test-user:test-password" -v> POST / HTTP/1.1> Host: <your Global Accelerator DNS>> Authorization: Basic dGVzdC11c2VyOnRlc3QtcGFzc3dvcmQ=> User-Agent: curl/7.79.1>< HTTP/1.1 403 Forbidden< Server: awselb/2.0<html><head><title>403 Forbidden</title></head><body><center><h1>403 Forbidden</h1></center></body></html>Note: Replace <your Global Accelerator URL> with your Global Accelerator URL. Replace <your Global Accelerator DNS> with your DNS.Notice that AWS WAF blocked the request and the Application Load Balancer responded with a 403 Forbidden message.Follow"
https://repost.aws/knowledge-center/globalaccelerator-aws-waf-filter-layer7-traffic
How can I create a VPC peering connection between two VPCs?
How can I create an Amazon Virtual Private Cloud (Amazon VPC) peering connection between two VPCs?
"How can I create an Amazon Virtual Private Cloud (Amazon VPC) peering connection between two VPCs?Short descriptionYou can create a VPC peering connection between two VPCs in the same or different AWS accounts and Regions. The VPC peering connection allows you to communicate between hosts using private IPv4 or the IPV6 addresses. VPC peering uses the AWS infrastructure. VPC peering isn't a gateway or a VPN connection, and doesn't rely on a separate piece of physical hardware.Important: Before proceeding, review the following:VPC peering scenariosUnsupported VPC peering configurationsVPC peering limitationsResolutionCreate VPC peering from the Amazon VPC consoleOpen the Amazon VPC console.In the left navigation pane scroll down and click on peering connection.Click on Create peering connection.Once the peering connection is active we can proceed to update the route tables to enable traffic to traverse over the peering connection.Create VPC peering using AWS CLINote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.Follow the AWS CLI command reference create-vpc-peering-connection for steps and examples of different scenarios.To accept the VPC peering connection use accept-vpc-peering-connection.Related informationHow do I resolve Amazon VPC peering network connectivity issues?Why can't I create an Amazon VPC peering connection with a VPC in another AWS account?Amazon VPC FAQsFollow"
https://repost.aws/knowledge-center/vpc-peering-connection-create
Why does my AWS Glue ETL job fail with the error "Container killed by YARN for exceeding memory limits"?
"My AWS Glue extract, transform, and load (ETL) job fails with the error "Container killed by YARN for exceeding memory limits"."
"My AWS Glue extract, transform, and load (ETL) job fails with the error "Container killed by YARN for exceeding memory limits".Short descriptionThe most common causes for this error are the following:Memory-intensive operations, such as joining large tables or processing datasets with a skew in the distribution of specific column values, exceeding the memory threshold of the underlying Spark clusterFat partitions of data that consume more than the memory that's assigned to the respective executorLarge files that can't be split resulting in large in-memory partitionsResolutionUse one or more of the following solution options to resolve this error:Upgrade the worker type from G.1x to G.2x that has higher memory configurations. For more information on specifications of worker types, see the Worker type section in Defining job properties for Spark jobs.To learn more about migrating from AWS Glue 2.0 to 3.0, see Migrating from AWS Glue 2.0 to AWS Glue 3.0.You can also review the following table for information on worker type specifications:AWS Glue versions 1.0 and 2.0Standardspark.executor.memory: 5g spark.driver.memory: 5g spark.executor.cores: 4G.1xspark.executor.memory: 10g spark.driver.memory: 10g spark.executor.cores: 8G.2xspark.executor.memory: 20g spark.driver.memory: 20g spark.executor.cores: 16AWS Glue version 3.0Standardspark.executor.memory: 5g spark.driver.memory: 5g spark.executor.cores: 4G.1xspark.executor.memory: 10g spark.driver.memory: 10g spark.executor.cores: 4G.2xspark.executor.memory: 20g spark.driver.memory: 20g spark.executor.cores: 8If the error persists after upgrading the worker type, then increase the number of executors for the job. Each executor has a certain number of cores. This number determines the number of partitions that can be processed by the executor. Spark configurations for the data processing units (DPUs) are defined based on the worker type.Be sure that data is properly parallelized so that executors can be used evenly before any shuffle operation, such as joins. You can repartition the data across all the executors. You can do so by including the following commands for AWS Glue DynamicFrame and Spark DataFrame in your ETL job, respectively.dynamicFrame.repartition(totalNumberOfExecutorCores)dataframe.repartition(totalNumberOfExecutorCores)Using job bookmarks allows only the newly written files to be processed by the AWS Glue job. This reduces the number of files processed by the AWS Glue job and alleviate memory issues. Bookmarks store the metadata about the files processed in the previous run. In the subsequent run, the job compares the timestamp and then decides whether to process these files again. For more information, see Tracking processed data using job bookmarks.When connecting to a JDBC table, Spark opens only one concurrent connection by default. The driver tries to download the whole table at once in a single Spark executor. This might take longer and even cause out of memory errors for the executor. Instead, you can set specific properties of your JDBC table to instruct AWS Glue to read data in parallel through DynamicFrame. For more information, see Reading from JDBC tables in parallel. Or, you can achieve parallel reads from JDBC through Spark DataFrame. 
For more information, see Spark DataFrame parallel reads from JDBC, and review properties, such as partitionColumn, lowerBound, upperBound, and numPartitions.Avoid using user-defined functions in your ETL job, especially when combining Python/Scala code with Spark's functions and methods. For example, avoid using Spark's df.count() for verifying empty DataFrames within if/else statements or for loops. Instead, use a better performing function, such as df.schema() or df.rdd.isEmpty().Test the AWS Glue job on a development endpoint and optimize the ETL code accordingly.If none of the preceding solution options work, split the input data into chunks or partitions. Then, run multiple AWS Glue ETL jobs instead of running one big job. For more information, see Workload partitioning with bounded execution.Related informationDebugging OOM exceptions and job abnormalitiesBest practices to scale Apache Spark jobs and partition data with AWS GlueOptimize memory management in AWS GlueFollow"
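The worker type and executor count can be checked and overridden from the AWS CLI without editing the job in the console. This is a minimal sketch, not from the article; the job name and worker count are placeholders.
# Inspect the job's current worker type, worker count, and Glue version
aws glue get-job --job-name my-etl-job \
  --query 'Job.{WorkerType:WorkerType,NumberOfWorkers:NumberOfWorkers,GlueVersion:GlueVersion}'
# Start a one-off run with a larger worker type and more workers
aws glue start-job-run --job-name my-etl-job --worker-type G.2X --number-of-workers 10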
https://repost.aws/knowledge-center/glue-container-yarn-memory-limit
How do I set up Okta as an OpenID Connect identity provider in an Amazon Cognito user pool?
I want to use Okta as an OpenID Connect (OIDC) identity provider (IdP) in an Amazon Cognito user pool. How do I set that up?
"I want to use Okta as an OpenID Connect (OIDC) identity provider (IdP) in an Amazon Cognito user pool. How do I set that up?Short descriptionAmazon Cognito user pools allow sign-in through a third party (federation), including through an IdP such as Okta. For more information, see Adding user pool sign-in through a third party and Adding OIDC identity providers to a user pool.A user pool integrated with Okta allows users in your Okta app to get user pool tokens from Amazon Cognito. For more information, see Using tokens with user pools.ResolutionCreate an Amazon Cognito user pool with an app client and domain nameCreate a user pool.Note: During creation, the standard attribute email is selected by default. For more information, see Configuring user pool attributes.Create an app client in your user pool. For more information, see Add an app to enable the hosted web UI.Add a domain name for your user pool.Sign up for an Okta developer accountNote: If you already have an Okta developer account, sign in.On the Okta Developer signup webpage, enter your personal information, and then choose SIGN UP. The Okta Developer Team sends a verification email to the email address that you provided.In the verification email, find the sign-in information for your account. Choose ACTIVATE, sign in, and finish creating your account.Create an Okta appOpen the Okta Developer Console. For more information about the console, see Okta’s Redesigned Admin Console and Dashboard—Now in GA! on the Okta Developer Blog.In the navigation pane, expand Applications, and then choose Applications. This opens the Applications Console. For more information, see Administrator Console on the Okta Organizations page of the Okta Developer website.Choose Create App Integration.On the Create a new app integration page, choose OpenID Connect, choose Web Application, and then choose Next.Configure settings for your Okta appOn the New Web App Integration page, under General Settings, enter a name for your app. For example, TestApp.Under Grant type, confirm that the Authorization Code check box is selected. Your user pool uses this flow to communicate with Okta OIDC for federated user sign-in.For Sign-in redirect URIs, enter https://myUserPoolDomain/oauth2/idpresponse. This is where Okta sends the authentication response and ID token.Note: Replace myUserPoolDomain with your Amazon Cognito user pool domain. You can find the domain in the Amazon Cognito console on the Domain name page for your user pool.Under CONFIGURE OPENID CONNECT, for Login redirect URIs, enter https://myUserPoolDomain/oauth2/idpresponse. This is where Okta sends the authentication response and ID token.Note: Replace myUserPoolDomain with your Amazon Cognito user pool domain. Find the domain in the Amazon Cognito console on the Domain name page for your user pool.In Controlled access, choose your preferred access setting, and then choose Save.In Client Credentials, copy the Client ID and Client secret. You need these credentials for configuring Okta in your Amazon Cognito user pool.Choose Sign On.On the Sign On page, In OpenID Connect ID Token, note the Issuer URL. You need this URL for configuring Okta in your user pool.Add an OIDC IdP in your user poolIn the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under Federation, choose Identity providers.Choose OpenID Connect.Do the following:For Provider name, enter a name for the IdP. 
This name appears in the Amazon Cognito hosted web UI.Note: You can't change this field after creating the provider. If you plan to include this field in your app or use the Amazon Cognito hosted web UI, use a name that you're comfortable with your app's users seeing.For Client ID, paste the Client ID that you noted earlier from Okta.For Client secret (optional), paste the Client secret that you noted earlier from Okta.For Attributes request method, leave the setting as GET.For Authorize scope, enter the OIDC scope values that you want to authorize, separated by spaces. For more information, see Scope values in OpenID Connect Basic Client Implementer's Guide 1.0 on the OpenID website.Important: The openid scope is required for OIDC IdPs, and you can add other scopes according to your user pool configuration. For example, if you kept email as a required attribute when creating your user pool, enter email openid to include both scopes. You can map the email attribute to your user pool later in this setup.For Issuer, paste the Issuer URL that you copied earlier from Okta.For Identifiers (optional), you can optionally enter a custom string to use later in the endpoint URL in place of your OIDC IdP's name.Choose Run discovery to fetch the OIDC configuration endpoints for Okta.Choose Create provider.For more information, see Add an OIDC IdP to your user pool.Change app client settings for your user poolIn the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under App integration, choose App client settings.On the app client page, do the following:Under Enabled Identity Providers, choose the OIDC provider check box for the IdP that you created earlier.(Optional) Choose the Cognito User Pool check box.For Callback URL(s), enter a URL where you want your users to be redirected after logging in. For testing, you can enter any valid URL, such as https://example.com/.For Sign out URL(s), enter a URL where you want your users to be redirected after logging out. For testing, you can enter any valid URL, such as https://example.com/.Under Allowed OAuth Flows, select the flows that correspond to the grant types that you want your application to receive after authentication from Cognito.Note: The allowed OAuth flows you enable determine which values (code or token) you can use for the response_type parameter in your endpoint URL.Under Allowed OAuth Scopes, select at least the email and openid check boxes.Choose Save changes.For more information, see App client Settings terminology.Map the email attribute to a user pool attributeIf you authorized the email OIDC scope value earlier, map it to a user pool attribute.In the Amazon Cognito console, choose Manage user pools, and then choose your user pool.In the left navigation pane, under Federation, choose Attribute mapping.On the attribute mapping page, choose the OIDC tab.If you have more than one OIDC provider in your user pool, choose your new provider from the dropdown list.Confirm that the OIDC attribute sub is mapped to the user pool attribute Username.Choose Add OIDC attribute, and then do the following:For OIDC attribute, enter email.For User pool attribute, choose Email.For more information, see Specifying identity provider attribute mappings for your user pool.Log in to test your setupAuthenticate with Okta using the Amazon Cognito hosted web UI. After you log in successfully, you're redirected to your app client's callback URL. 
The authorization code or user pool tokens appear in the URL in your web browser's address bar.For more information, see Using the Amazon Cognito Hosted UI for sign-up and sign-in.Related informationOIDC user pool IdP authentication flowHow do I set up Okta as a SAML identity provider with an Amazon Cognito user pool?Follow"
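If you prefer to script the user pool side of this setup instead of clicking through the console, the following minimal boto3 sketch shows the equivalent calls under stated assumptions: the user pool ID, app client ID, callback URLs, and the Okta client ID, client secret, and issuer URL are placeholders that you must replace with the values from the steps above.

# Minimal sketch of the console steps above. All IDs, secrets, and URLs are placeholders.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# Add Okta as an OIDC identity provider in the user pool.
# Cognito discovers the authorization, token, userInfo, and jwks_uri endpoints
# from the issuer URL, as the console's Run discovery step does.
cognito.create_identity_provider(
    UserPoolId="us-east-1_EXAMPLE",
    ProviderName="Okta",
    ProviderType="OIDC",
    ProviderDetails={
        "client_id": "oktaClientId",
        "client_secret": "oktaClientSecret",
        "attributes_request_method": "GET",
        "oidc_issuer": "https://dev-123456.okta.com",
        "authorize_scopes": "email openid",
    },
    AttributeMapping={"email": "email", "username": "sub"},
)

# Enable the new provider and the hosted UI OAuth settings on the app client.
# Note: update_user_pool_client overwrites settings that you don't include in the call.
cognito.update_user_pool_client(
    UserPoolId="us-east-1_EXAMPLE",
    ClientId="exampleAppClientId",
    SupportedIdentityProviders=["Okta", "COGNITO"],
    CallbackURLs=["https://example.com/"],
    LogoutURLs=["https://example.com/"],
    AllowedOAuthFlows=["code"],
    AllowedOAuthScopes=["email", "openid"],
    AllowedOAuthFlowsUserPoolClient=True,
)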
https://repost.aws/knowledge-center/cognito-okta-oidc-identity-provider
How do I allow a legitimate IP address when using the IP reputation list or anonymous IP list in AWS WAF?
My legitimate requests are being blocked by an Amazon IP reputation list managed rule group or Anonymous IP list managed rule group. How do I allow my IP address in AWS WAF?
"My legitimate requests are being blocked by an Amazon IP reputation list managed rule group or Anonymous IP list managed rule group. How do I allow my IP address in AWS WAF?Short descriptionLegitimate requests might be blocked by one of the following AWS managed rule groups:Amazon IP reputation list managed rule groups allow you to block requests based on their source IP address that are typically associated with bots or other threats.Anonymous IP list managed rule groups contains rules to block requests from services that allow the obfuscation of viewer identity like VPNs or proxies.To allow a specific IP address or addresses, use one of the following methods to resolve this problem:Scope-down statements to narrow the scope of the requests that the rule evaluates. Choose this option for addressing logic in a single rule group.Labels on web requests to allow a rule that matches the request to communicate the match results to rules that are evaluated later in the same web ACL. Choose this option to reuse the same logic across multiple rules.ResolutionOption 1: Using scope-down statementsFirst, create an IP set.Open the AWS WAF console.In the navigation pane, choose IP sets, and then choose Create IP set.Enter an IP set name and Description - optional for the IP set. For example: MyTrustedIPs.Note: You can't change the IP set name after you create the IP set.For Region, choose the AWS Region where you want to store the IP set. To use an IP set in web ACLs that protect Amazon CloudFront distributions, you must use Global (CloudFront).For IP version, choose the version you want to use.For IP addresses, enter one IP address or IP address range per line that you want to allow in CIDR notation.Note: AWS WAF supports all IPv4 and IPv6 CIDR ranges except for /0.Examples:To specify the IPv4 address 192.168.0.26, enter 192.168.0.26/32.To specify the IPv6 address 0:0:0:0:0:ffff:c000:22c, enter 0:0:0:0:0:ffff:c000:22c/128.To specify the range of IPv4 addresses from 192.168.20.0 to 192.168.20.255, enter 192.168.20.0/24.To specify the range of IPv6 addresses from 2620:0:2d0:200:0:0:0:0 to 2620:0:2d0:200:ffff:ffff:ffff:ffff, enter 2620:0:2d0:200::/64.Review the settings for the IP set. If it matches your specifications, choose Create IP set.Then, add a scope-down statement to the specific AWS Managed Rule blocking your requests.In the navigation pane, under AWS WAF, choose Web ACLs.For Region, select the AWS Region where you created your web ACL.Note: Select Global if your web ACL is set up for Amazon CloudFront.Select your web ACL.In the web ACL Rules tab, choose the specific AWS Managed Rule group that is blocking your request, and then choose Edit.For Scope-down statement - optional, choose the Enable scope-down statement.For If a request, choose doesn't match the statement (NOT).On Statement, for Inspect, choose Originates from IP address in.For IP Set, choose the IP Set you created earlier. For example: MyTrustedIPs.For IP address to use as the originating address, choose Source IP address.Choose Save rule.Option 2: Using labels on web requestsFirst, create an IP set.Open the AWS WAF console.In the navigation pane, choose IP sets, and then choose Create IP set.Enter an IP set name and Description - optional for the IP set. For example: MyTrustedIPs.Note: You can't change the IP set name after you create the IP set.For Region, choose the AWS Region where you want to store the IP set. 
To use an IP set in web ACLs that protect Amazon CloudFront distributions, you must use Global (CloudFront).For IP version, choose the version you want to use.For IP addresses, enter one IP address or IP address range per line that you want to allow in CIDR notation.Note: AWS WAF supports all IPv4 and IPv6 CIDR ranges except for /0.Examples:To specify the IPv4 address 192.168.0.26, enter 192.168.0.26/32.To specify the IPv6 address 0:0:0:0:0:ffff:c000:22c, enter 0:0:0:0:0:ffff:c000:22c/128.To specify the range of IPv4 addresses from 192.168.20.0 to 192.168.20.255, enter 192.168.20.0/24.To specify the range of IPv6 addresses from 2620:0:2d0:200:0:0:0:0 to 2620:0:2d0:200:ffff:ffff:ffff:ffff, enter 2620:0:2d0:200::/64.Review the settings for the IP set. If it matches your specifications, choose Create IP set.Then, change rule actions to count in a rule group.In the web ACL page Rules tab, select the AWS Managed rule group blocking your request, and then choose Edit.In the Rules section for the rule group, do one of the following:For AWSManagedIPReputationList, turn on Count.For AnonymousIPList Rule, turn on Count.Choose Save rule.Finally, create a rule with higher numeric priority than the specific AWS Managed Rule blocking the request.In the navigation pane, under AWS WAF, choose Web ACLs.For Region, choose the AWS Region where you created your web ACL. Note: Select Global if your web ACL is set up for Amazon CloudFront.Select your web ACL.Choose Rules.Choose Add Rules, and then choose Add my own rules and rule groups.For Name, enter a rule name, and then choose Regular Rule.For If a request, choose matches all the statements (AND).On Statement 1:For Inspect, choose Has a label.For Match scope, choose Label.For Match key, select either awswaf:managed:aws:amazon-ip-list:AWSManagedIPReputationList or awswaf:managed:aws:anonymous-ip-list:AnonymousIPList based on that Managed Rule that was blocking your requestOn Statement 2:For Negate statement (NOT), choose Negate statement results.For Inspect, choose Originates from IP address in.For IP set, choose the IP set you created earlier.For IP address to use as the originating address, choose Source IP address.For Action, choose Block.Choose Add Rule.For Set rule priority, move the rule below the AWS Managed Rule that was blocking the request.Choose Save.Important: It’s a best practice to test rules in a non-production environment with the Action set to Count. Evaluate the rule using Amazon CloudWatch metrics combined with AWS WAF sampled requests or AWS WAF logs. When you're satisfied that the rule does what you want, change the Action to Block.Related informationHow can I detect false positives caused by AWS Managed Rules and add them to a safe list?Follow"
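The IP set creation used in both options can also be scripted. The following boto3 sketch assumes a regional web ACL and example addresses; the name, description, and CIDR ranges are placeholders, and you would use Scope="CLOUDFRONT" (from the us-east-1 Region) for a web ACL that protects a CloudFront distribution.

# Create the trusted IP set referenced by the scope-down or label-based rule.
# Name, description, and addresses are placeholder values.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_ip_set(
    Name="MyTrustedIPs",
    Scope="REGIONAL",  # Use "CLOUDFRONT" (in us-east-1) for CloudFront web ACLs
    IPAddressVersion="IPV4",
    Addresses=["192.168.0.26/32", "192.168.20.0/24"],
    Description="IP addresses to exclude from the IP reputation and anonymous IP rules",
)

# The returned ARN is what you reference from the rule statement.
print(response["Summary"]["ARN"])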
https://repost.aws/knowledge-center/waf-allow-ip-using-reputation-anon-list
"I'm running the sync command to transfer data between my EC2 instance and my S3 bucket, but the transfer is slow. How can I troubleshoot this?"
"I'm running the sync command to transfer data between my Amazon Elastic Compute Cloud (Amazon EC2) instance and my Amazon Simple Storage Service (Amazon S3) bucket. However, the transfer is slow. How can I troubleshoot this?"
"I'm running the sync command to transfer data between my Amazon Elastic Compute Cloud (Amazon EC2) instance and my Amazon Simple Storage Service (Amazon S3) bucket. However, the transfer is slow. How can I troubleshoot this?Short descriptionThesync command on the AWS Command Line Interface (AWS CLI) is a high-level command that includes the ListObjectsV2, HeadObject, GetObject, and PutObject API calls.To identify what might be contributing to the slow transfer:Review the architecture of your use case.Check the network connectivity.Test the speed of uploading to and downloading from Amazon S3.Review the network and resource load while sync runs as a background process.ResolutionReview the architecture of your use caseBefore you test the network connectivity, transfer speeds, and resource loads, consider the following architecture factors that can influence transfer speed:Which Amazon EC2 instance type are you using? For this transfer use case, it's a best practice to use an instance that has a minimum of 10 Gbps throughput.Are the EC2 instance and the S3 bucket in the same AWS Region? It's a best practice to deploy the instance and the bucket in the same Region. It's also a best practice to attach a VPC endpoint for Amazon S3 to the VPC where your instance is deployed.For instances and buckets that are in the same Region, is the AWS CLI configured to use the Amazon S3 Transfer Acceleration endpoint? It's a best practice to not use the Transfer Acceleration endpoint if the resources are in the same Region.What's the nature of the source data set that you want to transfer? For example, are you transferring a lot of small files or a few large files to Amazon S3? For more information about using the AWS CLI to transfer different source data sets to Amazon S3, see Getting the most out of the Amazon S3 CLI.What version of the AWS CLI are you using? Make sure that you’re using the most recent version of the AWS CLI.What's your configuration of the AWS CLI?If you're still experiencing slow transfers after following best practices, then check the network connectivity, transfer speeds, and resource loads.Check the network connectivityRun the dig command on the S3 bucket and review the query response time returned in the Query time field. In the following example, the Query time is 0 msec:Bash$ dig +nocomments +stats +nocmd awsexamplebucket.s3.amazonaws.com;awsexamplebucket.s3.amazonaws.com. INAawsexamplebucket.s3.amazonaws.com. 2400 IN CNAMEs3-3-w.amazonaws.com.s3-3-w.amazonaws.com.2INA52.218.24.66;; Query time: 0 msec;; SERVER: 172.31.0.2#53(172.31.0.2);; WHEN: Fri Dec 06 09:30:47 UTC 2019;; MSG SIZE rcvd: 87Longer response times for the Domain Name System (DNS) resolution queries to return an IP address can impact performance. If you get a longer query response time, then try changing the DNS servers for your instance.As another network connectivity test, run traceroute or mtr using TCP to the virtual style hostname and the S3 Regional endpoint for your bucket. The request in the following mtr example is routed through a VPC endpoint for Amazon S3 that's attached to the instance's VPC:Bash$ mtr -r --tcp --aslookup --port 443 -c50 awsexamplebucket.s3.eu-west-1.amazonaws.comStart: 2019-12-06T10:03:30+0000HOST: ip-172-31-4-38.eu-west-1.co Loss% Snt Last Avg Best Wrst StDev 1. AS??? ??? 100.0 50 0.0 0.0 0.0 0.0 0.0 2. AS??? ??? 100.0 50 0.0 0.0 0.0 0.0 0.0 3. AS??? ??? 100.0 50 0.0 0.0 0.0 0.0 0.0 4. AS??? ??? 100.0 50 0.0 0.0 0.0 0.0 0.0 5. AS??? ??? 100.0 50 0.0 0.0 0.0 0.0 0.0 6. AS??? ??? 
100.0 50 0.0 0.0 0.0 0.0 0.0 7. AS16509 s3-eu-west-1-r-w.am 62.0% 50 0.3 0.2 0.2 0.4 0.0Test the speed of uploading to and downloading from Amazon S31.    Create five test files that contain 2 GB of content:Bash$ seq -w 1 5 | xargs -n1 -P 5 -I % dd if=/dev/urandom of=bigfile.% bs=1024k count=2048$ ls -ltotal 10244-rw-rw-r-- 1 ec2-user ec2-user 2097152 Nov 8 08:14 bigfile.1-rw-rw-r-- 1 ec2-user ec2-user 2097152 Nov 8 08:14 bigfile.2-rw-rw-r-- 1 ec2-user ec2-user 2097152 Nov 8 08:14 bigfile.3-rw-rw-r-- 1 ec2-user ec2-user 2097152 Nov 8 08:14 bigfile.4-rw-rw-r-- 1 ec2-user ec2-user 2097152 Nov 8 08:14 bigfile.52.    Run the sync command using the AWS CLI to upload the five test files. To get the transfer time, insert the time command (from Linux documentation) at the beginning of the sync command:Note: Be sure to also note the throughput speed while the sync command is in progress.Bash $ time aws s3 sync . s3://awsexamplebucket/test_bigfiles/ --region eu-west-1Completed 8.0 GiB/10.2 GiB (87.8MiB/s) with 3 file(s) remainingreal 2m14.402suser 2m6.254ssys 2m22.314sYou can use these test results as a baseline to compare to the time of the actual sync for your use case.Review the network and resource load while sync runs as a background process1.    Append & to the end of the sync command to run the command in the background:Note: You can also append a stream operator (>) to write output to a text file that you can review later.Bash$ time aws s3 sync . s3://awsexamplebucket/test_bigfiles/ --region eu-west-1 \> ~/upload.log &[1] 4262$2.    While the sync command runs in the background, run the mpstat command (from Linux documentation) to check CPU usage. The following example shows that 4 CPUs are being used and they are utilized around 20%:Bash $ mpstat -P ALL 10Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idleAverage: all 21.21 0.00 23.12 0.00 0.00 2.91 0.00 0.00 0.00 52.77Average: 0 21.82 0.00 21.71 0.00 0.00 3.52 0.00 0.00 0.00 52.95Average: 1 21.32 0.00 23.76 0.00 0.00 2.66 0.00 0.00 0.00 52.26Average: 2 20.73 0.00 22.76 0.00 0.00 2.64 0.00 0.00 0.00 53.88Average: 3 21.03 0.00 24.07 0.00 0.00 2.87 0.00 0.00 0.00 52.03In this case, the CPU isn't the bottleneck. If you see utilization percentages that are equal to or greater than 90%, then try launching an instance that has additional CPUs. You can also run the top command to review the highest CPU utilization percentages that are running. Try to stop those processes first, and then run the sync command again.3.    While the sync command runs in the background, run the lsof command (from Linux documentation). 
This checks how many TCP connections are open to Amazon S3 on port 443:Note: If max_concurrent_requests is set to 20 for the user profile in the AWS CLI config file, then expect to see a maximum of 20 established TCP connections.Bash$ lsof -i tcp:443COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMEaws 4311 ec2-user 3u IPv4 44652 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:33156->52.218.36.91:https (CLOSE_WAIT)aws 4311 ec2-user 4u IPv4 44654 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39240->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 5u IPv4 44655 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39242->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 6u IPv4 47528 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39244->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 7u IPv4 44656 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39246->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 8u IPv4 45671 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39248->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 13u IPv4 46367 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39254->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 14u IPv4 44657 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39252->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 15u IPv4 45673 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39250->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 32u IPv4 47530 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39258->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 33u IPv4 45676 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39256->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 34u IPv4 44660 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39266->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 35u IPv4 45678 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39260->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 36u IPv4 45679 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39262->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 37u IPv4 45680 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39268->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 38u IPv4 45681 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39264->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 39u IPv4 45683 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39272->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 40u IPv4 47533 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39270->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 41u IPv4 44662 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39276->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 42u IPv4 44661 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39274->52.216.162.179:https (ESTABLISHED)aws 4311 ec2-user 43u IPv4 44663 0t0 TCP ip-172-31-4-38.eu-west-1.compute.internal:39278->52.216.162.179:https (ESTABLISHED)If you see other TCP connections on port 443, then try stopping those connections before running the sync command again.To get a count of the TCP connections, run this command:$ lsof -i tcp:443 | tail -n +2 | wc -l214.    After the single sync process is optimized, you can run multiple sync processes in parallel. This avoids single-process slower uploads when high network bandwidth is available, but only half of the network bandwidth is being utilized. 
When you run parallel sync processes, target different prefixes to get the desired throughput.For more information, see How can I optimize performance when I upload large amounts of data to Amazon S3?Follow"
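As a rough illustration of the parallel approach described above, the following Python sketch starts one sync process per prefix and waits for all of them to finish. The bucket name and prefixes are placeholders, and it assumes the AWS CLI is installed and configured on the instance.

# Launch one "aws s3 sync" process per prefix so the transfers run in parallel.
# Bucket name and prefixes are hypothetical; replace them with your own.
import subprocess

bucket = "awsexamplebucket"
prefixes = ["data/2023/01/", "data/2023/02/", "data/2023/03/"]

processes = [
    subprocess.Popen(["aws", "s3", "sync", prefix, "s3://{}/{}".format(bucket, prefix)])
    for prefix in prefixes
]

# Wait for every sync process to complete and report its exit code.
for prefix, proc in zip(prefixes, processes):
    proc.wait()
    print("{}: exit code {}".format(prefix, proc.returncode))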
https://repost.aws/knowledge-center/s3-troubleshoot-sync-instance-bucket
How do I publish MQTT messages to AWS IoT Core from my device when using Python?
I can't send or receive MQTT (MQ Telemetry Transport) messages between AWS IoT Core and my device or MQTT client. How do I publish MQTT messages to AWS IoT Core?
"I can't send or receive MQTT (MQ Telemetry Transport) messages between AWS IoT Core and my device or MQTT client. How do I publish MQTT messages to AWS IoT Core?Short descriptionVerify that your AWS IoT thing is correctly configured and its certificates are properly attached. To test your setup, you can use the AWS IoT MQTT client and the example Python code provided in this article.ResolutionSet up a directory to test MQTT publishing1.    Create a working directory in your development environment. For example: iot-test-publish.2.    Create a sub-directory for certificates in your new working directory. For example: certificates.3.    From the command line, change the directory to your new working directory.Install pip and the AWS IoT SDK for Python1.    If you haven't already done so, install pip for Python 3 packaging. For more information, see Installation on the Python Packaging Authority (PyPA) website.2.    Install the AWS IoT SDK for Python v2 by running the following from the command line:pip install awsiotsdk-or-Install the AWS IoT Device SDK for Python (the previous SDK version) by running the following command:pip install AWSIoTPythonSDKFor more information, see AWS IoT SDK for Python v2 or AWS IoT Device SDK for Python on GitHub.Note: These SDKs are recommended for connecting to AWS IoT Core, but they aren't required. You can also connect using any compliant third-party MQTT client.Create an AWS IoT Core policy1.    Open the AWS IoT Core console.2.    In the left navigation pane, choose Secure.3.    Under Secure, choose Policies.4.    If you have existing AWS IoT Core policies, then choose Create to create a new policy.-or-If you don't have any existing policies, then on the You don't have any policies yet page, choose Create a policy.5.    On the Create a policy page, enter a Name for your policy. For example: admin.6.    Under Add statements, do the following:For Action, enter iot:*.Important: Allowing all AWS IoT actions (iot:*) is useful for testing. However, it's a best practice to increase security for a production setup. For more secure policy examples, see Example AWS IoT policies.For Resource ARN, enter *.For Effect, select the Allow check box.7.    Choose Create.For more information, see Create an AWS IoT Core policy and AWS IoT Core policies.Create an AWS IoT thingNote: You don't need to create a thing to connect to AWS IoT. However, things allow you to use additional security controls and other AWS IoT features, such as Fleet Indexing, Jobs, or Device Shadow.1.    In the AWS IoT Core console, in the left navigation pane, choose Manage.2.    If you have existing things, then choose Create to create a new thing.-or-If you don't have any existing things, then on the You don't have any things yet page, choose Register a thing.3.    On the Creating AWS IoT things page, choose Create a single thing.4.    On the Add your device to the thing registry page, do the following:Enter a Name for your thing. For example: Test-Thing.(Optional) Under Add a type to this thing, choose or create a thing type.(Optional) Under Add this thing to a group, choose or create a group. For more information about groups, see Static thing groups and Dynamic thing groups.(Optional) Under Set searchable thing attributes (optional), add attributes as key-value pairs.Choose Next.5.    On the Add a certificate for your thing page, choose Create certificate. You see notifications confirming that your thing and a certificate for your thing are created.6.    
On the Certificate created page, do the following:Under In order to connect a device, you need to download the following, choose Download for the certificate, public key, and private key.Save each of the downloaded files to the certificates sub-directory that you created earlier.Under You also need to download a root CA for AWS IoT, choose Download. The Server authentication page opens to CA certificates for server authentication.7.    Under Amazon Trust Services Endpoints (preferred), choose Amazon Root CA 1. The certificate opens in your browser.8.    Copy the certificate (everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----) and paste it into a text editor.9.    Save the certificate as a .pem file named root.pem to the certificates sub-directory.10.    On the Certificate created page in the AWS IoT Core console, choose Activate. The button changes to Deactivate.11.    Choose Attach a policy.12.    On the Add a policy for your thing page, do the following:Select the AWS IoT Core policy that you previously created. For example: admin.Choose Register Thing.For more information, see Create a thing object.Copy the AWS IoT Core endpoint URL1.    In the AWS IoT Core console, in the left navigation pane, choose Settings.2.    On the Settings page, under Custom endpoint, copy the Endpoint. This AWS IoT Core custom endpoint URL is personal to your AWS account and Region.Create a Python program fileSave one of the following Python code examples as a Python program file named publish.py.If you installed the AWS IoT SDK for Python v2 earlier, then use the following example code:Important: Replace customEndpointUrl with your AWS IoT Core custom endpoint URL. Replace certificates with the name of your certificates sub-directory. Replace a1b23cd45e-certificate.pem.crt with the name of your client .crt. Replace a1b23cd45e-private.pem.key with the name of your private key.# Copyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.# SPDX-License-Identifier: MIT-0from awscrt import io, mqtt, auth, httpfrom awsiot import mqtt_connection_builderimport time as timport json# Define ENDPOINT, CLIENT_ID, PATH_TO_CERTIFICATE, PATH_TO_PRIVATE_KEY, PATH_TO_AMAZON_ROOT_CA_1, MESSAGE, TOPIC, and RANGEENDPOINT = "customEndpointUrl"CLIENT_ID = "testDevice"PATH_TO_CERTIFICATE = "certificates/a1b23cd45e-certificate.pem.crt"PATH_TO_PRIVATE_KEY = "certificates/a1b23cd45e-private.pem.key"PATH_TO_AMAZON_ROOT_CA_1 = "certificates/root.pem"MESSAGE = "Hello World"TOPIC = "test/testing"RANGE = 20# Spin up resourcesevent_loop_group = io.EventLoopGroup(1)host_resolver = io.DefaultHostResolver(event_loop_group)client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)mqtt_connection = mqtt_connection_builder.mtls_from_path( endpoint=ENDPOINT, cert_filepath=PATH_TO_CERTIFICATE, pri_key_filepath=PATH_TO_PRIVATE_KEY, client_bootstrap=client_bootstrap, ca_filepath=PATH_TO_AMAZON_ROOT_CA_1, client_id=CLIENT_ID, clean_session=False, keep_alive_secs=6 )print("Connecting to {} with client ID '{}'...".format( ENDPOINT, CLIENT_ID))# Make the connect() callconnect_future = mqtt_connection.connect()# Future.result() waits until a result is availableconnect_future.result()print("Connected!")# Publish message to server desired number of times.print('Begin Publish')for i in range (RANGE): data = "{} [{}]".format(MESSAGE, i+1) message = {"message" : data} mqtt_connection.publish(topic=TOPIC, payload=json.dumps(message), qos=mqtt.QoS.AT_LEAST_ONCE) print("Published: '" + json.dumps(message) + "' to the topic: " + "'test/testing'") t.sleep(0.1)print('Publish End')disconnect_future = mqtt_connection.disconnect()disconnect_future.result()-or-If you installed the AWS IoT Device SDK for Python (the previous SDK version), then use the following example code:Important: Replace customEndpointUrl with your AWS IoT Core custom endpoint URL. Replace certificates with the name of your certificates sub-directory. Replace a1b23cd45e-certificate.pem.crt with the name of your client .crt. Replace a1b23cd45e-private.pem.key with the name of your private key.# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.# SPDX-License-Identifier: MIT-0import time as timport jsonimport AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT# Define ENDPOINT, CLIENT_ID, PATH_TO_CERTIFICATE, PATH_TO_PRIVATE_KEY, PATH_TO_AMAZON_ROOT_CA_1, MESSAGE, TOPIC, and RANGEENDPOINT = "customEndpointUrl"CLIENT_ID = "testDevice"PATH_TO_CERTIFICATE = "certificates/a1b23cd45e-certificate.pem.crt"PATH_TO_PRIVATE_KEY = "certificates/a1b23cd45e-private.pem.key"PATH_TO_AMAZON_ROOT_CA_1 = "certificates/root.pem"MESSAGE = "Hello World"TOPIC = "test/testing"RANGE = 20myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient(CLIENT_ID)myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883)myAWSIoTMQTTClient.configureCredentials(PATH_TO_AMAZON_ROOT_CA_1, PATH_TO_PRIVATE_KEY, PATH_TO_CERTIFICATE)myAWSIoTMQTTClient.connect()print('Begin Publish')for i in range (RANGE): data = "{} [{}]".format(MESSAGE, i+1) message = {"message" : data} myAWSIoTMQTTClient.publish(TOPIC, json.dumps(message), 1) print("Published: '" + json.dumps(message) + "' to the topic: " + "'test/testing'") t.sleep(0.1)print('Publish End')myAWSIoTMQTTClient.disconnect()Test the setup1.    In the AWS IoT Core console, in the left navigation pane, choose Test.2.    On the MQTT client page, for Subscription topic, enter test/testing.3.    Choose Subscribe to topic. 
A test topic named test/testing is ready for test message publication. For more information, see View device MQTT messages with the AWS IoT MQTT client.4.    Run the following from your command line:python3 publish.pyThe Python program publishes 20 test messages to the topic test/testing that you created in the AWS IoT Core console. View the topic in the console to see the published messages.Tip: You can also test other features of the SDKs, such as subscribing and connecting via WebSockets, using the included pubsub samples. For more information, see pubsub (AWS IoT SDK for Python v2) or BasicPubSub (AWS IoT Device SDK for Python) on GitHub.(Optional) Activate AWS IoT logging to Amazon CloudWatchYou can monitor event logs for MQTT messages that you publish to AWS IoT Core. For setup instructions, see Configure AWS IoT logging and Monitor AWS IoT using CloudWatch Logs.Related informationGetting started with AWS IoT CoreDevice provisioningFrequently asked questions (MQTT messaging protocol website)Follow"
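If you also want to verify delivery from code rather than only from the console's MQTT client, the following sketch subscribes to the same topic with the AWS IoT SDK for Python v2. It mirrors the connection setup in publish.py, so the endpoint, client ID, and certificate file names are placeholders that you must replace with your own values.

# Minimal subscriber sketch (AWS IoT SDK for Python v2); placeholders as in publish.py.
from awscrt import io, mqtt
from awsiot import mqtt_connection_builder
import time as t

def on_message_received(topic, payload, dup, qos, retain, **kwargs):
    # Print every message that arrives on the subscribed topic.
    print("Received from '{}': {}".format(topic, payload))

event_loop_group = io.EventLoopGroup(1)
host_resolver = io.DefaultHostResolver(event_loop_group)
client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)

mqtt_connection = mqtt_connection_builder.mtls_from_path(
    endpoint="customEndpointUrl",
    cert_filepath="certificates/a1b23cd45e-certificate.pem.crt",
    pri_key_filepath="certificates/a1b23cd45e-private.pem.key",
    client_bootstrap=client_bootstrap,
    ca_filepath="certificates/root.pem",
    client_id="testSubscriber",
    clean_session=False,
    keep_alive_secs=6,
)
mqtt_connection.connect().result()

subscribe_future, packet_id = mqtt_connection.subscribe(
    topic="test/testing",
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received,
)
subscribe_future.result()
print("Subscribed to test/testing, waiting for messages...")

t.sleep(60)  # Keep the connection open long enough to receive the test messages
mqtt_connection.disconnect().result()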
https://repost.aws/knowledge-center/iot-core-publish-mqtt-messages-python
When will my AWS bill be ready?
I want to know when I'll receive my AWS bill.
"I want to know when I'll receive my AWS bill.ResolutionYour AWS bill for the previous month is finalized at the beginning of each new month. Shortly after, the bill is charged to your default payment method, usually between the third and the fifth day of the month.Note: The billing cycle is based in the UTC (+00:00) time zone.If you use the AWS Cost and Usage Report, then check the bill/InvoiceID column. This column is blank until your bill is finalized.Related informationWhat payment methods does AWS accept?Can I use more than one payment method to pay my AWS charges?How do I view past or current AWS payments?Follow"
https://repost.aws/knowledge-center/monthly-aws-billing
Why isn’t my Amazon SNS topic receiving Amazon S3 event notifications?
"I created an Amazon Simple Storage Service (Amazon S3) event notification to send messages through my Amazon Simple Notification Service (Amazon SNS) topic. My Amazon SNS topic isn't publishing messages when new events occur in my Amazon S3 bucket, though."
"I created an Amazon Simple Storage Service (Amazon S3) event notification to send messages through my Amazon Simple Notification Service (Amazon SNS) topic. My Amazon SNS topic isn't publishing messages when new events occur in my Amazon S3 bucket, though.ResolutionConfirm that your Amazon S3 event type is configured correctlyWhen you configure an Amazon S3 event notification, you must specify which supported Amazon S3 event types cause Amazon S3 to send the notification. If an event type that you didn't specify occurs in your Amazon S3 bucket, then Amazon S3 doesn't send the notification.Confirm that your object key name filters are in URL-encoded (percent-encoded) formatIf your event notifications are configured to use object key name filtering, then notifications are published only for objects with specific prefixes or suffixes.If you use any special characters in your prefixes or suffixes, then you must enter them in URL-encoded (percent-encoded) format. For more information see Object key naming guidelines and Working with object metadata.Note: A wildcard character ("*") can't be used in filters as a prefix or suffix to represent any character.Confirm that you've granted Amazon S3 the required permissions to publish messages to your topicYour Amazon SNS topic's resource-based policy must allow the Amazon S3 bucket to publish messages to the topic.Check your topic's AWS Identity and Access Management (IAM) policy to confirm that it has the required permissions, and add them if needed. For more information, see Granting permissions to publish messages to an SNS topic or an SQS queue.(For topics with server-side encryption (SSE) activated) Confirm that your topic has the required AWS Key Management (AWS KMS) permissionsYour Amazon SNS topic must use an AWS KMS key that is customer managed. This KMS key must include a custom key policy that gives Amazon S3 sufficient key usage permissions.To set up the required AWS KMS permissions, complete the following steps:1.    Create a new KMS key that is customer managed and includes the required permissions for Amazon S3.2.    Configure SSE for your Amazon SNS topic using the custom KMS key you just created.3.    Configure AWS KMS permissions that allow Amazon S3 to publish messages to your encrypted topic.Example IAM policy statement that allows Amazon S3 to publish messages to an encrypted Amazon SNS topic{"version": "2012-10-17","statement": [{ "effect": "allow", "principal": {"service": "s3.amazonaws.com"}, "action": ["kms:generatedatakey*", "kms:decrypt"], "resource": "*"}]}If the Amazon S3 event notification still isn't received on the SNS topic, then check the Amazon SNS CloudWatch metric NumberOfMessagePublished. This metric shows whether Amazon S3 is publishing the events. If the metric doesn't populate, then there's an issue with the Amazon S3 to Amazon SNS configuration.If the NumberOfMessagePublished metric is populated, then check the NumberOfNotificationsDelivered and NumberOfNotificationsFailed metrics. These metrics show whether the messages are successfully delivered to subscribing endpoints from your Amazon SNS topic.Amazon SNS provides support to log the delivery status of notification messages sent to topics with Amazon SNS endpoints. This includes HTTP, Amazon Kinesis Data Firehose, AWS Lambda, Platform application endpoint, Amazon Simple Queue Service, and AWS SMS. Turn on Amazon SNS topic Delivery status logs to further troubleshoot the issue.Related informationAllow Amazon S3 event notifications to publish to a topicFollow"
https://repost.aws/knowledge-center/sns-not-receiving-s3-event-notifications
Why is my EC2 Windows instance down with an instance status check failure?
I want to troubleshoot why my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance is down due to an instance status check failure.
"How to troubleshoot your Amazon Elastic Compute Cloud (Amazon EC2) Windows instance going down due to an instance status check failure.Short descriptionAmazon Web Services (AWS) monitors the health of each EC2 instance with two status checks. An EC2 instance becomes unreachable if a status check fails.An instance status check failure indicates a problem with the instance, such as:Networking or startup configuration issuesExhausted memoryFile system issuesFailure to boot the operating systemFailure to mount volumes correctlyIncompatible driversCPU exhaustionIf both the instance and system status checks fail, then see Why is my EC2 Windows instance down with a system status check failure or status check 0/2?ResolutionDue to instance status checks likely being caused by issues within the guest operating system, focus your troubleshooting on reviewing the following:Console outputSystem logsOperating system or application error messagesFor more information, see Troubleshooting EC2 Windows instances.Tip: You can also use EC2Rescue for Windows Server to diagnose and troubleshoot issues.Related informationTroubleshoot an unreachable instanceTroubleshooting connecting to your Windows instanceWhy is my EC2 Linux instance unreachable and failing one or both of its status checks?How do I diagnose high CPU utilization on my EC2 Windows instance when my CPU is not being throttled?Follow"
https://repost.aws/knowledge-center/ec2-windows-instance-status-check-fail
How can I troubleshoot local storage issues in Aurora PostgreSQL-Compatible instances?
I am experiencing issues with local storage in my Amazon Aurora PostgreSQL-Compatible Edition DB instances.
"I am experiencing issues with local storage in my Amazon Aurora PostgreSQL-Compatible Edition DB instances.Short descriptionDB instances that are in Amazon Aurora clusters have two types of storage:Storage used for persistent data (shared cluster volume). For more information, see What the cluster volume contains.Local storage for each Aurora instance in the cluster, based on the instance class. This storage size is bound to the instance class and can be changed only by moving to a larger DB instance class. Aurora PostgreSQL-Compatible uses local storage for storing error logs and temporary files. For more information, see Temporary storage limits for Aurora PostgreSQL.ResolutionYou can monitor the local storage space that's associated with the Aurora DB instance or node by using the Amazon CloudWatch metric for FreeLocalStorage. This metric reports the amount of storage available to each DB instance for temporary tables and logs. For more information, see Monitoring Amazon Aurora metrics with Amazon CloudWatch.If your Aurora local storage is full, then use these troubleshooting steps depending on the error you receive.Local storage space is used by temporary tables or files"ERROR: could not write block XXXXXXXX of temporary file: No space left on device."This error occurs when temporary storage is exhausted on the DB instance. This can have a number of causes, including operations that:Alter large tablesAdd indexes on large tablesPerform large SELECT queries with complex JOINs, GROUP BY, or ORDER BY clauses.Use these methods to check temporary tables and temporary files size:1.    For temporary files, turn on the log_temp_files parameter on the Aurora PostgreSQL-Compatible DB instance. This parameter logs the use of temporary files that are larger than the number of specified kilobytes. After this parameter is turned on, a log entry is made for each temporary file when the file is deleted. A value of 0 logs all temporary file information. A positive value logs only the files that are larger than or equal to the specified number of kilobytes. The default value is -1, which turns off temporary file logging. Use this parameter to identify the temporary file details, and then relate these temporary files with the FreeLocalStorage metric.Note: Turning on the log_temp_files parameter can cause excessive logging on the Aurora PostgreSQL-Compatible DB instance. For this reason, it's a best practice to check the size of the Aurora PostgreSQL-Compatible log files before turning on log_temp_files. If log files are consuming the maximum space for the local storage, then reduce the value of rds.log_retention to reclaim space. The default value for rds.log_retention is three days.You can also review temporary files by using the delta of subsequent runs of this command:maxiops=> select datname, temp_files , pg_size_pretty(temp_bytes) as temp_file_size FROM pg_stat_database order by temp_bytes desc;Note: In the temp_files column, all temporary files are counted—regardless of when you created the temporary file (for example, by sorting or hashing). The columns temp_files and temp_bytes in view pg_stat_database are collecting statistics for the accumulated value. This value can be reset by using the pg_stat_reset() function or by restarting the DB instance. For more information, see the PostgreSQL documentation for Additional statistics functions.If you use Aurora PostgreSQL-Compatible 10 or later, you can monitor temp_bytes and temp_files by using Performance Insights. 
This also applies to Amazon Relational Database Service (Amazon RDS) for PostgreSQL. Performance Insights provide native counters for your DB engine's internal metrics, in addition to wait events. For more information, see Native counters for Amazon RDS for PostgreSQL.You can also increase maintenance_work_mem and work_mem to allocate more memory to the processes that are performing the operation. This uses more memory for the operation, which can use less temporary disk storage. For more information about these parameters, see the PostgreSQL documentation for maintenance_work_mem and work_mem. It's a best practice to set the values for maintenance_work_mem and work_mem at a query or session level to avoid running out of memory. For more information, see Amazon Aurora PostgreSQL reference.2.    For temporary tables, run a query like this:maxiops=> SELECTn.nspname as SchemaName,c.relname as RelationName,CASE c.relkindWHEN 'r' THEN 'table'WHEN 'v' THEN 'view'WHEN 'i' THEN 'index'WHEN 'S' THEN 'sequence'WHEN 's' THEN 'special'END as RelationType,pg_catalog.pg_get_userbyid(c.relowner) as RelationOwner,pg_size_pretty(pg_relation_size(n.nspname ||'.'|| c.relname)) as RelationSizeFROM pg_catalog.pg_class cLEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespaceWHERE c.relkind IN ('r','s')AND (n.nspname !~ '^pg_toast' and nspname like 'pg_temp%')ORDER BY pg_relation_size(n.nspname ||'.'|| c.relname) DESC;It's a best practice to closely monitor your application and see which transactions create temporary tables. By doing so, you can manage the usage of the available local storage capacity. You can also move to a higher instance class for your Aurora instance so that the instance has more available local storage.Local storage used by log filesExcessive logging can also cause your DB instance to run out of local storage. These are some examples of logging parameters that can consume the local storage space. The consumption could be due to either excessive logging or retaining the error log for a long time.rds.log_retention_periodauto_explain.log_min_durationlog_connectionslog_disconnectionslog_lock_waitslog_min_duration_statementlog_statementlog_statement_statsTo identify which parameter is causing excessive logging, analyze the PostgreSQL logs to find the largest logs. Then, identify which parameter is responsible for the majority of the entries in those logs. You can then modify the parameter that is causing the excessive logging.If you're repeatedly running a query that's failing with an error, then PostgreSQL logs the errors to the PostgreSQL error log by default. Review the errors logged, and then fix the failing query to prevent logs from using excessive storage. You can also reduce the default value for rds.log_retention (three days) to reclaim space used by the error logs.If excessive logging is required and you are throttling with available local storage because of log files, consider moving to a higher instance class. This means that your Aurora DB instance has more available local storage.Related informationBest practices with Amazon Aurora PostgreSQLFollow"
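To watch local storage from code instead of the CloudWatch console, you can pull the FreeLocalStorage metric directly. In the following sketch, the DB instance identifier is a placeholder.

# Retrieve the minimum FreeLocalStorage (in bytes) over the last hour, in 5-minute periods.
# The DB instance identifier is a placeholder.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeLocalStorage",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-aurora-postgres-instance"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Minimum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Minimum"] / (1024 ** 3), 2), "GiB free")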
https://repost.aws/knowledge-center/postgresql-aurora-storage-issue
How do I retrieve my Windows administrator password after launching an instance?
"I launched an Amazon Elastic Compute Cloud (Amazon EC2) instance, and now I need to retrieve my Windows administrator password."
"I launched an Amazon Elastic Compute Cloud (Amazon EC2) instance, and now I need to retrieve my Windows administrator password.ResolutionRetrieve your initial administrator password with the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Retrieve your initial administrator password using the Amazon EC2 consoleNote: This method requires an Amazon EC2 key pair.If you lost your key pair, then see How can I reset the administrator password on an EC2 Windows instance?If you need an Amazon EC2 key pair, then see Create a key pair using Amazon EC2.Follow these steps:1.    Open the Amazon EC2 console, and then choose Instances.2.    Select the check box for the instance, and then expand the Actions dropdown list. For the old console, choose Get Windows Password. For the new console, choose Security, and then choose Get Windows Password.Note: When you first launch a new instance, this option might not be available for a few minutes.3.    Choose Browse, select your key pair file, and then choose Open.-or-Paste the contents of your key pair into the text box.4.    Choose Decrypt Password.Retrieve your initial administrator password using the AWS CLINote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Use the get-password-data command. For example syntax, see Examples.Related informationGetting started with Amazon EC2 Windows instancesConnect to your Windows instanceFollow"
https://repost.aws/knowledge-center/retrieve-windows-admin-password
Why can't I drop a user or role in my RDS for PostgreSQL DB instance?
"When I try to drop a user or role in my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance, I get the error "role cannot be dropped because some objects depend on it"."
"When I try to drop a user or role in my Amazon Relational Database Service (Amazon RDS) for PostgreSQL instance, I get the error "role cannot be dropped because some objects depend on it".Short descriptionWhen a user or role in RDS for PostgreSQL creates an object, such as a table or schema, the user or role is the owner of the object created. If you try to drop a user or role that owns one or more objects in any database or has privileges on these objects, then you receive an error indicating that there are objects that depend on the user or role along with granted permissions, if there are any.To drop a user or role that has dependent objects, you must do the following:Reassign the ownership of these objects to another user.Revoke any permissions that were granted to the user or role.Note: If these objects are no longer needed, consider dropping these objects and then deleting the role. You can drop all objects that are owned by a role in a database using the DROP OWNED command. You can also revoke any privileges granted to the role on objects in that database or shared objects. After the DROP OWNED command runs successfully, you can drop the role.ResolutionIn the following example, three different database roles are used:test_user: This is the user or role that must be dropped.admin_user: This is the role that's used to drop the required user or role. This user is the highest privileged user in RDS with the rds_superuser role attached to it.another_user: This is the user or role that's assigned ownership of objects owned by test_user.Run the following command to see the role with which you logged in:pg_example=> SELECT current_user;The output looks similar to the following:current_user-------------- admin_user(1 row)When you try to drop a user or role with dependent objects, you get an error similar to the following:pg_example=> DROP ROLE test_user;ERROR: role "test_user" cannot be dropped because some objects depend on itDETAIL: privileges for database pg_exampleowner of table test_tableowner of schema test_schemaowner of sequence test_schema.test_seqprivileges for table test_t2In this example, the role being dropped is test_user. Note that the role that's currently logged in is admin_user, which is the master user of the database.From the error message, you get the following information:The role test_user has privileges granted on the database pg_example and table test_t2.The role test_user owns the table test_table, schema test_schema, and a sequence object test_seq in test_schema.Note: if you drop a user or role when you're connected to a different database, then you get an output similar to the following:pg_another_db=> DROP ROLE test_user;ERROR: role "test_user" cannot be dropped because some objects depend on itDETAIL: privileges for database pg_example4 objects in database pg_exampleTo see objects that are owned by a user or role, be sure to connect to the database where the owned objects are located.To drop the user or role, you must reassign the ownership of the owned objects to another user or role and revoke associated permissions. You can use the PostgreSQL REASSIGN OWNED command to reassign the ownership of these objects to another user. 
When running this command, you might get an error similar to the following:pg_example=> select current_user; current_user-------------- test_userpg_example=> REASSIGN OWNED BY test_user TO another_user;ERROR: permission denied to reassign objectsTo resolve this issue, you must grant the user or role to the user that's reassigning ownership. You can't use test_user to do so because test_user isn't the owner of another_user. Therefore, you might see an error similar to the following:pg_example=> select current_user; current_user-------------- test_userpg_example=> grant another_user to test_user;ERROR: must have admin option on role "another_user"You can do either of the following to grant the user or role to the user that's reassigning ownership:Sign in to your master user and run the GRANT command:pg_example=> select current_user; current_user-------------- admin_userpg_example=> GRANT another_user TO test_user;GRANT ROLESign in to the user that will reassign ownership and run the GRANT command:pg_example=> select current_user; current_user-------------- another_userpg_example=> GRANT another_user TO test_user;GRANT ROLEAfter choosing one of the preceding options, reassign the ownership of objects owned by test_user to another_user after logging in to test_user:pg_example=> select current_user; current_user-------------- test_userpg_example=> reassign owned by test_user to another_user;REASSIGN OWNEDIf you sign in to your master user and attempt to drop test_user that still has existing privileges, then you might error similar to the following:pg_example=> select current_user; current_user-------------- admin_userpg_example=> DROP ROLE test_user;ERROR: role "test_user" cannot be dropped because some objects depend on itDETAIL: privileges for database pg_exampleprivileges for table test_t2In this case, you get an error even though the REASSIGN command is successful. This is because, the privileges of test_user must be revoked. Run the REVOKE command to revoke all usage permissions from any object on which test_user has privileges. In this example, revoke the permissions on the database pg_example and table test_t2 for test_user.pg_example=> REVOKE ALL ON TABLE test_t2 FROM test_user;REVOKEpg_example=> REVOKE ALL ON DATABASE pg_example FROM test_user;REVOKEThen, drop the user test_user:pg_example=> DROP ROLE test_user;DROP ROLEAfter revoking the privileges, you can successfully drop the role.Related informationPostgreSQL documentation for DROP ROLEFollow"
https://repost.aws/knowledge-center/rds-postgresql-drop-user-role
How do I truncate the sys.aud$ table on my Amazon RDS DB instance that is running Oracle?
How do I truncate the sys.aud$ table on my Amazon Relational Database Service (Amazon RDS) DB instance that is running Oracle?
"How do I truncate the sys.aud$ table on my Amazon Relational Database Service (Amazon RDS) DB instance that is running Oracle?ResolutionTo truncate thesys.aud$ table, run the following command as the master user:SQL> exec rdsadmin.rdsadmin_master_util.truncate_sys_aud_table;If the procedure is successful, you receive output similar to the following:PL/SQL procedure successfully completed.SQL> select count(*) from sys.aud$; COUNT(*)---------- 0Note: Truncating the table requires that your RDS DB instance can run the TRUNCATE_SYS_AUD_TABLE procedure as a master user. Oracle versions 12.1.0.2.v2 and 11.2.0.4.v6, as well as subsequent versions, support this operation.If the preceding command is unsuccessful, contact AWS Support for assistance. To determine what kind of assistance AWS Support needs to provide, run the following commands and then note their output:1.    Run the following command to determine if the TRUNCATE_SYS_AUD_TABLE procedure is available on your RDS DB instance:SQL> desc rdsadmin.rdsadmin_master_utilIf your RDS DB instance has theTRUNCATE_SYS_AUD_TABLE procedure, then you receive output similar to the following:FUNCTION IS_DML_ENABLED RETURNS BOOLEANPROCEDURE TRUNCATE_SYS_AUD_TABLEPROCEDURE TRUNCATE_SYS_FGA_LOG_TABLE2.    Run the following command to determine if theRDS_MASTER_ROLE role is available on your RDS DB instance:SQL> select role from dba_roles where role='RDS_MASTER_ROLE';If the RDS_MASTER_ROLE role is available on your RDS DB instance, then you receive output similar to the following:ROLE--------------------------------------------RDS_MASTER_ROLE3.    Run the following command to verify that the master user has permissions to run the TRUNCATE_SYS_AUD_TABLE procedure:SQL> select granted_role, grantee, admin_option from dba_role_privs where granted_role='RDS_MASTER_ROLE';If the master user has permissions to run the TRUNCATE_SYS_AUD_TABLE procedure, then you receive output similar to the following:GRANTED_ROLE GRANTEE ADM-------------------- -------------------- ---RDS_MASTER_ROLE SYS YESRDS_MASTER_ROLE MASTER_USER NORelated informationOracle on Amazon RDSCommon DBA tasks for Oracle DB instancesFollow"
https://repost.aws/knowledge-center/truncate-audit-tables-oracle
What happens when I make a configuration change to my Amazon OpenSearch Service cluster?
I'm trying to minimize the downtime during a configuration change. What happens if I make a configuration change to my Amazon OpenSearch Service cluster?
"I'm trying to minimize the downtime during a configuration change. What happens if I make a configuration change to my Amazon OpenSearch Service cluster?ResolutionWhen you change your OpenSearch Service cluster configuration, a blue/green deployment can be triggered. During a blue/green deployment, a cluster state changes to "Processing" while a new OpenSearch Service domain is being created. When your new domain is created, the following occurs:The total number of nodes are doubled. Or, the total number of nodes is equal to the node count in the old and new domain.The number of nodes are doubled until the old domain nodes are terminated.If a shard allocation is finally in progress, the cluster state returns to "Active".Note: During blue/green deployment, you might observe some latency. To avoid any latency issues, it's a best practice to run blue/green deployment when the cluster is healthy and there is low network traffic.Configuration change durationYour configuration change can take longer depending on the cluster size, workload, shard size, and shard count. Use the cat recovery command to monitor the status of your shard relocation.To see which shards are still relocating, use the following command syntax:curl -X GET "cluster_endpoint/_cat/recovery?v=true&pretty" | awk '/peer/ {print $1" "$2" "$3" "$4" "$18}' | grep -v 100\.0\%To list the shard relocation by byte percentages, use the following command syntax:curl -X GET "https://<end_point>/_cat/recovery?v=true&pretty" | awk '/peer/ {print $1" "$2" "$3" "$4" "$18}' | tr -d "%" | sort -k 5 -nNote: To sort the data by byte percentage (which is in the fifth column), you must specify "5" for -k.If you observe minimal progress for the shard relocation, your cluster might be stuck.Reasons your blue/green deployment process is stuckYour blue/green deployment process might get stuck for the following reasons:An unhealthy cluster state from before the configuration change.Consistently high JVM memory pressure. Aim to keep your JVM memory pressure below 75% to avoid out of memory (OOM) issues.Consistently high CPU utilization. Aim to keep your CPU utilization below 80%.Too many shards on a cluster or incorrect shard sizing. It's a best practice to keep your shard count between 10 GiB and 50 GiB. For more information about indexing strategy, see Choosing the number of shards.Invalid configuration setup or too many configuration changes at the same time. Make sure to verify your configuration settings and wait to send a configuration change until the first configuration change completes.Insufficient disk space or capacity for the relocation process or requested instance type.Lack of available IPs on the requested subnet for a cluster inside a virtual private cloud (VPC).Using volume size for the instance type. Your volume size must be within the limit range.Using index settings like "index.routing.allocation.require._name" or "NODE_NAME" or "index.blocks.write": true". These settings indicate a write block. Make sure to remove these settings from your index settings before you proceed.For more information, see Why is my OpenSearch Service domain stuck in the "Processing" state?Related informationWhy is my Amazon OpenSearch Service domain upgrade taking so long?Follow"
https://repost.aws/knowledge-center/opensearch-configuration-change
How do I resolve the "Courier fetch: n of m shards failed" error in OpenSearch Dashboards on Amazon OpenSearch Service?
"When I try to load a dashboard in OpenSearch Dashboards on my Amazon OpenSearch Service domain, it returns a Courier fetch error. How do I resolve this?"
"When I try to load a dashboard in OpenSearch Dashboards on my Amazon OpenSearch Service domain, it returns a Courier fetch error. How do I resolve this?Short descriptionWhen you load a dashboard in OpenSearch Dashboards, a search request is sent to the OpenSearch Service domain. The search request is routed to a cluster node that acts as thecoordinating node for the request. The"Courier fetch: n of m shards failed" error occurs when the coordinating node fails to complete thefetch phase of the search request. There are two types of issues that commonly cause this error:Persistent issues: Mapping conflicts or unassigned shards. If you have several indices in your index pattern using the same name but different mapping types, you might get a Courier fetch error. If your cluster is in red cluster status, it means that at least one shard is unassigned. Because OpenSearch Service can't fetch documents from unassigned shards, a cluster in red status throws a Courier fetch error. If the value of "n" in the Courier fetch error message is the same each time you receive the error, then it is likely a persistent issue. Check the application error logs for troubleshooting suggestions.Note: Persistent issues can't be resolved by retrying or provisioning more cluster resources.Transient issues: Transient issues include rejections of thread pools, search timeouts, and tripped field data circuit breakers. These issues occur when you don't have enough compute resources on the cluster. A transient issue is likely the cause when you receive the error message intermittently with a different value of "n" each time. You can also monitor Amazon CloudWatch metrics such as CPUUtilization, JVMMemoryPressure, and ThreadpoolSearchRejected to determine if a transient issue is causing the Courier fetch error.ResolutionEnable application error logs for the domain. The logs can help you identify the root cause and solution for both transient and persistent issues. For more information, see Viewing OpenSearch Service error logs.Persistent issuesThe following example shows a log entry for a Courier fetch error caused by a persistent issue:[2019-07-01T12:54:02,791][DEBUG][o.e.a.s.TransportSearchAction] [ip-xx-xx-xx-xxx] [1909731] Failed to execute fetch phaseorg.elasticsearch.transport.RemoteTransportException: [ip-xx-xx-xx-xx][xx.xx.xx.xx:9300][indices:data/read/search[phase/fetch/id]]Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [request_departure_date] in order to load fielddata in memory by uninverting the inverted index.Note that this can however use significant memory. Alternatively use a keyword field instead.In this example, the issue is caused by the request_departure_date field. The log entry shows that you can resolve this issue by setting fielddata=true in the index settings or by using a keyword field.Transient issuesMost transient issues can be resolved by either provisioning more compute resources or reducing the resource utilization for your queries.Provisioning more compute resourcesScale up your domain by switching to a larger instance type, or scale out by adding more nodes to the cluster. For more information, see Creating and managing OpenSearch Service domains.Confirm that you're using an instance type that is appropriate for your use case. For more information, see Choosing instance types and testing.Reducing the resource utilization for your queriesConfirm that you're following best practices for shard and cluster architecture. 
A poorly designed cluster can't use all available resources. Some nodes might get overloaded while other nodes sit idle. OpenSearch Service can't fetch documents from overloaded nodes.You can also reduce the scope of your query. For example, if you query on a time frame, reduce the date range or filter the results by configuring the index pattern in OpenSearch Dashboards.Avoid running select * queries on large indices. Instead, use filters to query a part of the index and search as few fields as possible. For more information, see Tune for search speed and Query and filter context on the Elasticsearch website.Reindex and reduce the number of shards. The more shards you have in your cluster, the more likely you are to get a Courier fetch error. Because each shard has its own resource allocation and overheads, a large number of shards places excessive strain on your cluster. For more information, see Why is my OpenSearch Service domain stuck in the "Processing" state?The following example shows a log entry for a Courier fetch error caused by a transient issue:Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@26fdeb6f on QueueResizingEsThreadPoolExecutor[name = __PATH__ queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 2.9ms, adjustment amount = 50,org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@1968ac53[Running, pool size = 2, active threads = 2, queued tasks = 1015, completed tasks = 96587627]]In this example, the issue is caused by search threadpool queue rejections. To resolve this issue, scale up your domain by choosing a larger instance type. For more information, see Thread pools on the Elasticsearch website.Related informationBest practices for Amazon OpenSearch ServiceTroubleshooting Amazon OpenSearch ServiceFollow"
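To narrow down the cause, you can also query the domain directly. The following commands are an illustrative sketch only: the domain endpoint, index name, and field name are placeholder values to replace with your own, and domains with fine-grained access control require credentials. The first command checks for search thread pool rejections, and the second turns on fielddata for a text field that appears in a mapping-related Courier fetch error:
curl -s "https://your-domain-endpoint/_cat/thread_pool/search?v&h=node_name,active,queue,rejected"
curl -s -X PUT "https://your-domain-endpoint/your-index/_mapping" -H 'Content-Type: application/json' -d '{"properties": {"request_departure_date": {"type": "text", "fielddata": true}}}'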
https://repost.aws/knowledge-center/opensearch-dashboards-courier-fetch
How can I disable public access for an AWS DMS replication DB instance?
How can I disable public access for an AWS Database Migration Service (AWS DMS) replication instance?
"How can I disable public access for an AWS Database Migration Service (AWS DMS) replication instance?Short descriptionAn AWS DMS replication instance can have one public IP address and one private IP address, just like an Amazon Elastic Compute Cloud (Amazon EC2) instance that has a public IP address.To use a public IP address, choose the Publicly accessible option when you create your replication instance. Or specify the --publicly-accessible option when you create the replication instance using the AWS Command Line Interface (AWS CLI).If you uncheck (disable) the box for Publicly accessible, then the replication instance has only a private IP address. As a result, the replication instance can communicate with a host that is in the same Amazon Virtual Private Cloud (Amazon VPC) and that can communicate with the private IP address. Or the replication instance can communicate with a host that is connected privately, for example, by VPN, VPC peering, or AWS Direct Connect.After you create the replication instance, you can't modify the Publicly accessible option.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.To disable public access to your replication instance, delete the replication instance and then recreate it. Before you can delete a replication instance, you must delete all the tasks that use the replication instance.Instead of recreating the replication instance, you can change the subnets that are in the subnet group that is associated with the replication instance to private subnets. A private subnet is a subnet that isn't routed to an internet gateway. Instances in a private subnet can't communicate with a public IP address, even if they have a public IP address. For more information, see Setting up a network for a replication instance.Related informationWorking with an AWS DMS replication instanceFollow"
https://repost.aws/knowledge-center/dms-disable-public-access
How can I automatically discover the subnets used by my Application Load Balancer in Amazon EKS?
I want to automatically discover the subnets used by my Application Load Balancer (ALB) in Amazon Elastic Kubernetes Service (Amazon EKS).
"I want to automatically discover the subnets used by my Application Load Balancer (ALB) in Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionYou can tag your AWS subnets to allow the AWS Load Balancer controller to auto discover subnets used for Application Load Balancers.Resolution1.    Deploy the AWS Load Balancer Controller for your Amazon EKS cluster.2.    Verify that the AWS Load Balancer Controller is installed:kubectl get deployment -n kube-system aws-load-balancer-controllerNote: If the Deployment is deployed in a different namespace, then replace -n kube-system with the appropriate namespace.3.    Create a Kubernetes Ingress resource on your cluster with the following annotation:annotations: kubernetes.io/ingress.class: albNote: The AWS Load Balancer Controller creates load balancers. The Ingress resource configures the Application Load Balancer to route HTTP(S) traffic to different pods within your cluster.4.    Add either an internal or internet-facing annotation to specify where you want the Ingress to create your load balancer:alb.ingress.kubernetes.io/scheme: internal-or-alb.ingress.kubernetes.io/scheme: internet-facingNote: Choose internal to create an internal load balancer, or internet-facing to create a public load balancer.5.    Use tags to allow the Application Load Balancer Ingress Controller to create a load balancer using auto-discovery. For example:kubernetes.io/role/internal-elb Set to 1 or empty tag value for internal load balancerskubernetes.io/role/elb Set to 1 or empty tag value for internet-facing load balancersNote: You can use tags for auto-discovery instead of the manual alb.ingress.kubernetes.io/subnets annotation.Example of a subnet with the correct tags for a cluster with an internal load balancer:kubernetes.io/role/internal-elb 1Example of a subnet with the correct tags for a cluster with a public load balancer:kubernetes.io/role/elb 1Note: For cluster versions 1.18 and earlier, Amazon EKS adds the following tag to all subnets passed in during cluster creation. The tag isn't added to version 1.19 clusters. If you're using the tag and you update to cluster version 1.19 from an earlier version, then you don't have to add the tag again. The tag stays on your subnet. You can use the following tag to control where an Application Load Balancer is provisioned. Use this tag in addition to the subnet tags required for automatically provisioning an Application Load Balancer.kubernetes.io/cluster/$CLUSTER_NAME sharedImportant: The AWS Load Balancer Controller workflow checks subnet tags for the value of " " (empty string) and 1. For private subnets, set the value of the kubernetes.io/role/internal-elb tag to an empty string or 1. For public subnets, set the value of the kubernetes.io/role/elb tag to an empty string or 1. These tags allow your subnets to be auto-discovered from the Amazon EKS VPC subnets of your Application Load Balancer.6.    Validate that your Amazon EKS VPC subnets have the correct tags:aws ec2 describe-subnets --subnet-ids your-subnet-xxxxxxxxxxxxxxxxx7.    Deploy a sample application to verify that the AWS Load Balancer Controller creates an Application Load Balancer as a result of the Ingress object:kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/examples/2048/2048_full.yaml8.    
Verify that the Ingress resource gets created and has an associated Application Load Balancer:kubectl get ingress/2048-ingress -n 2048-gameEither an internal or internet-facing load balancer is created, depending on the annotations (alb.ingress.kubernetes.io/scheme:) that you defined in the Ingress object and subnets.Follow"
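As an example of the tagging step, the following AWS CLI commands add the auto-discovery tags; the subnet IDs are placeholders for your own public and private subnet IDs:
aws ec2 create-tags --resources subnet-aaaa1111 subnet-bbbb2222 --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources subnet-cccc3333 subnet-dddd4444 --tags Key=kubernetes.io/role/internal-elb,Value=1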
https://repost.aws/knowledge-center/eks-subnet-auto-discovery-alb
How do I raise the priority of agent to agent or agent to queue transferred calls in Amazon Connect?
I want to raise the priority of agent to agent or agent to queue transferred calls in my Amazon Connect contact center.
"I want to raise the priority of agent to agent or agent to queue transferred calls in my Amazon Connect contact center.ResolutionImportant: To create and edit contact flows, you must log in to your Amazon Connect instance as a user with sufficient permissions in their security profile.Create a Transfer to queue contact flowLog in to your Amazon Connect instance.Important: Replace the alias with your instance's alias.In the left navigation bar, hover over Routing, and then choose Contact flows.On the Contact flows page, choose the arrow icon next to Create contact flow, and then choose Create transfer to queue flow.In the contact flow designer, for Enter a name, enter a name for the contact flow. For example: Transfer to [your_queue_name] flowChoose Save.For more information, see Create a new contact flow.Add a Play prompt block for your Transfer to queue contact flowIn the contact flow designer, expand Interact.Drag and drop a Play prompt block onto the canvas.Choose the Play prompt block title. The block's settings menu opens.For Prompt, configure the audio prompt that you want to play.Choose Save.For more information, see Contact block: Play prompt.Add a Change routing priority / age block for your Transfer to queue contact flowIn the contact flow designer, expand Set.Drag and drop a Change routing priority / age block onto the canvas.Choose the Change routing priority / age block title. The block's settings menu opens.For Set priority or routing age, configure the priority of contact by either priority or time.Note: Contacts are routed by priority. 1 is the highest priority and 5 is the lowest. Contacts are further ordered by time or age in the queue.Choose Save.For more information, see Contact block: Change routing priority / age.Add a Transfer to queue block for your Transfer to queue contact flowIn the contact flow designer, expand Transfer.Drag and drop a Transfer to queue block onto the canvas.Choose the Transfer to queue block title. The block's settings menu opens.Note: You don't need to set the Transfer to queue block, because the destination of this flow is set later with quick connects.Choose Save.For more information, see Contact block: Transfer to queue.Add a Disconnect / hang up block for your Transfer to queue contact flowIn the contact flow designer, expand Terminate / Transfer.Drag and drop a Disconnect contact block onto the canvas.For more information, see Contact block: Disconnect / hang up.Create a Transfer to agent contact flowLog in to your Amazon Connect instance.Important: Replace the alias with your instance's alias.In the left navigation bar, hover over Routing, and then choose Contact flows.On the Contact flows page, choose the arrow icon next to Create contact flow, and then choose Create transfer to agent flow.In the contact flow designer, for Enter a name, enter a name for the contact flow. For example: Transfer to [your_agent_name] flowChoose Save.Add a Play prompt block for your Transfer to agent contact flowIn the contact flow designer, expand Interact.Drag and drop a Play prompt block onto the canvas.Choose the Play prompt block title. The block's settings menu opens.For Prompt, configure the audio prompt that you want to play.Choose Save.Add a Change routing priority / age block for your Transfer to agent contact flowIn the contact flow designer, expand Set.Drag and drop a Change routing priority / age block onto the canvas.Choose the Change routing priority / age block title. 
The block's settings menu opens.For Set priority or routing age, configure the priority of contact by either priority or time.Note: Contacts are routed by priority. 1 is the highest priority and 5 is the lowest. Contacts are further ordered by time or age in the queue.Choose Save.Add a Transfer to agent (beta) block for your Transfer to agent contact flowIn the contact flow designer, expand Terminate / Transfer.Drag and drop a Transfer to agent (beta) block onto the canvas.Choose Save.For more information, see Contact block: Transfer to agent (beta).Note: The Transfer to agent block is a beta feature and works only for voice interactions. If you want an agent to agent transfer flow for voice, see Set up agent-to-agent transfers.Create and enable a quick connectCreate a quick connect. During creation, do the following:For Destination, choose any queue for a queue quick connect and agent for agent quick connect, because the contact doesn't actually enter the queue or agent.For Contact flow, choose the Transfer to queue or agent contact flow that you created depending on your type of quick connect.Add the quick connect to the queues where you want the agents to see it.Follow"
https://repost.aws/knowledge-center/connect-priority-agent-queue-calls
How do I sign up for an AWS Support plan?
I'd like to speak with AWS support engineers about my resources. How do I sign up for an AWS Support plan?
"I'd like to speak with AWS support engineers about my resources. How do I sign up for an AWS Support plan?ResolutionTo sign up for an AWS Support plan, you must sign in with:AWS account root user credentials-or-AWS Identity and Access Management (IAM) user credentials with access permissions for AWS Support plans. For more information, see Manage access for AWS Support Plans.To sign up for an AWS Support plan, do the following:Open AWS Support Plans.(Optional) On the Support Plans page, compare the Support plans. For pricing information, see AWS Support Plan pricing.Under AWS Support pricing example, choose See examples, and then choose one of the Support plan options to see the estimated cost.Choose Review downgrade or Review upgrade for the plan that you want. Important: If you have an Enterprise On-Ramp or Enterprise Support plan, on the Change plan confirmation dialog box, choose Contact us, fill in the form, and then choose Submit.For Change plan confirmation, expand the support items to see the features included with the plan.Under Pricing, you can view the projected one-time charges for the new Support plan.Choose Accept and agree.If the Support plan that you choose has a monthly cost associated with it, then your payment method is automatically charged the prorated monthly cost according to the days remaining in the current month. When this charge is processed, your AWS Support subscription is activated within a few hours.To learn about the benefits and costs of the available AWS Support plans, visit AWS Support.Related informationHow do I change my AWS Support plan?Follow"
https://repost.aws/knowledge-center/sign-up-support
How do I capture client IP addresses in the web server logs behind an ELB?
"I'm using Elastic Load Balancing (ELB) for my web server, and I can see my load balancer's IP address in the web server access logs. How do I capture client IP addresses instead?"
"I'm using Elastic Load Balancing (ELB) for my web server, and I can see my load balancer's IP address in the web server access logs. How do I capture client IP addresses instead?Short descriptionYour web server access logs capture the IP address of your load balancer because the load balancer establishes the connection to your instances. To capture the IP addresses of clients in your web server access logs, configure the following:For Application Load Balancers and Classic Load Balancers with HTTP/HTTPS listeners, the X-Forwarded-For HTTP header captures client IP addresses. You can then configure your web server access logs to record these IP addresses.For Classic Load Balancers with TCP/SSL listeners, activate Proxy Protocol support on the Classic Load Balancer and the target application. Make sure to configure Proxy Protocol support on both the load balancer and the application.For Network Load Balancers, register your targets by instance ID to capture client IP addresses without additional web server configuration. For instructions, see Target group attributes instead of the following resolutions.For Network Load Balancers when you can register only IP addresses as targets, activate proxy protocol version 2 on the load balancer. For instructions, see Enable proxy protocol instead of the following resolutions.ResolutionApplication Load Balancers and Classic Load Balancers with HTTP/HTTPS listeners (Apache)1.    Open your Apache configuration file using a text editor. The location varies by configuration, such as /etc/httpd/conf/httpd.conf for Amazon Linux and RHEL**,** or /etc/apache2/apache2.conf for Ubuntu.2.    In the LogFormat section, add %{X-Forwarded-For}i, similar to the following:... LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common ...3.    Save your changes.4.    Reload the Apache service.For Sysvinit, Debian-based systems (such as Ubuntu) and SUSE (such as SLES11), run this command:# /etc/init.d/apache2 reloadFor Sysvinit, RPM-based systems (such as RHEL 6 and Amazon Linux), except SUSE, run this command:# /etc/init.d/httpd reloadFor Systemd, Debian-based systems (such as Ubuntu) and SUSE (such as SLES12), run this command:# systemctl reload apache2For Systemd, RPM-based systems (such as RHEL 7 and Amazon Linux 2), except SUSE, run this command:# systemctl reload httpd5.    Open your Apache web server access logs. The location varies by configuration.6.    Verify that client IP addresses are now recorded under the X-Forwarded-For header.Application Load Balancers and Classic Load Balancers with HTTP/HTTPS Listeners (NGINX)1.    Open your NGINX configuration file using a text editor. The location is typically /etc/nginx/nginx.conf.2.    In the LogFormat section, add $http_x_forwarded_for, similar to the following:http { ... log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; ...}3.    Save your changes.4.    Reload the NGINX service.For example, on Amazon Linux 2 or RHEL, run this command:systemctl reload nginxNote: The command to reload the NGINX service is different on other systems. The commands to reload NGINX are similar to the commands to reload the Apache service in the previous section.5.    Open your NGINX web server access logs. The location varies by configuration.6.    
Verify that client IP addresses are now recorded under the X-Forwarded-For header.Classic Load Balancers with TCP/SSL Listeners (Apache)1.    Open your Apache configuration file using a text editor. The location varies by configuration, such as /etc/httpd/conf/httpd.conf for Amazon Linux and RHEL, or /etc/apache2/apache2.conf for Ubuntu.2.    Make sure that your Apache configuration loads the module mod_remoteip (available for Apache version 2.4.31 and newer). This module includes the RemoteIPProxyProtocol directive. In your configuration file, check for a line that's similar to the following:Amazon Linux or RHEL:LoadModule remoteip_module modules/mod_remoteip.soUbuntu:LoadModule remoteip_module /usr/lib/apache2/modules/mod_remoteip.so3.    Confirm that the mod_remoteip module loads:$ sudo apachectl -t -D DUMP_MODULES | grep -i remoteip4.    Review the output and verify that the output contains a line that's similar to:remoteip_module (shared)Important: If the output doesn't contain this line, then the module isn't included or loaded in your configuration. Make sure to activate the module before you proceed.5.    Add the following line to your Apache configuration file to activate Proxy Protocol support:RemoteIPProxyProtocol On6.    Edit the LogFormat section of the configuration file to capture the remote IP address (%a) and the remote port (%{remote}p), similar to the following:LogFormat "%h %p %a %{remote}p %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined7.    Save your changes.8.    Reload the Apache service.For Sysvinit, Debian-based systems (such as Ubuntu), and SUSE (such as SLES11), run this command:# /etc/init.d/apache2 reloadFor Sysvinit, RPM-based systems (such as RHEL 6 and Amazon Linux), except SUSE, run this command:# /etc/init.d/httpd reloadFor Systemd, Debian-based systems (such as Ubuntu) and SUSE (such as SLES12), run this command:# systemctl reload apache2For Systemd, RPM-based systems (such as RHEL 7 and Amazon Linux 2), except SUSE, run this command:# systemctl reload httpd9.    Open the Apache web server access logs. The location varies by configuration.10.    Verify that client IP addresses are now recorded under the Proxy Protocol header.11.    Activate support for Proxy Protocol in your target application.Classic Load Balancers with TCP/SSL Listeners (NGINX)1.    Open the NGINX configuration file using a text editor. The location is typically /etc/nginx/nginx.conf.2.    Change the listen line of the server section to add proxy_protocol. Make sure to change the log_format line of the http section to set the proxy_protocol_addr:http { ... log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$proxy_protocol_addr"'; access_log /var/log/nginx/access.log main; server { ... listen 80 default_server proxy_protocol; ... } ...}3.    Save your changes.4.    Reload the NGINX service.For example, on Amazon Linux 2 or RHEL, run this command:systemctl reload nginxNote: The command to reload the NGINX service is different on other systems. The commands to reload NGINX are similar to the commands to reload the Apache service in the previous section.5.    
Open the NGINX web server access logs. The location varies by configuration.6.    Verify that client IP addresses are now recorded under the Proxy Protocol header.7.    Activate support for Proxy Protocol in your target application.Follow"
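To confirm the configuration, you can send a test request through the load balancer and then check the newest access log entry for the client IP address. This is a sketch only; the load balancer DNS name and the log file path are placeholders that depend on your setup:
curl -s http://your-load-balancer-dns-name/ > /dev/null
sudo tail -n 1 /var/log/httpd/access_log    # or, for NGINX: sudo tail -n 1 /var/log/nginx/access.log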
https://repost.aws/knowledge-center/elb-capture-client-ip-addresses
Why can't I delete my snapshot in Amazon Redshift?
"I'm trying to delete a snapshot of my Amazon Redshift cluster. However, I receive an error message indicating that my snapshot is accessible by another AWS account. How do I resolve this?"
"I'm trying to delete a snapshot of my Amazon Redshift cluster. However, I receive an error message indicating that my snapshot is accessible by another AWS account. How do I resolve this?Short descriptionIf you're trying to delete a snapshot that shares access with another AWS account, you might encounter the following error message:"Cannot delete the snapshot- xxx-xxx-xxx because other accounts still have access to it."To resolve this error message, remove the shared access from the AWS account that created the cluster snapshot in Amazon Redshift. Then, delete your cluster snapshot.ResolutionTo delete a shared cluster snapshot using the Amazon Redshift console, perform the following steps:1.    Sign in to the AWS Management Console with the AWS account that created the cluster snapshot.2.    Open the Amazon Redshift console.3.    From Clusters, choose the snapshot that you want to delete.4.    Choose Actions.5.    Choose Manage Access to view the access settings for your cluster.6.    Choose Remove Account to delete the shared access of your cluster snapshot.7.    Delete your cluster snapshot.To delete a shared cluster snapshot using the AWS Command Line Interface (AWS CLI), perform the following steps:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.1.    Revoke the shared snapshot access using the revoke-snapshot-access command:aws redshift revoke-snapshot-access --snapshot-id my-snapshot-id --account-with-restore-access <AWS-account-id-with-access>2.    Delete your Amazon Redshift cluster snapshot using the delete-cluster-snapshot command:aws redshift delete-cluster-snapshot --snapshot-identifier my-snapshot-idFollow"
https://repost.aws/knowledge-center/redshift-delete-snapshot
How do I find out if my Amazon EC2 Reserved Instances are being fully used?
"I've purchased some Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances for my account, and I'd like to be sure that I'm getting the maximum benefit from them. How do I find out if my Reserved Instances (RIs) are being fully utilized?"
"I've purchased some Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances for my account, and I'd like to be sure that I'm getting the maximum benefit from them. How do I find out if my Reserved Instances (RIs) are being fully utilized?ResolutionTo see the billing hours that are covered by your Amazon EC2 RIs, do the following:Open the Billing and Cost Management console.Choose Bills in the navigation pane.You can view the charges for different services in the Details section.Expand the Elastic Compute Cloud section and the AWS Region to view the billing information about your RIs.You can view the Reserved Instance types and billing hours for your RIs.You can use Cost Explorer reports or Amazon EC2 usage reports to understand more about the utilization of your Reserved Instances. You can use the RI Utilization and RI Coverage reports in Cost Explorer to:Track the number of RI hours used against the number of RI hours purchasedTrack the number of hours covered by RIs against the On Demand hours in the tableTrack the number of RI hours that you have reserved but did not useFor information on how to use Cost Explorer to view your RI utilization and coverage, see How can I use Cost Explorer to analyze my spending and usage?Related informationCreating Cost and Usage ReportsUnderstanding your reservations with Cost ExplorerHow Reserved Instances are appliedFollow"
https://repost.aws/knowledge-center/ec2-reserved-instances-being-used
How do I troubleshoot lag in my RDS for SQL Server read replica?
I have an Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance with a read replica. I'm seeing one of the following issues with my DB instance:There is a sudden increase in replica lag.Modification of the instance started to cause replica lag.The database on the read replica instance isn't accessible.How can I troubleshoot these issues?
"I have an Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance with read replica. I'm seeing one of the following in my DB instance:There is a sudden increase in replica lag.Modification of the instance started to cause replica lag.The database on the read replica instance isn't accessible.How can I troubleshoot these issues?Short descriptionAmazon RDS for SQL Server enterprise edition supports creation of a read replica within the same Region. Data replication is asynchronous and uses Always-On technology to replicate data from a master to a replica instance. RDS for SQL Server doesn't intervene to mitigate high replica lag between a source DB instance and its read replicas.Resolution1.    Check resource utilization on the master and on the replica instance using Amazon CloudWatch. Use the Enhanced Monitoring and Performance Insights features to check resource usage at a granular level.Important considerations for metrics on the master and replica instances:Make sure that CPU utilization isn't throttled. If you're running a burstable instance class, make sure that CPU credits are available or that you're running in Unlimited mode.Make sure that there is sufficient FreeableMemory. And, make sure that ReadIOPS and WriteIOPS are hitting provisioned limits. Validate that there is sufficient burst balance available if you're using a GP2 volume. For more information, see How do I troubleshoot low freeable memory in my RDS for SQL Server instance?Make sure that ReadThroughput and WriteThroughput aren't reaching instance class limits.2.    It's a best practice to create the master and replica instances with the same instance class, storage type, and number of IOPS. This avoids replica lag due to lack of resources in the replica instance. Additionally, depending on the workload, the read replica can be scaled up or scaled down if usage is minimal compared to the master instance.3.    Identify the timeframe when replica lag started to increase and then do the following:Check the WriteIOPS, WriteThroughput, NetworkReceiveThroughput and NetworkTrasmitThroughput metrics on the master instance based on start time of replica lag. Determine if the lag is due to write activity. Check the same metrics in the same time period on the read replica.Check if there are long-running transactions on the master instance. The following is an example query to the check status of active transactions:SELECT * FROM sys.sysprocesses WHERE open_tran = 1;4.    On the replica instance, check if there are any significant lock waits or deadlocks. Deadlocks occur between Select and DDL/DML transactions and cause delays in applying transaction logs from the master instance.The following is an example query to check for blocking:SELECT * FROM sys.sysprocesses WHERE blocked > 0;5.    Query to check for replica lag and maximum replica lag.Replica lagSELECT AR.replica_server_name , DB_NAME (ARS.database_id) 'database_name' , AR.availability_mode_desc , ARS.synchronization_health_desc , ARS.last_hardened_lsn , ARS.last_redone_lsn , ARS.secondary_lag_secondsFROM sys.dm_hadr_database_replica_states ARSINNER JOIN sys.availability_replicas AR ON ARS.replica_id = AR.replica_id--WHERE DB_NAME(ARS.database_id) = 'database_name'ORDER BY AR.replica_server_name;Verify that the 'last_hardened_lsn' value is progressing on the read replica.Maximum replica lagFor SQL Server, the ReplicaLag metric is the maximum lag of databases that have fallen behind, in seconds. 
For example, if you have two databases that lag 5 seconds and 10 seconds, respectively, then ReplicaLag is 10 seconds. The ReplicaLag metric returns the value of the following query. Run the query on the master instance.select max(secondary_lag_seconds) max_lag from sys.dm_hadr_database_replica_states;6.    After you initiate read replica creation, a snapshot is taken from the master instance, and then restored to create a read replica instance. Transaction logs are replayed to synchronize the data with the master instance. However, after you create a new instance, that instance experiences lazy loading, which causes replica lag. This is expected behavior. To minimize the effect of lazy loading, use IO1 type storage during read replica creation and then convert it back to GP2, if required.7.    Run transactions in batches on the master instance. This avoids running long transactions and keeps the transaction log file size minimal. Don't restart the replica instance during high replica lag unless required, because doing this further delays the replay of transaction logs.8.    Modification of the instance class on the master or replica instance might cause temporary replica lag. This is expected behavior because logs are still being processed from the master instance.Changing the storage type or storage size has a longer impact on replica lag until the storage optimization is completed. It isn't possible to determine what percentage of the storage optimization is complete on RDS instances.9.    If the read replica reaches the storage full state, then the transaction logs from the master instance aren't processed and replica lag increases.If you suspect that the storage is consumed by TempDB or temporary tables, then restart the replica instance to temporarily release space.10.    If you're experiencing no progress in replica lag status, check the status of user databases on the replica instance. To replay logs, the database status must be Online.Be aware of the following:Newly created databases aren't included in the lag calculation until they're accessible on the read replica.ReplicaLag returns -1 if RDS can't determine the lag, such as during replica setup, or when the read replica is in the error state.Related informationWorking with read replicas for Microsoft SQL Server in Amazon RDSFollow"
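To see how the ReplicaLag metric trends over time, you can also pull it from CloudWatch. The following AWS CLI command is a sketch; the DB instance identifier and time range are placeholder values:
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name ReplicaLag \
  --dimensions Name=DBInstanceIdentifier,Value=my-read-replica \
  --statistics Maximum --period 300 \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z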
https://repost.aws/knowledge-center/rds-sql-server-read-replica-lag
How can I stop Route 53 health check requests that are sent to my application?
How can I stop Amazon Route 53 health check requests that are sent to my application?
"How can I stop Amazon Route 53 health check requests that are sent to my application?Short descriptionYou can configure Route 53 health checks against any public resource. If your application is receiving health check requests from Route 53 when you haven't configured health checks on your application, the cause might be:A health check was mistakenly configured against your application by another customer.A health check was configured from your account for testing purposes but wasn't deleted when testing was complete.A health check was configured against another customer's public AWS resources. However, the IP addresses of those resources were reassigned to your AWS resources. In this scenario, the health check was configured on the reassigned IP addresses. If the health check was based on domain names, the requests were sent due to DNS caching.The Elastic Load Balancing service updated its public IP addresses due to scaling, and the IP addresses were reassigned to your load balancer.ResolutionTo stop unwanted health checks requests from Route 53:Find the ID of the unwanted health check by reviewing your application logs. For more information, see How can I identify and resolve unwanted health checks from Route 53?Contact AWS. If you have an AWS Support plan, create a support case. If you don't have an AWS Support plan, complete the Stop unwanted Amazon Route 53 health checks form. In both scenarios, be sure to include the health check ID that you found in step 1.(Optional) Block the health check IP address ranges in your firewall. To find the IP address ranges for each AWS Region used by the Route 53 health check service, see the IP ranges JSON file. In the JSON file, search for "ROUTE53_HEALTHCHECKS". For more information, see Configuring router and firewall rules for Amazon Route 53 health checks.Follow"
https://repost.aws/knowledge-center/route-53-stop-health-checks
How do I install WordPress in a Lightsail instance instead of using the WordPress blueprint provided by Bitnami?
I want to install the WordPress application in my Amazon Lightsail instance instead of using the Lightsail WordPress blueprint provided by Bitnami.
"I want to install the WordPress application in my Amazon Lightsail instance instead of using the Lightsail WordPress blueprint provided by Bitnami.Short descriptionAmazon Lightsail provides WordPress blueprints that you can use to launch and start using the WordPress application. This WordPress application is packaged by Bitnami. Instead of using this Bitnami stack, you can install WordPress manually in your Lightsail OS instances, such as Amazon Linux 2, Ubuntu, CentOS, and so on. The following resolution covers the steps for installing WordPress in the major Linux distributions available in Lightsail.Before you begin, be aware of the following:WordPress recommends using either Apache or NGINX as a preferred hosting service. The following resolution installs Apache.WordPress has minimum requirements for the PHP and MariaDB versions that are used for their latest packages. A minimum of PHP7.3 and MariaDB 10.2 are suggested. It's a best practice to use newer versions of these packages and to use the latest Linux distributions available in Amazon Lightsail.For more information, see Server Environment on WordPress.org.The latest package and WordPress's minimum requirements are subject to change. The following resolution uses the configurations supported and recommended by WordPress as of October 2021.The following resolution provides the basic installation steps. You can personalize WordPress by adding plugins, modifying the OS level firewall, and so on.ResolutionFor instructions on installing WordPress in Amazon Linux 2, see Host a WordPress blog on Amazon Linux 2.Install a LAMP stackFor installing LAMP (Linux, Apache, MariaDB, and PHP) in your Lightsail instance, see How do I install a LAMP stack manually on my Lightsail instance?Create the database and a userWordPress is a database-oriented website. You must create a database and a user before the installing the WordPress application.1.    Run the following command to enter the MySQL shell as root:sudo mysql -u root -ppassword: <insert-root-password>Note: The password doesn't appear as you enter it so that it isn't visible to other users.2.    Create a database and user with a password, and then add privileges to the new database:mysql> CREATE DATABASE databasename;mysql> GRANT ALL PRIVILEGES ON databasename.* TO 'wordpress_user'@'localhost' IDENTIFIED BY 'PASSWORD';mysql> FLUSH PRIVILEGES;mysql> exit;Note: Replace databasename with the name of the database that you want to create. Replace wordpress_user with the name of the user for WordPress. Replace PASSWORD with the desired password.Install and configure the WordPress packageTo download the latest WordPress package from the official website to the /tmp directory and extract the package to access the configuration files, do the following:1.    Download the latest WordPress package:cd /tmpwget https://wordpress.org/latest.tar.gz2. Run the following command to extract the package:sudo tar -xzvf latest.tar.gz3.    Move the WordPress files to the /var/www/html directory so that they are accessible through Apache:sudo cp -pr /tmp/wordpress/* /var/www/html/4.    Create the WordPress configuration file wp-config.php by renaming the file wp-config-sample.php:cd /var/www/htmlsudo mv wp-config-sample.php wp-config.php5.    Run the following command to open the WordPress configuration file in the vi editor:sudo vi wp-config.php6.    Add the DB credentials. 
The following is an example snippet:// ** MySQL settings - You can get this info from your web host ** ///** The name of the database for WordPress */define( 'DB_NAME', 'databasename' );/** MySQL database username */define( 'DB_USER', 'wordpress_user');/** MySQL database password */define( 'DB_PASSWORD', 'PASSWORD' );/** MySQL hostname */define( 'DB_HOST', 'localhost' );/** Database charset to use in creating database tables. */define( 'DB_CHARSET', 'utf8' );/** The database collate type. Don't change this if in doubt. */define( 'DB_COLLATE', '' );Note: Replace databasename, wordpress_user, and PASSWORD with the credentials that you created in the previous step.7.    Save the file by pressing esc, typing :wq!, and then pressing ENTER.8.    (Optional) In some distributions such as Ubuntu and Debian, the Apache installation might have added a pre-existing file named index.html. This file causes conflicts with the WordPress index.php file. If this occurs, delete index.html or move it to a backup file:$ sudo mv index.html backup_index.html9.    Restart the Apache service:CentOS and Amazon Linux 2$ sudo systemctl restart httpdUbuntu and Debian versions$ sudo systemctl restart apache2Verify that the port is open and listeningPort 80 is open by default when you launch a Lightsail instance. If you have SSL enabled for your website, then make sure to open port 443 so that the port is accessible over the internet. For information on adding a firewall rule to your instance, see Instance firewalls in Amazon Lightsail.Final checkAccess your instance's public IP address in your web browser, and then confirm that it goes to the page wp-admin/install.php. You can now create WP credentials on that page and then access the WordPress dashboard.Follow"
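To confirm that your instance meets the minimum PHP and MariaDB versions mentioned earlier, you can check the installed versions with commands similar to the following sketch (the Apache binary name differs between distributions):
httpd -v        # Apache on CentOS and Amazon Linux 2; use "apache2 -v" on Ubuntu and Debian
php -v
mysql --version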
https://repost.aws/knowledge-center/lightsail-instance-install-wordpress
How do I resolve kubelet or CNI plugin issues for Amazon EKS?
I want to resolve issues with my kubelet or CNI plugin for Amazon Elastic Kubernetes Service (Amazon EKS).
"I want to resolve issues with my kubelet or CNI plugin for Amazon Elastic Kubernetes Service (Amazon EKS).Short descriptionTo assign and run an IP address to the pod on your worker node with your CNI plugin (on the Kubernetes website), you must have the following configurations:AWS Identity and Access Management (IAM) permissions, including a CNI policy that attaches to your worker node's IAM role. Or, IAM permissions that you provide through service account IAM roles.An Amazon EKS API server endpoint that's reachable from the worker node.Network access to API endpoints for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Registry (Amazon ECR), and Amazon Simple Storage Service (Amazon S3).Sufficient available IP addresses in your subnet.A kube-proxy that runs successfully for the aws-node pod to progress into Ready status.The kube-proxy version and VPC CNI version that support the Amazon EKS version.ResolutionVerify that the aws-node pod is in Running status on each worker nodeTo verify that the aws-node pod is in Running status on a worker node, run the following command:kubectl get pods -n kube-system -l k8s-app=aws-node -o wideIf the command output shows that the RESTARTS count is 0, then the aws-node pod is in Running status. Try the steps in the Verify that your subnet has sufficient free IP addresses available section.If the command output shows that the RESTARTS count is greater than 0, then verify that the worker node can reach the API server endpoint of your Amazon EKS cluster. Run the following command:curl -vk https://eks-api-server-endpoint-urlVerify connectivity to your Amazon EKS cluster1.    Verify that your worker node's security group settings for Amazon EKS are configured correctly. For more information, see Amazon EKS security group requirements and considerations.2.    Verify that your worker node's network access control list (network ACL) rules for your subnet allow communication with the Amazon EKS API server endpoint.Important: Allow inbound and outbound traffic on port 443.3.    Verify that the kube-proxy pod is in Running status on each worker node:kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide4.    
Verify that your worker node can access API endpoints for Amazon EC2, Amazon ECR, and Amazon S3.Note: Configure these services through public endpoints or AWS PrivateLink.Verify that your subnet has sufficient free IP addresses availableTo list available IP addresses in each subnet in the Amazon Virtual Private Cloud (Amazon VPC) ID, run the following command:aws ec2 describe-subnets --filters "Name=vpc-id,Values=VPCID" | jq '.Subnets[] | .SubnetId + "=" + "\(.AvailableIpAddressCount)"'Note: The AvailableIpAddressCount must be greater than 0 for the subnet where the pods are launched.Check whether your security group limits have been reachedIf you reach the security group limits per elastic network interface, then your pod networking configuration can fail.For more information, see Amazon VPC quotas.Verify that you're running the latest stable version of the CNI pluginTo confirm that you have the latest version of the CNI plugin, see Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on.For additional troubleshooting, see the AWS GitHub issues page and release notes for the CNI plugin.Check the logs of the VPC CNI plugin on the worker nodeIf you create a pod, and an IP address doesn't get assigned to the container, then you receive the following error:failed to assign an IP address to containerTo check the logs, go to the /var/log/aws-routed-eni/ directory, and then locate the files named plugin.log and ipamd.log.Verify that your kubelet pulls the Docker container imagesIf your kubelet doesn't pull the Docker container images for the kube-proxy and amazon-k8s-cni containers, then you receive the following error:network plugin is not ready: cni config uninitializedMake sure that you can reach the Amazon EKS API server endpoint from the worker node.Verify that the WARM_PREFIX_TARGET value is set correctlyNote: This applies only if prefix delegation is turned on. If prefix delegation is turned on, then check for the following logged error message:Error: Setting WARM_PREFIX_TARGET = 0 is not supported while WARM_IP_TARGET/MINIMUM_IP_TARGET is not set. Please configure either one of the WARM_{PREFIX/IP}_TARGET or MINIMUM_IP_TARGET env variableWARM_PREFIX_TARGET must be set to a value greater than or equal to 1. If it's set to 0, then you receive the preceding error. See CNI configuration variables on the GitHub website for more information.Check the reserved space in the subnetNote: This applies only if prefix delegation is turned on. If prefix delegation is turned on, then check for the following logged error message:InsufficientCidrBlocksMake sure that you have sufficient available /28 IP CIDR (16 IPs) blocks in the subnet. All 16 IPs must be contiguous. If you don't have a /28 range of contiguous IPs, then you receive the InsufficientCidrBlocks error.To resolve the error, create a new subnet, and launch the pods from there. Also, use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet with an assigned prefix. For more information, see Use subnet CIDR reservations.Updates that are made with Infrastructure as Code (IaC) roll back with conflictsIf you use Amazon EKS managed add-ons and make updates with the following services, then the updates roll back when the conflict resolution method is undefined:AWS Cloud Development Kit (AWS CDK)AWS CloudFormationeksctl (from the eksctl website)Correct methods are NONE, OVERWRITE, or PRESERVE.If no method is defined, then the default is NONE. 
When the system detects conflicts, the update to the CloudFormation stack rolls back, and no changes are made.To set the default configuration for the add-ons, use the OVERWRITE method. You must use OVERWRITE when you move from self-managed add-ons to Amazon EKS managed add-ons.Use the PRESERVE method when you use custom-defined configurations, such as WARM_IP_TARGET or custom networking.Nodes are in NotReady statusWhen aws-node pods aren't in Running status, it's common for nodes to be in NotReady status. For more information, see How can I change the status of my nodes from NotReady or Unknown status to Ready status?Custom networking configuration challengesWhen custom networking is active for the VPC CNI, ENIConfig custom resource definitions (CRDs) must define the correct subnet and security groups.To verify if custom networking is active, describe the aws-node pod in the kube-system namespace. Then, see if the following environment variable is set to true:AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFGIf custom networking is active, then check that the CRDs are configured properly:kubectl get ENIConfig -A -o yamlDescribe each entry that matches the Availability Zone name. Confirm that the subnet IDs match the VPC and worker node placement, and that the security groups are accessible to or shared with the cluster security group. For more information on best practices, see the Amazon EKS Best Practices Guides on the GitHub website.Follow"
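To check the versions that are running on the cluster, you can inspect the aws-node and kube-proxy container images. These commands are a sketch and assume that the add-ons run in the kube-system namespace:
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3
kubectl get daemonset kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'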
https://repost.aws/knowledge-center/eks-cni-plugin-troubleshooting
How do I set up weighted target groups for my Application Load Balancer?
I want to register weighted target groups for my Application Load Balancer.
"I want to register weighted target groups for my Application Load Balancer.ResolutionComplete the following steps to register your target group with a load balancer, and add weight to the target group.Create target groups1.    Open the Amazon Elastic Compute Cloud (Amazon EC2) console.2.    Choose the AWS Region that your Amazon EC2 instances are located in.3.    On the navigation pane, under LOAD BALANCING, choose Target Groups.4.    Create the first target group:     Choose Create Target group.For Target group name, specify a name for the target group.Configure the protocol, port, and virtual private cloud (VPC) for the target group.Choose Create.For Instances, select one or more instances.Specify a port for the instances.Choose Add to registered, and then choose Save.5.    Repeat step 4 to create a second target group.Create an Application Load BalancerNote: If you already have an Application Load Balancer, then proceed to the next section.1.    On the navigation pane, under LOAD BALANCING, choose Load Balancers.2.    Choose Create Load Balancer.3.    For Select load balancer type, choose Application Load Balancer.4.    Choose Continue.5.    Complete the steps in Create an Application Load Balancer.6.    Complete the Configure Routing steps:For Target group, choose Existing Target.For Name, choose the first target group that you created.Choose Next: Register Targets.7.    On the Register Targets page, the instances that you registered with the target group appear under Registered instances. You can't modify the register targets here.8.    On the Review page, choose Create.9.    After you receive a notification that your load balancer is created, choose Close.10.    Select the load balancer.Configure the Listener rules and add weight to the target groups1.    On the Listeners tab, choose View/edit rules.2.    Choose Edit rules (the pencil icon).3.    Choose Edit next to the Forward to option.4.    Add the other target group.5.    Enter the target group weight values. These values must be numeric values between zero and 999.6.    Select the check mark, and then choose Update.7.    (Optional) If the target group is sticky, then set the Group level Stickiness. When you configure this setting, routed requests remain in the target group for the duration of the session. The default value is 1 hour. After the session duration ends, requests are distributed according to the weights of the target group.Note: The Application Load Balancer distributes traffic to the target groups based on weights. If all targets in a target group fail health checks, then the Application Load Balancer doesn't route or fail over the requests to another target group. If a target group has only unhealthy registered targets, then the load balancer nodes route requests across its unhealthy targets. 
When all the targets in a target group are unhealthy, don't use a weighted target group as a failover mechanism.For example, if the weight of the first target group is 70%, and the second target group is 30%, then most requests are routed to the first target group:$ for X in `seq 6`; do curl -so -i /dev/null -w "" http://FINAL-721458494.us-east-2.elb.amazonaws.com; done<h1> This is T1 </h1><h1> This is T1 </h1><h1> This is T1 </h1><h1> This is T1 </h1><h1> This is T2 </h1><h1> This is T2 </h1>If you set the weight of the second target group as 70% and the first as 30%, then most requests are routed to the second target group:$ for X in `seq 7`; do curl -so -i /dev/null -w "" http://FINAL-721458494.us-east-2.elb.amazonaws.com; done<h1> This is T2 </h1><h1> This is T2 </h1><h1> This is T2 </h1><h1> This is T1 </h1><h1> This is T1 </h1><h1> This is T2 </h1><h1> This is T2 </h1>If all targets in a target group fail health checks, then the Application Load Balancer doesn't automatically route or fail over the requests to another target group.Follow"
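You can also set the weights from the AWS CLI instead of the console. The following command is a sketch; the listener and target group ARNs are placeholders for your own resources:
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-2:111122223333:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --default-actions '[{"Type": "forward", "ForwardConfig": {"TargetGroups": [{"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/tg-one/1234567890abcdef", "Weight": 70}, {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/tg-two/abcdef1234567890", "Weight": 30}]}}]'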
https://repost.aws/knowledge-center/elb-make-weighted-target-groups-for-alb
How do I organize my AWS resources?
I want to organize my AWS resources so that I can manage them as a group.
"I want to organize my AWS resources so that I can manage them as a group.ResolutionYou can use AWS Resource Groups to organize and manage your AWS resources that are in the same AWS Region. With Resource Groups, you can automate tasks, such as applying security patches and updates, on a group of AWS resources at the same time.To create AWS CloudFormation stack-based Resource Groups, see Create an AWS CloudFormation stack-based group.To create tag-based Resource Groups, see Build a tag-based query and create a group.You can add tags to most of your resources to help identify and sort your resources within your organization. You can add these tags when you create or edit your AWS resources. Also, you can add tags to supported resources using Tag Editor.You can also access Resource Groups by opening the AWS Systems Manager console, and then choosing Resource Groups & Tag Editor from the navigation pane.Related informationResources you can use with AWS Resource Groups and Tag EditorAmazon Resource Names (ARNs)Follow"
https://repost.aws/knowledge-center/resource-groups
How do I troubleshoot AWS Marketplace connection errors in my AWS Glue ETL jobs?
"I'm using AWS Marketplace connectors in AWS Glue, but receive errors in the logs."
"I'm using AWS Marketplace connectors in AWS Glue, but receive errors in the logs.ResolutionConnectors aren't showingYou subscribe to a connector from AWS Marketplace, but you can’t find this connector in the AWS Glue Studio’s connector page. To resolve this issue, complete the following steps.Note: You can repeat these steps even if you previously subscribed to the connector.Open the AWS Marketplace.Choose Discover products, and then find the connector that you want to use.Choose Continue to Subscribe, and then, if promoted, log in to your AWS account.Choose Continue to Configure. If this option is grayed out and you can't choose it, then make sure to read the terms and conditions. Choose Accept Terms, and then wait until the Continue to Configure button becomes available.From the dropdown list, choose Delivery Method and Software Version. If you're not sure which version to choose, then choose the latest version.Choose Continue to Launch, and then choose Usage Instruction.In the pop-up window that appears, choose Active the Glue connector from AWS Glue Studio.(Optional) To install only the connector, choose Active Connector Only. For more information on this option, see Using connectors and connections with AWS Glue Studio. If you're working with custom connectors instead, then see Developing custom connectors.Issues with AWS Identity and Access Management (IAM) roleWhen trying to subscribe to a connector in the AWS Marketplace, you get an IAM permissions error similar to the following one:"You do not have the right permissions to make this request.Some controls have been disabled because you are missing the correct permission(s). The missing permission(s) are: aws-marketplace:Subscribe."To resolve this issue, add an IAM policy to the IAM user that received the error. For AWS Marketplace use, add the following IAM policies to your IAM user:To grant permissions to view subscriptions but not change them, choose AWSMarketplaceRead-only.To grant permissions to subscribe and unsubscribe, choose AWSMarketplaceManageSubscriptions.To grant complete control of your subscriptions, choose AWSMarketplaceFullAccess.For more information, see Controlling access to AWS Marketplace subscriptions.AccessDeniedException errorsYou receive an AccessDeniedException error similar to the following one in the AWS Glue job's logs:"An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/<IamRole>/GlueJobRunnerSession is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action Glue ETL Marketplace - failed to download connector, activation script exited with code 1LAUNCH ERROR | Glue ETL Marketplace - failed to download connector. Please refer logs for details."This error occurs when the IAM role that's associated with your AWS Glue job has insufficient permissions when it tries to perform the GetAuthorizationToken operation. 
To resolve this issue, give your AWS Glue job the ecr:GetAuthorizationToken permission:Open the IAM console.Choose the IAM role that you're using in the AWS Glue job.Choose Attach policies.Under Filter policies, enter AmazonEC2ContainerRegistryReadOnly, and then choose this policy.Choose Attach Policy.After you attach the required policy to the IAM role, run the AWS Glue job again.For more information, see AmazonEC2ContainerRegistryReadOnly, Adding and removing IAM identity permissions, and Setting up IAM permissions for AWS Glue.Networking issues - No network pathway from VPCYour networking setup might not be adequate for AWS Glue connectors to work correctly when it's used in an AWS Glue job.botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://api.ecr.us-east-1.amazonaws.com/"Glue ETL Marketplace - failed to download connector, activation script exited with code 1LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.Exception in thread "main" java.lang.Exception: Glue ETL Marketplace - failed to download connector.In this example, there's no network pathway from the virtual private cloud (VPC) containing the job's components to the Amazon Elastic Container Registry (Amazon ECR) repository. The Amazon ECR repository contains the images for the connectors. AWS Glue stores all connectors in an Amazon ECR repository in the us-east-1 AWS Region. If the AWS Glue job wants to use a connector, then it must download it from this Region.When a connection is added to an AWS Glue job, you must establish a network route. This network route allows traffic to flow to or from the service, in this case Amazon ECR. AWS Glue uses private IPs to communicate with the components of the job and services, such as Amazon ECR. This error can occur if your connection uses a public subnet with an internet gateway in its route table.The job of the internet gateway is to route traffic. But it can't convert the private IPs that AWS Glue uses into public IPs that the Amazon ECR endpoint in the us-east-1 Region recognizes. You must use a NAT gateway in the connection subnet that's capable of performing these address translations (private to public IPs).When you create the connection, the networking information such as, VPC, subnet, and security group are optional. If you create the connection with only the connector and secrets key, then the AWS Glue job uses an internal NAT gateway. The job doesn't rely on a NAT gateway in your account.To resolve this issue, choose one of the following solutions, and incorporate it into your network design.Approach 1: Create and attach a NAT gateway to the connection subnetInstead of using an internet gateway, create and attach a NAT gateway to the connection subnet:Provision an unattached Elastic IP address to your account. Make sure that you associate this IP address with the NAT gateway.Create a NAT gateway, and then choose a public subnet and the Elastic IP address that you provisioned for the NAT gateway. This creates the NAT gateway in a public subnet.Create a private subnet (without an internet gateway route) and a related route table. In the route table, add a rule with 0.0.0.0/0 pointing to the NAT Gateway that you created. Or, you can edit one of the existing subnets to use the route table with the NAT gateway route. 
Make sure that there's no internet gateway route that's used with the NAT gateway route.Revise the AWS Glue connection's subnet to use the private subnet that you created in Step 3.Run the AWS Glue job again and confirm that the error doesn't re-occur.Approach 2: Don't use VPC information in the connectionDon't include VPC information in the connection. Use an internet NAT gateway instead:Create a new connection for your connector in the AWS Glue Studio.When you create the connection, specify only the Secrets Manager key. Don't add any VPC options. This means that AWS Glue uses the internal NAT instead of relying on the subnet.Edit the AWS Glue job to use the new connection, and then rerun the job.Approach 3: For private network setups, create a VPC endpointIf you have a private network setup, then you can also use a VPC endpoint instead of using a NAT gateway.Create a private networkLog in to the Amazon Virtual Private Cloud (Amazon VPC) console.From the navigation pane, choose Subnet, and then choose Create subnet.Choose your VPC ID, and then add a Subnet name, Availability Zone, IPv4 CIDR block, and tags. Then, choose Create Subnet.From the navigation pane, choose Route tables. Add a name for your route table, choose your VPC, and then choose Create route table.Open the route table that you created. Under Subnet association, choose the tab for Explicit subnet associations.Choose Edit subnet association, choose the newly created subnet from the list, and then choose Save.If you check the Route tab, you can now see that there is no internet access (0.0.0.0/0).From the navigation pane, choose Security groups. Add the details for your security group, and then choose the VPC. Add an inbound rule to allow TCP protocol for port 22 with source 0.0.0.0/0.Add a second rule. For Protocol, choose ALL. For Source, choose the new security group that you created. If you can't find the new security group name in the dropdown list, then save the group, and edit the inbound rules again.Create VPC endpointNext, create three VPC endpoints: an Amazon ECR API endpoint, a VPC endpoint for the com.amazonaws.<region>.ecr.dkr service, and an Amazon Simple Storage Service (Amazon S3) endpoint.First, create the Amazon ECR API endpoint:From the navigation pane, choose Endpoints.Choose Create endpoint, and then add an endpoint name for your Amazon ECR API endpoint.For Service category, choose AWS services.For Services, add the ECR filter, and then choose com.amazonaws.<region>.ecr.api.For VPC, choose the VPC that you want to create the endpoint in. Under Additional settings, choose Enable DNS Name.For Subnets, choose the Availability Zone that you created the new subnet in. 
Then, from the Subnet ID dropdown list, choose the Subnet name.For Security groups, choose the security group that you created.For Policy, choose Full access to allow all operations by all principals on all resources over the VPC endpoint.Add an optional tag, and then choose Create endpoint.Using the same steps, create another VPC endpoint for the service name com.amazonaws.<region>.ecr.dkr.Then, complete the following steps to create the Amazon S3 endpoint:From the navigation pane, choose Endpoints.Choose Create endpoint, and then add an endpoint name for your Amazon S3 endpoint.For Service category, choose AWS services.For Services, add the Type:Gateway filter, and then choose com.amazonaws.<region>.s3.For VPC, choose the VPC that you want to create the endpoint in.For Route tables, choose the route tables that you created.For Policy, choose Full access to allow all operations by all principals on all resources over the VPC endpoint.Add an optional tag, and then choose Create endpoint.Subscribe to and configure connectorsIf you already subscribed to and configured your connector in AWS Glue, then proceed to the Create AWS Glue connection section.If you didn't subscribe to and configure your connector in AWS Glue, then follow the steps in Subscribing to AWS Marketplace connectors. In the Usage instruction pop-up window that appears, choose Activate the Glue connector from AWS Glue Studio. This takes you to the Create Glue Connection page.Create AWS Glue ConnectionIf you already added your connector in the AWS Glue console, then navigate to Connections and choose your connector. Then choose Create connection.If you followed the previous steps to subscribe to and configure connectors, then the Create Glue Connection page is open. Complete the following steps to create your connection:On the Create Glue Connection page, add a Connection name.For Network options, choose your VPC, and the subnet and security groups that you previously created.Choose Create connection and activate connector.You can now use the connection name in your AWS Glue job to control the connector.Networking issues - too many connections in the AWS Glue jobYou receive this error in the AWS Glue job's logs:INFO - Glue ETL Marketplace - Start downloading connector jars for connection: <connection name>test connection feature: "Caused by: com.amazonaws.services.glue.exceptions.InvalidInputException: Connection: does not exist"LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.AWS Glue supports one connection per job or development endpoint. If you specify more than one connection in a job, then AWS Glue uses only the first connection. If you must access more than one VPC, then see Connect to and run ETL jobs across multiple VPCs using a dedicated AWS Glue VPC.Related informationCreating ETL jobs with AWS Glue StudioFollow"
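For Approach 3, the three VPC endpoints can also be created from the AWS CLI. The following is a hedged sketch rather than the exact console flow; the VPC, subnet, security group, and route table IDs are placeholders, and us-east-1 is assumed as the Region.
# Interface endpoints for Amazon ECR (API and Docker registry)
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc123 --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.api \
  --subnet-ids subnet-0abc123 --security-group-ids sg-0abc123 --private-dns-enabled
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc123 --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.dkr \
  --subnet-ids subnet-0abc123 --security-group-ids sg-0abc123 --private-dns-enabled
# Gateway endpoint for Amazon S3, associated with the private route table
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc123 --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0abc123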
https://repost.aws/knowledge-center/glue-marketplace-connector-errors
I enabled public access on my bucket's ACL using the Amazon S3 console. Is my bucket open to everyone?
I used the Amazon Simple Storage Service (Amazon S3) console to update my bucket's access control list (ACL) to allow public access. Can anyone access my bucket?
"I used the Amazon Simple Storage Service (Amazon S3) console to update my bucket's access control list (ACL) to allow public access. Can anyone access my bucket?ResolutionEven if you enable all available ACL options in the Amazon S3 console, the ACL alone won't allow everyone to download objects from your bucket. However, depending on which option you select, any user could perform these actions:If you select List objects for the Everyone group, then anyone can get a list of objects that are in the bucket.If you select Write objects, then anyone can upload, overwrite, or delete objects that are in the bucket.If you select Read bucket permissions, then anyone can view the bucket's ACL.If you select Write bucket permissions, then anyone can change the bucket's ACL.For more information, see What permissions can I grant?To prevent any accidental change to public access on a bucket's ACL, you can configure public access settings for the bucket. If you select Block new public ACLs and uploading public objects, then users can't add new public ACLs or upload public objects to the bucket. If you select Remove public access granted through public ACLs, then all existing or new public access granted by ACLs is respectively overridden or denied.Important: Granting cross-account access through bucket and object ACLs doesn't work for buckets that have S3 Object Ownership set to Bucket Owner Enforced. In most cases, ACLs aren't required to grant permissions to objects and buckets. Instead, use AWS Identity Access and Management (IAM) policies and S3 bucket policies to grant permissions to objects and buckets.Related informationUsing Amazon S3 Block Public AccessManaging access with ACLsFollow"
https://repost.aws/knowledge-center/s3-public-access-acl
How do I configure a Lambda function to connect to an RDS instance?
I want my AWS Lambda function to connect to an Amazon Relational Database Service (Amazon RDS) instance.
"I want my AWS Lambda function to connect to an Amazon Relational Database Service (Amazon RDS) instance.Short descriptionNote: The following information and steps refer to Amazon RDS instances. However, the resolution also applies to any endpoint or database that's located in a virtual private cloud (VPC).To connect a Lambda function to an RDS instance, set the networking configurations to allow the connection.There are different configuration settings for each of the following connection types:A Lambda function and RDS instance in the same VPCA Lambda function and RDS instance in different VPCsFor security reasons, it's a best practice to keep your RDS instance in a VPC. For public databases, use a NoSQL database service such as Amazon DynamoDB.A Lambda function that's outside of a VPC can't access an RDS instance that's inside a VPC.For information on how to configure a Lambda function's network settings, see Configuring a Lambda function to access resources in a VPC. If the network settings are incorrect, then the Lambda function times out and displays a Task timed out error message.To connect a Lambda function to an Amazon Aurora DB cluster, use the Data API for Aurora Serverless.ResolutionImportant: Make sure that you change each Port Range, Source, and Destination setting that's provided in the following examples to match your own network configurations. Transmission Control Protocol (TCP) is the required protocol for each type of network configuration.A Lambda function and RDS instance in the same VPCWhen connecting a Lambda function to an RDS instance in the same VPC, use the following networking configurations.Note: By default, all subnets within a VPC contain a local route. The destination is the VPC's Classless Inter-Domain Routing (CIDR) and the target is local. For more information, see Route table concepts.1.    For Security Groups, use one of the following network settings:For instances that are attached to the same security group, make the security group the source for the inbound rule. Make the security group the destination for the outbound rule.For example, if the Lambda function and RDS instance are both in security group sg-abcd1234, then each instance has the following inbound and outbound rules.Example inbound rule for instances that are attached to the same security groupTypeProtocolPort RangeSourceCustom TCPTCP3306sg-abcd1234Example outbound rule for instances that are attached to the same security groupTypeProtocolPort RangeDestinationCustom TCPTCP3306sg-abcd1234-or-For instances in different security groups, make sure that both security groups allow access to each other.For example, if the Lambda function is in security group sg-1234 and the RDS instance is in sg-abcd, then each group has the following rules:Example outbound rule for a Lambda function in a different security group than the RDS instance that you want to connect it toTypeProtocolPort RangeDestinationCustom TCPTCP3306sg-abcdExample inbound rule for an RDS instance in a different security group than the Lambda function that you want to connect it toTypeProtocolPort RangeSourceCustom TCPTCP3306sg-1234Important: Make sure that the rules allow a TCP connection over the database's port.2.    For the network access control lists (NACLs), make sure that the inbound and outbound rules allow communication between the Lambda function and RDS instance.Note: By default, NACLs allow all inbound and outbound traffic. 
However, you can change these default settings.For each subnet that’s associated with the RDS instance and Lambda function, configure the NACLs to allow outbound TCP connection to the other instance’s subnets’ CIDRs.Note: The following example uses four example subnets labeled by their CIDRs:For the Lambda function's subnets, 172.31.1.0/24 and 172.31.0.0/28.For the RDS instance's subnets, 172.31.10.0/24 and 172.31.64.0/20.Example outbound rules for a Lambda function's subnets' NACLsTypeProtocolPort RangeDestinationAllow/DenyCustom TCPTCP3306172.31.10.0/24AllowCustom TCPTCP3306172.31.64.0/20AllowImportant: Apply the same Outbound rules to the NACLs of the RDS instance's subnets, but with the destination set as the Lambda's subnets' CIDRs.Make sure that the NACLs for each subnet have an inbound rule on the ephemeral ports over the CIDR range of the other instance's subnets.Example inbound rules for a Lambda function's subnets' NACLsTypeProtocolPort RangeSourceAllow/DenyCustom TCPTCP1024-65535172.31.10.0/24AllowCustom TCPTCP1024-65535172.31.64.0/20AllowImportant: Apply the same inbound rules to the NACLs of the RDS instance's subnets, but with the source set as the Lambda's subnets' CIDRs.A Lambda function and RDS instance in different VPCsFirst, use VPC peering to connect the two VPCs. Then, use the following networking configurations to connect the Lambda function in one VPC to the RDS instance in the other:Important: Be sure to turn on Domain Name System (DNS) for the VPC peering connection.1.    For the Route Table, confirm that the VPC peering connection is successful:For the Destination, look for the CIDR of the peered VPC.For the Target, look for the peering connection.Note: The following example includes two example VPCs:CIDR of source VPC (Lambda function): 10.0.0.0/16CIDR of peered VPC (RDS instance): 172.31.0.0/16Peering connection: pcx-01234abcdExample route table for a source VPC that's associated with the Lambda functionDestinationTarget172.31.0.0/16pcx-01234abcd10.0.0.0/16localExample route table for a peered VPC with an RDS instanceDestinationTarget10.0.0.0/16pcx-01234abcd172.31.0.0/16localFor more information, see Update your route tables for a VPC peering connection.2.    For Security Groups, use the following network settings:For the Lambda function's security group, make sure that traffic is allowed to go in and out of the CIDR of the RDS instance's VPC.Note: The following example includes two example subnets labeled by their CIDRs:For the RDS instance, 172.31.0.0/16For the Lambda function, 10.0.0.0/16Example outbound rule for a Lambda function in a different VPC than the RDS instanceTypeProtocolPort RangeDestinationCustom TCPTCP3306172.31.0.0/16For the RDS instance's security group, allow traffic to go in and out of the CIDR of the Lambda function's VPC.Example inbound rule for an RDS instance in a different VPC than the Lambda functionTypeProtocolPort RangeSourceCustom TCPTCP330610.0.0.0/163.    For the NACLs, follow the previous procedure in step 2 of the A Lambda function and RDS instance in the same VPC section. The difference is that the Lambda function's subnet CIDRs now originate from a different VPC.Note: As an alternative to VPC peering, you can use AWS PrivateLink to access Amazon RDS across VPCs. This solution works across AWS accounts and VPCs in the same AWS Region.Follow"
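As a CLI sketch of the same-VPC security group rules above (reusing the example group IDs sg-abcd for the RDS instance and sg-1234 for the Lambda function, and MySQL's port 3306), the inbound rule on the database's security group might look like the following. This is illustrative, not a drop-in command for your environment.
# Allow inbound MySQL traffic to the RDS instance's security group
# from the Lambda function's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-abcd \
  --protocol tcp \
  --port 3306 \
  --source-group sg-1234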
https://repost.aws/knowledge-center/connect-lambda-to-an-rds-instance
Why am I seeing "Error" in the Access field for some buckets in the Amazon S3 console?
I'm using the Amazon Simple Storage Service (Amazon S3) console to view buckets. Why am I seeing "Error" in the Access field for certain buckets?
"I'm using the Amazon Simple Storage Service (Amazon S3) console to view buckets. Why am I seeing "Error" in the Access field for certain buckets?ResolutionThe bucket list view in the Amazon S3 console includes an Access column that provides information about public access to each bucket. To see the Access value, the AWS Identity and Access Management (IAM) user or role using the console must have the following permissions to each bucket:s3:GetAccountPublicAccessBlocks3:GetBucketPublicAccessBlocks3:GetBucketPolicyStatuss3:GetBucketAcls3:ListAccessPointsIf the IAM identity (user or role) doesn't have the required permissions, then the identity sees "Error" in the Access field. This is also true if the identity explicitly is denied access to the required permissions.To allow an IAM identity to see Access values in the Amazon S3 console, add the required permissions to the user's or role's policy.Note: Because of eventual consistency, a bucket that recently was deleted might appear in the console with "Error" in the Access field. To confirm that a bucket was deleted, check the AWS CloudTrail event history for DeleteBucket events.Follow"
https://repost.aws/knowledge-center/s3-console-error-access-field
How can I get my AWS::ECS::Service resources out of UPDATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS status?
My AWS CloudFormation stack update to the AWS::ECS::Service resource got stuck in UPDATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS status. I want to stabilize the stack and get my service to launch new tasks.
"My AWS CloudFormation stack update to the AWS::ECS::Service resource got stuck in UPDATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS status. I want to stabilize the stack and get my service to launch new tasks.Short descriptionYour Amazon Elastic Container Service (Amazon ECS) service can get stuck in UPDATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS status when the service fails to launch tasks.Here are some common reasons why an Amazon ECS service can fail to launch new tasks:Container image issuesA lack of necessary resources for launching tasksA health check failure on a load balancerInstance configuration or Amazon ECS container agent issuesAn Amazon ECS service that fails to launch tasks causes AWS CloudFormation to get stuck in UPDATE_IN_PROGRESS status. Then, AWS CloudFormation waits for several hours before rolling back to a previous configuration. If the issue that's causing stack failure continues during stack rollback to a previous configuration, then the stack gets stuck in UPDATE_ROLLBACK_IN_PROGRESS status. Finally, the stack changes to UPDATE_ROLLBACK_FAILED status.It can take the AWS CloudFormation stack several hours to stabilize. To stabilize your stack more quickly, complete the following steps.Important: The following resolution is intended to help you stabilize an AWS CloudFormation stack quickly without waiting for the stack to time out. The resolution isn't intended for production environments, as the Amazon ECS service is out of sync with the known state of AWS CloudFormation. To sync resources between your Amazon ECS service and the AWS CloudFormation stack, you must perform an error-free update on the stack.ResolutionChange the desired task count of the Amazon ECS serviceOpen the Amazon ECS console.Choose your cluster.Select the service, and then choose Update.Set Number of tasks to 0, and then save the configuration.Identify why the Amazon ECS service can't launch new tasksOpen the Amazon ECS console.Choose your cluster.Select the service, and then choose Events.Note: The Events section displays the reason why your service didn't stabilize.Choose a solution based on the issue that you identified:Your task failed Elastic Load Balancing (ELB) health checks.A container marked as essential for the task definition exited or died.You can't place a task because your container instance didn't meet the necessary requirements.You receive a "cannot pull container image" error.Follow"
https://repost.aws/knowledge-center/ecs-service-stuck-update-status
Why is the checkpoint in my Amazon Kinesis Data Analytics application failing?
The checkpoint or savepoint in my Amazon Kinesis Data Analytics application is failing.
"The checkpoint or savepoint in my Amazon Kinesis Data Analytics application is failing.Short descriptionCheckpointing is the method that is used for implementing fault tolerance in Amazon Kinesis Data Analytics for Apache Flink. Your application not being optimized or properly provisioned might result in checkpoint failures.Some of the major causes for checkpoint failures are the following:For Rocks DB, Apache Flink reads files from the local storage and writes to the remote persistent storage, that is Amazon Simple Storage Service (Amazon S3). The performance of the local disk and upload rate might impact checkpointing and result in checkpoint failures.Savepoint and checkpoint states are stored in a service-owned Amazon S3 bucket that's fully managed by AWS. These states are accessed whenever an application fails over. Transient server errors or latency in this S3 bucket might lead to checkpoint failures.A process function that you created where the function communicates with an external resource, such as Amazon DynamoDB, during checkpointing might result in checkpoint failures.Failure due to serialization of the state, such as serializer mismatch with the incoming data, can cause checkpoint failures.The number of Kinesis Processing Units (KPUs) provisioned for the application might not be sufficient. To find the allocated KPUs, use the following calculation:Allocated KPUs for the application = Parallelism / ParallelismPerKPULarger application state sizes might lead to an increase in checkpoint latency. This is because, the task manager takes more time to save the checkpoint, which can result in an out-of-memory exception.A skewed distribution of state could result in one task manager handling more data compared to other task managers. Even if sufficient KPUs(resources) are provisioned, one or more overloaded task managers can cause an out-of-memory exception.A high cardinality indicates that there is a large number of unique keys in the incoming data. If the job uses the KeyBy operator for partitioning the incoming data, and the key on which the data is partitioned has high cardinality, slow checkpointing might occur. This might eventually result in checkpoint failures.ResolutionThe size of your application state might increase rapidly, causing an increase in the checkpoint size and duration. You can monitor these values using the Amazon CloudWatch metrics lastCheckPointDuration and lastCheckpointSize. For more information, see Application metrics.Increase the parallelism of the operator that processes more data. You can define the parallelism for an individual operator, data source, or data sink by calling the setParallelism() method.Tune the values of Parallelism and ParallelismPerKPU for optimum utilization of KPUs. Be sure that automatic scaling isn't turned off for your Amazon Kinesis Data Analytics application. The value of the maxParallelism parameter allows you to scale to a desired number of KPUs. For more information, see Application Scaling in Kinesis Data Analytics for Apache Flink.Define TTL on the state to make sure that the state is periodically cleaned.Optimize the code to allow for better partitioning strategy. You can use rebalancing partitioning to help distribute the data evenly. 
This method distributes the data in a round-robin fashion.Optimize the code to reduce the window size so that the number of unique keys in that window is reduced.Related informationApache Flink documentation for CheckpointsImplementing fault tolerance in Kinesis Data Analytics for Apache FlinkFollow"
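To watch the checkpoint metrics mentioned above from the CLI, a sketch like the following can help. It assumes the AWS/KinesisAnalytics namespace and an Application dimension; the application name and time window are placeholders.
# Maximum checkpoint duration per minute over a sample window
aws cloudwatch get-metric-statistics \
  --namespace AWS/KinesisAnalytics \
  --metric-name lastCheckpointDuration \
  --dimensions Name=Application,Value=my-flink-application \
  --statistics Maximum \
  --period 60 \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-01T01:00:00Z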
https://repost.aws/knowledge-center/kinesis-data-analytics-checkpoint-fail
How can I troubleshoot the 404 "NoSuchKey" error from Amazon S3?
"My users are trying to access objects in my Amazon Simple Storage Service (Amazon S3) bucket. However, Amazon S3 is returning the 404 "NoSuchKey" error. How can I troubleshoot this error?"
"My users are trying to access objects in my Amazon Simple Storage Service (Amazon S3) bucket. However, Amazon S3 is returning the 404 "NoSuchKey" error. How can I troubleshoot this error?ResolutionAmazon S3 generally returns 404 errors if the requested object is missing from the bucket. Before users make GET or HEAD requests for an object, make sure that the object is created and available in the S3 bucket.To check if an object is available in a bucket, you can review the contents of the bucket from the Amazon S3 console. Or, you can run the head-object command using the AWS Command Line Interface (AWS CLI):aws s3api head-object --bucket awsexamplebucket --key object.jpgImportant: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Note that Amazon S3 delivers strong read-after-write consistency for all applications. After a successful write of a new object, or an overwrite or delete of an existing object, any subsequent read request immediately receives the latest version of the object. S3 also provides strong consistency for list operations. After a write, you can perform a listing of the objects in a bucket. For more information about S3 consistency, see Consistency.If the requested object was available in the S3 bucket for some time and you receive a 404 NoSuchKey error again, then check the following:Confirm that the request matches the object name exactly, including the capitalization of the object name. Requests for S3 objects are case sensitive. For example, if an object is named myimage.jpg, but Myimage.jpg is requested, then requester receives a 404 NoSuchKey error.Confirm that the requested path matches the path to the object. Otherwise, the requester receives a 404 NoSuchKey error.If the path to the object contains any spaces, be sure that the request uses the correct syntax to recognize the path. For example, if you're using the AWS CLI to download an object to your Windows machine, you must use quotation marks around the object path. The object path must look like this: aws s3 cp "s3://awsexamplebucket/Backup Copy Job 4/3T000000.vbk".Check the object name for any special characters or URL-encoded characters that are difficult to see, such as carriage returns (\r) or new lines (\n). For example, the object name test with a carriage return at the end shows as test%0A in the Amazon S3 console. To check object names for special characters, you can run the list-objects-v2 command with the parameter --output json. The JSON output makes characters like returns (\r) visible. If an object name has a special character that's not always visible, remove the character from the object name. Then, try accessing the object again.Optionally, you can enable server access logging to review request records in further detail for issues that might be causing the 404 NoSuchKey error.Note: If an object is missing from the bucket and the requester doesn’t have s3:ListBucket access, then the requester will receive a 403 Access Denied error. If you receive a 403 Access Denied error, resolve the issue related to the missing object.Related informationTroubleshooting Amazon S3Follow"
https://repost.aws/knowledge-center/404-error-nosuchkey-s3
How do I set up the AWS Load Balancer Controller on an Amazon EKS cluster for Fargate and deploy the 2048 game?
"I want to set up the AWS Load Balancer Controller on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for AWS Fargate. Then, I want to deploy the 2048 game."
"I want to set up the AWS Load Balancer Controller on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for AWS Fargate. Then, I want to deploy the 2048 game.Short descriptionYou can set up AWS Load Balancer Controller without any existing Application Load Balancer (ALB) Ingress Controller deployments.Before setting up the AWS Load Balancer Controller on a new Fargate cluster, consider the following:Uninstall the AWS ALB Ingress Controller for Kubernetes. The AWS Load Balancer Controller replaces the functionality of the AWS ALB Ingress Controller.Use eksctl version 0.109.0 or greater.Install Helm on the workstation.The --region variable isn't always used in the commands because the default value for your AWS Region is used. To check the default value, run the aws configure command. To change the AWS Region, use the --region flag.Amazon EKS on Fargate is available in all AWS Regions, except AWS GovCloud (US-East) and AWS GovCloud (US-West).Replace placeholder values in code snippets with your own values.ResolutionCreate an Amazon EKS cluster, service account policy, and role-based access control (RBAC) policies1.    To use eksctl to create an Amazon EKS cluster, run the following command:eksctl create cluster --name YOUR_CLUSTER_NAME --version 1.23 --fargateNote: You don't need to create a Fargate pod execution role for clusters that use only Fargate pods (--fargate).2.    Allow the cluster to use AWS Identity and Access Management (IAM) for service accounts by running the following command:eksctl utils associate-iam-oidc-provider --cluster YOUR_CLUSTER_NAME --approveNote: The FargateExecutionRole is the role that's used for the kubelet and kube-proxy to run your Fargate pod on. However, it's not the role used for the Fargate pod (that is, the aws-load-balancer-controller). For Fargate pods, you must use the IAM role for the service account. For more information, see IAM roles for service accounts.3.    To download an IAM policy that allows the AWS Load Balancer Controller to make calls to AWS APIs on your behalf, run the following command:curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json4.    To create an IAM policy using the policy that you downloaded in step 3, run the following command:aws iam create-policy \ --policy-name AWSLoadBalancerControllerIAMPolicy \ --policy-document file://iam_policy.json5.    To create a service account named aws-load-balancer-controller in the kube-system namespace for the AWS Load Balancer Controller, run the following command:eksctl create iamserviceaccount \ --cluster=YOUR_CLUSTER_NAME \ --namespace=kube-system \ --name=aws-load-balancer-controller \ --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \ --override-existing-serviceaccounts \ --approve6.    To verify that the new service role is created, run one of the following commands:eksctl get iamserviceaccount --cluster YOUR_CLUSTER_NAME --name aws-load-balancer-controller --namespace kube-system-or-kubectl get serviceaccount aws-load-balancer-controller --namespace kube-systemInstall the AWS Load Balancer Controller using HelmImportant: For more information, see cert-manager on the Jetstack GitHub website and the discussion topic Cert-manager issues with Fargate on the Kubernetes GitHub website.1.    To add the Amazon EKS chart repo to Helm, run the following command:helm repo add eks https://aws.github.io/eks-charts2.    
To install the TargetGroupBinding custom resource definitions (CRDs), run the following command:kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"3.    To install the Helm chart, run the following command:helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ --set clusterName=YOUR_CLUSTER_NAME \ --set serviceAccount.create=false \ --set region=YOUR_REGION_CODE \ --set vpcId=<VPC_ID> \ --set serviceAccount.name=aws-load-balancer-controller \ -n kube-systemTest the AWS Load Balancer ControllerYou can use the AWS Load Balancer Controller to create either an Application Load Balancer for Ingress or a Network Load Balancer for creating a k8s service. To deploy a sample app called 2048 with Application Load Balancer Ingress, do the following:1.    To create a Fargate profile that's required for the game deployment, run the following command:eksctl create fargateprofile --cluster your-cluster --region your-region-code --name your-alb-sample-app --namespace game-20482.    To deploy the sample game and verify that the AWS Load Balancer Controller creates an ALB Ingress resource, run the following command:kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/examples/2048/2048_full.yaml3.    After a few minutes, verify that the Ingress resource was created by running the following command:kubectl get ingress/ingress-2048 -n game-2048Output:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2048 <none> * k8s-game2048-ingress2-xxxxxxxxxx-yyyyyyyyyy.us-east-2.elb.amazonaws.com 80 2m32s
Note: If your Ingress isn't created after several minutes, view the AWS Load Balancer Controller logs by running the following command:kubectl logs -n kube-system deployment.apps/aws-load-balancer-controllerNote: Your logs might contain error messages that can help you diagnose issues with your deployment.4.    Open a browser, and navigate to the ADDRESS URL from the previous command output to view the sample application.Note: If you don't see the sample application, then wait a few minutes and refresh your browser.Deploy a sample application with the Network Load Balancer IP address mode serviceTo use the Network Load Balancer IP address mode, you must have a cluster running at least Kubernetes v1.16 or higher.1.    To create a Fargate profile, run the following command:eksctl create fargateprofile --cluster your-cluster --region your-region-code --name your-alb-sample-app --namespace game-20482.    To get the manifest for deploying the 2048 game, run the following command:curl -o 2048-game.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/examples/2048/2048_full.yaml3.    In the manifest from step 2, delete this Ingress section:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
4.    Modify the Service object:
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: app-2048
5.    To create the service and deployment manifest, run the following command:kubectl apply -f 2048-game.yaml
6.    To check for service creation and the DNS name of the Network Load Balancer, run the following command:kubectl get svc -n game-2048Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-2048 LoadBalancer 10.100.114.197 k8s-game2048-service2-xxxxxxxxxx-yyyyyyyyyy.us-east-2.elb.amazonaws.com 80:30159/TCP 23m
7.    Wait a few minutes until the load balancer is active. Then, to check that you can reach the deployment, open the fully qualified domain name (FQDN) of the NLB that's referenced in the EXTERNAL-IP section in a web browser.Troubleshoot the AWS Load Balancer ControllerIf you have issues setting up the controller, then run the following commands:$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller$ kubectl get endpoints -n game-2048$ kubectl get ingress/ingress-2048 -n game-2048The output from the logs command returns error messages (for example, with tags or subnets). These error messages can help you troubleshoot common errors (from the Kubernetes GitHub website). The get endpoints command shows you if the backend deployment pods are correctly registered. The get ingress commands show you if Ingress resources are deployed.Follow"
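If the Ingress or Service stays pending, a couple of additional checks (beyond the troubleshooting commands above) are often useful. This sketch reuses the names from this article.
# Confirm the controller deployment is running and ready
kubectl get deployment aws-load-balancer-controller -n kube-system
# Inspect events recorded on the Ingress, such as subnet discovery or IAM errors
kubectl describe ingress ingress-2048 -n game-2048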
https://repost.aws/knowledge-center/eks-alb-ingress-controller-fargate
How do I set up LinkedIn as a social identity provider in an Amazon Cognito user pool?
I want my app's users to be able to sign in using LinkedIn. How do I set up LinkedIn as a social identity provider (IdP) in an Amazon Cognito user pool?
"I want my app's users to be able to sign in using LinkedIn. How do I set up LinkedIn as a social identity provider (IdP) in an Amazon Cognito user pool?Short descriptionLinkedIn doesn't provide all the fields that Amazon Cognito requires when adding an OpenID Connect (OIDC) provider to a user pool.You must use a third-party service as a middle agent between LinkedIn and Amazon Cognito, such as Auth0. Auth0 gets identities from LinkedIn, and Amazon Cognito then gets those identities from Auth0.Note: Auth0 is a third-party service that's not affiliated with AWS. You might incur separate fees using Auth0.You can also use this setup for other social IdPs with similar integration issues. For more information, see Identity providers on the Auth0 website.ResolutionCreate an Amazon Cognito user pool with an app client and domain nameFor more information on creating these prerequisites, see the following articles:Tutorial: Creating a user poolImportant: When creating a user pool, keep the standard attribute email selected.Configuring a user pool app clientAdding a domain name for your user poolSign up for an Auth0 accountEnter your email address and a password on the Auth0 website sign-up page to get started. Or, if you already have an Auth0 account, log in. After logging in, take note of your Auth0 tenant name.Create an Auth0 applicationOn the Auth0 website dashboard, choose + Create Application.Note: If you've already created the Auth0 application you want to use, continue to the next section.In the Create Application dialog box, enter a name for your application. For example, My App.Under Choose an application type, choose Single Page Web Applications.Choose Create.On the Settings pane of your new application, do the following:Find the Client ID and Client Secret and copy them. You'll need these later when connecting Auth0 to your Amazon Cognito user pool.For Allowed Callback URLs, enter https://yourDomainPrefix.auth.region.amazoncognito.com/oauth2/idpresponse.Note: Replace yourDomainPrefix and region with the values for your user pool. Find them in the Amazon Cognito console on the Domain name tab of the management page for your user pool.Choose Save changes.Sign up for a LinkedIn accountEnter your email address and a password on the LinkedIn website sign up page to get started. Or, sign in if you already have a LinkedIn account.Create a LinkedIn appOn the LinkedIn Developers website, choose Create app.On the Create an app page, complete all required and preferred fields to customize your LinkedIn app, and then choose Create app.Choose the Auth tab. Confirm that r_emailaddress and r_liteprofile are listed. These permissions are required for Auth0 to access the required LinkedIn user info.Note: If you don't see the r_emailaddress and r_liteprofile listed, then add the product "Sign In with LinkedIn" to your application. This is found on the Products tab of your LinkedIn Dev page.Under Application credentials, find the Client ID and Client Secret and copy them. 
You need both of these later when connecting LinkedIn to your Auth0 app.Under OAuth 2.0 settings, next to Redirect URLs:, choose the pencil icon, and then choose + Add redirect URL.Under Redirect URLs:, enter https://tenantName.us.auth0.com/login/callback, replacing tenantName with your Auth0 tenant name (or an Auth0 custom domain).Connect to LinkedIn from Auth0On the Auth0 website dashboard, in the left navigation pane, choose Authentication, and then choose Social.Choose LinkedIn.On the Settings pane of the LinkedIn dialog box, do the following:For API Key, enter the Client ID that you copied earlier from your LinkedIn app.For Secret Key, enter the Client Secret that you copied earlier from your LinkedIn app.For Attributes, select the Email address check box.Choose Save.On the Applications pane of the LinkedIn dialog box, choose the applications that you want to enable LinkedIn as a social IdP for.Choose Save.Test your LinkedIn social connection with Auth0In the LinkedIn dialog box, choose Try. Or, on the Auth0 website dashboard, in the left navigation pane, choose Connections, choose Social, and then next to LinkedIn, choose Try. A new browser tab or window opens to the LinkedIn sign-in page.Sign in to LinkedIn with your email address and password.When prompted to allow your app to access your LinkedIn user info, choose Allow.Add an OIDC provider to your user poolIn the Amazon Cognito console management page for your user pool, under Federation, choose Identity Providers.Choose OpenID Connect.Enter the details of your Auth0 app for the OIDC provider details, as follows:For Provider name, enter a name (for example, Auth0-LinkedIn). This name appears in the Amazon Cognito hosted web UI.Note: You can't change this field after creating the provider.For Client ID, enter the Client ID that you copied earlier from your Auth0 application.For Client secret (optional), enter the Client Secret that you copied earlier from your Auth0 application.For Attributes request method, leave the setting as GET.For Authorize scope, enter openid profile email.For Issuer, enter the URL of your Auth0 profile. For example, https://tenantName.auth0.com, replacing tenantName with your Auth0 tenant name.For Identifiers (optional), you can optionally enter a custom string to use later in the endpoint URL in place of your OIDC provider's name.Choose Run discovery to fetch the OIDC configuration endpoints for Auth0.Choose Create provider.For more information, see Add an OIDC IdP to your user pool.Change app client settings for your user poolIn the Amazon Cognito console management page for your user pool, under App integration, choose App client settings.On the app client page, do the following:Under Enabled Identity Providers, select the OIDC provider (for example, Auth0-LinkedIn) and Cognito User Pool check boxes.For Callback URL(s), enter a URL where you want your users to be redirected after logging in. For testing, you can enter any valid URL, such as https://example.com/.For Sign out URL(s), enter a URL where you want your users to be redirected after logging out. 
For testing, you can enter any valid URL, such as https://example.com/.Under Allowed OAuth Flows, select either the Authorization code grant or Implicit grant check box, or both.Note: The allowed OAuth flows you enable determine which values ("code" or "token") you can use for the response_type parameter in your endpoint URL.Under Allowed OAuth Scopes, select at least the email and openid check boxes.Choose Save changes.For more information, see App client settings terminology.Map the attributes from Auth0 to your user poolIn the Amazon Cognito console management page for your user pool, under Federation, choose Attribute mapping.On the attribute mapping page, choose the OIDC tab.If you have more than one OIDC provider in your user pool, choose your new provider from the dropdown list.Confirm that the OIDC attribute sub is mapped to the user pool attribute Username.Choose Add OIDC attribute. For the new OIDC attribute, enter email. For User pool attribute, choose Email.(Optional) Add any additional OIDC attributes you want to pass along from Auth0. For example, you might map given_name and family_name to the corresponding Amazon Cognito user pool attributes.Note: To see all the OIDC attributes stored for an Auth0 user, from the Auth0 website dashboard, in the left navigation pane, choose Users & Roles, choose Users, choose a user, and then choose Raw JSON.For more information, see Specifying identity provider attribute mappings for your user pool.Construct the endpoint URLUsing values from your own setup, construct this endpoint URL:https://yourDomainPrefix.auth.region.amazoncognito.com/oauth2/authorize?response_type=code&client_id=yourClientId&redirect_uri=redirectUrlDo the following to customize the URL for your setup:Replace yourDomainPrefix and region with the values for your user pool. Find them in the Amazon Cognito console on the Domain name tab of the management page for your user pool.If you selected only the Implicit grant flow earlier for Allowed OAuth Flows, then change response_type=code to response_type=token.Replace yourClientId with your app client's ID, and replace redirectUrl with your app client's callback URL. Find them in the Amazon Cognito console on the App client settings tab of the management page for your user pool.For more information, see How do I configure the hosted web UI for Amazon Cognito? and Authorize endpoint.Test the endpoint URLEnter the constructed endpoint URL in your web browser.Under Sign in with your corporate ID, choose the name of your OIDC provider (for example, Auth0-LinkedIn). You're redirected to the log-in page for your Auth0 application.Choose Log in with LinkedIn. You're redirected to the LinkedIn sign-in page.Note: If you're redirected to your Amazon Cognito app client's callback URL instead, then you're already signed in to LinkedIn.On the LinkedIn sign-in page, enter the email address (or phone number) and password for your LinkedIn account.Choose Sign in.After you log in successfully, you're redirected to your app client's callback URL. 
The authorization code or user pool tokens appear in the URL in your web browser's address bar.(Optional) Skip the Amazon Cognito hosted UIIf you want your users to skip the Amazon Cognito hosted web UI when signing in to your app, use this as the endpoint URL instead:https://yourDomainPrefix.auth.region.amazoncognito.com/oauth2/authorize?response_type=code&identity_provider=oidcProviderName&client_id=yourClientId&redirect_uri=redirectUrl&scope=allowedOauthScopesDo the following to customize the URL for your setup:Replace yourDomainPrefix and region with the values for your user pool. Find them in the Amazon Cognito console on the Domain name tab of the management page for your user pool.If you selected only the Implicit grant flow earlier for Allowed OAuth Flows, change response_type=code to response_type=token.Replace oidcProviderName with the name of the OIDC provider in your user pool. For example, Auth0-LinkedIn.(Optional) If you added an identifier for your OIDC provider earlier in the Identifiers (optional) field, you can replace identity_provider=oidcProviderName with idp_identifier=idpIdentifier, replacing idpIdentifier with your custom identifier string.Replace yourClientId with your app client's ID, and replace redirectUrl with your app client's callback URL. Find them in the Amazon Cognito console on the App client settings tab of the management page for your user pool.Replace allowedOauthScopes with the specific scopes that you want your Amazon Cognito app client to request.Related informationOIDC user pool IdP authentication flowHow do I set up Auth0 as a SAML identity provider with an Amazon Cognito user pool?Add an app client and set up the hosted UIFollow"
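As an illustration of the URL construction above, the following shell sketch assembles the authorize endpoint from placeholder values. The domain prefix, Region, client ID, and callback URL are hypothetical; substitute your own, and URL-encode the redirect URI if it contains special characters.
DOMAIN_PREFIX=myapp
REGION=us-east-1
CLIENT_ID=1example23456789
REDIRECT_URI=https://example.com/
# Print the hosted UI authorize URL to open in a browser
echo "https://${DOMAIN_PREFIX}.auth.${REGION}.amazoncognito.com/oauth2/authorize?response_type=code&client_id=${CLIENT_ID}&redirect_uri=${REDIRECT_URI}"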
https://repost.aws/knowledge-center/cognito-linkedin-auth0-social-idp
How can I access my Amazon SES logs?
"I send emails using Amazon Simple Email Service (Amazon SES), and I want to get logs or notifications for bounces, complaints, delivery, failures, or rejections."
"I send emails using Amazon Simple Email Service (Amazon SES), and I want to get logs or notifications for bounces, complaints, delivery, failures, or rejections.Short descriptionTo get logs or notifications about your email-sending events on Amazon SES, use one of the following mechanisms:Email feedback forwardingAmazon Simple Notification Service (Amazon SNS) topic for event notificationsConfiguration set to publish email-sending events to Amazon SNS, Amazon CloudWatch, Amazon Pinpoint, or Amazon Kinesis Data FirehoseNote: These mechanisms don't provide logs for emails that you sent before you implemented the mechanism.ResolutionUse email feedback forwardingBy default, Amazon SES sends you a notification when a message that you send bounces or gets a complaint. The destination that you check for the bounce or complaint notifications depends on your email-sending operation. For more information, see Email feedback forwarding destination.If you de-activated email feedback forwarding, then you can re-activate feedback forwarding using the Amazon SES console.Use an Amazon SNS topic to send event notificationsYou can use an Amazon SNS topic for notifications about Amazon SES email delivery, bounces, or complaints.Follow these steps to configure an Amazon SNS topic for Amazon SES events:Create an Amazon SNS topic.Subscribe an endpoint (such as an email address) to receive notifications from the Amazon SNS topic.Configure your Amazon SES verified identity (domain or email address) to send event notifications to the Amazon SNS topic that you created. You can do this using either the Amazon SES console or the Amazon SES API.After you configure an Amazon SNS topic for Amazon SES event notifications, you receive notifications at the endpoint that you subscribed to the topic. For more information on the details in the notifications, see Amazon SNS notification contents for Amazon SES and Amazon SNS notification examples for Amazon SES.Note: Amazon SES doesn't support FIFO type topics.Use a configuration set to publish email-sending events to Amazon SNS, CloudWatch, Amazon Pinpoint, or Kinesis Data FirehoseUse an Amazon SES configuration set that publishes events to get information about the following outcomes:SendsRejectsHard bouncesComplaintsDeliveriesDelivery delaysSubscriptionsOpensClicksRendering failuresFor details about each outcome, see Event publishing terminology.To create the configuration set, first specify the event destination, and then specify the parameters for the events that you want to publish. For step-by-step instructions, see Setting up Amazon SES event publishing.Make sure that you configure your email-sending method to pass the name of the configuration set in the headers of your emails. This is required for Amazon SES to apply the configuration set to your emails. For more information, see Specifying a configuration set when you send email. You can also assign a default configuration set to the verified identity that's used as the From/Source address. Then, messages that you send from this identity automatically use the assigned default configuration set.Follow"
https://repost.aws/knowledge-center/ses-email-event-logs
How do I make sure I don't incur charges when I'm using the AWS Free Tier?
I'm using the AWS Free Tier to test AWS services and want to make sure that all the resources that I'm using are covered under the AWS Free Tier.
"I'm using the AWS Free Tier to test AWS services and want to make sure that all the resources that I'm using are covered under the AWS Free Tier.Short descriptionYou can help avoid unnecessary charges when using the Free Tier by following these best practices:Understand what services and resources are covered by the AWS Free Tier.Monitor Free Tier usage with AWS Budgets.Monitor costs in the Billing and Cost Management console.Find and terminate resources when you're done using them.Important: The AWS Free Tier makes certain amounts and types of resources for new AWS accounts available free of charge for a one-year period. Any amounts and types of resources that aren't covered are charged at standard rates.ResolutionUnderstand what services and resources are coveredBefore you create any new resources, do the following:Access the list of covered services and resources at AWS Free Tier.Scroll down to see the Free Tier details listing.Use the filter options or search to locate a specific service.Choose the service to expand the tile and view specific usage limits.Important: Not all AWS services are covered under Free Tier. Some services launched under the Free Tier have usage limits. If you exceed the usage limits, then you are charged at standard rates.Use AWS Budgets to monitor your Free Tier usageYou can track your Free Tier usage with the AWS Free Tier usage alerts. These alerts notify you when your free tier usage exceeds 85 percent of your monthly limit.To opt in to the AWS Free Tier usage alerts, do the following:Sign in to the AWS Management Console, and then open the Billing and Cost Management console.Under Preferences in the navigation pane, choose Billing preferences.Under Cost Management Preferences, select Receive AWS Free Tier Usage Alerts to opt in to Free Tier usage alerts.Monitor your billTrack your AWS Free Tier usage in your Billing and Cost Management console to see how much of the AWS Free Tier that you're currently using. You can use the Top AWS Free Tier Services by Usage table to check if your current usage rate will incur charges.Check your usage in the Billing and Cost Management console. Even if your resources are covered under the AWS Free Tier, you see a line item on your bill for each covered resource.Note: Most benefits offered by the AWS Free Tier apply globally to all resources on your account, not individually to each AWS Region. Be sure to monitor your usage in all AWS Regions.Find and terminate resources when you're done using themTo avoid incurring unexpected charges, it's a best practice to routinely check if you have active resources that you no longer need. Then, terminate these unneeded resources.To find your active test resources, see How do I check for active resources that I no longer need on my AWS account?To terminate your resources after testing, see How do I terminate active resources that I no longer need on my AWS account?Related informationUsing the AWS Free TierAWS Free Tier FAQsCreating a billing alarm to monitor your estimated AWS chargesI unintentionally incurred charges while using the AWS Free Tier. How do I make sure that I'm not billed again?What do I need to know if my Free Tier period with AWS is expiring?Follow"
https://repost.aws/knowledge-center/free-tier-charges
How can I configure automatic scaling in Amazon EMR?
I want to use Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling on an Amazon EMR cluster.
"I want to use Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling on an Amazon EMR cluster.Short descriptionAmazon EMR versions 5.30.0, 6.1.0, and later: Use Amazon EMR managed scaling. Or, use automatic scaling with a custom policy for instance groups.Amazon EMR versions 4.0.0-5.29.0 and 6.0.0: Use automatic scaling with a custom policy for instance groups.ResolutionAmazon EMR versions 5.30.0, 6.1.0, and laterIf you're using Amazon EMR 5.30.0, 6.1.0, or later versions, then you have two options for automatic scaling: turn on Amazon EMR managed scaling to automatically increase or decrease the number of instances or units in your cluster based on workload. Or, use automatic scaling with a custom policy for instance groups, as explained in the following section.Amazon EMR versions 4.0.0 and laterFollow the steps at Using automatic scaling with a custom policy for instance groups. For information about the Amazon CloudWatch metrics that you can use for automatic scaling in Amazon EMR, see Monitor metrics with CloudWatch. The following are two commonly used metrics for automatic scaling:YarnMemoryAvailablePercentage: This is the percentage of remaining memory that's available for YARN.ContainerPendingRatio: This is the ratio of pending containers to allocated containers. You can use this metric to scale a cluster based on container-allocation behavior for varied loads. This is useful for performance tuning.To confirm that the scaling policy is attached to the instance group, choose Events from the navigation pane.Check for automatic scaling policy events.Related informationScaling cluster resourcesFollow"
https://repost.aws/knowledge-center/auto-scaling-in-amazon-emr
How do I increase Amazon WorkSpaces quotas?
I want to increase my service quota for Amazon WorkSpaces. How can I do this?
"I want to increase my service quota for Amazon WorkSpaces. How can I do this?Short descriptionAmazon WorkSpaces-based resources in your AWS account have default quotas (also known as service limits). For more information, see Amazon WorkSpaces quotas.ResolutionTo request a quota increase, do the following:Open the WorkSpaces Limits form.Select the Region.Select the Limit type you're requesting a quota increase for. For example, Max WorkSpaces.Note: Graphics and Graphics Pro bundles are supported in the following Regions: US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). You can request quota increases for Graphics and Graphics Pro in these Regions only.Enter the New Limit Value.Select Add another request to enter another increase request for the same limit type.Enter the Use Case Description explaining why you're requesting a limit increase.Note: Request for more than 200 WorkSpaces or more than 20 Graphics/Graphics Pro WorkSpaces require additional information. Make sure you answer the additional questions listed within the request form in your use case description.Enter Contact options.Select Submit.Related informationTroubleshoot WorkSpaces issuesFollow"
https://repost.aws/knowledge-center/workspaces-increase-quotas
Why is my MySQL DB instance showing a high number of active sessions waiting on SYNCH wait events in Performance Insights?
"When I activate Performance Insights, my DB instance shows a large number of Average Active Sessions (AAS) waiting on synchronization (SYNCH) wait events. I want to improve the performance of my DB instance."
"When I activate Performance Insights, my DB instance shows a large number of Average Active Sessions (AAS) waiting on synchronization (SYNCH) wait events. I want to improve the performance of my DB instance.Short descriptionPerformance Insights are activated on any of the following services:Amazon Relational Database Service (Amazon RDS) for MySQL.Amazon RDS for MariaDB.Amazon Aurora MySQL-Compatible Edition.If you see MySQL SYNCH wait events in Performance Insights, then a large number of sessions in the database are attempting to access the same protected objects or memory structures. Protected objects in MySQL include the following:The active binary log file in a binlog source instance - contains a mutex that allows only one session to read or write it at a time.The data dictionary - for writes that are usually caused by data control language (DCL) or data definition language (DDL) statements.The adaptive hash index- contains a mutex that allows only one session to read or write it at a time.The open table cache - only one session can add or remove a table from the cache.Each single database block inside the InnoDB Buffer Pool - only one session can modify the content of a block in memory at a time.ResolutionMake sure that the DB instance has enough CPU resources to handle the workloadIf you have a high number of sessions waiting on SYNCH events, then this causes high CPU usage. If the usage hits 100%, then the number of waiting sessions increases. When troubleshooting, increase the size of your DB instance to make sure that there is enough CPU to process the extra workload.Because these events are usually short-lived, the Amazon CloudWatch CPU utilization metric might not show the peak usage correctly. The best way to check this is to use the one-second CPU counters in RDS Enhanced Monitoring. These counters are more specific and detailed.Increase MySQL's mutex/lock wait arrayMySQL uses an internal data structure to coordinate threads. This array has a size of one, by default. This is suitable for single-CPU machines, but it can cause issues on machines with several CPUs. If your workload has a large number of waiting threads, then increase the array size. Set the MYSQL parameter innodb_sync_array_size to the amount of CPUs (or higher, up to 1024).Note: The innodb_sync_array_size parameter applies only at database start up.Reduce concurrencyIn general, parallelism helps to improve throughput. But when a large number of sessions try to do the same or similar activities, then the sessions need access to the same protected objects. The higher the number of sessions, the more CPU you use while waiting.Spread these activities over time, or schedule them in series. You can also bundle several operations into a single statement, such as multi-row inserts.Examine specific wait eventsUse the following examples to troubleshoot your specific wait event. For more information on Aurora MySQL wait events, see Tuning Aurora MySQL with wait events.synch/rwlock/innodb/dict sys RW lock, ORsynch/rwlock/innodb/dict_operation_lock - This indicates a high number of concurrent DCLs of DDLs are triggered at the same time. Reduce the application's dependency on using DDLs during regular application activity.synch/cond/sql/MDL_context::COND_wait_status - This indicates a high number of SQLs (including selects) trying to access a table that a DCL or DDL is modifying. 
Avoid running DDL statements on high-traffic tables during regular application activity.synch/mutex/sql/LOCK_open, OR synch/mutex/sql/LOCK_table_cache - This indicates that the number of tables that your sessions are opening exceeds the size of the table definition cache or the table open cache. Increase the size of these caches.synch/mutex/sql/LOG - Your database might be executing a large number of statements, and the current logging methods can't support it. If you use the TABLE output method, then try to use FILE instead. If you use general log, then use Amazon Aurora's advanced auditing instead. If you use 0 or less than 1 for the long_query_time parameter, then try to increase it.synch/mutex/innodb/buf_pool_mutex, OR synch/mutex/innodb/aurora_lock_thread_slot_futex, OR synch/rwlock/innodb/index_tree_rw_lock - There are a large number of similar DMLs accessing the same database object at the same time. Use multi-row statements, and use partitioning to spread the workload over different database objects.synch/mutex/innodb/aurora_lock_thread_slot_futex - This occurs when one session has locked a row for an update, and then another session tries to update the same row. Your action depends on the other wait events that you see. Either find and respond to the SQL statements responsible for this wait event, or find and respond to the blocking session.synch/cond/sql/MYSQL_BIN_LOG::COND_done, OR synch/mutex/sql/MYSQL_BIN_LOG::LOCK_commit, OR synch/mutex/sql/MYSQL_BIN_LOG::LOCK_log - You have turned on binary logging, and there might be one of the following: a high commit throughput, a large number of transactions committing, replicas reading binlogs, or a combination of these.Consider upgrading the database to a major version compatible with 5.7 or higher. Also, use multi-row statements, or bundle several statements into a single transaction. In Amazon Aurora, use global databases instead of binary log replication, or use the aurora_binlog parameters.Related informationUsing Amazon RDS performance insightsWorking with DB parameter groupsAurora MySQL eventsFollow"
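For the innodb_sync_array_size change described above, the following is a minimal AWS CLI sketch. The parameter group name and value are placeholders; because the parameter is static, it takes effect only after the instance restarts, and depending on your engine it might belong in a DB parameter group (shown here) or in a DB cluster parameter group (modify-db-cluster-parameter-group):
# Set innodb_sync_array_size in a custom parameter group (static parameter, requires a reboot)
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-custom-mysql-params \
  --parameters "ParameterName=innodb_sync_array_size,ParameterValue=16,ApplyMethod=pending-reboot"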
https://repost.aws/knowledge-center/aurora-mysql-synch-wait-events
How can I troubleshoot connectivity to an Amazon RDS DB instance that uses a public or private subnet of a VPC?
I can't connect to my Amazon Relational Database Service (Amazon RDS) DB instance. How can I troubleshoot connectivity issues in a public or private subnet of an Amazon Virtual Private Cloud (Amazon VPC)?
"I can't connect to my Amazon Relational Database Service (Amazon RDS) DB instance. How can I troubleshoot connectivity issues in a public or private subnet of an Amazon Virtual Private Cloud (Amazon VPC)?Short descriptionYou can launch Amazon RDS databases in the public or private subnet of a VPC. However, incorrect VPC configuration on the RDS instance side can cause connection problems. Or, configuration or connectivity issues on the client that you are connecting from might also cause connection problems.To resolve these issues, see the following resolutions depending on your environment.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.My DB instance is in a public subnet, and I can't connect to it over the internet from my local computerThis issue can occur when the Publicly Accessible property of the DB instance is set to No. To check whether a DB instance is publicly accessible:Open the Amazon RDS console, select Databases from the navigation pane, and select your DB instance. Then, review the Connectivity & Security section of your instance.-or-Use the describe-db-instances command in the AWS CLI.To change the Publicly Accessible property of the Amazon RDS instance to Yes:1.    Open the Amazon RDS console.2.    Choose Databases from the navigation pane, and then select the DB instance.3.    Choose Modify.4.    Under Connectivity, extend the Additional configuration section, and then choose Publicly accessible.5.    Choose Continue.6.    Choose Modify DB Instance.Note: This change is applied immediately, even if you don't select the Apply Immediately option. Downtime occurs only if you have pending maintenance action set up with this modification, which requires downtime, and you choose Apply Immediately.If you set the Publicly Accessible property to Yes and you're still unable to connect to your RDS instance, then check these details:Verify that your VPC has an internet gateway attached to it.Make sure that the inbound rules for the security group of your RDS instance allow connections from your source IP.My DB instance is in a private subnet, and I can't connect to it from my local computerYou can resolve this issue by using a public subnet. When you use a public subnet, all the resources on the subnet are accessible from the internet. If this solution doesn't meet your security requirements, then use AWS Site-to-Site VPN. With Site-to-Site VPN, you configure a customer gateway that allows you to connect your VPC to your remote network.Another method to resolve this issue is using an Amazon EC2 instance as a bastion (jump) host. For more information, see How can I connect to a private Amazon RDS DB instance from a local machine using an Amazon EC2 instance as a bastion host?To switch to a public subnet:1.    Open the Amazon RDS console.2.    Choose Databases from the navigation pane, and then choose the DB instance.3.    From the Connectivity & Security section, copy the endpoint of the DB instance.4.    Perform an nslookup to the DB instance endpoint from an EC2 instance within the VPC. See the following example output:nslookup myexampledb.xxxx.us-east-1.rds.amazonaws.comServer: xx.xx.xx.xxAddress: xx.xx.xx.xx#53Non-authoritative answer:Name: myexampledb.xxxx.us-east-1.rds.amazonaws.comAddress: 172.31.xx.x5.    After you have the private IP address of your RDS DB instance, you can relate the private IP address to a particular subnet in the VPC. 
The VPC subnet is based on the subnet CIDR range and private IP address.6.    Open the Amazon VPC console, and then choose Subnets from the navigation pane.7.    Choose the subnet that is associated with the DB instance that you found in step 5.8.    From the Description pane, choose the Route Table.9.    Choose Actions, and then choose Edit routes.10.    Choose Add route. For IPv4 and IPv6 traffic, in the Destination box, enter the routes for your external or on-premises network. Then, select the internet gateway ID in the Target list.Note: Be sure that the Inbound security group rule for your instance restricts traffic to the addresses of your external or on-premises network.    11.    Choose Save.Important: If you change a subnet to public, then other DB instances in the subnet also become accessible from the internet. The DB instances are accessible from the internet if they have an associated public address.If the DB instance still isn't accessible after following these steps, check to see if the DB instance is Publicly Accessible. To do this, follow the steps in My DB instance is in a public subnet, and I can't connect to it over the internet from my local computer.My DB instance can't be accessed by an Amazon EC2 instance from a different VPCCreate a VPC peering connection between the VPCs. A VPC peering connection allows two VPCs to communicate with each other using private IP addresses.1.    Create and accept a VPC peering connection.Important: If the VPCs are in the same AWS account, be sure that the IPv4 CIDR blocks don't overlap. For more information, see VPC peering limitations.2.    Update both route tables.3.    Update your security groups to reference peer VPC groups.4.    Activate DNS resolution support for your VPC peering connection.5.    On the Amazon Elastic Compute Cloud (Amazon EC2) instance, test the VPC peering connection by using a networking utility. See the following example:nc -zv <hostname> <port>If the connection is working, then the output looks similar to the following:$ nc -zv myexampledb.xxxx.us-east-1.rds.amazonaws.com 5439found 0 associationsfound 1 connections: 1: flags=82<CONNECTED,PREFERRED> outif en0 src xx.xxx.xxx.xx port 53396 dst xx.xxx.xxx.xxx port 5439 rank info not available TCP aux info availableConnection to myexampledb.xxxx.us-east-1.rds.amazonaws.com port 5439 [tcp/*] succeeded!Related informationScenarios for accessing a DB instance in a VPCWorking with a DB instance in a VPCFollow"
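If you prefer the AWS CLI for the Publicly Accessible check and change described above, the following is a sketch only; the DB instance identifier is a placeholder:
# Check the current setting
aws rds describe-db-instances \
  --db-instance-identifier mydbinstance \
  --query "DBInstances[*].PubliclyAccessible"
# Make the instance publicly accessible
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --publicly-accessible \
  --apply-immediately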
https://repost.aws/knowledge-center/rds-connectivity-instance-subnet-vpc
How do I empty an Amazon S3 bucket using a lifecycle configuration rule?
I have an Amazon Simple Storage Service (Amazon S3) bucket that stores millions of objects. I want to empty the bucket using a lifecycle configuration rule so that I won't be charged for storage anymore.
"I have an Amazon Simple Storage Service (Amazon S3) bucket that stores millions of objects. I want to empty the bucket using a lifestyle configuration rule so that I won't be charged for storage anymore.ResolutionFollow these steps to create a lifecycle configuration rule that expires current versions of objects and permanently delete previous versions of objects:1.    Open the Amazon S3 console.2.    From the list of buckets, choose the bucket that you want to empty.3.    Choose the Management tab.4.    Choose Create lifecycle rule.5.    For Lifecycle rule name, enter a rule name.6.    For Choose a rule scope, select This rule applies to all objects in the bucket.7.    Select I acknowledge that this rule will apply to all objects in the bucket.8.    For Lifecycle rule actions, select the following to create a lifecycle rule:Expire current versions of objectsPermanently delete previous versions of objectsDelete expired delete markers or incomplete multipart uploads9.    In the Expire current versions of objects field, enter "1" in the Days after object creation field.10.    In the Permanently delete previous versions of objects field, enter "1" in the Number of days after objects become noncurrent field.11.    Leave the Number of newer versions to retain field empty to delete all versions.12.    Select Delete incomplete multipart uploads and enter "1" for the Number of days field.13.    Choose Create rule.14.    Create a second lifecycle rule by repeating steps 4-7.15.    Then, select the following: Delete expired delete markers or incomplete multipart uploads16.    Select Delete expired object delete markers.17.    Choose Create rule.Amazon S3 runs lifecycle rules once every day. After the first time that Amazon S3 runs the rules, all objects that are eligible for expiration are marked for deletion. You're no longer charged for objects that are marked for deletion. However, rules might take a few days to run before the bucket is empty because expiring object versions and cleaning up delete markers are asynchronous steps. For more information about this asynchronous object removal in Amazon S3, see Expiring objects.Related informationRemoving expired object delete markersManaging your storage lifecycleHow do I delete Amazon S3 objects and buckets?Deleting a bucketFollow"
https://repost.aws/knowledge-center/s3-empty-bucket-lifecycle-rule
How can I change my CloudTrail trail to an AWS Organizations trail?
"Instead of creating a new AWS Organizations organization trail, I want to change my existing AWS CloudTrail trail to an organization trail. How do I change my CloudTrail trail to an organization trail?"
"Instead of creating a new AWS Organizations organization trail, I want to change my existing AWS CloudTrail trail to an organization trail. How do I change my CloudTrail trail to an organization trail?Resolution(Prerequisite) Activate trusted service access with CloudTrailFollow the instructions in Activating trusted access with CloudTrail in the AWS Organizations User Guide.For more information about integrating CloudTrail into Organizations, see AWS CloudTrail and AWS Organizations.Update the Amazon S3 bucket policy for your CloudTrail log files to allow the following:The CloudTrail trail to deliver log files to the Amazon Simple Storage Service (Amazon S3) bucket.The CloudTrail trail to deliver logs for the accounts in the organization to the Amazon S3 bucket.1.    Open the Amazon S3 console.2.    Choose Buckets.3.    For Bucket name, choose the S3 bucket that contains your CloudTrail log files.4.    Choose Permissions. Then, choose Bucket Policy.5.    Copy and paste the following example bucket policy statement into the policy editor, and then choose Save.Important: Replace primary-account-id with your Organizations primary account ID. Replace bucket-name with your S3 bucket name. Replace org-id with your Organizations ID. Replace your-region with your AWS Region.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSCloudTrailAclCheck", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::bucket-name" }, { "Sid": "AWSCloudTrailWrite20150319", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::bucket-name/AWSLogs/primary-account-id/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } }, { "Sid": "AWSCloudTrailWrite", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::bucket-name/AWSLogs/org-id/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ]}(Optional) Configure permissions to monitor the organization's CloudTrail log files using CloudWatch Logs.Note: The following steps are required only if you're monitoring CloudTrail log files with Amazon CloudWatch Logs.1.    Make sure that your organization has all features activated.2.    Follow the instructions Activate CloudTrail as a trusted service in AWS Organizations.3.    Open the AWS Identity and Access Management (IAM) console.4.    Choose Policies.5.    For Policy name, choose the IAM policy associated with your CloudWatch logs group AWS primary account.6.    Choose Edit policy, copy and paste the following example IAM policy statement, and then choose Save.Important: Replace your-region with your AWS Region. Replace primary-account-id with your Organizations primary account ID. Replace org-id with your organization ID. 
Replace log-group-name with your CloudWatch log group name.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSCloudTrailCreateLogStream", "Effect": "Allow", "Action": [ "logs:CreateLogStream" ], "Resource": [ "arn:aws:logs:your-region:primary-account-id:log-group:CloudTrail/log-group-name:log-stream:primary-account-id_CloudTrail_your-region*", "arn:aws:logs:your-region:primary-account-id:log-group:CloudTrail/log-group-name:log-stream:org-id*" ] }, { "Sid": "AWSCloudTrailPutLogEvents", "Effect": "Allow", "Action": [ "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:your-region:primary-account-id:log-group:CloudTrail/log-group-name:log-stream:primary-account-id_CloudTrail_your-region*", "arn:aws:logs:your-region:primary-account-id:log-group:CloudTrail/log-group-name:log-stream:org-id*" ] } ]}7.    Open the CloudTrail console.8.    In the navigation pane, choose Trails.9.    For Trail name, choose your trail's name.10.    For CloudWatch logs, choose the edit icon. Then, choose Continue.11.    For Role Summary, choose Allow.Update your CloudTrail trail to an organization trail1.    Open the CloudTrail console, and then choose Trails in the navigation pane.2.    For Trail name, choose your trail.3.    For Trail settings, choose the edit icon.4.    For Apply trail to my organization, choose Yes. Then, choose Save.Related informationHow do I get started with AWS Organizations?Running update-trail to update an organization trailFollow"
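The final step above can also be performed with the update-trail command that's linked under Related information. A minimal sketch, assuming the trail is named my-trail and that you run the command from the organization's management (primary) account:
aws cloudtrail update-trail --name my-trail --is-organization-trail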
https://repost.aws/knowledge-center/change-cloudtrail-trail
Can I modify the terms of my Amazon EC2 Reserved Instance?
"I purchased an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance, but I want to modify the attributes of my Reserved Instance contract."
"I purchased an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance, but I want to modify the attributes of my Reserved Instance contract.ResolutionFirst, check the conditions that are listed at Requirements and restrictions for modification. If your Reserved Instance meets the conditions, then you can request to modify the following attributes:Availability ZoneScopeInstance sizeNetwork platformIf your modification request meets the guidelines and you’re ready to request a modification, then see Submitting modification requests.If you purchased a Convertible Reserved Instance, then consider exchanging the Reserved Instance for another Convertible Reserved Instance. For more information, see Exchange Convertible Reserved Instances.Related informationAmazon EC2 FAQsHow to purchase Amazon EC2 Reserved InstancesWhat are the differences between Standard and Convertible Reserved Instances?Follow"
https://repost.aws/knowledge-center/ec2-modify-reserved-instance
How do I set up a trusted IP address list for GuardDuty?
I want to set up a trusted IP address list for Amazon GuardDuty.
"I want to set up a trusted IP address list for Amazon GuardDuty.Short descriptionYou can configure GuardDuty to use your own custom trusted IP list. Use this list to configure your allowed IP addresses for secure communication with your AWS infrastructure and applications. For more information, see Working with trusted IP lists and threat lists.ResolutionCreate a trusted IP listReview the accepted format for trusted IP list files. Then, follow the instructions to upload the file to an Amazon Simple Storage Service (Amazon S3) bucket.Note: The trusted IP list file must be in TXT, STIX, OTX_CSV, ALIEN_VAULT, PROOF_POINT, or FIRE_EYE format. The trusted IP list doesn't support IPv6 addresses. You can have a maximum number of 2000 IP addresses and CIDR for each trusted IP list. You can have only one trusted IP list per Detector resource. For more information, see Quotas for Amazon GuardDuty.Check IAM identity permissionsBe sure that your AWS Identity and Access Management (IAM) identity has permissions with trusted IP lists and GuardDuty:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "guardduty:*IPSet*", "guardduty:List*", "guardduty:Get*" ], "Resource": "*" } ]}Be sure that your IAM identity has permissions for PutRolePolicy and DeleteRolePolicy for the GuardDuty service linked role AWSServiceRoleForAmazonGuardDuty.{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:DeleteRolePolicy", "iam:PutRolePolicy" ], "Resource": "arn:aws:iam::123456789123:role/aws-service-role/guardduty.amazonaws.com/AWSServiceRoleForAmazonGuardDuty" } ]}For more information, see Editing IAM policies.Add and activate a trusted IP list in GuardDutyOpen the GuardDuty console.In the navigation pane, choose Lists.Choose Add a trusted IP list.For List name, enter a name that is meaningful to you.For Location, enter the location for your S3 bucket. For example, https://s3.amazonaws.com/bucket-name/file.txt.Choose the Format dropdown menu, and then choose your list's file type.Select the I agree check box, and then choose Add list.In Trusted IP lists, choose Active for your trusted IP list name.Note: It might take up to 5 minutes for the list to activate.If you change a trusted IP list in GuardDuty, you must update and then reactivate the list. For instructions, see To update trusted IP lists and threat lists.Related informationHow to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hostsWhy did GuardDuty send me alert findings for a trusted IP list address?Follow"
https://repost.aws/knowledge-center/guardduty-trusted-ip-list
"How do I move my EC2 instance to another subnet, Availability Zone, or VPC?"
"I want to move or copy my Amazon Elastic Compute Cloud (Amazon EC2) instance to another subnet, Availability Zone, or virtual private cloud (VPC)."
"I want to move or copy my Amazon Elastic Compute Cloud (Amazon EC2) instance to another subnet, Availability Zone, or virtual private cloud (VPC).Short descriptionIt's not possible to move an existing instance to another subnet, Availability Zone, or VPC. Instead, you can manually migrate the instance by creating a new Amazon Machine Image (AMI) from the source instance. Then, launch a new instance using the new AMI in the desired subnet, Availability Zone, or VPC. Finally, you can reassign any Elastic IP addresses from the source instance to the new instance.There are two methods for migrating the instance:Use the AWS Systems Manager automation document AWSSupport-CopyEC2Instance.Manually copy an instance and a launch a new instance from the copy.ResolutionBefore you begin, note the following:AMIs are based on Amazon Elastic Block Store (Amazon EBS) snapshots. For large file systems without a previous snapshot, AMI creation can take several hours. To decrease the AMI creation time, create an Amazon EBS snapshot before you create the AMI.Creating an AMI doesn't create a snapshot for instance store volumes on the instance. For information on backing up instance store volumes to Amazon EBS, see How do I back up an instance store volume on my Amazon EC2 instance to Amazon EBS?The new EC2 instance has a different private IPv4 or public IPv6 IP address. You must update all references to the old IP addresses (for example, in DNS entries) with the new IP addresses that are assigned to the new instance. If you're using an Elastic IP address on your source instance, be sure to attach it to the new instanceDomain security identifier (SID) conflict issues can occur when the copy launches and tries to contact the domain. Before you capture the AMI, use Sysprep or remove the domain-joined instance from the domain to prevent conflict issues. For more information, see How can I use Sysprep to create and install custom reusable Windows AMIs?Use the AWS System Manager automation runbook AWSSupport-CopyEC2InstanceYou can use the AWS Systems Manager Automation runbook AWSSupport-CopyEC2Instance to complete the following tasks automatically:Create a new imageLaunch a new instanceAfter these procedures complete, follow the instructions in Reassign the Elastic IP address section, if needed.To run the automation, do the following:1.    Open the AWSSupport-CopyEC2Instance runbook.Note: Make sure that you're in the same Region as the instance that you want to copy.2.    For Execute automation document, choose Simple execution.3.    For Input parameters, enter the InstanceID of the EC2 instance you want to copy. If you use the Interactive instance picker, then make sure that you select Show all instances from the dropdown list.4.    Provide the destination Region and/or the SubnetID where you want to copy the instance to.5.    Complete any of the additional optional fields that are required for your use case, and then select Execute.6.    To monitor the execution progress, open the Systems Manger console, and then choose Automation from the navigation pane. Choose the running automation, and then review the Executed steps. To view the automation output, expand Outputs.For more information about this runbook, see AWSSupport-CopyEC2Instance.Manually copy the instance and launch a new instance from the copyCreate a new image1.    Open the Amazon EC2 console, and then choose Instances from the left navigation pane.2.    Select the instance that you want to move, and then choose Actions, Instance State, Stop. 
This makes sure that the data is consistent between the old and new EBS volumes.Note: You can skip this step if you're testing this procedure or if you don't want to stop or reboot your instance.3.    Choose Actions, Image, Create Image.For Image name, enter a name for the image.For Image description, enter a description of the image.Note: If you select No reboot on the Create Image page, then the file system integrity of the image can't be guaranteed.4.    Choose Create Image.5.    Under Create Image request received, choose View pending image [ID]. Wait for the Status to change from pending to available.Note: You can also view pending images by choosing AMIs from the Images section of the navigation pane.Launch a new instance1.    Select the new AMI, and then choose Launch.2.    Choose the same instance type as the instance that you want to move, and then choose Next: Configure Instance Details.For Network, choose your VPC.For Subnet, choose the subnet where you want to launch the new instance.If the instance is a production instance, then for Enable termination protection, choose Protect against accidental termination.3.    Choose Next: Add Storage.4.    Accept the defaults, and then choose Next: Add Tags.For Key, enter Name.For Value, enter your instance name.5.    Choose Next: Configure Security Group.6.    Choose the same security group that's applied to the instance that you're moving.Note: If you're moving your instance between VPCs, you must create a new security group on the destination VPC.7.    Choose Review and Launch.8.    Choose Launch.9.    For Select a key pair, choose your key pair from the drop-down menu.10.  Select the agreement check box, and then choose Launch Instances.11.  Choose the instance ID to return to the EC2 console.Reassign the Elastic IP addressTo reassign the Elastic IP address, you must first disassociate the Elastic IP address from the source instance. Then, you can reassociate the Elastic IP address with the new instance. For instructions, see Disassociate an Elastic IP address.Note: Elastic IP addresses can be used in only one Region. If you move an instance to a different Region, you can't use the same Elastic IP address.Related informationCreate an Amazon EBS-backed Linux AMICreate a custom Windows AMIHow do I create and copy an Amazon Machine Image (AMI) from one AWS Region to another?Follow"
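For the manual procedure above, the image creation and Elastic IP reassignment steps map to the following AWS CLI calls. This is a sketch with placeholder IDs only:
# Create an AMI from the source instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "copy-of-source-instance"
# After launching the new instance from the AMI, move the Elastic IP address
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --instance-id i-0fedcba9876543210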
https://repost.aws/knowledge-center/move-ec2-instance
How do I troubleshoot domain transfer failures in Route 53?
I want to troubleshoot domain transfer failures in Amazon Route 53.
"I want to troubleshoot domain transfer failures in Amazon Route 53.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Troubleshoot failure to transfer a domain from another registrar to Route 53 (transfer in)Before transferring a domain to Route 53, confirm the following:Your current registrar allows the transfer.The domain doesn't have any of the following EPP codes or statuses:clientTransferProhibitedserverTransferProhibitedpendingDeletependingTransferredemptionPeriodRoute 53 supports the top-level domain (TLD) of your domain name.Your AWS account has a valid payment method.Resolve invalid authcode errorsIf there are authcode errors, then you receive the message, "The authorization code that you got from the current registrar is not valid". For next steps, see The authorization code that you got from the current registrar is not valid.Resolve clientTransferProhibited status or domain lock errorsIf transfer lock is turned on with your current registrar, then the transfer-in process fails. Run a whois command to confirm that this is causing your domain transfer failure. For example:$ whois example.com | grep "Status" Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibitedIf you see the serverTransferProhibited status in your whois output, then contact your current registrar for more information.To turn off transfer lock, use your current registrar's console or contact the registrar.Determine why a transfer is stuck on step 5 of the transfer processIn step 5 of the domain transfer process, Amazon Route 53 sends a Form Of Authorization (FOA) to the registrant contact email. You must select the confirmation link in that email. If you didn't receive an FOA email, then see To resend the authorization email for a domain transfer.Note: If you change the registrant email address during the transfer process, then the authorization email might be sent to both the new and previous addresses. You must follow the confirmation link in both emails to proceed.Determine why you didn't receive a domain transfer authorization emailAs part of the domain transfer-in process, Amazon Route 53 sends an authorization email to the domain registrant's email address. You must select the link in that email to verify your email address. Failing to do so might cause your domain to stop working. If you didn't receive the authorization email, then check your email's spam or junk folder. If you still can't find the email, then see To resend the authorization email for a domain transfer.Determine why the status is "Waiting for the current registrar to complete the transfer"If your domain transfer-in request is stuck on step 7, then the status is "Waiting for the current registrar to complete the transfer". For more information on how to check your status, see Viewing the status of a domain transfer.This status indicates that the transfer is waiting on your current registrar's approval. After approval is received, the transfer process can proceed. Depending on your registrar and the requirements of the top-level domain (TLD), this step can take up to 7 days for generic TLDs. The step can take up to ten days for country code TLDs (ccTLDs).Note: You can't expediate this step in Amazon Route 53. 
However, you might be able to expedite the domain transfer by contacting your current registrar.Troubleshoot failure to transfer a domain from Route 53 to another registrar (transfer out)Resolve "clientTransferProhibited" status errorsThe domain registries for all generic TLDs and several geographic TLDs provide the option to lock your domain. Locking a domain prevents someone from transferring the domain to another registrar without your permission. If you turn on transfer lock for a domain, then the status is updated to "clientTransferProhibited". To remove the status, turn off the transfer lock using the following steps:1.    Open the Route 53 console.2.    In the navigation pane, choose Registered Domains.3.    Select the name of the domain that you plan to update.4.    Under Transfer lock, choose Disable.Or, you can run the following command in the AWS CLI:aws route53domains disable-domain-transfer-lock \ --region us-east-1 \ --domain-name example.comIn the preceding example, replace example.com with your domain name.Unlock "Transfer Lock" or remove "clientTransferProhibited" statusYou tried to unlock a domain from the AWS Management Console or the DisableDomainTransferLock API. However, you received the following error message: "TLDRulesViolation: [TLD] does not support domain lock/unlock operation".To resolve this, determine if the TLD supports transfer locking. If the TLD doesn't support transfer locking but you see a lock icon on your domain, then create a support case. For case type, choose Account and billing support.Transfer domains in closed AWS accountsWhen you close an AWS account, all associated AWS resources are deleted, including hosted zones. However, all domain names are maintained until their expiration date. After deleting your account, you can't modify the configuration of the remaining domain names. In this scenario, you can't update name servers or complete the transfer out process.Create a support case to transfer your domain from the closed account to another AWS account or another registrar. When creating your case, be sure to do the following:Create the support case from the closed account. You can log in to your closed account using the credentials of the AWS account root user.Choose Account and billing support for the case type.Include the domain names that you want to transfer out.Include the destination AWS account number (if transferring to another active AWS account).Troubleshoot failure to transfer a domain from an AWS account to another AWS account (cross-account transfer)To initiate the domain transfer, see To transfer a domain to a different AWS account and To accept a domain transfer from a different AWS account.If you encounter issues when following the steps listed in the preceding documentations, create a support case. For case type, choose Account and billing support.Follow"
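Before you open a support case, you can check a domain's transferability and its current EPP statuses from the AWS CLI. A sketch, using example.com as a placeholder (Route 53 Domains API calls are served from us-east-1):
aws route53domains check-domain-transferability --region us-east-1 --domain-name example.com
aws route53domains get-domain-detail --region us-east-1 --domain-name example.com --query "StatusList"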
https://repost.aws/knowledge-center/route-53-fix-domain-transfer-failure
Why is no data migrated from my Amazon S3 source endpoint even though my AWS DMS task is successful?
"My AWS Database Migration Service (AWS DMS) task is successful, but no data is migrated from my Amazon Simple Storage Service (Amazon S3) source endpoint. Why isn't my data migrating, and how do I troubleshoot this issue?"
"My AWS Database Migration Service (AWS DMS) task is successful, but no data is migrated from my Amazon Simple Storage Service (Amazon S3) source endpoint. Why isn't my data migrating, and how do I troubleshoot this issue?Short descriptionThe following are the most common reasons why an AWS DMS task is successful but no data is migrated:The task status is Load complete, replication ongoing, but no data is loaded on the target.The task status is Load complete, replication ongoing, but no table is in the Table statistics section.The task status is Running and the table is created in the target endpoint, but no data is loaded. Also, you receive a No response body error in the replication log.ResolutionThe task status is Load complete, replication ongoing, but no data is loaded on the targetConfirm that the Amazon S3 path defined for the source endpoint is correct. From the replication log, review the log entries. Identify entries that indicate that AWS DMS can't find the data files in the Amazon S3 path that is defined for the source endpoint. See the following example replication log entry:[SOURCE_UNLOAD ]I: Unload finished for table 'dms_schema'.'dms_table' (Id = 1). 0 rows sent. (streamcomponent.c:3396)[TARGET_LOAD ]I: Load finished for table 'dms_schema'.'dms_table' (Id = 1). 0 rows received. 0 rows skipped. Volume transferred 0. (streamcomponent.c:3667)In Amazon S3, the data file for the full load phase (data.csv) and the data file for on-going changes (change_data.csv) are stored like this:S3-bucket/dms-folder/sub-folder/dms_schema/dms_table/data.csvS3-bucket/dms-folder/sub-folder/dms-cdc-path/dms-cdc-sub-path/change_data.csvThe Amazon S3 source endpoint uses three important fields to find the data files:Bucket folderChange data capture (CDC) pathTable structureIn the example file paths listed previously, dms-folder/sub-folder is the Bucket folder. The CDC path that you enter when creating the Amazon S3 source endpoint is dms-cdc-path/dms-cdc-sub-path. The following example Table structure uses the same example file path listed previously:{ "TableCount": 1, "Tables": [ { "TableColumns": […], "TableColumnsTotal": "1", "TableName": "dms_table", "TableOwner": "dms_schema", "TablePath": "dms_schema/dms_table/" } ]}Important: Don't include the bucket folder path ( dms-folder/sub-folder) in the TablePath of the table structure.When specifying your Endpoint configuration, consider the following:The bucket folder is optional. If a bucket folder is specified, then the CDC path and table path (the TablePath field for a full load) must be in the same folder in Amazon S3. If the bucket folder isn't specified, then the TablePath and CDC path are directly under the S3 bucket.The Bucket folder field of the Amazon S3 source endpoint can be any folder directory between the S3 bucket name and the schema name of the table structure. In the previous example, it's dms-schema. If you don't have a hierarchy of folders under the S3 bucket, then you can leave the fields blank.Bucket folders or CDC paths can be individual folders or they can include subfolders, such as dms-folder or dms-folder/sub-folder.If your DMS task setting uses Amazon S3 as the source endpoint, then you must include the schema and table in the table mapping. This is required to successfully migrate data to the target. For more information, see Source data types for Amazon S3.If you use Drop tables on target as the table preparation mode for the task, then DMS creates the target table dms_schema.dms_table. 
See the following example:CREATE TABLE 'dms_schema'.'dms_table' (...);Note: Folder and object names in Amazon S3 are case-sensitive. Be sure to specify both folder and object names with correct case in the S3 endpoint.The task status is Load complete, replication ongoing, but no table is in the Table statistics sectionYou might find that no table was created in the target endpoint when Drop tables on target was used. This means that the issue might be caused by the table structure that is specified for the Amazon S3 source endpoint.Confirm that the Amazon S3 path for the source endpoint is correct, as described previously. Then, verify that your data type is supported by the Amazon S3 endpoint.After confirming that the Amazon S3 path is correct and that your data type is supported, check the filter that is defined by the table mapping of your DMS task. Check to see if the filter is the cause of the missing tables. Review the table that is needed within the task table mapping and check that the table is defined in the table structure of the Amazon S3 source endpoint.The task status is Running and the table is created in the target endpoint, but no data is loaded. Also, you receive a No response body error in the replication logIf AWS DMS can't get the content from the Amazon S3 path, you can find errors in the replication log. See the following examples:[SOURCE_CAPTURE ]E: No response body. Response code: 403 [1001730] (transfer_client.cpp:589)[SOURCE_CAPTURE ]E: failed to download file </dms-folder/sub-folder/dms_schema/dms_table/data.csv> from bucket <dms-test> as </rdsdbdata/data/tasks/NKMBA237MEB4UFSRDF5ZAF3EZQ/bucketFolder/dms-folder/sub-folder/dms_schema/dms_table/data.csv>, status = 4 (FAILED) [1001730] (transfer_client.cpp:592)This error occurs when the AWS Identity and Access Management (IAM) role for the Amazon S3 source endpoint doesn't have the correct permissions: s3:GetObject. To resolve this error, confirm that the data file exists in the Amazon S3 path that is in the error message. Then, confirm that the IAM user has permissions for s3:GetObject.Note: If the source Amazon S3 bucket has versioning enabled, additional s3:GetObjectVersion permissions is required.Related informationUsing Amazon S3 as a source for AWS DMSFollow"
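To rule out the path and permission problems described above, you can check both from the AWS CLI. A sketch with placeholder bucket, folder, and role names (the role ARN is hypothetical; use the role that's attached to your S3 source endpoint):
# Confirm that the data file exists at the exact, case-sensitive path
aws s3api list-objects-v2 --bucket dms-test --prefix dms-folder/sub-folder/dms_schema/dms_table/
# Confirm that the endpoint role is allowed to read the file
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/dms-s3-source-access-role \
  --action-names s3:GetObject s3:GetObjectVersion \
  --resource-arns arn:aws:s3:::dms-test/dms-folder/sub-folder/dms_schema/dms_table/data.csv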
https://repost.aws/knowledge-center/dms-task-successful-no-data-s3
How can I require MFA authentication for IAM users that use the AWS CLI?
"I created a multi-factor authentication (MFA) condition policy to restrict access to AWS services for AWS Identity and Access Management (IAM) users. The policy works with the AWS Management Console, but not with the AWS Command Line Interface (AWS CLI). How can I use MFA with the AWS CLI?"
"I created a multi-factor authentication (MFA) condition policy to restrict access to AWS services for AWS Identity and Access Management (IAM) users. The policy works with the AWS Management Console, but not with the AWS Command Line Interface (AWS CLI). How can I use MFA with the AWS CLI?Short descriptionThe following example IAM policy requires IAM users to use MFA to access specific AWS services:{ "Sid": "BlockMostAccessUnlessSignedInWithMFA", "Effect": "Deny", "NotAction": [ "iam:CreateVirtualMFADevice", "iam:DeleteVirtualMFADevice", "iam:ListVirtualMFADevices", "iam:EnableMFADevice", "iam:ResyncMFADevice", "iam:ListAccountAliases", "iam:ListUsers", "iam:ListSSHPublicKeys", "iam:ListAccessKeys", "iam:ListServiceSpecificCredentials", "iam:ListMFADevices", "iam:GetAccountSummary", "sts:GetSessionToken" ], "Resource": "*", "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "false", "aws:ViaAWSService": "false" } }}IAM users with the AWS Management Console are prompted to enter MFA authentication credentials and can then access AWS services. However, IAM users with the AWS CLI aren't prompted to enter MFA authentication credentials and can access AWS services.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.The MultiFactorAuthPresent key doesn't exist in requests made using long-term credentials. With the Boolean condition operator, if the key in the policy isn't present, then the values don't match. The MultiFactorAuthPresent key doesn't deny access to requests made using long-term credentials.IAM users using the AWS Management Console generate temporary credentials and allow access only if MFA is used.The Boolean condition lets you restrict access with a key value set to true or false. You can add the IfExists condition operator to check if the MultiFactorAuthPresent key is present in the request. If the MultiFactorAuthPresent key isn't present, IfExists evaluates the condition element as true similar to the following:"Effect" : "Deny","Condition" : { "BoolIfExists" : { "aws:MultiFactorAuthPresent" : "false", "aws:ViaAWSService":"false"} }Note: IAM users using the AWS CLI with long-term credentials are denied access and must use MFA to authenticate. Therefore, be sure to use an MFA token to authenticate your CLI session.Related informationUsing multi-factor authenticationAssumeRoleEnabling and managing virtual MFA devices (AWS CLI or AWS API)Follow"
https://repost.aws/knowledge-center/mfa-iam-user-aws-cli
How can I create a layer for my Lambda Python function?
I want to create a layer for my AWS Lambda Python function.
"I want to create a layer for my AWS Lambda Python function.ResolutionThe following instructions deploy an application to create a layer to invoke a Lambda Python function.Open the AWS Serverless Application Repository console.In the navigation pane, choose Available applications.Select Show apps that create custom IAM roles or resource policies.In the search pane, enter python-lambda-layer-creation.Choose the python-lambda-layer-creation function.From the python-lambda-layer-creation Applications settings, select I acknowledge that this app creates custom IAM roles, and then choose Deploy.You can create a layer to invoke your Lambda function and pass a list of dependencies included with the layer metadata.The following example creates Python Lambda layers containing requests (latest version), numpy (version 1.20.1), and keyring (version >= 4.1.1) libraries. You can invoke the Lambda function with a payload similar to the following:{ "dependencies": { "requests": "latest", "numpy": "== 1.20.1", "keyring": ">= 4.1.1" }, "layer": { "name": "a-sample-python-lambda-layer", "description": "this layer contains requests, numpy and keyring libraries", "compatible-runtimes": ["python3.6","python3.7","python3.8"], "license-info": "MIT" } }To test the Lambda Python function layer, note the ARN. Then, create an AWS CloudFormation stack using a YAML template similar to the following:AWSTemplateFormatVersion: '2010-09-09'Parameters: Layer: Type: String Description: The ARN of the lambda function layerResources: LambdaFunction: Type: AWS::Lambda::Function Properties: Code: ZipFile: | import json import requests import numpy as np def handler(event, context): try: ip = requests.get("http://checkip.amazonaws.com/") x = np.array([2,3,1,0]) except requests.RequestException as e: raise e return { "statusCode": 200, "body": json.dumps({ "array[0]": ("%s" % str(x[0])), "location": ip.text.replace("\n", "") }), } Handler: index.handler Runtime: python3.7 MemorySize: 128 Timeout: 30 Layers: - !Ref Layer Role: Fn::GetAtt: - LambdaExecutionRole - Arn LambdaExecutionRole: Description: Allow Lambda to connect function to publish Lambda layers Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: sts:AssumeRole Path: / ManagedPolicyArns: - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole # Policies: # - PolicyName: AllowPublishLambdaLayer # PolicyDocument: # Version: '2012-10-17' # Statement: # - Effect: Allow # Action: lambda:PublishLayerVersion # Resource: '*'Run the Lambda Python function. Example response:{ "statusCode": 200, "body": "{\"array[0]\": \"2\", \"location\": \"[your-outbound-ip-address]\"}"}Note: This Lambda function uses pip to manage dependencies so that libraries included in the Lambda layer exist in the Python package index repository.For examples of Python libraries included in the layer defined for the dependencies and layer attributes, see pip install on the pip website.For more information, see Creating and sharing Lambda layers.Related informationBuilding Lambda functions with PythonUsing layers with your Lambda functionHow do I create a Lambda layer using a simulated Lambda environment with Docker?How do I integrate the latest version of the AWS SDK for JavaScript into my Node.js Lambda function using layers?Follow"
https://repost.aws/knowledge-center/lambda-python-function-layer
How can I send custom HTTP responses for specific URLs from an Application Load Balancer?
I want to forward custom HTTP responses and drop client requests for specific URLs. How can I send custom HTTP responses for specific URLs from an Application Load Balancer?
"I want to forward custom HTTP responses and drop client requests for specific URLs. How can I send custom HTTP responses for specific URLs from an Application Load Balancer?ResolutionYou can use fixed-response actions to drop client requests and return a custom HTTP response. You can use this action to return a 2XX, 4XX, or 5XX response code and an optional message.To add a rule with a fixed-response action on your Application Load Balancer's listener:Open the Amazon Elastic Compute Cloud (Amazon EC2) console.On the navigation pane, from LOAD BALANCING, choose Load Balancers.Select your load balancer and choose the Listenerstab.Choose View/edit rules.Choose Add rules(the plus sign) in the menu bar. This action allows you to add Insert Ruleicons at every location where you can insert a rule in the priority order.Define the rule as follows:Choose Insert Rule.(Optional) To configure host-based routing, choose Add condition, Host is. Enter the hostname (for example, *.example.com), and then choose the check mark.(Optional) To configure path-based routing, choose Add condition, Path is. Enter the path pattern (for example, /img/*), and then choose the check mark.To add a fixed-response action, choose Add action, Return fixed response. Enter a response code and optional response body, and then choose the check mark.(Optional) To change the order of the rule, use the arrows. The default rule always has the last priority.Choose Save.Related InformationListener RulesFollow"
https://repost.aws/knowledge-center/elb-send-custom-http-responses-from-alb